comp.lang.labview - Labview Software Discussion Group.
I am having a very strange problem when using LabVIEW to acquire audio data via the Windows API from a Creative Professional E-MU 1616m sound card. The goal is to acquire sound at 24-bit resolution. When capturing sound in 16-bit mode (as set in the LabVIEW software), the E-MU 1616m behaves as expected, with a 105 dB SNR and approximately a -130 dB noise floor after dithering. However, when switching to 24-bit capture mode, very severe truncation occurs, which sends the harmonic distortion and noise through the roof. After investigating our LabVIEW code fairly deeply, I am wondering what the problem might be. I have compiled a number of screenshots that show the problem in more detail.

Background of the experiment: for these tests, both the analog and digital audio were generated by an Audio Precision System Two and passed directly into the respective line-level or digital audio inputs. Digital audio was tested over both coax and optical cable. In the sound card control software, the audio was routed directly from the input channel into WAVE IN L/R (via the Windows API, I assume). The sampling rate for the profile was 96 kHz, as was the sample rate set in all LabVIEW functions and in the AP digital generator.

ANALOG 20dBu 96kHz 16bit.jpg
In this test, everything looks fine. The audio input is at full scale for the E-MU's ADCs, and it exhibits the expected 16-bit performance (with dithering).

ANALOG 20dBu 96kHz 24bit.jpg
Now we instruct the driver to capture sound in 24 bits. Notice that the noise floor and THD+N rise considerably, and the effects of truncation become visible on the time-domain display.

ANALOG -20dBu 96kHz 16bit.jpg
Now we drop the input level to -20 dBu. The performance starts to look a little messy but is still acceptable. Note, however, the high peaks on the odd harmonics.

ANALOG -20dBu 96kHz 24bit.jpg
Now we try to capture at 24 bits. The effects of truncation are extreme at this low signal level.

ANALOG -60dBu 96kHz 16bit.jpg
Now we are at extremely low signal levels. Individual quantization levels can be seen on the signal, and dither is also present. Performance is still good.

ANALOG -60dBu 96kHz 24bit.jpg
However, when increasing the resolution to 24 bits (which should increase the number of quantization levels), our signal is reduced to a square wave. Obviously something is wrong.

DIGITAL 0dB 16bit 96kHz 16bit.jpg
Now on to the digital tests. We start at full scale, with the AP outputting a properly dithered 16-bit signal over an optical cable and the sound card set to receive in 16-bit mode. It looks good.

DIGITAL 0dB 16bit 96kHz 24bit.jpg
Using the same input, we change to 24-bit receive mode.

DIGITAL 0dB 24bit 96kHz 16bit.jpg
Now we set up the AP to output a properly dithered 24-bit signal at full scale. The dips in the frequency domain show us that something is wrong.

DIGITAL 0dB 24bit 96kHz 24bit.jpg
Receiving in 24-bit mode. Same story as before.

DIGITAL -90dB 16bit 96kHz 16bit.jpg
Now we decrease the amplitude to a low level. Well-implemented dither is clearly visible here.

DIGITAL -90dB 16bit 96kHz 24bit.jpg
However, receiving in 24-bit mode reduces the signal to a dithered square wave.

DIGITAL -90dB 24bit 96kHz 16bit.jpg
Here is the low-level signal with the AP generating a 24-bit signal. Dither is applied, but it vanishes in the E-MU 1616m. It seems the dither level has been changed; this is the cause of the dips we saw before.

DIGITAL -90dB 24bit 96kHz 24bit.jpg
And finally, we transmit and receive in 24 bits. Here are the results.

We have achieved similar results using several of your breakout boxes and soundcards. Attached are a
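One classic software-side cause of symptoms like these is mishandling the packing of 24-bit samples somewhere between the driver and the analysis code. This is only a guess about what is happening in this particular setup, but as a sanity check, here is a minimal Python sketch (the function names are my own) showing how packed 3-byte little-endian PCM must be sign-extended, and what happens when the low byte is simply dropped:

```python
# Hypothetical sanity check: correctly unpacking packed 24-bit PCM
# (3 bytes per sample, little-endian, signed) into Python ints.
# If a conversion step drops the low byte or skips sign extension,
# a low-level signal collapses into the coarse, truncated square
# wave seen in the -60 dBu / -90 dB screenshots.

def unpack_24bit(raw: bytes) -> list[int]:
    """Convert packed little-endian signed 24-bit PCM to ints."""
    samples = []
    for i in range(0, len(raw) - 2, 3):
        value = raw[i] | (raw[i + 1] << 8) | (raw[i + 2] << 16)
        if value & 0x800000:      # sign bit of a 24-bit word
            value -= 0x1000000    # sign-extend to a negative int
        samples.append(value)
    return samples

def truncate_to_16bit(sample_24: int) -> int:
    """What erroneous truncation looks like: discard the low 8 bits."""
    return sample_24 >> 8

# A small signal (within +/-255 counts) survives 24-bit unpacking
# intact but is flattened to 0 / -1 by the truncation step.
raw = bytes([200, 0, 0,        # +200
             56, 255, 255])    # -200 (24-bit two's complement)
print(unpack_24bit(raw))                                   # [200, -200]
print([truncate_to_16bit(s) for s in unpack_24bit(raw)])   # [0, -1]
```

If the LabVIEW side is reading the WAVE IN stream as 16-bit words while the driver delivers 24-bit containers (or vice versa), an effect very much like the screenshots would be expected.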
Hi, I work with movies and I need to analyse them frame by frame, so I save each frame as a .bmp; these pictures are U32. When I analyse the pictures, the function "Unflatten Pixmap" offers 1-bit, 4-bit, 8-bit, and 24-bit. I would like to know if the 24-bit option is the same as U32 RGB. Thank you for your help. Tonio
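To make the relationship concrete: a U32 pixel in this context carries its color in the low three bytes (0x00RRGGBB), so a 24-bit RGB pixel is the same color data without the unused high byte. A short Python sketch of the packing (the helper names are my own, not LabVIEW functions):

```python
# Illustration of how a U32 pixel value (0x00RRGGBB, high byte
# unused) relates to its 24-bit RGB components. Splitting and
# repacking are lossless, which is why 24-bit RGB and U32 RGB
# describe the same color information.

def u32_to_rgb(pixel: int) -> tuple[int, int, int]:
    """Split a 0x00RRGGBB value into (R, G, B) bytes."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

def rgb_to_u32(r: int, g: int, b: int) -> int:
    """Pack (R, G, B) bytes back into a 0x00RRGGBB value."""
    return (r << 16) | (g << 8) | b

print(u32_to_rgb(0x00FF8040))          # (255, 128, 64)
print(hex(rgb_to_u32(255, 128, 64)))   # 0xff8040
```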
Hello, I would like to know the best solution for an application with continuous sound acquisition at Fs = 44.1 kHz, signal processing (transform, autocorrelation, pitch processing?), and generation at Fs = 44.1 kHz. Currently, everything is implemented in LabVIEW using the internal PC sound card inputs/outputs (with the LabVIEW Sound palette) on Windows XP with a 3 GHz quad-core, but real-time operation is not guaranteed above Fs = 8 kHz. My questions are:
- I think that full-duplex sound cards with an internal driver allow high-frequency simultaneous input/output, but can this feature be managed with the LabVIEW Sound palette? If not, does VISA allow it, and are there any VISA drivers for full-duplex sound cards already written?
- Under LabVIEW RT (Real-Time), can I drive the internal sound card for my needs, and how? Does DAQmx enable control of its inputs/outputs, and how would I call it?
- Is another solution to use National Instruments voltage input/output devices with DAQmx under LabVIEW RT? If yes, are there card models with preprocessing for sound?
Bonus question: many of the sound treatments are implemented in MATLAB code (using MATLAB script nodes). Under LabVIEW RT, is the only solution to build DLLs from the MATLAB code and call them from LabVIEW?
Thank you in advance for your help.
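One architecture-level point that applies regardless of the hardware chosen: continuous acquisition usually survives heavy processing only when the two run in separate loops connected by a queue (the producer/consumer pattern, which LabVIEW supports directly with queues and parallel while loops). Here is a language-agnostic sketch of the idea in Python; the block size, run length, and the trivial "analysis" are illustrative stand-ins, not a real audio pipeline:

```python
# Sketch of the producer/consumer pattern commonly used for
# continuous audio work: one loop only acquires and enqueues
# fixed-size blocks, a second loop does the expensive processing,
# so a slow transform never stalls the acquisition loop.

import queue
import threading

FS = 44_100        # sample rate (Hz), per the question
BLOCK = 4_410      # 100 ms of samples per block (illustrative)
N_BLOCKS = 10      # finite run for this demo

def acquire(q: queue.Queue) -> None:
    """Producer: stand-in for the sound-input read loop."""
    for _ in range(N_BLOCKS):
        block = [0.0] * BLOCK   # placeholder for real samples
        q.put(block)
    q.put(None)                 # sentinel: acquisition finished

def process(q: queue.Queue, results: list) -> None:
    """Consumer: stand-in for transform/autocorrelation/pitch work."""
    while True:
        block = q.get()
        if block is None:
            break
        results.append(sum(block) / len(block))  # trivial "analysis"

q: queue.Queue = queue.Queue(maxsize=8)  # bounded queue = backpressure
results: list = []
t1 = threading.Thread(target=acquire, args=(q,))
t2 = threading.Thread(target=process, args=(q, results))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(results))   # 10 blocks processed
```

The bounded queue is the key design choice: if processing falls behind, the producer blocks instead of exhausting memory, which makes the overrun visible and tunable rather than silent.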