Importing RAW data: Getting only a square signal!

Dear Audacity Forum,

I am quite new to audio and audio ADCs, but I am progressing step by step.

Let me explain the situation:

I have a function generator producing a 1 kHz sine wave with a peak-to-peak amplitude of around 0.5 mV. This sine wave goes to the input of my audio ADC (to both channel inputs, CH1 and CH2), which collects data according to the following settings:

  • Sampling frequency = 384 kHz (I get 4 bytes of data per sample period → 2 bytes/channel)
  • Channels: 2 (Stereo)
  • Bit depth: 16 bit signed per channel
  • Data saved as Little Endian → In the raw data file: LSB(CH1) MSB(CH1) LSB(CH2) MSB(CH2) LSB(CH1) MSB(CH1) LSB(CH2) MSB(CH2) LSB(CH1) …
  • I save 4096 bytes of information in a file (2048 bytes/channel).

Considering the sampling frequency (384 kHz), the duration of the “sound” is approximately 2.67 ms: (1/384 kHz) × (4096 bytes / (4 bytes/sample)) = 2.67 ms. So, as far as I understand, the length of the raw data file should be 2.67 ms in Audacity. Am I right?

I have the following issues with Audacity:

  • When I try to import the raw data file into Audacity, if I select “Signed 16-bit PCM”, I get a “weird” track of 2.67 ms × 2 duration. Then, if I select “Signed 32-bit PCM”, I get a square signal in both channels (with the same frequency as my input sine wave → 1 kHz) and the correct duration of 2.67 ms; but it is a square wave, and I expected a sine wave.
    (Note: for some reason, despite the selected settings, the track is labelled “32-bit float” on the left; no matter how I change it manually, the track does not change at all.)
    I am trying to find where the issue is; I am not sure if I am saturating the input of my audio ADC (and getting a square wave as a consequence?).

Also, why do I get the expected sound duration (2.67 ms) if I select 32-bit? Does it mean that the “32-bit” + “stereo” options = 16 bits/channel?

Please, find the raw data file below for your reference.
2CH_16bitCH_Signed_LittleEndian_384kHz.txt (8 KB)
Thanks in advance for your help.


That’s normal. By default, Audacity will create a track capable of supporting 32-bit float samples. The number shown in the panel on the left only shows the “default” format for the track, but the data in the track can have a different format. This is an intended feature and is very useful for “audio”, though it does look a bit strange when working with other kinds of signals.

I’ll look at your data shortly and post again.

Your data look wrong.

Let’s consider what we would expect the data to look like:

A 1 kHz sine wave rises from 0 to maximum, falls to negative maximum, then back up to 0, in 0.001 seconds.

At 384 kHz, 0.001 seconds = 384 samples

Looking at the 16-bit values (hex 0000 to FFFF), we would expect alternate values to rise gradually from 0 to maximum over 384/4 = 96 samples (1/4 cycle).

Here I’ve highlighted alternate 2-byte pairs of an actual 1 kHz stereo sine wave, which shows the expected gradual increase:

Converting to decimal we have:

  • 0000 = 0
  • 01AD = 429
  • 035A = 858
  • 0507 = 1287

which is as expected.
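Those expected values can be reproduced with a short sketch. Note that the peak amplitude of roughly 26,220 counts is an assumption, reverse-engineered from the ~429-count step between samples; the real recording’s amplitude is not stated in the thread.

```python
import math

SAMPLE_RATE = 384_000   # Hz
FREQ = 1_000            # Hz, sine wave from the function generator
AMPLITUDE = 26_220      # assumed peak in 16-bit counts, inferred from the ~429 step

def sine_sample(n):
    """Signed 16-bit sample n of a 1 kHz sine sampled at 384 kHz."""
    return round(AMPLITUDE * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))

# Near the zero crossing the sine rises almost linearly, ~429 counts per sample:
first = [sine_sample(n) for n in range(4)]
print(first)  # → [0, 429, 858, 1287]
```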

Now let’s look at your data:

Converting to decimal we have:

  • 3641 = 13889
  • 3539 = 13625
  • 3441 = 13377
  • 3441 = 13377

which is not what would be expected.

The file that you posted is 8,192 bytes (2 channels of 4096 bytes).
16 bits = 2 bytes.

We can calculate for 4096 bytes in one channel with 2 bytes per sample:
(4096 / 384000) / 2 = 0.005333333

or we can calculate for 8,192 bytes in 2 channels with 4 bytes per sample (2 bytes per channel):
(8192 / 384000) / 4 = 0.005333333

Either way, the expected duration is 5.333 ms, which is what I see when importing at 384 kHz, stereo, 16 bits per sample.
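The duration arithmetic above can be written as a small helper (a sketch; the parameter defaults simply restate the settings from this thread):

```python
def pcm_duration(num_bytes, sample_rate=384_000, channels=2, bytes_per_sample=2):
    """Duration in seconds of a raw PCM capture: bytes / (rate * channels * bytes/sample)."""
    return num_bytes / (sample_rate * channels * bytes_per_sample)

print(pcm_duration(8192))  # 8,192-byte file → ~0.00533 s (5.333 ms)
print(pcm_duration(4096))  # 4,096-byte file → ~0.00267 s (2.667 ms)
```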

What exactly is that? If it’s hardware, what’s the make / model?

Make and model?

How are you feeding the output of the function generator into the A/D converter?

How exactly are you saving the data from your A/D converter?

Hi Steve,

Thanks for your reply.

Sorry, what do you mean when you say I posted an 8192-byte file? I posted a 4096-byte file (2048 bytes/channel).

The last address of my file is 4092 (0xFFC).
So it should be 4096 bytes in 2 channels with 4 bytes per sample (2 bytes per channel):
(4096 / 384000) / 4 = 0.0026666666 s

Please, let me know if this is correct.



Hi Steve,

Please, can you tell me the addresses of those values? I cannot find them in my file.


Hi Steve,

Please, find the information below:

  • Function Generator: “Jupiter 500” from Black*Start
  • Audio ADC Converter: “TLV320ADC6120” from Texas Instruments

The output from the function generator goes directly to the ADC Ch1 and Ch2 Inputs. Single-Ended for both and DC Coupling.



I mean that the file size of “2CH_16bitCH_Signed_LittleEndian_384kHz.txt” is 8,192 bytes.
(Look at the file properties).

That’s not what I’m seeing.
Opening “2CH_16bitCH_Signed_LittleEndian_384kHz.txt” in a hex editor is showing the final address as 1FFF, meaning a total of hex 2000 bytes = dec 8192 bytes, which agrees exactly with the file properties.


Address Value
0000    41
0001    36

0004    39
0005    35

0008    41
0009    34

000C    41
000D    34
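Pairing those dump bytes LSB-first (little endian) reproduces the decimal values quoted earlier:

```python
# Little-endian pairing of the hex dump above: the byte at the lower
# address is the LSB, the byte at the next address is the MSB.
pairs = [(0x41, 0x36), (0x39, 0x35), (0x41, 0x34), (0x41, 0x34)]
values = [lsb | (msb << 8) for lsb, msb in pairs]
print(values)  # → [13889, 13625, 13377, 13377]
```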

Hi Steve,

This is very interesting. The point is that I saved the data from the UART in a .txt file, and yes, you are right: according to the properties, it says 8192 bytes (double the expected size!). What I did:

I copied the raw data into a hex editor again and saved it. Now the file looks different. If I import it as PCM 16-bit signed little-endian, I get the correct duration for 4092 bytes (2046 bytes/channel): 2.67 ms. I don’t see the 1 kHz sine wave though…
Please, can you have a look at the file again and let me know your thoughts?
2CH_16bitCH_Signed_LittleEndian_384kHz_2.txt (4 KB)
Thanks in advance.


Still not expected values for a sine wave.

Perhaps try feeding some DC voltages into one channel of the ADC to see if you get the expected values.

Audio ADC Converter: “TLV320ADC6120” from Texas Instruments

That’s just a chip! I assume you have some kind of A/D board…

And, you’re using special data acquisition software? You’re not recording with Audacity, right?

Can it read negative voltages? I think a lot of ADCs have to be biased, and then the bias can be subtracted out. That doesn’t appear to be the problem here, but as you may know, bad things can happen when you mix and match signed and unsigned integers. And bias would add another layer of confusion.

I wonder if you’re “missing data”… In audio we call that a “dropout” or a “glitch”, and it’s related to the multitasking operating system.

Is there a buffer (hardware or software)?

Do you have an oscilloscope? Are you sure the function generator is working?

A couple of suggested experiments -

Try a lower sample rate. That can help with dropouts.

Try DC (positive & negative). Just use a battery if you don’t have a power supply handy.

Try a square wave. DC or a square wave is a lot easier to visualize from the data. A sine wave “looks random” when the signal is not synchronized or correlated to the sample clock. You won’t get a perfect square wave… There will be noise and probably ringing but it’s still better than a sine wave.

Try one channel (“mono”). Again, that’s just a lot easier to visualize than interleaved stereo.

A lot of people doing research use MATLAB, or I believe there are free MATLAB clones. You can do all kinds of data analysis and graphing without being locked-into an audio editing paradigm.

Hi Steve,

Thanks for your reply.

Just to understand how Audacity imports the audio data: please let me know if the file structure below is correct:

PCM_16bit_Signed_LittleEndian_Stereo (Sample Rate was 384kHz)
File Codification to Audacity Plots.JPG
Thanks in advance.



*Last sample must be 1024, not 1023.


Audacity RAW import assumes that stereo data is interleaved:

Left sample, right sample, left sample, right sample, …

Little Endian:

Least significant byte, most significant byte.

So for stereo, little endian:
Llsb = left, least significant byte.
Rlsb = right, least significant byte.
Lmsb = left, most significant byte.
Rmsb = right, most significant byte.

Llsb, Lmsb, Rlsb, Rmsb, Llsb, Lmsb, Rlsb, Rmsb, …

Example, 4 byte sequence:
BA EE 00 00
Left channel: -4422
Right channel: 0
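That de-interleaving can be sketched in a few lines using Python’s standard struct module (where "<h" denotes a little-endian signed 16-bit integer):

```python
import struct

def split_stereo_s16le(raw: bytes):
    """Split interleaved signed 16-bit little-endian stereo bytes into (left, right) sample lists."""
    n_frames = len(raw) // 4                         # 4 bytes per stereo frame
    samples = struct.unpack("<%dh" % (2 * n_frames), raw[: 4 * n_frames])
    return list(samples[0::2]), list(samples[1::2])  # even indices = left, odd = right

# The 4-byte example sequence from above:
left, right = split_stereo_s16le(bytes.fromhex("BAEE0000"))
print(left, right)  # → [-4422] [0]
```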

As is usual for representing audio, the signed 16-bit range -32,768 through +32,767 is normalized to a range -1 to (almost) +1 by dividing the signed integer value by 32,768.

Converting the signed 16-bit value to dB is:

20 * log | (signed-16-bit-val / 32768) |

hex CB B1 = signed integer -20021
20 * log | -20021 / 32768 |
= 20 * log 0.610992432
= -4.279 dB
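The normalization and dB conversion above can be checked with a short sketch:

```python
import math

FULL_SCALE = 32768  # divisor normalizing signed 16-bit samples to the range -1 to (almost) +1

def s16_to_db(value):
    """dB relative to full scale for a nonzero signed 16-bit sample: 20*log10(|v/32768|)."""
    return 20 * math.log10(abs(value) / FULL_SCALE)

# hex CB B1 (little endian) = signed integer -20021:
print(round(s16_to_db(-20021), 3))  # → -4.279
```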