Here’s the thing: I changed Audacity’s default bit depth from 32-bit float to 16-bit (I did it not knowing what I was doing), and then I edited a FLAC file that had a bit depth of 24 bits (while editing I noticed that the track said 16-bit) and later exported the audio as 24-bit.
Did I mess up? I mean, the audio was 24-bit originally, but my settings might have downgraded it to 16-bit, and then exporting it upscaled it back to 24-bit. So I guess the real question is: does Audacity change the bit depth while editing?
P.S. sorry if my english is bad, it’s not my native language.
Audacity works internally in extremely high quality 32-bit float.
When you set the project’s bit format to 16 or 24 bit, then Audacity converts the audio after processing to 16 / 24 bit as you told it to. In the case that you describe, the round trip from 24-bit → 16-bit → 24-bit will have truncated the sample values from 24-bit to 16-bit and added a tiny amount of dither noise. Theoretically this would slightly reduce the sound quality, but in practice it will probably not be noticeable (or hardly noticeable).
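The round trip described above can be sketched with plain integers. This is my own toy illustration (not Audacity’s actual code, and it ignores dither): truncating to 16-bit throws away the low 8 bits, and padding back to 24-bit can only fill them with zeros.

```python
# A minimal sketch of the 24-bit -> 16-bit -> 24-bit round trip,
# using plain integer sample values and simple truncation (no dither).

def to_16bit(sample_24: int) -> int:
    """Truncate a 24-bit sample to 16 bits by dropping the low 8 bits."""
    return sample_24 >> 8

def back_to_24bit(sample_16: int) -> int:
    """Pad a 16-bit sample back out to 24 bits; the low 8 bits become zero."""
    return sample_16 << 8

original = 0x12AB34          # an arbitrary 24-bit sample value
round_trip = back_to_24bit(to_16bit(original))

print(hex(round_trip))       # 0x12ab00 -- the low byte is gone for good
print(original - round_trip) # 52 (0x34), the truncation error for this sample
```

The error per sample is tiny (at most one 16-bit step), which is why in practice the quality loss is usually inaudible.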
I would highly recommend using the default 32-bit float format for all projects so as to achieve the best sound quality.
As per Steve above, 32-bit float inside Audacity doesn’t overload or clip. If you apply an effect that pushes the blue waves over 100% (too loud), it’s a snap to apply another effect to bring everything back down to normal, no harm done. Audiobook Mastering Suite works this way.
If you tried that with the other formats, you would get permanent sound damage.
I don’t know, but if you find something somewhere else that tells you how, post back.
Remember, this is a forum. Users helping each other.
Maybe, but if you used dither it will put data (noise) in some of those bits.
Or if you do any processing (even change the volume) there will be “rounding” and those extra bits will be filled.
If you make a FLAC file it can give you a hint because it doesn’t waste space on zeros…
A 24-bit WAV is exactly 50% bigger than the 16-bit file (50% higher bitrate).
(Assuming no embedded artwork.)
FLAC typically gives you a file 60-70% of the WAV file size.
FLAC isn’t as predictable but the ratio should be similar. (A 24-bit FLAC is usually about 50% bigger than a 16-bit FLAC.)
A “fake” 24-bit FLAC that’s actually 16-bits (with no dither) will be the same size as a FLAC made from the 16-bit WAV.
So… If the 24-bit FLAC is less than half the size of the WAV, it’s probably “fake” with only 16-bits of real data.
If the “fake” FLAC is dithered, it should be somewhere in-between. It’s easy to see that in a careful-scientific experiment, but since you can’t predict how much compression you’re going to get with FLAC, the results may not be so clear if you don’t have all of these files to compare.
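The size arithmetic above can be sketched in a few lines. The duration, channel count, and sample rate below are my own assumptions, and the FLAC ratio is just the rule-of-thumb estimate quoted above, not an exact value:

```python
# Rough size arithmetic for the WAV/FLAC comparison described above,
# assuming 60 seconds of stereo 44.1 kHz audio and ignoring headers/artwork.

SECONDS = 60
CHANNELS = 2
RATE = 44100

wav16 = SECONDS * CHANNELS * RATE * 2   # 16-bit = 2 bytes per sample
wav24 = SECONDS * CHANNELS * RATE * 3   # 24-bit = 3 bytes per sample

print(wav24 / wav16)                    # 1.5 -> exactly 50% bigger

# Rule of thumb: a genuine FLAC lands around 60-70% of its WAV's size.
# A "fake" 24-bit FLAC (really 16-bit data, no dither) compresses to about
# the size of a 16-bit FLAC, which is well under half the 24-bit WAV.
fake_flac_estimate = 0.65 * wav16
print(fake_flac_estimate < wav24 / 2)   # True for these numbers
```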
I have a plug-in called Bitter, but it doesn’t seem to be working in the current version of Audacity; it’s showing the wrong bit depth. And even if it works, if you have dither it will be using all of the bits and you’d get the “wrong answer”.
As DVDdoug mentioned, exporting as 24-bit from Audacity will usually add dither, and because the output format is 24-bit, the dither will be 24-bit. That will make the expected “padding” bits non-zero.
However, if a 16-bit file is converted to 24-bit without dither, then yes, you can see the zeros if you open the 24-bit WAV file in a hex editor.
The start of a WAV file contains the WAV “header”. After the header you will notice that the audio data is in groups of 3 bytes (8 bits to a byte). WAV stores samples least-significant-byte first (little-endian), so the padding zeros land in the first byte of each group (written as 00 in hexadecimal):
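You can also do that check programmatically. This is a small sketch of the same idea (the file name and sample values are made up): write a 24-bit WAV whose samples are really 16-bit values padded with a zero low byte, then confirm that, since WAV stores samples little-endian, the first byte of every 3-byte group is 00.

```python
# Sketch: build a "fake" 24-bit WAV (16-bit data, no dither) with the
# stdlib wave module, then verify the zero padding byte in each sample.
import struct
import wave

# 16-bit values shifted into the top bytes of a 24-bit sample;
# pack as signed 32-bit little-endian and keep the low 3 bytes.
frames = b"".join(
    struct.pack("<i", s << 8)[:3]
    for s in range(-100, 100)
)

with wave.open("padded24.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(3)          # 3 bytes = 24 bits per sample
    w.setframerate(44100)
    w.writeframes(frames)

with wave.open("padded24.wav", "rb") as w:
    data = w.readframes(w.getnframes())

# The low byte -- the first of each 3-byte group -- should be zero.
print(all(data[i] == 0 for i in range(0, len(data), 3)))   # True
```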
Thanks guys for the replies. My problem, then, was simply bad settings; I’ll change the default bit depth back to 32-bit float.
I found somewhere this software called Music Scope (I’m using the free test version, which only inspects the first 30 seconds of the audio file). So, to tell if an audio file has unused bits, just load the audio file (you can drag and drop it), click “Bit Monitor” and then click “Play”; the unused bits will appear as blue rectangles.
Now, I realized that every. single. audio file that I exported from Audacity has dithering (I didn’t know what it was, but I had wondered why the spectrograms looked filled with tiny dots everywhere), so I have a new question: does dithering reduce the quality of a FLAC file?
I can feel you building up to the assumption that dither is always bad and you should always turn it off. First, as I understand it, the current dither design was chosen so the “noise” doesn’t land in the places where the ear is most sensitive. So you’re not likely to hear it, even though it’s doing its job.
But you shouldn’t turn it off (you can; it’s a setting), because going from a high-precision sound (32-bit float) to a lower one (16-bit) can produce errors. If you’ve offended the sound gods, the errors line up and make new, clearly audible, annoying sounds that are not part of your show.
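Here is a toy numeric illustration of that point (my own example, not Audacity’s actual dither algorithm): quantizing a very quiet signal without dither makes the error line up with the signal, while adding simple triangular (TPDF) dither before rounding turns the error into benign noise that preserves the detail.

```python
# Toy demo: quantize a sine quieter than one 16-bit step, with and
# without TPDF dither. Values and parameters are illustrative only.
import math
import random

random.seed(0)
STEP = 1 / 32768                   # one 16-bit quantization step

# A 440 Hz sine at 40% of one quantization step -- below the "floor".
signal = [0.4 * STEP * math.sin(2 * math.pi * 440 * n / 44100)
          for n in range(2000)]

# Plain rounding: every sample rounds to zero -> the tone vanishes.
plain = [round(s / STEP) * STEP for s in signal]

# TPDF dither: add triangular noise of +/- one step before rounding.
dither = [round((s + (random.random() - random.random()) * STEP) / STEP) * STEP
          for s in signal]

print(all(p == 0 for p in plain))  # True: undithered output is pure silence
print(len(set(dither)) > 1)        # True: dithered output keeps the detail as noise
```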