Just read the User Manual and have one question I’d like to ask. I am importing 32-bit audio samples (using a Macro), processing them, and then exporting them as 16-bit. Not a problem: I simply set Audacity to 16-bit and it exports the samples as 16-bit. So far so good.
The thing is that I am doing Normalization and would like to preserve every bit of data. So the question is: during the Macro, when the WAVs are being imported, does Audacity FIRST convert them to 16-bit and then apply Normalization, or does the 32->16-bit conversion happen during the export phase?
My macro is two commands: 1) normalize to 0dB, 2) export wav
Someone will definitely point it out if I am wrong.
When Audacity imports audio, it converts that audio into 32-bit float format, and uses that for all the audio manipulation. When you export, it exports in the format you specify.
Right! Audacity uses 32-bit floating point internally. Any conversion from 16 or 24 bits to 32-bit float and back is lossless.
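To make that concrete, here’s a minimal sketch in Python/NumPy (not Audacity’s actual code, and the divide-by-32768 scaling is just an assumption for the demo) that pushes every possible 16-bit sample value through a 32-bit float round trip and confirms nothing changes:

import numpy as np

# All possible 16-bit sample values
samples_16 = np.arange(-32768, 32768, dtype=np.int16)

# "Import": scale to 32-bit float in the -1.0 .. 1.0 range
as_float = samples_16.astype(np.float32) / 32768.0

# "Export": scale back and round to the nearest 16-bit integer
back_to_16 = np.round(as_float * 32768.0).astype(np.int16)

print(np.array_equal(samples_16, back_to_16))  # True: the round trip is lossless

Every 16-bit integer fits exactly in a 32-bit float’s 24-bit mantissa, which is why the round trip can’t lose anything.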
Do you know about dither? (I believe it’s enabled by default.) Dither is added noise that’s supposed to sound better than the “natural” quantization noise.
The “rule” is to dither whenever you reduce the bit depth. If you record or open a 16-bit file and export it as 16-bit, dither should (usually) be disabled.
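For illustration, here’s a rough Python/NumPy sketch of what dithering on bit-depth reduction means, using simple TPDF (triangular) noise of roughly one LSB. Audacity’s “Shaped” dither is more sophisticated than this, so treat it as the idea rather than what Audacity actually does:

import numpy as np

def to_16_bit(audio, dither=True, rng=None):
    # Quantize float audio (-1.0 .. 1.0) down to 16-bit integers
    if rng is None:
        rng = np.random.default_rng(0)
    scaled = audio * 32767.0
    if dither:
        # TPDF dither: the sum of two uniform values, about +/- 1 LSB in total
        noise = rng.uniform(-0.5, 0.5, scaled.shape) + rng.uniform(-0.5, 0.5, scaled.shape)
        scaled = scaled + noise
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)

# A very quiet 1 kHz tone: with dither the quantization error becomes benign
# hiss; without it the error stays correlated with the signal (distortion).
t = np.arange(44100) / 44100.0
tone = 0.0005 * np.sin(2 * np.pi * 1000.0 * t)
dithered = to_16_bit(tone, dither=True)
undithered = to_16_bit(tone, dither=False)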
It’s not a big deal because you can’t hear dither or quantization noise at 16 or 24-bits under normal listening conditions. Of course you can experiment if you wish.
If you want to know what quantization noise sounds like, export as 8 bits with no dither. It’s like a “fuzz” on top of the signal, and like regular analog noise it’s worse (more noticeable) at low levels. But unlike regular noise, it goes away completely with digital silence. This is what “low resolution” sounds like.
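If you want to try that experiment outside Audacity, here’s a hedged sketch using Python’s standard wave module (the fading test tone and the file name are made up for the demo; 8-bit WAV is unsigned, so digital silence sits at 128):

import wave
import numpy as np

def export_8_bit_no_dither(samples, path, rate=44100):
    # 8-bit WAV is unsigned PCM: values 0..255, with 128 as digital silence
    quantized = np.clip(np.round(samples * 127.0) + 128, 0, 255).astype(np.uint8)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)    # mono
        wav.setsampwidth(1)    # 1 byte per sample = 8-bit
        wav.setframerate(rate)
        wav.writeframes(quantized.tobytes())

# A tone that fades to silence makes the effect obvious: the "fuzz" gets more
# noticeable as the level drops, then vanishes at digital silence.
t = np.arange(3 * 44100) / 44100.0
fade = np.linspace(1.0, 0.0, t.size)
export_8_bit_no_dither(0.5 * fade * np.sin(2 * np.pi * 440.0 * t), "fuzz_demo.wav")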
If something was recorded with a microphone (or almost any analog source), the acoustic or analog noise is usually worse than quantization noise or dither, so the recording can be considered “self-dithered” and there’s no need to add dither.