About 32 bit float

Okay, I have a King Bee 2 mic, a Zoom P4 (16 bit) as my interface, and Audacity as my DAW. Is there a bottleneck to keep me from recording vocal audio at 32 bit float?

Any reason you would want that? The microphone system is going to deliver 16-bit. If you pull it into Audacity, it will automatically convert to 32-float, unless you tell it not to.

You can then File > Export at 32-bit float. All you've really done is perfectly preserve the performance's 16-bit errors. They didn't go away.

And now you have enormous sound files only four people on earth can open.

Recording studios use plain 24-bit.


Not concerned about the size. I want 32 bit float for editing. Once I export, the product will be 16 or 24 bit.

The default sample format in Audacity is 32-bit float. When you import the file into Audacity, it will be treated as 32-bit float automatically (assuming default settings).

Oh… even imports will be 32 bit float? I would imagine that there’s no “unclippable” benefits in such a case, eh?

Conversion to 32 Float doesn’t fix anything, and it can even cause some problems.

Sooner or later you’re going to want to convert your show back to 16-bit and the conversion can generate distortion. So Audacity adds a tiny bit of dither (noise) to the conversion process.

32 Floating inside Audacity is insanely handy because it doesn’t overload.

To take Audiobook Mastering as an example: the middle step sets the loudness (RMS) of the voice, and this can overload the peaks. The third step is a gentle peak squasher to bring them back down.

You’d never get away with that outside of 32-Float.
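That point is easy to demonstrate outside Audacity. Below is a minimal numpy sketch (not Audacity's actual processing chain; the gain figures and the crude one-line limiter are invented for illustration) of a float track surviving a boost that pushes its peaks well past 0 dBFS:

```python
import numpy as np

# A quiet test signal: a sine wave peaking at 0.25 (about -12 dBFS).
t = np.linspace(0, 1, 44100, endpoint=False)
signal = 0.25 * np.sin(2 * np.pi * 440 * t)

# Step 1: a big RMS-style gain boost. Peaks now exceed 1.0 (0 dBFS) --
# 32-bit float simply stores the over-range values.
boosted = signal * 6.0
assert boosted.max() > 1.0

# In fixed-point (16- or 24-bit) storage the overs would be clipped and lost:
clipped = np.clip(boosted, -1.0, 1.0)

# Step 2: a crude "peak squasher" (a plain gain reduction standing in for a
# real limiter) brings the float version back under 0 dBFS losslessly:
limited = boosted * (0.95 / np.abs(boosted).max())

# The float path preserved the waveform shape; the clipped path did not.
print(np.allclose(limited / limited.max(), signal / signal.max()))
```

The point of the sketch: in float the intermediate overload is reversible, while in fixed-point the clipping in `clipped` destroys the peaks permanently.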


I get the suspicion that you really don’t like 32 bit float.

Dithering on 16 bit audio is mostly academic in nature. Using 32 bit float for editing is perfectly fine; if you use anything else, every time you apply an effect, Audacity needs to convert 16 to 32 bit, apply the effect, and convert back to 16 bit.
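That repeated convert-down step is where errors creep in. Here's a rough numpy simulation (the gain values and the undithered rounding function are invented for the demo; Audacity's real conversions also add dither) comparing ten edits stored at 16-bit against the same edits done in float with a single rounding at export:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_16bit(x):
    """Round to the nearest 16-bit step (no dither), kept in float for clarity."""
    return np.round(x * 32767) / 32767

# A pretend 16-bit recording: random audio-like samples, already quantized.
recording = to_16bit(rng.uniform(-0.5, 0.5, 48000))

# Ten gain edits. In a 16-bit project each edit means: convert up, process,
# round back down to 16-bit -- and the rounding errors compound.
gains = [1.1, 0.9] * 5
fixed = recording.copy()
for g in gains:
    fixed = to_16bit(fixed * g)

# In a 32-bit float project the track is only rounded once, at final export.
floating = to_16bit(recording * np.prod(gains))

# Compare both against the mathematically exact result of the same edits.
ideal = recording * np.prod(gains)
print(np.abs(fixed - ideal).max())     # accumulated error, several steps
print(np.abs(floating - ideal).max())  # at most half a quantization step
```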

Leaving the default as the default - i.e. everything at 32 bit float - is fine.

This issue is nuanced.

Other than absolute silence, (PCM-encoded) digital audio always has some amount of noise, by virtue of the fact that analog amplitude values are quantized to discrete binary values. The deviation of the binary value from the theoretically correct analog value is “noise”.

The more bits per sample, the closer the binary values can be to the theoretical analog values they represent. More bits means greater accuracy, thus less error, thus less quantization noise.

Converting from a high number of bits per sample to a smaller number of bits per sample reduces the accuracy, thus increases the rounding errors and thus increases the noise compared to the higher resolution version.

Considering the case of converting 32-bit float to 16-bit:
32-bit float is so precise that we can say the amount of error is effectively zero (vanishingly small), whereas 16-bit values are only accurate to the nearest 16-bit step (around 0.0015% of full scale, or -96 dB). This noise is spread across the entire frequency range from 0 Hz to half the sample rate.
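Those figures fall straight out of the step size; a quick back-of-the-envelope check in Python:

```python
import math

# 16-bit PCM has 2**16 = 65536 possible levels, so the quantization step
# relative to full scale is 1/65536, i.e. roughly 0.0015%.
step = 1 / 2**16
print(f"step size: {step * 100:.4f}% of full scale")  # ~0.0015%

# Expressed in decibels, that step sits at about -96 dB:
print(f"level: {20 * math.log10(step):.1f} dB")       # ~ -96.3 dB

# 32-bit float has a 24-bit mantissa, so even ignoring the exponent's extra
# range its quantization floor is around -144 dB -- effectively zero:
print(f"float mantissa floor: {20 * math.log10(1 / 2**24):.1f} dB")
```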

Applying dither adds a tiny amount of randomization to the least significant bits, which has the effect of pushing the noise away from the most easily heard frequency range into the upper frequency range where human hearing is less sensitive. Thus the amount of digital noise in the range 0 to 5 kHz is dramatically reduced, at the expense of a slight increase in noise in the very high frequency range (mostly above 12 kHz when using shaped dither).
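A small numpy sketch of why dither helps. It uses plain TPDF (triangular) dither rather than Audacity's shaped dither, so it doesn't show the noise being pushed upward in frequency, but it shows the more fundamental effect: rounding error that was tonal distortion becomes benign, noise-like hiss:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 48000
t = np.arange(n) / n
step = 1 / 32767  # one 16-bit quantization step

# A very quiet sine, only a few steps tall -- the worst case for plain rounding.
signal = 3.5 * step * np.sin(2 * np.pi * 440 * t)

# Round straight to 16-bit: the error repeats with every cycle of the signal,
# so its energy piles up at harmonics of 440 Hz (audible distortion).
plain = np.round(signal / step) * step

# TPDF dither: add triangular noise of +-1 step *before* rounding.
# The error becomes random noise, uncorrelated with the signal.
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(signal / step + tpdf) * step

def spectral_crest(err):
    """Peak-to-average of the error spectrum: high = tonal, low = noise-like."""
    spectrum = np.abs(np.fft.rfft(err))
    return spectrum.max() / spectrum.mean()

print(spectral_crest(plain - signal))     # large: error is tonal distortion
print(spectral_crest(dithered - signal))  # small: error is flat hiss
```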

Leaving the default as the default - i.e. everything at 32 bit float + shaped dither - is fine.
