How to export a 32 bit float without losing quality?

Hi, it seemed that when I edited in 32-bit and then exported to 16-bit (WAV), it lost some sound quality. I especially noticed it when comparing a 32-bit export to a 16-bit one, but it seemed better when I changed the sample format in the Audacity project to 16-bit first and then exported. Is this real, or are my ears playing tricks on me?

I would just use 32-bit, but I don’t think it’s used online? I’m not sure, but where WAV is accepted they probably want 16-bit, and probably convert it to 16-bit during the upload process anyway?

Also, what is the sample format for MP3s? I have no way to tell, and there is never an option when exporting. Thanks.

I’m using Audacity 3.0 on Windows 7.

That’s possible. Pros, and especially audiophiles, fool themselves all the time! (See: What is a blind ABX test?)

“CD quality” (16-bit, 44.1kHz) is usually better than human hearing in blind ABX tests. If you start with a high-resolution file and down-sample to CD quality you probably won’t hear a difference.

MP3 is lossy compression. Data is thrown away to make a smaller file. A good-quality (high-bitrate) MP3 can often sound identical to the original. It’s “smart” in that it tries to throw away details that are masked (drowned out) by other sounds, but it is imperfect. And if you open and re-edit an MP3, the “damage” accumulates with every generation, so if you want MP3 you should compress ONCE, as the last step.

The main practical advantage of 32-bit floating point is that it can go over 0dB without clipping (distorting). So for example, you boost the bass in Audacity and Audacity might “show red” for potential clipping. But as long as you then lower the volume before exporting, the waveform is not clipped and everything will be OK. It also turns out that digital signal processing is “easier” in floating point (for the programmers).
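Here’s a minimal sketch of that headroom idea in plain Python (hypothetical helper name, not Audacity code): a float sample over full scale survives a gain reduction, but clips if converted straight to 16-bit.

```python
# Sketch: values above 1.0 (0 dBFS) are fine in 32-bit float,
# but clip when converted to 16-bit integer.

def to_int16(x):
    """Convert a float sample (-1.0..1.0 = full scale) to 16-bit int, clipping."""
    n = int(round(x * 32767))
    return max(-32768, min(32767, n))

boosted = 1.5             # a bass boost pushed this sample over 0 dBFS
print(to_int16(boosted))  # 32767 -> clipped, information lost

# In 32-bit float the 1.5 survives; lower the gain before export:
fixed = boosted * 0.5     # reduce volume by about 6 dB
print(to_int16(fixed))    # 24575 -> no clipping
```

The point is that `boosted * 0.5` still has the full waveform shape to work with; the clipped 16-bit version does not.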

If your file/data goes over 0dB and you export as 16 or 24-bit you’ll get clipping. Then you are likely to hear a difference.

With regular integer formats, 0dB represents the maximum (1) you can “count to” so you can’t go over 0dB. The numbers in a 24-bit file are bigger than those in an 8-bit file but at playback time everything is automatically scaled to match your DAC (digital-to-analog converter) so the 24-bit file is not louder.
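To make the “not louder” point concrete, here’s a small sketch (plain Python, hypothetical helper name): the same half-of-full-scale level is stored as very different integers at different bit depths, but scales back to the same playback level.

```python
# Sketch: integer sample values are bigger at higher bit depths, but
# both represent the same fraction of full scale, so playback loudness
# is identical once the DAC scales them.

def full_scale(bits):
    return 2 ** (bits - 1) - 1   # max positive integer value

level = 0.5  # half of full scale, about -6 dBFS
for bits in (8, 16, 24):
    raw = round(level * full_scale(bits))
    back = raw / full_scale(bits)   # what playback sees after scaling
    print(bits, raw, round(back, 3))
# 8-bit stores 64, 24-bit stores 4194304, but both play back at ~0.5
```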

If you want to know what “low resolution” sounds like, export as an 8-bit WAV. (If you do that, go into preferences and turn off dither. (2)) You’ll hear quantization noise, which is a “fuzz” on top of the signal. It’s a lot like regular analog noise in that it’s most noticeable with quiet sounds. But unlike analog noise, it goes away completely with pure digital silence.
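You can estimate how loud that fuzz is with a rough sketch (not Audacity’s exact pipeline): quantize a sine to an 8-bit grid with no dither and compare the signal level to the error level.

```python
import math

# Sketch: quantize a full-scale sine to 8-bit (no dither) and measure
# the quantization-noise level relative to the signal (SNR in dB).

N = 48000
signal = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(N)]
quantized = [round(s * 127) / 127 for s in signal]   # snap to 8-bit grid
noise = [q - s for q, s in zip(quantized, signal)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

snr_db = 20 * math.log10(rms(signal) / rms(noise))
print(round(snr_db, 1))  # roughly 50 dB; about 6 dB per bit of resolution
```

At 16 bits the same rule of thumb puts the quantization noise roughly 96 dB down, which is why you normally never hear it.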

With VERY FEW exceptions your DAC (playback) and ADC (recording) are integer devices so they are limited to 0dB.

Your “final production” shouldn’t go over 0dB, because the listener will clip their DAC if they play it at full digital volume. There’s also no point in making a file with more than 24-bit resolution, because it just makes a bigger file that gets down-converted when played.

Audio CDs are 16-bit integer.

MP3s use a kind of floating point but they don’t store individual samples so there is no defined “bit depth”. They are capable of going over 0dB without clipping (how far over may depend on the nature of the sound/data) and they can go quieter than CDs. When 16-bits “counts down” to zero and there is digital silence, MP3 can still contain information… They have more dynamic range than CDs.

(1) Digital dB levels are dBFS (decibels full scale). The 0dBFS reference is the “digital maximum” so digital dB levels are normally negative.

Sound levels in the air are measured in dB SPL (sound pressure level) so dB SPL is positive.

There is no standard calibration between the two, but there is a direct correlation. If you reduce the digital level by 3dB (3dB more negative) the loudness also drops by 3dB, etc.
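The digital side of that correlation is just the 20·log10 rule; a quick sketch of dBFS arithmetic:

```python
import math

# Sketch of the dBFS relationship: halving the amplitude is about -6 dB,
# regardless of what SPL that maps to on your playback system.

def dbfs(amplitude):
    """dB relative to full scale, where amplitude 1.0 = digital maximum."""
    return 20 * math.log10(amplitude)

print(round(dbfs(1.0), 1))               # 0.0 -> digital maximum
print(round(dbfs(0.5), 1))               # -6.0
print(round(dbfs(0.5) - dbfs(0.25), 1))  # 6.0 dB apart
```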

(2) Dither is added noise that’s supposed to sound better than quantization noise.
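Here’s a minimal sketch of what dither buys you (a simple TPDF dither at an 8-bit step size; not Audacity’s actual dither implementation): a signal smaller than one quantization step vanishes entirely without dither, but survives, buried in noise, when dither is added.

```python
import math
import random

# Sketch: a signal below one quantization step is erased by plain
# rounding, but preserved (as signal-plus-noise) with TPDF dither.
random.seed(1)
step = 1 / 127                       # 8-bit quantization step
signal = [0.3 * step * math.sin(2 * math.pi * n / 100) for n in range(10000)]

plain = [round(s / step) * step for s in signal]
dith = [round((s + (random.random() - random.random()) * step) / step) * step
        for s in signal]

print(max(abs(x) for x in plain))    # 0.0  -> signal erased
print(any(abs(x) > 0 for x in dith)) # True -> information preserved as noise
```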

That assumes you don’t turn the volume up just to hear the noise floor of CD audio; at that setting, a full-scale waveform starting while you’re listening to the noise would be loud enough to cause discomfort or pain to your ears.

The threshold of hearing is considered to be 0 dB SPL. CD audio, with properly shaped dither noise, can reach a 120 dB Signal-to-Noise Ratio (SNR). This means that if you play back CD audio at a volume where you can just hear the noise floor, the loudest possible sound from that audio will be at 120 dB SPL.

But wait, there’s more! A “very quiet” home listening environment is around 20 dB SPL, so to hear the CD noise floor, full-scale audio would be around 140 dB SPL, the high end of the threshold of pain according to Sound pressure - Wikipedia. But most home listening environments are at least 30 dB SPL, putting a full-scale waveform at 150 dB SPL, the same as a jet engine at 1 m, if your volume is loud enough to hear the CD noise floor in such a room. That should be just about loud enough to play the cannons in the 1812 Overture at their actual loudness. :slight_smile:
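As a sanity check on those numbers, here’s the textbook rule-of-thumb SNR arithmetic (6.02 dB per bit plus 1.76 dB for a full-scale sine). Plain 16-bit lands near 98 dB; the figure of roughly 120 dB quoted above comes from noise shaping pushing the dither noise into frequencies we hear poorly, which this sketch doesn’t model.

```python
# Rule of thumb: SNR of an ideal full-scale sine at a given bit depth.
bits = 16
snr_plain = 6.02 * bits + 1.76
print(round(snr_plain, 1))  # 98.1 dB for plain (unshaped) 16-bit
```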

I appreciate the reply/info, DVD Doug, but that aside, I don’t think I got my inquiry across well enough, or maybe it was answered without referencing it directly. Anyway, I now convert a song to 16-bit in the project before exporting it (to 16-bit), just in case there’s a difference, as opposed to exporting directly to 16-bit from 32-bit float. I did some testing and could find no difference except a 0.0001 difference in the RMS value (in the one test I did). So I don’t know; there may be a real difference, or the RMS tool or re-importing into Audacity may have created it. - Thanks, Ron.

WHAT I TESTED: a 32-bit float project exported directly to 16-bit, versus the same 32-bit float project converted to 16-bit in the project first and then exported to 16-bit.

Both exports were wav. Properties > size > bytes were exactly the same. Then I imported them back into an Audacity project and:

  • Spectrum analysis screenshots seemed exactly the same.

  • RMS values had a .0001 difference.
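For what it’s worth, that comparison can be sketched in plain Python with dither out of the picture so the conversion is deterministic (hypothetical `quant16` helper, not Audacity code):

```python
import math

# Sketch: quantizing float data to 16-bit once, versus converting to
# 16-bit and then "exporting" at 16-bit again. Without dither the two
# paths produce identical samples, since re-quantizing already-quantized
# data is a no-op.

def quant16(x):
    return max(-32768, min(32767, round(x * 32767)))

samples = [0.8 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]

direct = [quant16(s) for s in samples]                    # float -> 16-bit export
two_step = [quant16(quant16(s) / 32767) for s in samples] # to 16-bit, then export

print(direct == two_step)  # True
```

With dither enabled (Audacity’s default when reducing bit depth), each conversion adds a tiny amount of random noise, so the two paths can legitimately differ by a hair; that could plausibly account for a 0.0001 RMS difference on re-import.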