The Amplify effect has a side effect

When I use Audacity 2.0.6 to amplify a piece of music, the bitrate goes up (I checked it in Foobar2000). For instance, a piece of music (FLAC) went from 700 to 1419 kbps after an amplification of 0.8 dB. Moreover, if I import the same FLAC file (700 kbps) and then export it without modifying it, the bitrate goes up to 1407 kbps.

Is there a reason for that?

Who made the original FLAC? Assuming nothing is actually broken: FLAC is a compressed format, and the bitrate it achieves depends on the content of the show. Part of that show content is the dither signal that Audacity adds. Audacity works internally at 32-bit floating point, not 16-bit. When you export, assuming you come back down to 16-bit, Audacity adds a tiny noise signal to hide the quantization errors, and this can affect the size of the export.
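To make the idea concrete, here is a minimal sketch of dithered quantization from 32-bit float down to 16-bit. This is a textbook TPDF (triangular) dither, not Audacity's actual export code (Audacity offers several dither types, including shaped dither), and the 440 Hz sine and 44.1 kHz rate are just illustrative:

```python
import numpy as np

def float_to_int16_with_dither(x, rng=np.random.default_rng(0)):
    """Quantize float samples in [-1, 1] to 16-bit with TPDF dither.

    A sketch of the general technique, not Audacity's exact code:
    add ~1 LSB of triangular noise before rounding, so quantization
    error becomes uncorrelated noise instead of harmonic distortion.
    """
    lsb = 1.0 / 32768.0  # one 16-bit quantization step
    dither = (rng.random(x.shape) - rng.random(x.shape)) * lsb  # TPDF, +/- 1 LSB
    y = np.round((x + dither) * 32767.0)
    return np.clip(y, -32768, 32767).astype(np.int16)

# A pure sine quantized with dither: the per-sample error is tiny
# (a couple of 16-bit steps at most) but nonzero and noise-like --
# this is the "tiny noise signal" that ends up in the export.
t = np.arange(1000) / 44100.0                # 44.1 kHz assumed
sine = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # hypothetical test tone
q = float_to_int16_with_dither(sine)
err = q / 32767.0 - sine
print(abs(err).max())
```

That noise floor is inaudible at normal levels, but to the FLAC encoder it is extra incompressible data in the low bits, which is part of why the exported bitrate rises.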

Also, if you exported at 32-bit floating point, the show is very much larger than it was.

Audacity isn’t a WAV editor. It’s an audio production editor. Its goal is to sound good, not to have every bit in the right place.


1419 kbps is very high for FLAC format and suggests that you are probably using 24 bit FLAC. In the Export dialog, click on the Options button to see if it is set for 16 bit or 24 bit. 24 bit FLAC files are typically about double the size of 16 bit FLAC.
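Some back-of-envelope arithmetic shows why 1419 kbps points at 24-bit. Assuming 44.1 kHz stereo (the thread doesn't state the sample rate), the raw, uncompressed PCM bitrates are:

```python
# Raw PCM bitrate for stereo audio at a given sample rate and bit depth.
# 44.1 kHz is an assumption; FLAC compresses down from these figures.
def raw_kbps(sample_rate, bits, channels=2):
    return sample_rate * bits * channels / 1000.0

for bits in (16, 24, 32):
    print(f"{bits}-bit: {raw_kbps(44100, bits):.1f} kbps raw")
# 16-bit raw is 1411.2 kbps, so a 700 kbps 16-bit FLAC is roughly 50%
# compression. 1419 kbps at 24-bit is about 67% of the 2116.8 kbps raw
# rate: the dither noise in the extra low bits compresses poorly.
```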

You’re right, it was set to 24 bit. I changed it to 16.
Thank you.

About that file that went from 700 at 16 bit to 1419 kbps at 24 bit, I have deleted the 16 bit version. If I convert the 24 bit version back to 16 bit, will there be a loss in quality?

Well obviously you lose 8 bits per sample, but those are the “least significant” bits - in other words, those bits contribute least to the sound.
Since the recording was previously a 16-bit FLAC file, there will be virtually no meaningful data in those bottom 8 bits. When converting from a high bit depth to a lower bit depth (24 bits to 16 bits), there will be rounding errors in the least significant bit (the 16th bit) of the new format. Depending on how the conversion is done, the rounding errors may be spread out over the least significant 2 (or even 3) bits so as to minimise harmonic distortion, but regardless of the method, the rounding errors will be very small.
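As a rough sketch of how small those rounding errors are: dropping the bottom 8 bits of a 24-bit sample leaves a worst-case error of half a 16-bit step, which sits about 96 dB below full scale. This is simple truncation for illustration; real converters (and Audacity) usually add dither as discussed above:

```python
import numpy as np

def int24_to_int16(samples24):
    """Convert 24-bit integer samples to 16-bit by dividing out the
    bottom 8 bits (plain rounding; real converters usually dither)."""
    y = np.round(np.asarray(samples24, dtype=np.float64) / 256.0)
    return np.clip(y, -32768, 32767).astype(np.int16)

# Worst-case rounding error is half a 16-bit step: 128 counts out of
# 8388608 (24-bit full scale), i.e. roughly -96 dB below full scale --
# far below the noise floor of almost any recording.
err_db = 20 * np.log10(128 / 8388608)
print(round(err_db, 1))
```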

I would guess that if you convert the 24-bit FLAC files to 16-bit and then compare the two files by listening, you will not be able to hear any difference unless you turn the volume up really loud during very quiet parts of the recording. You may notice a slight difference in background noise if there are parts of the recording that are sufficiently quiet (such as at the very tail end of a fade-out). That difference may not be “worse” - it may even sound “better”, because all that you are really losing is the bottom 8 bits of the 24-bit version, which never existed in the original 16-bit version. My advice is to try it, and then listen very critically to the difference before deciding whether it’s worth the effort to re-convert.

I’ll have to check all my files, but I’ll take my time. Many thanks for the info. Best regards.