(Hopefully this isn’t a well-worn topic; it wasn’t obvious when I searched. If it is, just point me to the thread, thanks.)
Been using Audacity forever, and occasionally when I import the audio from a video file and use the Amplify effect, the default it offers is a negative dB value, implying that the file as imported somehow has levels above clipping. How is this possible?
My best guesses:
• It’s produced by the decompression process during the import cycle, and will be clipped during export (unless modified).
• Audacity has some sort of “safety buffer” for volume, and the “100% scale” of the waveform is actually 95% (or whatever) of the theoretical max level.
• Some sort of bug or feature.
Thanks in advance for any pointers or explanation that you can provide.
Lossy compression changes the wave shape, with some peaks ending up higher and some ending up lower. If you rip a CD to MP3, it’s not uncommon for a CD that’s normalized to 0 dB peaks to produce an MP3 that goes slightly over 0 dB.
If the file goes over 0 dB and you play it at “full digital volume”, your DAC will clip. But I’ve never heard of a case where that slight clipping was audible. (If you hear compression artifacts, you’re probably hearing something else, and lowering the volume won’t help.) Some people do normalize to -1 dB or so before making an MP3.
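If you want to see the mechanism for yourself, here’s a rough numpy sketch. It doesn’t run a real MP3/AAC encoder; it just band-limits a full-scale square wave, but it’s the same kind of thing: throw away frequency content and the reconstructed waveform rings past 0 dBFS.

```python
import numpy as np

# One second of a 0 dBFS square wave: every sample is +/-1.0.
fs = 44100
t = np.arange(fs) / fs
square = np.sign(np.sin(2 * np.pi * 100 * t))

# Crude stand-in for a lossy codec: discard everything above 2 kHz.
spectrum = np.fft.rfft(square)
freqs = np.fft.rfftfreq(len(square), 1 / fs)
spectrum[freqs > 2000] = 0
reconstructed = np.fft.irfft(spectrum, len(square))

peak = np.max(np.abs(reconstructed))
print(f"peak after band-limiting: {peak:.3f} ({20 * np.log10(peak):+.2f} dBFS)")
# Gibbs ringing pushes the peak to about 1.09 (~ +0.7 dBFS) even though
# the original never exceeded 0 dBFS.
```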
• Audacity has some sort of “safety buffer” for volume, and the “100% scale” of the waveform is actually 95% (or whatever) of the theoretical max level
Audacity uses floating point internally so there is virtually no upper or lower limit.
When Audacity shows clipping, it’s showing potential clipping. For example, you can boost the bass or apply some other effect that pushes the levels past 0 dB, but nothing is actually clipped (yet), and you can run the Amplify or Normalize effect to bring the levels back down safely below 0 dB. Conversely, if you have a file that’s actually clipped and you lower the volume, Audacity won’t show clipping even though the distortion remains.
DACs (playback), ADCs (recording), regular (integer) WAV files, and CDs are all hard-limited to 0 dB.
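Here’s a minimal sketch of that difference, using a made-up four-sample “track” in numpy: the float data keeps a roughly +5 dB peak intact, Amplify’s default (as far as I know) is simply the gain needed to bring that peak back to 0 dBFS, and the integer conversion is where hard clipping would actually happen.

```python
import numpy as np

# Decoded audio as 32-bit float: a peak of 1.78 (about +5 dBFS) is stored
# as-is -- the float track has headroom, so nothing is lost yet.
float_audio = np.array([0.5, -1.2, 1.78, -0.3], dtype=np.float32)

peak = np.max(np.abs(float_audio))
amplify_default = -20 * np.log10(peak)   # gain that brings the peak to 0 dBFS
print(f"peak = {peak:.2f}  ->  Amplify default = {amplify_default:+.2f} dB")

# Exporting to a plain 16-bit WAV (or sending it to a DAC) has a hard
# ceiling: anything outside [-1.0, 1.0] gets clamped, i.e. really clipped.
int16_audio = (np.clip(float_audio, -1.0, 1.0) * 32767).astype(np.int16)
print(int16_audio)
```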
+5 dB isn’t JUST from MP4 compression. I might believe +2 dB, but normally I’d expect less than 1 dB of peak change… so that’s from an amateur going overboard with loudness.
Couldn’t agree more about the “if it’s louder, it must be better” idiocy…but these aren’t amateurs: these are “official videos” on YouTube, produced by what used to be called record companies.
So this is what has me confused: how is the +5 dB represented in the MP4 encoding? Is there some sort of “this one goes to 11” elegant overload? The MP4 has to have some maximum value for the audio encoding; maybe there’s some sort of “volume offset” parameter in the format?
Or is Audacity somehow misrepresenting the dB level?
“MP4” is a container format. Inside the MP4 there may be an audio stream (and/or a video stream). MP4 supports a wide range of audio codecs, including AAC, MP3 and others. Some codecs use floating point representation, which, like Audacity’s 32-bit float PCM format, can go over 0 dB.
The files in question were all encoded with AAC audio streams.
After a bit of Googling, I found evidence that both MP3 and AAC streams carry a gain setting; changing that parameter evidently tells the decoder to add or subtract dB from the output stream. So even if there’s a saturation point in the actual representation of the compressed signal, it appears that the same encoded data can come out as -5 dB in the PCM or as +5 dB…
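If that’s right, there’s nothing exotic to represent: the gain is just a multiplier applied while decoding, so a frame whose own samples top out at full scale can come out of the decoder well above it. A quick back-of-the-envelope check (plain Python, no codec involved):

```python
import math

gain_db = 5.0
scale = 10 ** (gain_db / 20)      # dB offset -> linear multiplier
print(f"+{gain_db:.0f} dB  ->  x{scale:.2f}")          # ~ x1.78

# A decoded peak of ~1.78 is exactly what Audacity's float track stores,
# and Amplify would offer about -5 dB to pull it back under 0 dBFS.
print(f"Amplify default: {-20 * math.log10(scale):+.2f} dB")
```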