Does loudness normalization affect dynamics/quality of sound?

Hi all. I have been using Contrast and then Loudness Normalization (RMS) to volume-match some stems for projects I'm working on. Since it can be quite hard to hear any difference, I was wondering: does loudness normalization change the dynamics/tonality/quality (I don't know how to say it) of the audio file it is applied to? Or is it simply like changing the volume of a stem in your DAW?

Is it correct for me to try to match the volume of two tracks this way, or not? I'm still not sure on this.

Thanks for reading.

Should I also use LUFS instead of RMS, or is RMS OK for this purpose? Thanks.

Audacity’s Loudness Normalization does not objectively change the frequency content or the dynamic range; it’s just like changing the volume. Subjectively, though, the frequency response of the human ear does depend on loudness.


It’s a linear volume change, just like adjusting the volume control before the track starts playing. If it makes a 3 dB change, then the RMS, LUFS, and peak are all changed by 3 dB, and so on.
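To make that concrete, here’s a minimal sketch (plain Python/NumPy, nothing Audacity-specific, with made-up noise standing in for audio) showing that a gain change is just a multiplication of every sample by the same factor, so peak and RMS both move by exactly the same number of dB:

```
import numpy as np

def db_to_gain(db):
    """Convert a dB change to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.1, 48_000)   # one second of stand-in "audio" at 48 kHz

louder = samples * db_to_gain(3.0)       # apply a +3 dB change

rms_before  = np.sqrt(np.mean(samples ** 2))
rms_after   = np.sqrt(np.mean(louder ** 2))
peak_before = np.max(np.abs(samples))
peak_after  = np.max(np.abs(louder))

print(20 * np.log10(rms_after / rms_before))    # ~3.0 dB
print(20 * np.log10(peak_after / peak_before))  # ~3.0 dB
```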

But you have to be careful of clipping, so make sure you have Show Clipping turned on. You can have quiet-sounding tracks with high peaks (which is pretty much the whole idea of loudness normalization, as opposed to regular peak normalization).

Audacity uses floating-point internally, so it won’t actually clip; what it shows is potential clipping. But most file formats are limited to 0 dB peaks, and even if you export to a format that can go over 0 dB, the listener can clip their DAC at playback time. So if it’s “showing red”, you’ll be OK as long as you lower the volume before exporting.

If you are matching two tracks and you get clipping, use Amplify and adjust both tracks down by the same amount.
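For example (a hypothetical sketch, assuming both stems are already loaded as float sample arrays where full scale 0 dBFS = 1.0): find the larger of the two peaks after loudness matching, and if it exceeds full scale, attenuate both stems by the same factor so the match is preserved.

```
import numpy as np

def attenuate_to_fit(track_a, track_b, ceiling=1.0):
    """If either loudness-matched track peaks above the ceiling (0 dBFS = 1.0),
    scale BOTH tracks down by the same factor so the relative match is kept."""
    worst_peak = max(np.max(np.abs(track_a)), np.max(np.abs(track_b)))
    if worst_peak <= ceiling:
        return track_a, track_b          # no clipping risk, leave them alone
    gain = ceiling / worst_peak          # same linear gain applied to both
    print(f"Reducing both tracks by {-20 * np.log10(gain):.2f} dB")
    return track_a * gain, track_b * gain
```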

LUFS should be better. RMS is a simple mathematical calculation, but perceived loudness depends on both the short-term average/RMS and the frequency content, and LUFS takes frequency content into account.
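For reference, RMS really is just the root of the mean of the squared sample values; here’s a sketch of that calculation in dBFS (assuming float samples with full scale = 1.0). LUFS, per ITU-R BS.1770, applies a K-weighting filter before a similar mean-square step and gates quiet passages, which is why it tracks perceived loudness better.

```
import numpy as np

def rms_dbfs(samples):
    """Plain RMS level in dBFS: root-mean-square of the samples, with full
    scale (1.0) as the 0 dB reference. No frequency weighting or gating."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(rms)
```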

But in the end, your ears may tell you the songs aren’t volume-matched either way, so you might want to tweak by ear. And two different people may not agree on when two songs are equally loud.


Thank you guys, you answered my questions in a very understandable manner. Cheers.

Loudness normalization was developed in part to ensure that one’s content matches the work of others. This is essential for streaming, podcasts, assembled albums, etc. The Audio Engineering Society (AES) recently built a web site (https://aes2.org/audio-topics/loudness-2/) to explain how it works and to help you use the tools in Audacity and other editing systems.

There are even standards for your loudness targets. AES recommends the following loudness normalizations for tracks in “album” collections (measured, or “integrated”, over a full sound element); a small sketch of applying such a target follows the list:

  • Speech: -18 LUFS
  • Music: -16 LUFS (2 LU higher, to psychoacoustically match speech)
  • Album normalization: -14 LUFS for the loudest track (others naturally less loud)
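As a rough sketch of how such a target gets applied: the normalization itself is just another fixed gain, computed from the difference between the measured integrated loudness and the target. The measurement here is assumed to come from a BS.1770 meter (Audacity’s Loudness Normalization does this internally; a third-party library such as pyloudnorm is another option); the function below only does the gain step.

```
import numpy as np

def normalize_to_target(samples, measured_lufs, target_lufs):
    """Apply the fixed gain that moves a track from its measured integrated
    loudness to the target (e.g. -16 LUFS for a music track). A change of
    1 LU corresponds to a 1 dB gain change."""
    gain_db = target_lufs - measured_lufs
    return samples * (10.0 ** (gain_db / 20.0))
```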

The AES also has loudness recommendations for streams and podcasts, as well as for handling content with a wider dynamic range. Browsing the web site is a great way to understand audio loudness and professional mixing.

This topic was automatically closed after 30 days. New replies are no longer allowed.