I’ve been using the LUFS normalization in Audacity, and it works as expected for similar types of content. For example, it’s fine when comparing the perceived volume of a bunch of podcasts, a bunch of orchestral classical pieces, a bunch of disco tracks, etc. But when comparing different types of music – particularly heavily compressed pop vs. “uncompressed” classical – the perceived volume levels are still pretty different.
Does Audacity’s normalization algorithm for calculating the required gain include K-weighting to compensate for the difference in perceived volume across frequencies?
Or…is the defect here my understanding of what LUFS normalization is able to achieve??
Yes – it does take frequency content into account. Audacity’s Loudness Normalization follows EBU R128, and the underlying ITU-R BS.1770 measurement applies K-weighting (a high-frequency shelf boost plus a high-pass filter) before computing the loudness.
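If you want to check the numbers yourself, the third-party pyloudnorm Python package implements the same ITU-R BS.1770 / EBU R128 measurement, K-weighting included. A minimal sketch (this is pyloudnorm, not Audacity’s internal code, and “song.wav” is a placeholder):

```
# pip install pyloudnorm soundfile
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("song.wav")            # float samples, mono or multichannel

meter = pyln.Meter(rate)                    # K-weighting is applied internally
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Apply the gain needed to hit a -23 LUFS target (the EBU R128 broadcast level).
normalized = pyln.normalize.loudness(data, loudness, -23.0)
sf.write("song_normalized.wav", normalized, rate)
```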
I think it’s just the complexity of loudness perception. If you asked two people to adjust two different recordings for equal loudness, you’d probably get two different results… And if you don’t like a particular song or a particular type of music, you might feel like it’s too loud.
And I THINK it’s trying to match the loud parts of the song… The R128 measurement gates out quiet passages, so if a song has a quiet first half and a loud second half, it’s effectively being matched on the louder half.
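You can see that gating behavior in a simplified sketch. This is a rough mono approximation of the BS.1770 gating stage – K-weighting is omitted for brevity, so the absolute numbers won’t match a real meter:

```
import numpy as np

def mean_loudness(levels):
    # Average block loudnesses in the energy domain, per BS.1770.
    return -0.691 + 10 * np.log10(np.mean(10 ** ((levels + 0.691) / 10)))

def gated_loudness(x, rate):
    # 400 ms blocks with 75% overlap.
    size, hop = int(0.400 * rate), int(0.100 * rate)
    blocks = [x[i:i + size] for i in range(0, len(x) - size + 1, hop)]
    lk = np.array([-0.691 + 10 * np.log10(np.mean(b ** 2)) for b in blocks])

    lk = lk[lk > -70.0]                   # stage 1: absolute gate at -70 LUFS
    threshold = mean_loudness(lk) - 10.0  # stage 2: relative gate, -10 LU
    return mean_loudness(lk[lk > threshold])

rate = 48000
quiet = 0.01 * np.random.randn(30 * rate)  # quiet first half
loud = 0.30 * np.random.randn(30 * rate)   # loud second half
print(f"whole song: {gated_loudness(np.concatenate([quiet, loud]), rate):.1f}")
print(f"loud half:  {gated_loudness(loud, rate):.1f}")
```

Run it and the two numbers come out nearly identical – the quiet half falls below the relative gate and barely moves the result.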
If you have a limited number of tracks (like if you are burning a CD), it’s better to do it by ear. The technique is to peak-normalize (“maximize”) all of the tracks. Then, if they don’t sound matched, choose the quietest-sounding one as your reference and adjust the others down to match it.
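As a rough sketch of that workflow in Python – the file names and the by-ear trim values are placeholders you’d pick yourself while listening:

```
import numpy as np
import soundfile as sf

def peak_normalize(x, peak_db=-1.0):
    # Scale so the absolute peak sits at peak_db dBFS.
    peak = np.max(np.abs(x))
    return x * (10 ** (peak_db / 20) / peak) if peak > 0 else x

# Trims decided by ear, relative to the quietest-sounding track (0 dB).
by_ear_trims = {"track1.wav": 0.0, "track2.wav": -2.5, "track3.wav": -1.0}

for name, gain_db in by_ear_trims.items():
    data, rate = sf.read(name)
    out = peak_normalize(data) * 10 ** (gain_db / 20)
    sf.write("matched_" + name, out, rate)
```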
I use ReplayGain with WinAmp (or Apple Sound Check on my iPod). They are similar concepts with different algorithms, and it’s better with than without – I rarely have to adjust the volume because a song is too loud or too quiet.
The speakers/headphones used in your comparison could bias the results.
e.g. if your speakers don’t reproduce sub-bass, the disco music will not sound as loud as the classical when both have been normalized to the same LUFS value.
The LUFS normalization “hears” the sub-bass, even if the speakers don’t reproduce it. https://www.howtogeek.com/866820/what-are-reference-headphones-and-speakers/
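One way to sanity-check that on your own material: high-pass a track and re-measure. Whatever the integrated loudness drops by is energy the meter “heard” that small speakers wouldn’t reproduce. A sketch using pyloudnorm and scipy (the 60 Hz cutoff and file name are placeholders):

```
import soundfile as sf
import pyloudnorm as pyln
from scipy.signal import butter, sosfiltfilt

data, rate = sf.read("disco.wav")
meter = pyln.Meter(rate)

# 4th-order Butterworth high-pass at 60 Hz, applied along the time axis.
sos = butter(4, 60, btype="highpass", fs=rate, output="sos")
no_sub = sosfiltfilt(sos, data, axis=0)

full = meter.integrated_loudness(data)
cut = meter.integrated_loudness(no_sub)
print(f"full band: {full:.1f} LUFS, sub-bass removed: {cut:.1f} LUFS")
print(f"sub-bass contributed about {full - cut:.1f} LU")
```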
Yeah, I get your point about the speakers/headphones AND (what you didn’t mention) one’s ears. Unfortunately, I’m deaf above 7 kHz…but there isn’t that much musical information up there anyway.
That said, I’ve got examples of different genres of music (with no deep bass, and almost nothing above 7 kHz) that really do sound like they’re at different volume levels – and I thought that’s what LUFS was supposed to handle. Of course there’s some subjectivity to this, but when I get other people to listen to the same songs on the same speakers, they agree that the loudness levels are uneven. While LUFS is much better than just VU or RMS, I still have to adjust manually.
Is there something better than LUFS, or is that the state of the art??
I don’t think that EBU R128 is “state of the art”, but it is a standard that has largely been accepted by the industry. It is relatively easy to implement in hardware, and it is a legal requirement in some industries, such as TV broadcasting (in some countries).
Personally I think that ReplayGain was better, but I don’t know of any software that still uses it. Foobar2000 used it before version 1.1.6, but then changed to the EBU R128 algorithm.
There’s also Apple’s version, called “Sound Check”, which I think they still use on iPhones.