Curious: why did the Audacity dev team decide to change the Normalize default from the old behaviour, where all tracks would be independently normalized to the same peak level, to what we have now in version 2.x, where each track does not know the peak of the other tracks? I think the current default is better, but I wanted to know why the dev team thinks it is better not to default to syncing tracks to each other's peak levels.
For all earlier versions of Audacity, if you select multiple tracks, or tracks with more than one channel (stereo tracks), and apply the Normalize effect, then all audio channels within the selected tracks will be independently normalized to the same peak level.
Let's say you have a stereo recording of a flute recital. There is a piano on the left and the flautist is standing to the right.
Both the flute and the piano will register in both left and right channels, but the piano, due to its position in the room, will be more prominent on the left channel and the flute will be more prominent on the right channel. Because the piano has a more percussive sound, it will probably have higher peaks than the flute.
So now you have a perfectly balanced recording of piano and flute. The peaks from the piano are a little higher in the left channel than any peaks in the right channel, but it sounds “just right”. The only problem is that the recording is a little quieter than you want for the final export (because you sensibly allowed some “headroom” when recording). So now you Normalize to -1 dB and export as a WAV file ready to burn onto a CD.
If you were using Audacity 1.3.13 or earlier, you have just spoilt your perfect recording.
The left and right channels would independently be normalized to -1 dB, making the right channel relatively louder than the left channel.
If you were using the current version of Audacity, you now have a perfectly balanced recording in which the highest peak (which happens to be in the left channel) is at -1 dB. The right channel has been amplified by exactly the same amount as the left channel, so the balance between left and right is maintained.
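To make the difference concrete, here is a minimal Python sketch contrasting the two behaviours described above: per-channel normalization as in 1.3.13 and earlier, versus one shared gain derived from the louder channel as in the current version. This is not Audacity's actual code; the sample lists and the -1 dB target are purely illustrative.

```python
TARGET_DB = -1.0
TARGET_PEAK = 10 ** (TARGET_DB / 20)  # -1 dB as a linear value, ~0.891


def normalize_independent(left, right):
    """Old (1.3.13 and earlier) default: each channel gets its own gain."""
    out = []
    for ch in (left, right):
        peak = max(abs(s) for s in ch)
        gain = TARGET_PEAK / peak
        out.append([s * gain for s in ch])
    return out


def normalize_linked(left, right):
    """Current default: one gain, based on the highest peak in either
    channel, so the left/right balance is preserved."""
    peak = max(max(abs(s) for s in left), max(abs(s) for s in right))
    gain = TARGET_PEAK / peak
    return [s * gain for s in left], [s * gain for s in right]


# Piano side has the higher peaks, as in the flute-recital example.
left = [0.0, 0.6, -0.6]
right = [0.0, 0.4, -0.3]

ind_l, ind_r = normalize_independent(left, right)
lnk_l, lnk_r = normalize_linked(left, right)
# Independent: both channels end up peaking at -1 dB, so the right
# channel becomes relatively louder and the balance is lost.
# Linked: only the left channel reaches -1 dB; the left/right peak
# ratio is exactly what it was before.
```

With the independent method both output channels peak at the target, whereas the linked method amplifies both channels by the left channel's gain, keeping the original ratio between them.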
Note that if for any reason you need to normalize the left and right channels independently, there is an option to do so.
Yes, I understand your scenario. I guess I am having a problem understanding the text that I quoted: “Normalize effect, then all audio channels within the selected tracks will be independently normalized to the same peak level.”
Does this imply that each track would be normalized to the highest peak level found in either track? That would mean to me that it would balance out the flute and piano under the default settings for normalization in version 1.x.
ref source: Audacity 1.3.13 and earlier
“For all earlier versions of Audacity if you select multiple tracks, or tracks with more than one channel (stereo tracks) and apply the Normalize effect, then all audio channels within the selected tracks will be independently normalized to the same peak level.”
Possible revision: “. . . then all audio channels within the selected tracks will be independently normalized to the same peak level of each respective track.”
Yes, “Normalize” brings all tracks up to the same level.
All previous instrument balance will be destroyed.
The best solution is to use “Amplify” instead.
Procedure with multiple tracks:
Select all
Mix and Render to New Track (Ctrl+Shift+M)
Select all
Amplify (click OK, or subtract a headroom value from the proposed amplification: e.g. a proposed -3 dB minus 2 dB of headroom = -5 dB)
Delete the extra track.
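As a rough illustration of the headroom arithmetic in the Amplify step above, here is a small Python sketch. The function names and the example peak value are mine, not Audacity's; the idea is simply that Amplify proposes the gain that would bring the peak to 0 dBFS, and you subtract the desired headroom from that proposal.

```python
import math


def proposed_amplification_db(peak_linear):
    """Gain (in dB) that would bring a linear peak value up to 0 dBFS,
    which is what the Amplify dialog proposes by default."""
    return -20.0 * math.log10(peak_linear)


def amplification_with_headroom(peak_linear, headroom_db):
    """Subtract the desired headroom from the proposed amplification."""
    return proposed_amplification_db(peak_linear) - headroom_db


# A mixed-down track peaking at 0.5 (about -6 dBFS):
# Amplify would propose roughly +6.02 dB; applying 2 dB of headroom
# means entering roughly +4.02 dB instead.
```

Because the same single gain is applied to everything selected, the balance between the mixed tracks is untouched, which is the whole point of using Amplify here instead of Normalize.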
“Normalize” is rather intended for single tracks or newly recorded ones (especially with the DC removal possibility).
You can alternatively balance the tracks with the gain sliders, “Normalize” won’t touch these settings.
The behaviour in 1.3.x versions is purely academic as those versions are obsolete and no longer supported. The wiki will no doubt be updated at some point to move confusing information about obsolete versions out of the way.
Thank you Robert and Steve for your time explaining this matter.
Do you understand the behaviour in the current version of Audacity?
I now understand better the difference between Amplify and Normalize, both in general and with respect to Audacity in particular. My question for this thread is why the dev team decided to change the default for normalizing. Was it perhaps because the normalizing algorithm or process changed?
It was as I described in my first post: users wanted to be able to normalize without messing up the stereo balance, so the effect was updated to make that behaviour the default.