Sequence of Normalize and Limiter

Normally we Normalize (pun intended) as a last step. Does anyone have a scenario where you would normalize the audio after doing whatever other compression, EQ etc you did during the editing, and there would still be a few dB peaks that you would want to apply a limiter to?

I can think of a reason to use Normalize a lot sooner in the process: Normalize has the Remove DC Offset tool. DC offset can make editing and production a complete nightmare, and if it’s not severe it can still be audible but difficult to spot.

You can’t apply correction after the fact. Once you start editing with a clip damaged like this, you’re dead.
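As an aside, the fix itself is mathematically simple: a DC offset is just a constant added to every sample, so removing it amounts to subtracting the signal's mean. A minimal Python sketch of the idea (illustrative only, not Audacity's actual implementation):

```python
import math

def remove_dc_offset(samples):
    """Subtract the mean so the waveform is centred on zero again."""
    offset = sum(samples) / len(samples)
    return [s - offset for s in samples]

# A sine wave with a +0.2 DC offset baked in (illustrative values).
offset_signal = [0.2 + 0.5 * math.sin(2 * math.pi * i / 100)
                 for i in range(1000)]
centred = remove_dc_offset(offset_signal)
mean_after = sum(centred) / len(centred)   # effectively zero
```

Notice that the offset also wastes headroom: the shifted signal peaks at 0.7 even though the audio itself only spans ±0.5, which is one reason it causes trouble later in production.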


Thanks Koz. I agree about using Normalize at the beginning of the editing process too, to remove the DC offset.

For my question I should have been more specific. After the editing process is done and I apply the full Normalize effect at the end, does it make sense that the audio would still have a few peaks that I would apply a Limiter to? Or, if there are some straggly peaks after the final Normalize effect, does that mean I didn’t do a good enough job of editing in the first place?

It depends (almost entirely) on the type of recorded material. If you have recorded something with a relatively small dynamic range (such as copying a cassette tape), then you probably won’t need to use a limiter. On the other hand, for something like a close-mic’d acoustic guitar, some degree of dynamic range compression / limiting will probably be essential; otherwise loud transients will force the overall level to be very quiet.

When I need to apply limiting before exporting, I generally use “Amplify” first (brings the peak level up to 0 dB), then apply the limiter. When using Audacity’s “Limiter” effect, appropriate settings will produce the desired peak output level, so normalizing should not be necessary after limiting. The main reason to amplify (or normalize) to 0 dB first is that it makes it easier to set the levels on the Limiter.
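The order of operations described here (amplify so the peak hits 0 dB, then limit) can be sketched in Python. This uses a generic tanh-style soft limiter purely for illustration; it is not Audacity's actual Limiter algorithm:

```python
import math

def amplify(samples, target_peak=1.0):
    """Scale so the loudest sample reaches target_peak (0 dBFS = 1.0)."""
    peak = max(abs(s) for s in samples)
    return [s * (target_peak / peak) for s in samples]

def soft_limit(samples, ceiling=0.891):
    """Generic tanh soft limiter; output magnitude never exceeds `ceiling`
    (0.891 is roughly -1 dBFS)."""
    return [ceiling * math.tanh(s / ceiling) for s in samples]

signal = [0.05, -0.3, 0.6, -0.1, 0.45]   # quiet recording (illustrative)
amplified = amplify(signal)               # peak brought up to exactly 1.0
limited = soft_limit(amplified)           # peaks now guaranteed under ceiling
```

Bringing the peak to a known level first is exactly why the limiter settings become easy to reason about: you know in advance how far above the limit the loudest material sits.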

Thanks Koz. That is helpful.

My recording is a podcast. I already compressed it once and set a fairly high noise floor to bring up the vocal dB levels without also raising the background noise. Now the issue is, the audio tracks still have a fairly wide dynamic range. Some of the peaks are close to 0 dB while some of the talking is down near -9 or -12 dB. (My fault in where the mics were positioned.)

Is it crazy to compress it a second time now, setting the noise floor around -15 dB at the softer dB level of the voices, in order to further bring up the dB level and make the vocal volume more consistent?

Instead of using a suite of individual tools, you might consider Chris’s Compressor.