Difference between Compressor and Normalize?

Newbie here.

I’m watching various tutorials on how to clean up, boost, and improve some spoken-word voice tracks, because I’ve got a dynamic mic (SM58) and a tiny interface (UR12), and that combination seems to make for a pretty small waveform in the recording.

I’ve seen a few people use Compressor to crank up the volume, and I’ve seen others use Normalize. Sometimes Normalize is even used before or after Compressor in the same video.

I’ve taken a raw track I recorded of myself doing ad copy, duplicated it, and I’m applying different suggested effects from these videos to each copy, then listening to each one solo to hear and learn what each effect does. And while I’m seeing a slight difference in the waveform between using Compressor on the track and using Normalize on the same track, I can’t distinguish a difference in the audio. If it’s there, as I assume it must be, it’s too subtle for my (Roland RK-5) headphones to pick up.

Can someone clarify what Compressor does to a waveform vs. what Normalize does? Is one particularly better for what I need here vs. the other?

Bonus questions: What effects, in what order, would you recommend to boost and enrich a small waveform without pushing the levels too high?
And can macros be set up to apply those steps in sequence, rather than my doing it manually for every recording?

Thanks for any insight you can offer.

Normalization is a simple, linear volume adjustment. Another audio editor calls it “maximizing,” which is a better word for it, but “normalize” is the correct audio terminology.

Your file is pre-scanned to find the peak level. Then the volume is adjusted (up or down) for a new peak level. Usually it’s adjusted for 0dB peaks (the “digital maximum” (1)), but by default, Audacity normalizes to -1dB peaks.

The Amplify effect defaults to whatever gain (or attenuation) is needed for 0dB peaks, so it can also be used for normalization.
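
If it helps to see the math, here’s a rough numpy sketch of the idea. This isn’t Audacity’s actual code; the function name and defaults are just for illustration:

```python
import numpy as np

def peak_normalize(samples, target_peak_db=-1.0):
    """Scan for the peak, then apply one fixed gain to the whole file.

    samples: float audio in the -1.0..+1.0 range
    target_peak_db: new peak level in dB (the -1dB default mimics Audacity's)
    """
    current_peak = np.max(np.abs(samples))
    if current_peak == 0:
        return samples                          # silence: nothing to scale
    target_peak = 10 ** (target_peak_db / 20)   # dB -> linear amplitude
    gain = target_peak / current_peak           # >1 boosts, <1 attenuates
    return samples * gain
```

Every sample gets the same multiplier, which is why only the size of the waveform changes, not its shape (the dynamics).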

Peaks don’t correlate very well with perceived loudness… If you normalize all of your music, some tracks will still be louder than others.

…Sometimes loudness-matching is called “loudness normalization” and Audacity also has that effect. There is a bit of a danger because loudness matching can push the peaks into clipping.
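
To sketch why that danger exists, here’s the same idea with plain RMS standing in for the measurement (real loudness matching typically uses LUFS, not this simple average, so treat it as illustration only):

```python
import numpy as np

def rms_db(samples):
    """Crude loudness estimate in dB (real loudness matching uses LUFS)."""
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

def loudness_match(samples, target_db=-20.0):
    gain_db = target_db - rms_db(samples)       # gain chosen from the average level
    out = samples * 10 ** (gain_db / 20)
    # The danger: nothing here looks at the peaks, so a dynamic track
    # can end up with peaks past 0dB (clipping when you export).
    if np.max(np.abs(out)) > 1.0:
        print("warning: peaks now exceed 0dB")
    return out
```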

Dynamic compression reduces the loud parts. Then “make-up gain” can be used to bring up the overall volume. Limiting is a kind of “fast compression,” and automatic volume control (or automatic gain control) is a kind of “slow compression.”
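
Here’s a bare-bones sketch of what a compressor does, sample by sample. Real compressors smooth the gain with attack and release times, so this is the concept only, with made-up defaults:

```python
import numpy as np

def compress(samples, threshold_db=-20.0, ratio=3.0, makeup_db=0.0):
    """Scale down anything above the threshold, then lift everything back up."""
    level_db = 20 * np.log10(np.abs(samples) + 1e-10)    # per-sample level
    over_db = np.maximum(level_db - threshold_db, 0.0)   # amount above threshold
    gain_db = makeup_db - over_db * (1.0 - 1.0 / ratio)  # shrink just the overshoot
    return samples * 10 ** (gain_db / 20)

# A limiter is roughly the same thing with a very high ratio, e.g.:
# compress(samples, threshold_db=-3.0, ratio=100.0)
```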

The recommended audiobook procedure uses limiting. (I don’t know where that procedure is, now that they’ve killed the wiki.)

It depends on the settings… Since compression normally works on the loud parts, it might not have much effect on your quiet recording unless you Amplify or Normalize first.

…Since make-up gain is usually used with compression (or limiting) and the quiet parts end up louder, an unfortunate side effect is that it brings up the background noise and makes the signal-to-noise ratio worse. Amplifying and Normalizing also bring up the background noise (just like turning up the volume knob), but they don’t change the signal-to-noise ratio.
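
To put made-up numbers on it: say the noise floor sits at -60dB and your voice peaks at -10dB, a 50dB signal-to-noise ratio. Amplify by 9dB and you get noise at -51dB and peaks at -1dB, still 50dB apart. But compress the loud parts down by 10dB and add 10dB of make-up gain, and the peaks land back around -10dB while the noise floor (which was too quiet to be compressed) rises to -50dB. The ratio has shrunk to 40dB.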

Most interfaces work better with condenser mics, which have higher output.

The low level is not a problem unless amplifying later brings out preamp hiss. A “hotter” mic helps with that. Usually, acoustic room noise is a bigger problem, and a more sensitive mic doesn’t affect that… The picked-up room noise is higher but the signal is higher too, so the acoustic signal-to-noise ratio isn’t changed.

(1) Audacity (and some audio formats) can go over 0dB, but it should be considered the maximum because you can clip (distort) your rendered file when you export, clip your DAC on playback, etc.
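
A tiny numpy illustration of that footnote, with made-up sample values:

```python
import numpy as np

# Float audio can hold samples above 0dB (|x| > 1.0) without damage,
# but a 16-bit export has a hard ceiling, so any overs get flattened:
float_audio = np.array([0.5, 1.3, -1.7])    # two samples over full scale
int16_audio = (np.clip(float_audio, -1.0, 1.0) * 32767).astype(np.int16)
print(int16_audio)                          # [ 16383  32767 -32767]
```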

Thanks for all that. There’s a ton I’ll need to unpack in your feedback and go over a few times until I’m sure I get it all, but that looks like it definitely covers the differences in effects I was asking about.

At the risk of going to the well too many times: based on your experience, what steps would you recommend for a small waveform that sounds good (or good enough) to your ear, so that it would pass for a solid demo track, or even final edits for V.O. work? I know that less can be more, and a user can edit something to death and leave it sounding overproduced. But I also wouldn’t want to overlook something that could help.

Which steps would you suggest as the key ones to try out?

There’s a multi-step video I’ve bookmarked for reference that applies several effects to a track to help enrich and thicken the sound. I believe, in order, they’re: normalize, compressor, treble boost, bass boost, then normalize again (and then noise reduction as a last step).
Does that sound like it makes sense? (No sarcasm here; I’m genuinely curious whether all of those combined would work well as a standard practice.)

There’s a faster tutorial that just does noise reduction off the top, then bass boost, treble boost, low rolloff for speech, compressor, and hard limit (default). But in that video the end-result levels peak well into the red range, close to 0dB, which my early understanding suggests isn’t great. Would tweaking the compressor defaults maybe rein in those levels a bit?

So much to learn about…
