Having some trouble applying compression and normalization

Hello all. I watched some tutorials on YT about how to apply these effects and I think I understand the concepts more or less. However, when I actually apply them to my recording, the waveform becomes huge and distorted.

I’m not sure why this is happening. I tried different settings on Amplify, Normalize, and the compressor, but that’s all I get. It sounds kind of horrible on playback as well.

Does anyone know what might be going on here?

Before we dive into the weeds here, what’s the goal? If you’re reading for audiobooks or voice-over work, we publish a simplified collection of voice-processing tools. If it’s something else, it would be good to know what.

Koz

Normalization and Amplification are simply volume adjustments. They are perfectly safe with their defaults and they won’t affect sound quality unless you normalize/amplify the peaks over 0dB (the “digital maximum”).

Amplify can be used for normalization. Audacity has pre-scanned your file and Amplify will default to whatever gain (or attenuation) is needed for normalized 0dB peaks. The Normalize effect adds a couple of other options.

Note that (regular) Normalization is NOT “loudness normalization” or “loudness matching”. It’s a volume adjustment that targets a particular peak level (usually 0dB = 1.0 = 100%) and peaks do not correlate well with perceived loudness.
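If it helps to see the arithmetic, here’s a minimal sketch of peak normalization in Python/NumPy. This is illustrative only (not Audacity’s actual code), and the function name and target level are made up:

```python
import numpy as np

def normalize_peak(samples: np.ndarray, target_db: float = 0.0) -> np.ndarray:
    """Scale the whole signal so its largest peak hits the target level.

    samples: mono audio as floats where 1.0 == 0 dBFS (the "digital maximum").
    target_db: desired peak level in dBFS (0.0 dB = full scale).
    """
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silence: nothing to scale
    target_linear = 10 ** (target_db / 20)  # e.g. 0 dB -> 1.0, -6 dB -> ~0.5
    return samples * (target_linear / peak)

# Example: a quiet signal peaking at 0.25 (about -12 dB) gets boosted 4x.
quiet = 0.25 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
loud = normalize_peak(quiet, target_db=0.0)
print(np.max(np.abs(loud)))  # ~1.0
```

Note it’s one constant gain applied to everything, which is why it can’t hurt sound quality.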

…Actually, Audacity itself can go over 0dB without clipping, so you may not get [u]clipping[/u]/distortion unless you play back at full digital volume (clipping your digital-to-analog converter) or until you export to WAV or some other format that’s limited to 0dB. So for example, you can boost the volume or boost the bass, etc. to the point where Audacity shows (potential) clipping, and the wave won’t actually be clipped (yet). After that, you can apply the Normalize or Amplify effect to bring the volume down below clipping. (In that case, you’d enter a negative “amplification”, which is attenuation, or just accept the default negative value.)
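To illustrate that, here’s a toy Python/NumPy sketch (again, not Audacity’s code) of why float audio survives going over 0dB until it gets clamped to a fixed ceiling:

```python
import numpy as np

# A float signal boosted past 0 dBFS: peaks at 1.5, nothing lost yet.
t = np.arange(44100) / 44100
boosted = 1.5 * np.sin(2 * np.pi * 440 * t)

# Exporting to a 0 dB-limited format effectively clamps the waveform,
# flattening every peak above full scale (that's clipping).
exported = np.clip(boosted, -1.0, 1.0)

# Normalizing/attenuating *first* keeps the wave shape intact.
safe = boosted * (1.0 / np.max(np.abs(boosted)))  # peak back at 1.0

print(np.max(np.abs(exported)), np.max(np.abs(safe)))  # both 1.0, but
# `exported` has flat tops (distortion) while `safe` is just quieter.
```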

What are you trying to accomplish with compression? Compression is more complex, it’s non-linear, and there are lots of settings, so it can easily damage the audio. In general, compression makes the loud parts quieter and/or the quiet parts louder. In practice it “pushes down” the loud parts and then make-up gain is used to bring up the overall loudness, trending everything toward the same volume to make “everything loud” or just “louder”. IMO - Modern [u]Loudness War[/u] music is over-compressed and over-limited to the point where the constant loudness is boring and I just want to turn down the volume.
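For intuition only, here is a crude sketch of downward compression with make-up gain in Python/NumPy. It’s a static, per-sample gain computer with no attack/release smoothing, so it’s a teaching toy rather than how any particular compressor plug-in works, and all the names and numbers are made up:

```python
import numpy as np

def compress(samples: np.ndarray, threshold_db: float = -20.0,
             ratio: float = 4.0, makeup_db: float = 0.0) -> np.ndarray:
    """Toy downward compressor: above the threshold, level is reduced
    by the ratio; make-up gain then raises the overall loudness."""
    eps = 1e-12                                        # avoid log10(0)
    level_db = 20 * np.log10(np.abs(samples) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)    # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio) + makeup_db  # push loud parts down
    return samples * 10 ** (gain_db / 20)

# Example: peaks at 0 dB get pulled down ~15 dB at 4:1 with a -20 dB
# threshold, then everything is lifted back up with make-up gain.
t = np.arange(44100) / 44100
sig = np.sin(2 * np.pi * 220 * t) * np.linspace(0.1, 1.0, t.size)
squashed = compress(sig, threshold_db=-20, ratio=4, makeup_db=12)
```

The point to notice is that the gain varies with the signal level, which is what makes it non-linear and easy to over-do.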

First of all, thanks for the assistance and support, I really appreciate it. The goal here is simply to learn more about mixing in general, but I understand it’s a very large field. My plan is to do covers on YT, but I just want to learn more about the mixing process so I can make the end result sound better.
I also do a little voice work on the side, so it would be good to learn about that too.

I’ve heard that it’s actually more important to get a good recording than it is to mix, because mixing can only go so far. Would you agree with that?

If my recording is already OK, would I even need to bother compressing and normalizing it?

Step one. Get or make a studio.

By far the worst problem home readers have is the competition with a bad room or environment. Behold Ian, who holds the current record for the longest post on the forum. All he wanted to do was record audiobooks in his apartment in Hollywood (a real place). We beat up his mechanical and acoustic problems for over a year and 39 forum chapters.

Contrast that with a sound test I shot in my super quiet, echo-free bedroom. I set up a stand-alone sound recorder on a roll of paper towels on my desk, announced a technically perfect track, pressed stop and went to make coffee. I think it was fifteen minutes including setting up the roll of paper towels.

Once you get your studio set up, you can do whatever you want.

Koz

I’ve heard that it’s actually more important to get a good recording than it is to mix, because mixing can only go so far. Would you agree with that?

Yes, it’s “links in the chain” and it all starts with good recordings.

“Mixing” is “blending”, so if you have a separate vocal track, guitar track, drum track, etc., you mix them together. If you are recording a live performance, the mixing is happening acoustically, so you can just set up a microphone (two for stereo) and record. (Pro live recording is typically multi-tracked with multiple microphones and mixed later.)

But with most modern professional recordings, effects and editing are done at the same time as mixing, so that work is handled by the mixing engineer and considered part of “mixing”.

If my recording is already OK, would I even need to bother compressing and normalizing it?

You should only compress if it makes it sound better. :wink: It is one of the most common effects, but mostly it’s just used to make the recording louder without clipping/distortion. It’s not “necessary” and your recording may sound better without it. Live music isn’t compressed, and that’s one reason it usually sounds better than recorded music.

Limiting is a kind of (fast) compression and it’s easier to experiment with because there are fewer settings to mess up. So maybe try that first (with make-up gain, or normalize after limiting to bring up the loudness).
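In the same toy spirit, a hard limiter is basically a ceiling plus make-up gain. A Python/NumPy sketch follows; note that real limiters use look-ahead and smooth the gain changes to avoid distortion, which this skips, and the names and levels are made up:

```python
import numpy as np

def hard_limit(samples: np.ndarray, ceiling_db: float = -1.0) -> np.ndarray:
    """Clamp peaks to the ceiling; everything below passes through."""
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(samples, -ceiling, ceiling)

def limit_then_normalize(samples: np.ndarray, ceiling_db: float = -1.0):
    """Limit the peaks, then bring the overall level back up (make-up)."""
    limited = hard_limit(samples, ceiling_db)
    peak = np.max(np.abs(limited))
    return limited / peak if peak > 0 else limited

# Example: tame an over-hot signal at -1 dB, then normalize to 0 dB.
t = np.arange(44100) / 44100
sig = 1.2 * np.sin(2 * np.pi * 440 * t)
tamed = limit_then_normalize(sig, ceiling_db=-1.0)
```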

If you’re going to upload your recordings to YouTube, they try to standardize the volume, so if you overdo the compression/limiting to make your recording louder they may adjust it down anyway. (Since they don’t apply dynamic compression, they may not be able to bring your recording up to their standard volume if it’s too quiet.)

Normalization is usually a good idea. It’s just a volume adjustment so it won’t hurt sound quality.

And, if you apply effects that push your peaks over 0dB, normalization can bring your levels safely down so you won’t get clipping. Normalization is a good last step.

Speaking of levels & normalization - Mixing is done by summation, so if you are mixing two or more tracks that approach 0dB, the mix can exceed 0dB and you can get clipping. One solution is to export as 32-bit floating-point WAV, which can go over 0dB without clipping. Then re-import the WAV file, normalize to bring down the levels, and export to your desired format. (You shouldn’t distribute or upload a floating-point file that goes over 0dB, because your listeners can get clipping/distortion when they play it.)
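A quick Python/NumPy illustration of that summation point (the levels and frequencies are made up): two tracks that each peak near 0dB sum to a mix that goes well over, and float storage preserves it until you normalize:

```python
import numpy as np

t = np.arange(44100) / 44100
vocal = 0.9 * np.sin(2 * np.pi * 440 * t)   # peaks near 0 dB
guitar = 0.9 * np.sin(2 * np.pi * 330 * t)  # also near 0 dB

mix = vocal + guitar                  # summation: peaks approach ~1.8
print(np.max(np.abs(mix)))            # > 1.0, i.e. over 0 dBFS

# In 32-bit float nothing is lost yet; normalize before the final export.
mix_safe = mix / np.max(np.abs(mix))  # peak back at 1.0 (0 dB)
```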

Wow this is all pretty complex. Thanks so much for helping though!

OK, I had better give you all more information to go on. I have a sort of home studio set up.

DAW : Steinberg UR22
Microphone : JTS PDM 3

I use a Kaotica Eyeball with a pop filter to cut out excess noise and it works pretty well. I just want to learn more about mixing because, well, we can all always grow and learn, right? I usually sing covers and I just want them to sound good.

So far I am adjusting the volume by using the gain sliders on each track and it’s working out OK. Maybe I don’t need to compress or normalize?

Maybe I don’t need to compress or normalize

Like I said, normalization is usually a good idea and it’s harmless.

DAW : Steinberg UR22

Just FYI - The Steinberg is an audio interface.

A [u]DAW[/u] (digital audio workstation) is software.* Usually “DAW” refers to a “bigger” multi-track recording/mixing/editing application that also supports MIDI. Some people call Audacity a DAW but I consider it an “audio editor” or an “audio editor/recorder.”

If you are doing lots of mixing (many separate vocal and instrument tracks) a full DAW is probably worthwhile but it would be at least 10 times as complex as Audacity.


  • DAW can also refer to the whole hardware/software recording system or an all-in-one [u]Portastudio[/u].

My plan is to do covers on YT

I’ve read conflicting information about that… You might get taken down for a copyright violation, or maybe the songwriter/copyright owner collects any advertising money from your videos… I’m not sure…

Here in the U.S., you can play covers live and it’s up to the venue (not the performer) to have a license. The fees go into a pool, so again the copyright holder gets paid. The same goes for DJs: it’s up to the venue to have a license.

For CDs or MP3s there is something called a “compulsory mechanical license”, which means nobody can stop you from distributing covers as long as you pay the licensing fee of about 5-10 cents per song, per copy distributed (it depends on the playing time). So, if you make a CD with 10 covers, the songwriter(s) get about $1 from each CD you distribute. It doesn’t matter if you give away the CDs or downloads… You have to pay even if you’re not making money.

Yes I’ve read a lot of conflicting information as well. I have decided to just go ahead and do it because I want to.

I’m only doing one karaoke track and one vocal track and that’s it. How much mixing do I need?

I don’t really have the time or energy to do more at present… I think my setup is not fully professional, but it’s pretty decent.

I uploaded a cover which has been mixed… do you think I’m doing it OK?

How much mixing do I need?

Maybe none. If you set up and use overdubbing…

https://manual.audacityteam.org/man/tutorial_recording_multi_track_overdubs.html

…that will give you a series of instrument and voice tracks one over the other, all in sync. They will all play at the same time unless you tell them not to with the Select, Mute, and Solo buttons on the left, and you can apply effects, filters, and corrections to each one individually. If you simply export the work, Audacity will mix and push everything into one sound file. There are some steps in the middle there to make sure you don’t overload anything, but that’s essentially it.

You are warned that there are no “Studio” filters. There is no convenient way to get rid of “recording in the kitchen” sound. Once you perform with that room echo in your voice, it’s forever.

Koz

I read the articles on overdubbing but I am kind of confused. What do you mean by setting it up? I am just recording one vocal track over a karaoke track.

https://manual.audacityteam.org/man/tutorial_recording_multi_track_overdubs.html

Audacity doesn’t automatically record your voice in sync (in time) with the backing track (the instruments). That’s computer-specific and has to be adjusted.

There is a setting that prevents the backing track from accidentally being recorded twice. If you like recording streaming audio or internet sound, those Audacity settings will have to be changed for overdubbing.

Headphones are required for overdubbing, but you can’t hear yourself very well in your headphones. Many “digital performers” wear only half of their headphones so they can live mix during their performance.

You can hear yourself and the backing track in the headphones if you have a microphone or microphone interface that allows it.

Koz

You can avoid the sync adjustments and setup if you’re willing to do post-production editing later—push the out-of-time voice track into sync with the backing track with the Time Shift Tool (two sideways black arrows). This can work OK if you’re only singing one track over the music.

This is less convenient if you’re trying for multi-track harmony because you may need to edit and correct each time you sing. If you go through the sync setup before you start, you can sing or play as many tracks as you want one right after the other.
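If you’re curious what that time shift amounts to numerically, here’s a small Python/NumPy sketch of sliding a late voice track earlier by a fixed latency. The Time Shift Tool does this by dragging, and the 130 ms figure here is purely hypothetical:

```python
import numpy as np

def shift_track(track: np.ndarray, latency_ms: float, rate: int = 44100):
    """Advance a late recording by trimming `latency_ms` from its start,
    padding the end with silence to keep the length unchanged."""
    n = int(round(latency_ms / 1000 * rate))
    return np.concatenate([track[n:], np.zeros(n, dtype=track.dtype)])

# Example: the voice came back ~130 ms late (a made-up figure), so
# slide it 130 ms earlier to line up with the backing track.
voice = np.zeros(44100 * 5, dtype=np.float32)   # stand-in recording
aligned = shift_track(voice, latency_ms=130.0)
```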

Koz

This sounds a little more complex than I need…my setup is just one track over the karaoke. Do I technically even need overdubbing and/or mixing? I mixed one track and I think it sounds a little better than unmixed but not by that much.

This sounds a little more complex than I need

There are simpler ways to do it. The problem is getting Audacity to record and play at the same time. It doesn’t much like doing that.

So play the backing track in Windows Media to your headphones and sing into an Audacity recording. Move the backing track to the same Audacity Project and you’ll get two timelines one above the other: one for the backing track and one for your voice. Adjust the track timings as needed. Audacity will mix everything into one show when you export.


I don’t think there is a good way to sing into a live mix. I’m beginning to think maybe that’s what you’re aiming for.

You could, I guess, play Windows Media into your computer speakers and then sing. Record the whole room. That should sound terrible, like recording in a kitchen with a cellphone playing the music, but it does work, and it will give you a theatrical mix in one pass. No overdubbing or split sound files. You’ll have to adjust everything during the performance: microphone spacing, speaker volume, and tone. There’s no going back and fixing anything later.

Koz

Let me know if I get close.

Koz

What I am doing is that I have a karaoke track. I play the track and then sing while it’s playing - it’s a karaoke track so I provide the vocals. Then I export the finished product. Make sense?

Do I compress overly loud parts in my recording? Parts that are very loud sometimes sound really unpleasant. Is this what compression is for?


“Unpleasant” could mean clipping (distortion), or maybe you are straining your voice. Compression or adjusting the levels generally won’t help if the vocals are already clipped before mixing.

Compression may help a little and it’s worth trying.

Adjusting “manually” with the [u]Envelope Tool[/u] might be better, but that can be tedious.* The trick with the Envelope Tool is to fade up and fade down without changing the end points, so there are no sudden, unnatural jumps in volume (there’s a rough sketch of the idea below the footnote).




  • Pros using DAW software use something similar called “Automation”. DAWs are designed for this so it’s a little “easier”, but a DAW is a LOT more complicated than a “little audio editor” like Audacity.
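For the curious, here is roughly what that envelope trick looks like as arithmetic: a Python/NumPy sketch that fades the gain down over one loud region and back up, with matching end points so there’s no sudden jump. This is purely illustrative, not how the Envelope Tool or DAW automation is implemented, and all the names and numbers are made up:

```python
import numpy as np

def duck_region(samples: np.ndarray, start: int, end: int,
                dip_db: float = -6.0, fade: int = 2048) -> np.ndarray:
    """Ramp the gain down to `dip_db` across [start, end], with smooth
    fades on each side so the envelope starts and ends at unity (0 dB)."""
    gain = np.ones(samples.size)
    dip = 10 ** (dip_db / 20)
    gain[start:start + fade] = np.linspace(1.0, dip, fade)  # fade down
    gain[start + fade:end - fade] = dip                     # hold quieter
    gain[end - fade:end] = np.linspace(dip, 1.0, fade)      # fade back up
    return samples * gain

# Example: pull a harsh 1-second stretch down by 6 dB.
t = np.arange(44100 * 3) / 44100
sig = np.sin(2 * np.pi * 440 * t)
smoothed = duck_region(sig, start=44100, end=88200, dip_db=-6.0)
```

Because the gain curve starts and ends at 1.0 and only changes gradually, the level adjustment is inaudible as an effect; only the balance changes.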