Audio cut off after Loudness Normalization

Hello, I’m new to Audacity and audio in general.

I’m following the Audiobook mastering guide here ( https://wiki.audacityteam.org/wiki/Audiobook_Mastering ) for a podcast with multiple mics. After RMS loudness normalization followed by a limiter, below is what I am left with - the audio appears chopped off for several tracks.

Is this normal, is this OK (will my audio sound bad), and if not, how do I fix it or prevent it from happening during those steps? Thank you.
Audio.png

Audiobook Mastering is designed to gently process a single voice reading a story. It does that very well.

I don’t think Mastering put the holes in the performance, though. That’s much more like processing during the recording.

This is Windows, right? Do you use Zoom or Skype (or both)?

Koz

below is what I am left with - the audio appears chopped off for several tracks.

Are you talking about the wave height? That’s the result of limiting. The ACX audiobook requirement is a peak of -3dB (about 71%).

It’s normal that you would need limiting to meet that spec after normalizing, which usually increases the overall level/volume.
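If you want to sanity-check those numbers yourself, peak dB and the linear scale convert with amplitude = 10^(dB/20). A quick sketch in plain Python (nothing Audacity-specific):

```python
# Convert a peak level in dBFS to a linear fraction of full scale.
def db_to_linear(db):
    return 10 ** (db / 20)

print(db_to_linear(-3))  # ~0.708 -- a -3dB peak sits at about 71% of full scale
print(db_to_linear(-6))  # ~0.501 -- roughly half scale
```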

The flat tops and bottoms on all the waves? That’s what happens when you apply processing to multiple tracks that have wild volume variations through the course of the show. You can get the same thing with an audiobook if you decide to lean into the microphone for one sentence and then sit back as normal.

The other thing that’s going to cause problems is Loudness Normalization itself. Its job is to average out everything on a vocal track. If the performer only said three words in the whole show, those three words will be thermonuclear. Again nothing like an audiobook.

So you picked the wrong processing. Stop doing that.


I used to process an on-line show with problems similar to this one. Keep your original sound files in a safe place in case it doesn’t work. I’m not kidding. Shovel everything off to a thumb drive and work on copies.

See if you can install and run Chris’s Compressor.

https://theaudacitytopodcast.com/chriss-dynamic-compressor-plugin-for-audacity/

I change the first value, Compression Ratio, from the default 0.5 to 0.77. The other settings are factory default.

I don’t know if it will work with a multi-track show (I never tried it) but it works a treat on a stereo or mono “radio” show.

So you may need to mix down to mono for the experiment (keep those original sound files).

Warning: the only oddity is the two ends of the show. Chris doesn’t like running off the end of the track. So put 30 seconds of “something” on both ends for Chris to chew on that you can cut off later.

Chris will take whatever odd, shifting, funny levels, and damage you created, and turn it into a level radio show.

Koz

Note if you overloaded a microphone in the course of recording the show—flat-topping or clipping—red lights on your mixer—you’re dead. No processing can rescue that. Go record it again.

Koz

Thanks for the replies. Clarifications below:

This is a podcast with multiple participants/speakers. Each one recorded an audio file on his/her own machine in Audacity and sent the .wav file to me. Each track in my screenshots is the audio from an individual participant.

My goal when using loudness normalization is getting the loudness up to -16 LUFS, which from what I understand is an industry standard. Performing this step on any individual track increases the level so much that the upper and lower portions of the waves are chopped off, or go above and below the +1/-1 scale. Is there a better way of getting the audio to a specific amount of loudness? Alternatively, if I combined all the separate tracks into one so the vocals are constant, then applied loudness normalization, would this accomplish my goal?

See below for what the project looks like before any processing is applied.
Full episode.png

This is a podcast

In that case you don’t have to meet the audiobook specs and you can limit to 0dB instead of -3dB.

My goal when using loudness normalization is getting the loudness up to -16 LUFS, which from what I understand is an industry standard.

YouTube and some of the other streaming services are adjusting to -14 LUFS, but they won’t boost the levels into clipping, so some streams won’t actually hit that target. Audiobooks are quieter, at -18 to -23dB RMS.

Performing this step on any individual track increases the level so much that the upper and lower portions of the waves are chopped off, or go above and below the +1/-1 scale.

That’s pretty normal. When you boost to hit your -16 LUFS target you’ll usually push the peaks into clipping unless you use limiting or some other kind of dynamic compression (limiting is a fast kind of compression).

The “digital maximum” is 0dB (= +/-1 = 100%). Audacity can go over 0dB internally, so it’s OK to go over temporarily before limiting. But regular WAV files will clip (distort) if you try to go over, and if you use a format that can go over 0dB, the DAC will clip when you play back.

Audacity’s limiter uses “look ahead” so it can “push down” the peaks without distorting the wave shape.
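For the curious, here’s a toy look-ahead limiter in Python/NumPy. This is my own simplified sketch of the idea, not Audacity’s actual code: the gain reduction is computed a few milliseconds ahead of the audio, so the gain is already coming down by the time a peak arrives instead of chopping its top off.

```python
import numpy as np

def lookahead_limit(samples, ceiling=0.891, lookahead=220, release=2205):
    """Toy look-ahead limiter (illustration only).
    ceiling=0.891 is about -1dB; lookahead/release are in samples
    (~5 ms and ~50 ms at 44100 Hz)."""
    # Gain that would keep each individual sample under the ceiling.
    needed = np.minimum(1.0, ceiling / np.maximum(np.abs(samples), 1e-9))
    # Look ahead: at each point, take the worst-case gain in the upcoming
    # window, so the reduction starts *before* the peak.
    worst = np.array([needed[i:i + lookahead].min() for i in range(len(needed))])
    # Fast attack, slow one-pole release so the gain drifts back up
    # gently instead of pumping.
    gain = np.empty_like(worst)
    g = 1.0
    for i, target in enumerate(worst):
        g = target if target < g else g + (target - g) / release
        gain[i] = g
    return samples * gain
```

The slow release is what keeps it from sounding like distortion: the volume recovers over tens of milliseconds instead of snapping back.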

Is there a better way of getting the audio to a specific amount of loudness?

Loudness Normalization can set the LUFS loudness level.

Compression and limiting are the tools used to bring up the overall loudness without clipping/distorting the peaks, or to bring down the peaks with little effect on the loudness. (But if you overdo them, they can sound like distortion or produce other side effects.) Limiting is easier to use without getting side effects.
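If you ever want to measure or script this outside Audacity, here’s a sketch using the third-party pyloudnorm and soundfile Python libraries (my tool choice, not anything built into Audacity; the file name is made up):

```python
import soundfile as sf        # pip install soundfile pyloudnorm
import pyloudnorm as pyln

data, rate = sf.read("participant1.wav")   # hypothetical input file

meter = pyln.Meter(rate)                   # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)
print(f"Measured: {loudness:.1f} LUFS")

# Gain up/down to -16 LUFS. This alone can push peaks past 0dB,
# which is exactly why a limiter still has to follow it.
normalized = pyln.normalize.loudness(data, loudness, -16.0)
print("Peak after normalizing:", abs(normalized).max())
```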

Alternatively, if I combined all the separate tracks into one so the vocals are constant, then applied loudness normalization, would this accomplish my goal?

Mixing is done by summation, so you may need to do both: normalize individually to match the different participants, then again on the final mix to hit your loudness goal and to prevent clipping in the mix.
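Here’s a minimal sketch of that order of operations, assuming equal-length mono tracks already loaded as NumPy arrays (all the names are made up):

```python
import numpy as np

# tracks: list of equal-length mono arrays, one per participant,
# each already loudness-normalized on its own.
def mix_and_check(tracks):
    mix = np.sum(tracks, axis=0)      # mixing really is just adding samples
    peak = np.abs(mix).max()
    print(f"Mix peak: {peak:.2f}")    # easily over 1.0 where voices overlap
    return mix

# The combined mix then gets its own loudness pass plus limiting before export.
```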

I think I recognize the recommendation that multi-point podcast recordings be made at each location instead of recording the far end of the transmission. It gives you “clean” voice tracks, assuming everyone did a reasonable job of recording.

It does give you one oddball problem that live recording doesn’t. The start times will be wonky.

Did you manage a click, clap, or other sync signal or sound?

==============

If someone had a gun and forced me to mix this:

– Make two independent protection copies of everything.

– I see from the illustration that these are all “really” mono tracks. Mix everybody down to mono. Much easier to perceive single mono tracks than a screen full of redundant information.

– Find a terrifically high quality speaker or headphone system. Nobody is going to mix this on laptop speakers.

– Play the show and watch the bouncing sound meter. Expand the meter display so it goes across the whole Audacity window and increase the meter range to 96dB. That will measure the total show signal no matter what the blue waves are doing.

– Play the show and listen critically to the presentation. This might be the time to Time Shift each actor so the show times line up.

– As each actor speaks, listen and judge the loudness with one eye on the sound meter. Volumes can be adjusted up or down with the (-) -----0----- (+) adjuster to the left of the track. Note this doesn’t change the blue waves, only the playback volume. Listen through multiple passes until the overall volume is pleasant to listen to and the sound meter never hits maximum.

This will be interesting if someone cracks a good joke and everybody in the known universe laughs at the same time. Those may have to be individually adjusted.

– I’m unsure what happens to an Audacity Project after these adjustments, but Save a Lossless Project (the most stable kind) and Export a WAV (Microsoft) 16-bit sound file. The WAV is your Edit Master. We should be crystal clear, this master sounds perfect and doesn’t overload.

– Make two independent copies of the Edit Master WAV. A word about that. Be able to point to two different places for the two copies. Internal drive is one. Thumb drives work. External hard drives work and cloud storage works. Two folders on your internal drive do not work.


– Open up the WAV in Audacity. Apply the first step in Audiobook Mastering: Effect > Filter Curve: Low Rolloff for Speech > OK.

– Adjust to the loudness specification (LUFS, etc) of your choice.

– Export whatever format you wish (probably MP3). Note you can’t edit an MP3 without causing sound damage, so don’t lose that Edit Master WAV.

It’s not at all unusual for editing to take five or more times the length of the show—and that’s for an experienced editor.

I don’t know of a one-button-push way to do this.

Good luck.

Koz

If that’s too horrifying, there is a one-button-push way to do this show. Do it on Zoom and get Zoom to filter, process, and mix the show for you and send you the sound file later. You might do that anyway and compare the two masters. Contrast the quality difference with the hours you spent making your edit.

Koz

There is a publishing note.

You might be able to apply the complete Audiobook Mastering Suite of tools to your Edit Master WAV, only use LUFS for Loudness Normalization instead of the audiobook standard RMS. You might be able to rejigger the Limiter step to keep your show out of overload. I know 0.00 about the LUFS standard.

ACX audiobook standard submission format is MP3.

I expect that to work. Audiobook Mastering was chosen to have minimal effect on voice quality while still meeting publication standards.

Koz

This is what my -96dB meters look like in real life.

MeterRange-96dB.png
We note that the original Cool Edit sound meters did this across the bottom of the screen instead of the top.

But wait. There’s more!

The playback meters are set to display Peak and RMS, Green and Chartreuse.

The Recording meters are adjusted to display Green, Yellow, Orange, and Angry Red overload as your volume increases.

Koz

Yell if you get lost. I just go until somebody stops me.

For example, you can convert a stereo (two blue wave) track to mono (one blue wave) with Tracks > Mix > Mix Stereo Down to Mono.

Screen Shot 2021-01-23 at 10.26.05 AM.png
Koz

Thanks very much for all the advice and information. This is far and away more than I was hoping for. I’ll proceed with the steps you’ve recommended, Koz.

Our first attempt at this was via Zoom and I wound up scrapping it. Everyone’s sound quality is vastly different so each track requires its own approach to mastering. I’m expecting a lot of work. So far I’m around 15 hours in: silencing sniffles and overlapping voices, cutting out sections, etc. (the original audio was nearly 3 hours long before slicing into ~50-minute chunks, which will each serve as one episode). I know it won’t sound spectacular, but I’d like it to sound as good as I can make it given the constraints.

We did a countdown to hit the record button, and I snapped intermittently (when there was a portion I knew I wanted to discard) so syncing the audio up hasn’t proven to be much of an issue.

Everyone’s sound quality is vastly different so each track requires its own approach to mastering

News shows are running into this constantly now. It used to be that when they “cut to the guy at the building fire,” you expected fire-engine sounds and chaos. Now it’s kitchen echoes and refrigerator sounds from everybody on the news team recording from home.

This is far and away more than I was hoping for.

Any opportunity to be obsessive.

Audacity, unless prevented by the MUTE and SOLO buttons, will play everything top to bottom and give you a good idea what the finished show is going to sound like. Select one performer > SOLO > and apply any corrections needed. UnSOLO and they will return to the mix.

syncing the audio up hasn’t proven to be much of an issue.

You win. Nobody ever thinks about that.

silencing sniffles

That’s obsessive. It’s supposed to sound natural like people meeting in a room. Unless it’s a theater presentation. Is it?

There is a correction to my posts. You can’t simply save a Lossless Project. Save a regular one. It seems I uncovered an unintended feature [cough-bug].

So far I’m around 15 hours in…

That’s the leading sentence to the forum help post that continues: …and my computer/Audacity crashed. The show is silent now. Is there any way to retrieve my work? It’s really important!

You will note I stressed twice in the posts that you be able to recover from catastrophic failure with, at most, annoying inconvenience.

Let us know how it goes.

I published notes for a kitchen table sound studio.

https://forum.audacityteam.org/t/too-compressed-rejection/52825/22

Koz

Thanks Koz. I used to work on important websites for a living, so setting up automated backups is a standard part of my routine.

Sniffles from one performer are fine. From five, it’s overwhelming, IMO.

I’ll let you know how it goes.

There is one other oddity it’s good to know about. You can’t overload Audacity by an action or effect. Once the show is on a timeline, it’s possible for the waveforms to exceed 100% (0dB) with no damage. Yes, if you have clipping damage warning set, you will get red lines…

… but it’s metaphorical damage. The sound isn’t really gone and you can apply another effect or correction to bring it back. This also means you can experience everybody laughing at the same time and fix it later just before exporting the edit master with Soft Limiter. That’s the third step in Audiobook Mastering.

Soft Limiting gracefully, gently-but-firmly reduces the volume of a sound so it’s almost impossible to tell what it did, but “for some reason,” the sound peaks never seem to go over the set limiting value.

As with most effects, you can go nuts and create damage. That segment where everybody laughs at once is not going to sound natural, but hopefully it will go by quickly before anybody notices what happened.

Oh, this overload thing? That’s why Audacity runs internally at 32-bit floating point instead of the more normal 16-bit data, and that’s why you have to “convert” formats when you Export a sound file.

Overload immunity vanishes the instant you make a normal sound file. Overload in a WAV file will sound like gritty trash.
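You can watch that hand-off happen in a couple of lines of NumPy. This is just an illustration of the format math, not anything Audacity-specific: over-0dB samples survive as 32-bit float and are destroyed the moment they become 16-bit integers.

```python
import numpy as np

loud = np.array([0.5, 1.4, -1.2], dtype=np.float32)  # two samples past "full scale"

# In 32-bit float (Audacity's internal format): nothing is lost.
# Turning the volume down recovers the original wave shape.
print(loud * 0.5)                          # [ 0.25  0.7  -0.6 ] -- intact

# Exporting to 16-bit WAV: everything past +/-1.0 is clamped first.
clipped = np.clip(loud, -1.0, 1.0)
print((clipped * 32767).astype(np.int16))  # [ 16383  32767 -32767] -- flat-topped, gone for good
```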

Koz