Advice on Chris's Dynamic Compression: use INSTEAD of RMS Normalize? When to do the De-Noise?

Hi,

I’m attempting my first audiobook, having used these forums and played with Audacity for a few months.

I have a reasonably good setup, mic-wise (though I can still be beset a little by traffic noise in the middle of the day).

My recordings mostly pass the ACX test - but sometimes, after compression / normalisation, I find the background noise annoyingly (worse still, unevenly) over-amplified.

Now, I know there are good standard tools for helping out with all of these things, but I’m not sure which is the best order to run them.

I have hunted around for answers to various of my concerns on these forums. Quite often, the answers to one relevant query conflict with answers to another. So, I hope you don’t mind me putting all my queries into one post?
Please let me know if I should ask them separately.

  1. Am I right in thinking that it’s best to run the standard filters mentioned here on my raw clips BEFORE my main edit (i.e. deleting, cutting, splicing, etc.)?

  2. Using Chris’s Dynamic Compressor
    I heard much mention of this, tried it, and like it.
    (Before that, I tried Levelator - which was good, but occasionally sounded distorted.)
    It seems to preserve warmth well by adjusting the bass, midrange and treble separately. Am I understanding that right?

Specific Compression questions:

a) RMS Normalize. Compress Dynamics seems to sort out RMS anyway, as part of compression — so do I even need to bother with running a Normalize filter separately? Or does that dedicated RMS Normalize filter do a subtler job?
If so — which is the latest version that you’d recommend?

b) I do find that the tail end of my paragraphs can fade out after Compress Dynamics (as though I were turning down the mic volume). Am I doing something wrong? — is that something that using RMS Normalize first would prevent?
— might the “Hardness” setting stop this (could someone explain to me how “Hardness” works - what it actually does to the sound?)

c) I am running the 1.2.6 version.
— Has anyone tried Compress Dynamics 1.2.7 (beta in 2012)? Does that work better?
Do the “compress bright sounds” or “boost bass” settings make a useful difference (or make it worse)?

d) Has anyone actually updated Chris’s Dynamic Compressor since his death?
Is there maybe a better tool now to recommend for all and sundry?

e) Noise Gate. I find that Compress Dynamics can boost background noise patchily — but that, when I use noise gate to prevent this, room-tone gets replaced with eerie muffled silence. Obviously, what I want is a natural room tone, which simply doesn’t get amplified when my voice does. How to keep the room tone without amplifying it? Would a zero setting on “Noise gate falloff” help — or does it actually need to be a positive number to prevent the room-tone being amplified?

  3. So, Noise Gate filter. Since Compress Dynamics applies a noise gate, need I even bother with this dedicated filter? Does it perhaps give more subtle or controllable results?
    If yes, should I run it before or after normalisation and compression?

  4. Noise Reduction.
    Should I do this before or after Normalizing/Compressing?
    Obviously, I want to keep this to a minimum so that my voice doesn’t get too artifacty (and my natural breathing doesn’t start to sound like Darth Vader).

  5. De-clicker
    I don’t suffer too much from popping, mouth clicks and lip-smacks but, having been warned to watch out for these, I fear I may be becoming obsessive. My mic does, occasionally, introduce the odd tiny electronic “nit.” Should I worry about that? The de-clicker filter does an excellent job of eliminating almost all of these — but it can also muffle the 'b’s, 'p’s, 'd’s and, especially, 't’s that you want to hear at the beginnings and ends of words. So, is it even necessary? If so, is there any sense in going through the file and selectively declicking only the parts without consonants and with clicks/pops/smacks - or, if these were never too bad, is that (on an hour-long chapter) just unnecessary hard work?
    Is the slight loss of articulate consonants worth it just for ease, peace of mind, and a slightly more click-free audiobook? Or, if the clicks aren’t too intrusive, is it better just to live with them and keep the clarity of speech?

  6. Lastly, LF Rolloff.
    Should I just run this on all raw files right at the start?
    I don’t much like it; it takes some warmth out of the voice
    — but do tell me if everyone on ACX applies it as standard, so that my files would sound weirdly amateur if I don’t.


    I’m grateful for anyone and everyone’s input on this.

If you have a standard chain that you follow, I would love to know the best order to apply the filters - and/or which ones I can afford to leave out.

Thanks for all your help.





Eric

I don’t know that anybody has messed with Chris’s Compressor since he reached end of life. Oddly, that’s not recommended for audiobook production because it is a full-on broadcast compressor and its goals may be different. You can fake out Chris. If you stay silent long enough, Chris will think there’s something wrong and start cranking up the gain…and noise. He designed it so he could listen to opera in the car. Show noise is not an issue.

You need to know Chris has a known bug. It’s a look-ahead processor and doesn’t like sailing off the end of a file. Always leave some work longer than the desirable show so Chris will have something to chew on. Cut off the extra later. In my opinion, the extra steps rule it out for serious production.

You left out a super important consideration, although you touched on it. Getting out an audiobook can turn into a career if you don’t start with a quiet, echo-free room. There’s my silly joke that I can get any microphone to work in my third bedroom with the soundproof walls, ceiling and carpet on the floor. It makes ACX, too.

We publish a remarkably successful suite of tools.

You are warned the tools depend on each other, and you can’t mix and match, leave any out, or add (very many) others.

The rumble filter (Effect > Equalization: Low Rolloff for Speech) is recommended not necessarily to get rid of the Metrobus going by or breath noises, but to get rid of the data errors many microphones have. Many USB microphones, just sitting there connected, will produce “noise” down to single-digit frequencies. Nobody can hear that, so the manufacturers have no interest in fixing it. And yes, it does throw off the voice processing tools.
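If you ever need to do the same job outside Audacity, the idea is just a high-pass filter that throws away everything below the speech range. A minimal Python sketch, assuming a mono 16-bit WAV and a made-up 100 Hz corner; this is not the actual Low Rolloff for Speech curve:

```python
# Rough illustration only: discard the sub-audible rumble some USB microphones add.
# Audacity's "Low Rolloff for Speech" is an EQ curve, not this exact filter;
# the 100 Hz corner, filter order and file names are assumptions for the example.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("raw_chapter.wav")      # hypothetical mono 16-bit file
audio = audio.astype(np.float64) / 32768.0         # 16-bit PCM -> -1.0..1.0

sos = butter(4, 100, btype="highpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, audio)                 # zero-phase high-pass

wavfile.write("rolloff_chapter.wav", rate,
              (np.clip(filtered, -1, 1) * 32767).astype(np.int16))
```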

after compression / normalisation

That’s the classic process widely published. I designed ours before that process was popular, so I did ours in a vacuum. Since Mastering 4 doesn’t have a compressor, it doesn’t have noise pumping within a single chapter. I still have inspecting that classic process on my to-do list, but I assume it achieves ACX “by accident” since all these tools work on sound peaks, not RMS or loudness. Mastering 4 guarantees RMS and Peak and the only variable left is noise. If you fail noise, that’s the second major chapter in the publication.
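For reference, the numbers involved are easy to measure yourself. A rough numpy sketch against the commonly quoted ACX targets (peaks no higher than -3 dB, RMS between -23 and -18 dB, noise floor at -60 dB or lower); the file name and the guess that the first half second is room tone are assumptions for the example:

```python
# Back-of-envelope check of the figures ACX Check reports: peak, RMS, noise floor.
import numpy as np
from scipy.io import wavfile

def db(x):
    return 20 * np.log10(max(float(x), 1e-12))       # guard against log(0)

rate, audio = wavfile.read("mastered_chapter.wav")   # hypothetical mono 16-bit file
audio = audio.astype(np.float64) / 32768.0

peak  = db(np.max(np.abs(audio)))                      # ACX: no higher than -3 dB
rms   = db(np.sqrt(np.mean(audio ** 2)))               # ACX: between -23 and -18 dB
floor = db(np.sqrt(np.mean(audio[: rate // 2] ** 2)))  # ACX: -60 dB or lower

print(f"peak {peak:.1f} dB, RMS {rms:.1f} dB, noise floor {floor:.1f} dB")
```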

Anybody can talk into a Yeti. Noise is the college course.

Jury’s out on Noise Gate. Typically you can hear it working and ACX hates that. People claim success and I should investigate.

Yes, I know everybody wants one-click does all. Not so far.

I have no preference or thoughts about editing before or after. You should be consistent. ACX hates differences between chapters or segments of the publication. Oh, there is one. People are obsessed with getting rid of breath noises. I think that’s a waste of time. I don’t know that ACX has ever bounced anyone for normal human noises like breathing. Lip smacking, yes, because that can be distracting. ACX hates distracting.

The ACX model is someone telling you a story in real life. That’s slightly different from the broadcast model.

I need to drop.

Koz

[whoosh effect]

I’m back.

If I didn’t say so in the above monologue, one of my goals is that you should sound like you after Mastering.

The raw reading and the ACX submission should sound almost identical except possibly for volume. “Almost” because the first step, Low Rolloff for Speech, does affect vocal tones. If you have a ballsy, low-pitch broadcast voice, you are going to notice we cut off one of your…vocal tones. There was a discussion about that. We decided to leave the effect in on balance, because of all the good it does and because it matches what Hollywood has been doing for years.

If you don’t want to sound like you after Mastering, that’s a different task. There is no Announcing filter or effect. If you couldn’t read aloud before, you still can’t.


If you want, you can submit a forum voice test according to this recipe.

http://www.kozco.com/tech/audacity/TestClip/Record_A_Clip.html

If you have trouble making submission standards, it’s highly recommended you be able to hear the work. Good quality sealed headphones or good music-quality speakers are required. We may start asking you “can you hear this or can you hear that?” If you’re trying to cut on your laptop speakers, the answer will always be No.

Koz

I use Chris’s compressor.

https://vimeo.com/287374350

Not arguing with Koz, just sayin’.

@christianw

I use Chris’s compressor.

Without cranking through the video, do you use it at the default settings? I never said it doesn’t work, you just have to be aware of its shortcomings—the most serious being that end-of-file thing.

Does it hit RMS and Peak routinely? For just you or for multiple different people?

What’s your studio look like? Microphone?

Koz

IMO Noise-reduction before compressing.
After compression the noise level will no longer be constant.

IMO Noise-reduction before compressing.
After compression the noise level will no longer be constant.

And I believe that’s backed up by at least two different process publications. They all put Noise Reduction at the beginning. But that’s using a compressor. Mastering 4 doesn’t use a compressor and doesn’t have noise pumping. I put Noise Reduction at the end.

With the idea of using as little processing as possible, you won’t know until the end of Mastering 4 if you need Noise Reduction or not, and if so, how much.

Koz

I agree.

All the answers are in the video, which will require cranking through for anybody interested. Can’t sum up this stuff in a few sentences.

My remarks on Chris’s Compressor are after 18:40 in the video.

I use a Shure PG42 USB mic plugged into an i7 computer, in a home office padded with a few cargo mats. Ambient room tone is about -50 decibels.

I would say the steepest part of the learning curve for beginners is simply listening. Only experience alerts you to the distant sound of an airplane, a truck going by, a change in distance from the mic, or the inevitable accidental change in Audacity settings. If the waveform looks “different” while recording, stop immediately.

Also, be aware of Audacity “history” (View > History). Effects that have been applied cannot be removed after the session is closed or “saved.” Beware!

The solution (given to me by Koz, I recall) is to save recording files in stages when experimenting with plugins. Before you try a new effect, save any file that is OK so far, because once you close the file it is set in stone and the effects can’t be removed.

The single most important thing to allow sleeping at night is the raw file, preserved untouched forever.

When things go wrong, if you don’t have a copy of the raw recording you’re sunk.

We’ve all been there.

Guys, thank you, thank you!

That is such good advice — and all in one thread.

(Meanwhile, I have got more acquainted with Chris’s Compress Dynamics, RMS Normalize, Audacity’s downloadable Noise Gate, and the very effective, built-in Noise Reduction filter.) I have summarised all the answers as I understand them in a fuller essay further below. If you’ve time, I’d be grateful if you could correct me on any misunderstandings.

In a nutshell, though, the lightning answers to my many questions are:

1) Yes. Filter first, then edit. But, before anything, Export a copy of your raw recording.
2) No need to Compress; it might make your reading sound flat and can introduce noise problems.
3/4) Either use Noise Gate or Noise Reduction; perhaps no need for either;
     but, if you must Compress, do the Denoise first — not after any other processing.
5) Don’t bother to Declick unless someone tells you to. Mouth noises are natural
     (no one answered this, so I’m inferring from Koz’s remarks on just keeping processing to a minimum).
6) LF Rolloff is optional — but you’d better be in a very quiet room and not using a USB mic.


This leads me to two further questions:

  • Regarding these “USB Mics”: I use a Heil (practically the opposite of a condenser mic, it is very up-close and directional, enabling musicians to isolate their singing voice from the echo of the band — and me to reduce background noise and reflection without egg boxes and a duvet).
    It has an analogue cable requiring a preamp: I have that cable connected to a red Focusrite Scarlett 2i2 box
    — This, in turn, plugs into my computer’s USB socket (from which the Scarlett also draws its power).
    — Does that make it a USB mic?
    — Or, by USB mic, did you just mean something that plugs straight into your computer without preamplification?
    .
  • One important final question: does RMS Normalize simply raise the volume of everything uniformly (thus keeping background noise absolutely constant)? It affects answers 2a and 3 of my longer summary below, which I may need to correct.


    So here is the detailed summary of the solutions to my queries above.
  1. Best processing order:
  • Export a backup of your fresh raw recording to .WAV.
  • Run any effects — esp. those described in Koz’s AudioBook Mastering version 4
    (all tools and instructions on one page; just download and install the filters, and follow the steps in the order listed).
  • Run an ACX check to see you’re in the ballpark.
  • Export another backup of the processed audio at this point
    (since it’s still unedited, it should match the raw .WAV file second for second).
  • THEN do your edits (deletes, cuts and pastes).
  • Save, run ACX check again, and Submit.
  2. No need to Compress (sounds more natural without).
  • (That said, my client takes my samples out for a spin in his car and gets agitated if there is too much variation between loud and soft parts — so I’m resigned to some degree of levelling.)
    So, if you wish to compress (especially with Chris’s Dynamics):
    a) No harm in running RMS Normalize first, right? It’s simple and takes some of the pressure off the compressor.
    b) With Chris’s Dynamics, try “running on” for a sec or two at the end of a reading (speaking gobbledygook or “Cut this, cut this, cut this”), which you can clip out in the edit. This hopefully stops that volume fade at the end of paragraphs; that said, I found setting the “Hardness” control high enough may also reduce it.
    c) I can find no one recently who uses the 1.2.7 beta, nor any further improvements —
    d) nor, even, recommendations for any newer compressor. My client loved the sound of what I’d sent through Levelator. This works brilliantly and comes highly recommended, but it is older and pretty basic: there are no controls; if you’re unlucky, just sometimes things can come out a tad fuzzy — take it or leave it; and, since it is no longer supported, it may eventually become obsolete.
    e) The “Floor” and “Noise gate falloff” settings in Chris’s Compress Dynamics are handy — but less subtle than the downloadable Nyquist Noise Gate (see top and item 3 below); probably best left at 0 (certainly no more than 4); play around a little with the Floor setting so you don’t ever dampen your own speech. Better still, use the Nyquist one below.
    If you are compressing with low-level background noise, then do some gentle de-noising beforehand, or there might be unacceptable variation in noise levels (harder to strip at the end). You can use Noise Gate or Noise Reduction:
  3. Noise Gate simply dampens sound quieter than a certain “floor” level, but leaves that sound otherwise unprocessed. This is pretty handy before you Normalize or Compress, both of which risk amplifying your background noise (see the toy sketch after this list):
  • Normalise raises the volume of EVERYTHING by a uniform amount: determined either by the loudest “peaks”, or by taking an average volume across your recording (RMS is short for Root Mean Square) and boosting that average to a uniform level (which might lead to clipping — then to be pared down by a separate peak “limiter”).
  • Compress boosts the quiet bits and pares back the overly loud bits (Normalizer and Limiter in one package); as Koz points out, it could nullify some of the natural emotion in your reading as well as draw attention to background noise by “pumping” it up and down.
    I tried gently using Noise Gate before Compress Dynamics 1.2.6 and (especially when there was background traffic hum) it actually worked much better.
  4. Noise Reduction… seeks specific steady background “noise” to strip out; it comes ready-installed with Audacity (you might have to enable it), and does a great job on steady white hiss, traffic hum, fridge buzz, etc., but beware: although it applies itself more heavily in pauses, it may also gently strip some of those tones from your speech. ACX doesn’t like the voice to sound too electronic — also, after processing, your breaths may sound more intrusive — so just be wary of setting it too high; steer clear of double digits. But, used sparingly on steady background noise, it’s damn handy.
  5. Declicking. From what I’ve read, I’m going to infer that this is not necessary. Sure, if you have a seriously loud problem with mouth noise, then you may want to run the filter just once (and live with the collateral loss of clarity where 't’s and 'd’s get inadvertently damped). And, sure, if you’re Stephen Fry reading Winnie the Pooh or Harry Potter, then they’re going to throw thousands of pounds at some studio bod who will sit for a month and clean up every lip-smack whilst preserving your fine articulation. But ACX wants natural readings at a bargain price. So don’t get bogged down. Right, guys? Same for de-essing? Only if you whistle like the muskrat in Deputy Dawg, right?
  6. LF Rolloff. If I understood correctly, it sounds as though you can get away without the LF Rolloff Equalisation stage (but you’d better be far away from muffled traffic and not using a USB mic, or ACX’s automatic controls might reject you).
    Also, from what I can gather and intuit, it would make no difference whether you use it right at the start — or even after the final edit. Right, guys?
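Since items 2a and 3 above hinge on how these three tools behave, here is a toy numpy sketch of the differences: RMS Normalize as one fixed gain, a Noise Gate as attenuation below a floor, and a compressor as a time-varying gain. This is not the code inside Audacity, RMS Normalize or Chris’s Compress Dynamics; the thresholds and window sizes are invented round numbers.

```python
import numpy as np

def rms_normalize(audio, target_db=-20.0):
    """One fixed gain for the whole clip: voice and room tone rise together."""
    rms = np.sqrt(np.mean(audio ** 2))
    gain = 10 ** (target_db / 20) / max(float(rms), 1e-12)
    return audio * gain

def noise_gate(audio, rate, floor_db=-55.0, window_s=0.02):
    """Attenuate 20 ms windows whose level sits below the floor; speech is untouched."""
    out = audio.astype(np.float64).copy()
    hop = int(rate * window_s)
    floor = 10 ** (floor_db / 20)
    for start in range(0, len(out), hop):
        chunk = out[start:start + hop]
        if np.sqrt(np.mean(chunk ** 2)) < floor:
            chunk *= 0.1                      # knock quiet stretches down about 20 dB
    return out

def crude_compressor(audio, rate, target_db=-20.0, window_s=0.5):
    """Time-varying gain toward a target level. This is the mechanism that
    pumps room tone up during pauses if you have not gated or denoised first."""
    out = np.empty_like(audio, dtype=np.float64)
    hop = int(rate * window_s)
    target = 10 ** (target_db / 20)
    for start in range(0, len(audio), hop):
        chunk = audio[start:start + hop]
        level = max(float(np.sqrt(np.mean(chunk ** 2))), 1e-12)
        out[start:start + hop] = chunk * min(target / level, 20.0)  # cap the boost
    return out
```

The last function is why room tone “pumps”: during a pause the measured level drops far below the target, so the gain climbs and drags the noise up with it.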

But ACX wants natural readings at a bargain price. So don’t get bogged down.

Audible customers rate you publicly on performance. Reviews of sloppy processing or annoying sounds can be brutal. Audiobooks are expensive and intimate. Getting them right is usually a lot of work.

It sounds as if your performance has few problems with mouth sounds, breaths and so on. That makes it easier.

Pieces of that are wrong, but I can’t stick with this right now. Sometimes the Low Rolloff for Speech is needed to correct noise problems inside microphones and interfaces and has nothing to do with your voice. We called it that because that’s roughly what Hollywood and the studios call it, and most people will recognize it and what it does.

I gotta go.

Koz

I need to hit and run. Blitz-Posting.

Koz, are the complete instructions on one webpage — no guidebook .pdf to download?

It’s not even a web page. It’s a forum post. Posting it to the Audacity wiki manual is on my 2-Do list. There is one up-side to the post. It’s a snap to post corrections—not that it’s had that many. But it’s not easily searchable.

“USB Microphones” are understood to be the ones that plug directly into the computer with a USB cable. The G-Track is a USB microphone.

They come with baggage. They almost invariably come with low volume because high volume and overload in the hands of a New User is immediately and permanently fatal to a performance.

You can’t separate the performance location from the noisy computer by more than one USB cable. By contrast, you could put the noisy computer and XLR interface out in the garage.

Technically, that would work, although it’s a little rough to see the Audacity sound meters out there.

I need to go.

Koz

More good advice. Thank you very much.

And thanks, for the offer to have a listen.

Please find a sample of my recording setup, as a .WAV file.

It was recorded around 4am, when traffic was quiet.

I may send another sample from later in the day - when you’ll hear more intrusive traffic noise.

I personally would not put punctuation marks in filenames. Upper case, lower case, numbers, -dash- and underscore are the only universally acceptable characters. Otherwise, you could submit paid work to someone running an older Windows machine and have some entertaining emails as to why they can’t open your work.
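In code form, that whitelist is a one-liner (the helper name and example filename are made up for the illustration):

```python
import re

def safe_filename(stem, ext="wav"):
    """Keep only letters, digits, dash and underscore in the name itself."""
    return re.sub(r"[^A-Za-z0-9_-]", "_", stem) + "." + ext

print(safe_filename("Chapter 3 (final take!)"))   # -> Chapter_3__final_take__.wav
```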

~~

That was easy.


Screen Shot 2018-09-14 at 19.12.28.png

I applied Mastering 4 and Noise Reduction of the Beast (6, 6, 6).

There is a technique to listening. Scroll forward and set your listening volume for normal voice. Then scroll back and listen to the whole thing. Don’t change any settings. If I did it right, you should hear little or no noise at the beginning.

That was the good news.

The awkward news is your sibilance. You have strident “SS” sounds in your words. Many new microphones do that because it sounds “professional.”

There is a “DeEsser” program available about which I know next to nothing and I have had reasonable luck with custom equalizer settings—about which more later.

https://forum.audacityteam.org/t/updated-de-clicker-and-new-de-esser-for-speech/34283/1

I know DeEsser has a very gentle effect after you master a work. I experimented with changing the first value to -30dB from where it normally is. I understand that’s the value that sets how much DeEsser squashes the harsh tones.

I could listen to a story in that voice. I could probably listen to you read the phone book.

Koz

Skip the DeEsser at -30dB. That makes you sound like a cartoon character. I got reasonable results on your sample at -20dB, which I believe is the default.


Screen Shot 2018-09-14 at 19.33.56.png
Koz

“Glazing.” That would be “glass,” right?

I am concerned about this being the quietest time for recording. Not that it didn’t work. It worked fabulously, but if it turns out this is the only time you will be able to work…

Microphone system hiss (fffffffff) was the reason I used Noise Reduction this time. And technically, you didn’t need it, but you passed by the thickness of a playing card, so that’s not repeatable or stable.

Traffic.

Effect > Noise Reduction doesn’t work on moving or changing noises. So if you can arrange for the same cars and buses to go by repeatedly, we may be able to help you. Noise Reduction is a bloodhound. You let it sniff the noise (the profile) and it goes howling off looking for that exact same noise in the show. If the noise changes…
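To make the bloodhound metaphor concrete, profile-based reduction works roughly like the spectral-subtraction sketch below. It is not Audacity’s actual Noise Reduction code, just an illustration of why a steady profile works and a changing noise (that passing bus) does not.

```python
# Bare-bones spectral subtraction: learn a noise "profile" from a stretch of
# room tone, then subtract it from every frame of the show.
import numpy as np
from scipy.signal import stft, istft

def reduce_noise(audio, rate, noise_clip, amount=1.0):
    """Subtract an averaged noise spectrum (the sniffed profile) from every frame."""
    _, _, show = stft(audio, fs=rate, nperseg=1024)
    _, _, noise = stft(noise_clip, fs=rate, nperseg=1024)
    profile = np.mean(np.abs(noise), axis=1, keepdims=True)   # average noise spectrum
    mag = np.abs(show)
    cleaned = np.maximum(mag - amount * profile, 0.05 * mag)  # keep a small floor
    _, out = istft(cleaned * np.exp(1j * np.angle(show)), fs=rate, nperseg=1024)
    return out[: len(audio)]
```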

Prepare a new test under Difficult Conditions and see how it goes. This is a good deal less of an emergency now because you have a known, good, working technique.

Koz

There’s been a little bell ringing since I heard your test. You’re a professional presenter, right, and you decided to record either your own paper book as an audiobook, or record other people’s books unencumbered by the studio system.

Did I hit it?

Koz

Since you’re this close, you should start to worry about Computer Hygiene.

Export each chapter raw reading as a WAV (Microsoft) 16-bit sound file and copy it somewhere safe. This against the time the laptop fails during an edit and takes the track with it into the bin. You should never be in a position to need to read it again.

And after you finish your edit, make a new WAV (in spite of ACX requiring an MP3) and that becomes your archive. Then make the MP3. Then be able to point to two separate places that contain your work. This is against the time the dog eats your laptop.

That’s no problem, just get the backup thumb drives from the credenza. Cloud storage works here, but I’m leery of corporate shenanigans. “Doobly-Do Corporation declared a hacking intrusion that wiped their servers.”

Darn.

Koz

Thanks again, Koz. All great advice.

No. Not a professional presenter so much as a humble actor with a marketable voice.
The author found me on ACX, because he liked my second-rate impression of Laurence Olivier narrating The World at War.

I have done V/Os here and there over the years and, until I tried this ACX malarkey, I had no idea just how much work the poor engineer put in after I went home.

Very sad to hear my sibilance is de trop (I did wonder, but thought I could get away with it — on the grounds that every presenter under forty seems to lisp these days; but that’s actually a trick of the microphones, eh?).

I had been toning down extreme esses by hand, using parametric EQ. Easy to do in seconds, here and there — but a pain if every last ‘s’ actually needs attention. Thank you for the suggestion. I will look at that and various other packages.

I got caught late at work, today, and will send the worst-case noise scenario tomorrow.


ATB, Eric