Learning to use Effects / Macros

Hello! I’m a newbie and I’m trying to learn to use effects to clean up audio recordings of myself speaking. I don’t know a lot about audio manipulation, so most of what I’ve learned has come from watching YouTube videos and comparing before-and-after audio samples. Unfortunately, I often can’t hear the difference between the before and after versions unless it’s something obvious like volume, which has made it very frustrating to learn what the different effects do. I do have some very minor hearing impairment, but I think this is more a case of having an untrained ear and not knowing what I’m listening for.

Here is an unedited audio clip just as it was recorded in Audacity:

And here is the result of my current editing knowledge on that same clip:

This is done by using the following macro (shamelessly copied from a YouTube video by someone who knew what they were doing), plus trimming the beginning and end and manually silencing the gaps when I’m not speaking (to cut out the sound of me breathing and the background noise):
Macro Settings.png
This works reasonably well for my scenario, but I still don’t find it particularly pleasing to listen to. The best way I can describe it is ‘stiff’ and ‘hollow’, which, if I understand correctly, might mean I’m missing some of the ‘warmer’ voice frequencies. I think the audio also degrades at the louder moments, and the noise reduction isn’t tuned correctly (which explains part of why I have to silence things when I’m not speaking). I also constantly hear my tongue clicking in my own audio, but my friends say they don’t notice it, so that might just be an effect of listening to my own voice; I’m not sure.

Help with improving how I process my audio, as well as with identifying what’s wrong with it (using the correct terminology), would be very much appreciated! I’m currently using a Blue Snowball iCE USB mic and recording at my desk. The environment is usually quiet except for the computer hum.

The problem with using Audacity’s native Noise Reduction in a macro is that the effect does not “know” what noise you want removed. That requires human intervention to select a “noise profile”: i.e. a stretch of audio that contains only the noise, e.g. the fan alone, with no speech.

So you’d have to prime Noise Reduction with a noise profile before running the macro (if the macro contains a Noise Reduction step).
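To give a feel for why the profile matters, here is a rough sketch of profile-based noise reduction (plain spectral subtraction). Audacity’s Noise Reduction is more sophisticated than this, and the file names, frame size and strength factor below are just assumptions for illustration:

```python
# Minimal sketch of profile-based noise reduction (plain spectral subtraction).
# Not Audacity's algorithm; file names and the strength factor are made up.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, speech = wavfile.read("recording.wav")   # full take (speech + fan hum), assumed mono
_, noise = wavfile.read("noise_only.wav")      # a second or two of fan hum only
speech = speech.astype(np.float64)
noise = noise.astype(np.float64)

# The "profile": average magnitude of the noise in each frequency bin.
_, _, N = stft(noise, fs=rate, nperseg=2048)
profile = np.abs(N).mean(axis=1, keepdims=True)

# Subtract (a multiple of) that profile from every frame of the real recording.
_, _, S = stft(speech, fs=rate, nperseg=2048)
mag, phase = np.abs(S), np.angle(S)
mag = np.maximum(mag - 2.0 * profile, 0.0)     # 2.0 ~ "how much to reduce"
_, cleaned = istft(mag * np.exp(1j * phase), fs=rate, nperseg=2048)

wavfile.write("cleaned.wav", rate, np.clip(cleaned, -32768, 32767).astype(np.int16))
```

Without the noise-only profile there is nothing to subtract, which is why the step can’t simply sit unattended inside a macro.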

Rather than silencing the gaps to pure digital silence, paste in some noise floor over them: absolute silence between words is jarring when there is some room (fan) noise behind the speech.
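In Audacity that’s just copying a stretch of room tone and pasting it over the gap. If you wanted to do the same thing outside Audacity, the idea is simply looping a short room-tone clip to the length of the gap. A minimal sketch, assuming a mono recording and made-up file names and gap length:

```python
# Sketch: fill a gap with looped room tone instead of digital silence.
# "room_tone.wav" is assumed to be a short noise-only clip from the same session.
import numpy as np
from scipy.io import wavfile

rate, tone = wavfile.read("room_tone.wav")
gap_samples = int(1.5 * rate)                    # e.g. a 1.5-second pause to fill
reps = int(np.ceil(gap_samples / len(tone)))
fill = np.tile(tone, reps)[:gap_samples]         # loop the room tone to the gap length
wavfile.write("gap_fill.wav", rate, fill)
```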


BTW a compression ratio of 10:1 is high; 2:1, 3:1 or 4:1 is more typical.
Compression will inevitably raise the background (fan) noise.
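To see what the ratio actually does to levels above the threshold, here is a toy static gain curve (the threshold and levels are example numbers only, not your macro’s settings, and attack/release are ignored):

```python
# How compression ratio maps levels above the threshold (static curve only).
def compressed_level(in_db, threshold_db=-12.0, ratio=3.0):
    """Return output level in dB for a given input level in dB."""
    if in_db <= threshold_db:
        return in_db                                   # below threshold: untouched
    return threshold_db + (in_db - threshold_db) / ratio

# A peak 9 dB over the threshold comes out 3 dB over at 3:1,
# but only 0.9 dB over at 10:1 - which is why 10:1 flattens speech so aggressively.
print(compressed_level(-3.0, ratio=3.0))    # -9.0
print(compressed_level(-3.0, ratio=10.0))   # -11.1
```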

Thank you very much for your feedback, Trebor! I figured out how to get the noise profile working and it has such a dramatic effect, so much better! :smiley:

I recorded a new clip with some delay at the beginning for getting the noise profile:

And then I used that to set the noise profile. I also dropped the compression ratio from 10 to 3 and reran the macro. Finally, when editing out the breathing, I pasted in the noise floor you mentioned instead of silencing those sections:

This sounds much better, and with the correct noise reduction it has more of those warmer tones I was looking for. I think I can now hear a difference when I mess with the compressor. A high compression ratio seems to make the beginning of sentences quieter, from what I can tell. My understanding is that compression is supposed to even out the volume of the different sounds so that the speech stays at a consistent level for easier listening. So if I understand correctly, it’s making the beginning of the sentences quieter to transition more slowly from the quiet between sentences to the louder speaking.

I’ve also been playing around with it a bit more, and I think maybe I should remove the second Normalize step? Or at least remove the gain it applies. After the above changes, the audio now sounds like I’m a bit too close to the mic, but moving the mic back doesn’t seem to fix that (it just makes the background noise worse), and I think it’s the gain from the second Normalize that’s causing it. But I’m hesitant to remove the step entirely, because I can’t tell the difference between skipping it and running Normalize with no gain.
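For my own understanding, here’s roughly what I think that peak-normalize gain is doing (a toy sketch, definitely not Audacity’s actual code). If I’ve got it right, with the gain turned off the step mostly just removes DC offset, which would explain why I can’t hear any difference:

```python
# Toy sketch of the peak-normalize gain: scale the clip so the single
# loudest sample hits the target peak level. Samples assumed as floats in -1..1.
import numpy as np

def normalize_peak(samples, target_db=-1.0):
    peak = np.max(np.abs(samples))
    target = 10 ** (target_db / 20)      # -1 dBFS is about 0.89 of full scale
    return samples * (target / peak) if peak > 0 else samples
```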

Here it is again with the second Normalize step removed completely. This seems to make the breathing quiet enough that I think it’s okay not to edit it out.

It’s the attack & release times, rather than compression-ratio, that makes the Audacity native compressor slow to act.
IMO for speech, the compressor attack should be ~10ms, the release 100-200ms.
Those are not possible with the Audacity native compressor. :cry:
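For anyone curious what attack and release actually control, here is a minimal envelope-follower sketch (a generic illustration, not any particular compressor’s implementation): with a ~10 ms attack the level tracker reacts to a word almost immediately, and with a 100-200 ms release it lets go smoothly afterwards.

```python
# Sketch of an envelope follower with separate attack and release times.
import numpy as np

def envelope(x, rate, attack_ms=10.0, release_ms=150.0):
    """Track the signal level: react quickly to rises, slowly to falls."""
    attack = np.exp(-1.0 / (rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (rate * release_ms / 1000.0))
    env = np.zeros_like(x, dtype=np.float64)
    level = 0.0
    for i, sample in enumerate(np.abs(x)):
        coeff = attack if sample > level else release   # pick the time constant
        level = coeff * level + (1.0 - coeff) * sample
        env[i] = level
    return env
```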

The 3-step workflow for ACX narrations is worth considering if the objective is consistent voice recordings.
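If you want a quick way to see whether a finished clip lands in the usual ACX ranges, here is a rough do-it-yourself measurement. This is not the ACX Check plug-in; the file name is an assumption and the targets are the commonly cited ones:

```python
# Rough check against commonly cited ACX targets:
# RMS between -23 and -18 dBFS, peaks under -3 dBFS, noise floor under -60 dBFS.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("narration.wav")            # assumed mono, 16-bit
x = x.astype(np.float64) / 32768.0                 # int16 -> -1.0..1.0

peak_db = 20 * np.log10(np.max(np.abs(x)))
rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Crude noise-floor estimate: the RMS of the quietest half-second window.
win = rate // 2
window_rms = [np.sqrt(np.mean(x[i:i + win] ** 2)) for i in range(0, len(x) - win, win)]
floor_db = 20 * np.log10(max(min(window_rms), 1e-9))

print(f"Peak: {peak_db:.1f} dBFS (target <= -3)")
print(f"RMS: {rms_db:.1f} dBFS (target -23 to -18)")
print(f"Noise floor: {floor_db:.1f} dBFS (target <= -60)")
```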

Making a macro of those three steps is not possible at the moment (a bug).

[ OCENaudio is a free competitor to Audacity. It doesn’t have macros, but it does have a native real-time compressor that can be set appropriately for speech. ]