Most people posting YouTube instruction videos could use lessons in editing. Nobody is interested in hearing or watching your mistakes.
Noise Removal is no longer offered in Audacity; it had some problems and was difficult to use. Noise Reduction is the current tool, and the default settings might not be the best for voice. Worse, heavy noise reduction can make your voice crisp and harsh, which was one of the things I corrected.
Post the test clip and I’ll tell you the values to try.
This any better? I took out the popping and booming sounds because you’re too close to the microphone. Then I applied a De-Esser filter to get rid of the super crisp S S S sounds. I also applied the audiobook volume controls, but I probably didn’t need them.
We need two clips. One: a raw recording. Don't do anything to it. No filters, effects, or processing.
If you use this as a guide, you can post a sound clip right on the forum.
As above, you don’t appear to have any serious room noises. Room reverb and echo just kill people when they try to read for theatrical performance.
What distance should I be from my Blue Yeti microphone?
That can change a little if you have a pop and blast filter. Two people have made first-pass corrections on your sound and we both suppressed your pop and rumble sounds. A blast filter will help with that. That’s the round thing between the performer and microphone.
If you’re using your Yeti on your desk, using a pop filter may be rough to do. This is where the convenience of a desk USB microphone may not be good. If it has trouble with your show, there is no easy upgrade. I don’t know of a good way to put a Yeti on a microphone stand.
As a fuzzy rule, you should use the Hawaiian Shaka “Hang Loose” spacing.
I’m going to inspect your posting. I can’t do that well when I’m traveling.
JWittz has two voices. There’s the introduction voice, where he sounds like he’s recording in his mum’s kitchen, and then the performance voice, which sounds much better. Just at first pass, his performance voice is much denser and louder than it would normally be with a plain microphone. I think he did some work on it to make it sound like that.
There are three different correction suites in the posting so far. Which do you like?
The goal is not to automatically apply lists of effects, filters, and corrections. It’s to record your voice clearly with the least work. Which sample do you think comes closest?
I have twice now recorded a spoken performance with modest microphones at home and was able to create ACX Audiobook quality sound files by only changing the volume. No effects, no filters, no processing. It can be done and that’s not a bad goal.
All the corrections you love have to be applied to every performance every time you speak. The fewer the better.
I agree with Koz’s point about getting the sound right in front of the mic with the least amount of effects.
But I have to say all the YouTube samples linked to (not just the poster’s) are quite uncomfortable to listen to for long periods, even on my high-quality Sony MDR-V6 headphones. All the videos were a bit loud, with way too many odd artifacts and wonky sound characteristics (e.g. boomy midrange, unnatural reverb, muffled or scratchy highs).
I’m listening through a 2010 Mac Mini with the volume slider set in the middle. iTunes and the OS have all sound enhancers/EQs turned off.
Your raw sample is a bit midrange-heavy, with slightly mushy highs and intermittent crispy detail (spittle) when pronouncing “T”, “P”, “S”, etc. It’s not bad, but it lacks brightness, clarity, and presence.
I gave it a go in Audacity 2.0.6 on Mac OS 10.6.8. First I turned your mono sample into stereo (duplicate and render, without adding any stereo effects). Then I applied my own “Plate” reverb setting (in Audacity’s Reverb…) that closely mimics Apple’s fantastic Audio Unit (AU) MatrixReverb effect set to 20% wet “Plate” (adjust “Reverberance” to taste). “Plate” gives a sense of presence and the expanse of a room without noticeable echo. Then I added −2 dB Bass and +6 dB Treble in Audacity’s “Bass and Treble…” effect. Lastly I amplified by 5 dB using the Amplify effect.
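For reference, dB adjustments like the ones above are just linear amplitude multipliers under the hood. A minimal Python sketch of the idea (this is not Audacity’s code, and the sample values are made up):

```python
def db_to_gain(db):
    # Convert a decibel change to a linear amplitude multiplier.
    # +6 dB roughly doubles amplitude; -6 dB roughly halves it.
    return 10 ** (db / 20.0)

# The +5 dB Amplify step applied to a few made-up samples:
gain = db_to_gain(5)
samples = [0.1, -0.2, 0.3]
louder = [s * gain for s in samples]
```

This is why stacking effects multiplies their gains: +5 dB on top of +6 dB treble is the same as multiplying twice.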
So there’s another version. Before you decide which is best for you, it’s good to have quality headphones or speakers. If you can’t hear the same things we’re hearing, it’s rough to adjust or even know what you have.
Further, if you change anything, such as getting better soundproofing or a different microphone, you’ll have no good idea what changed in the performance without good monitoring.
Probably not. He has a different natural voice pitch than you do. Most of the quality is the same between the two performances. I think you should go with these simple corrections for a while and get used to producing the video.
I applied a custom loudness tool from Steve called SetRMS. It’s a Nyquist program: a paragraph of almost-English words which you can copy and paste into Effect > Nyquist Prompt. It automatically sets the overall volume of the show (or whatever you select) and doesn’t care if some of the blue waves get too high. Scroll down to where it says “You can get SetRMS from me.”
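SetRMS itself is Nyquist code, but the idea is simple: measure the clip’s RMS (average loudness) and scale everything so it hits a target. Here’s a sketch of that idea in Python, for intuition only; the target value below is a made-up placeholder, not the plugin’s actual setting:

```python
import math

def set_rms(samples, target_rms=0.1):
    # Measure the clip's current RMS loudness...
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # ...then scale every sample so the clip hits the target RMS.
    # Like SetRMS, this ignores peaks: some samples may end up
    # over full scale, which is why a Limiter pass usually follows.
    return [s * (target_rms / rms) for s in samples]

quiet = [0.02, -0.02, 0.02, -0.02]   # made-up, too-quiet waveform
normalized = set_rms(quiet, target_rms=0.1)
```

Scaling by overall loudness instead of peak level is what keeps the perceived volume even from chapter to chapter.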
– You can get LF-Rolloff from me.
Decompress the ZIP archive into LF_rolloff_for_speech.xml. It’s pretty tiny.
Adding Audacity Equalization Curves
– Select something on the timeline.
– Effect > Equalization > Save/Manage Curves > Import
– Select LF_rolloff_for_speech.xml > OK. (It won’t open the ZIP; you have to decompress it first.)
– LF rolloff for speech now appears in the equalization preset curve list.
Then I ran Effect > Limiter with these settings.
Try those two on a longer performance, around 20 minutes or so. SetRMS was designed for audiobooks, but it should work just fine for you. If your voice changes volume too much, the finished segment may sound a little funny, but if you manage to stay even, it should sound OK.
I’m composing some of this as we go, so post back if you can’t follow a step.