Something else that confounds me ...

Thanks for pointing that out, Koz. I have edited the post and yes, that is what I meant to say. I think I need a new keyboard for my PC, as the one I am using now has 30% of the letters worn off. While I can type 98 words per minute, I only have 97 mistakes. Every day I thank God for the person who invented the spell checker! :wink:

Please feel free to edit any of my posts that you deem necessary. I have no problem with that at all and will do my best to keep your having to do so to an absolute minimum.

First of all, CONGRATS!!! If you can upload a whole chapter via Dropbox, I would love to listen to it. Since I am so new to this ACX thing, and I have learned so much by being a part of this thread (@DL Voice, I have learned the most from you), I would like to use it as a guideline for how you come across with your audio. In podcasting, I use the same setup as for the ACX requirements. I record between -6 and -12 dB, using a Shure SM57 dynamic mic 2 inches from my mouth, speaking past the mic and not into it, through my Face Shield, and my noise floor starts at around -84 dB with my podcasting setup turned on or off. This tells me that my PC is very “low noise”, as well as my recording gear.

In podcasting, I render my files at -16 LUFS with a -1 dBTP ceiling. The pictures below show that when I finish this procedure and then check it using Audacity, everything passes except the -3 dB peak requirement. I have checked 6 of my podcasts using this method, to help me better understand the ACX requirements. All 6 of my podcasts were over 45 min, and every one failed the ACX -3 dB peak requirement by less than 0.5 dB. I am a very happy man and would like to publicly thank everyone who has contributed advice on this subject!
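For anyone who wants to sanity-check a finished file outside Audacity, here is a minimal sketch of the three measurements in question (peak, overall RMS, and a crude noise-floor estimate). It is not the ACX Check plug-in; the file name is made up and the thresholds are just the commonly cited ACX limits.

```python
# Rough ACX-style check on a finished WAV file: peak, overall RMS, and a
# crude noise-floor estimate. This is only a sketch of the idea, not the
# Audacity ACX Check plug-in; "chapter.wav" is a made-up file name.
import numpy as np
import soundfile as sf

audio, rate = sf.read("chapter.wav")
if audio.ndim > 1:                          # fold stereo to mono for measuring
    audio = audio.mean(axis=1)

peak_db = 20 * np.log10(np.max(np.abs(audio)) + 1e-12)
rms_db = 20 * np.log10(np.sqrt(np.mean(audio ** 2)) + 1e-12)

# Crude noise-floor guess: RMS of the quietest half-second window.
win = int(0.5 * rate)
floor = min(np.sqrt(np.mean(audio[i:i + win] ** 2))
            for i in range(0, len(audio) - win, win))
floor_db = 20 * np.log10(floor + 1e-12)

print(f"Peak  {peak_db:6.1f} dB  (ACX: -3 dB or lower)")
print(f"RMS   {rms_db:6.1f} dB  (ACX: between -23 and -18 dB)")
print(f"Floor {floor_db:6.1f} dB  (ACX: -60 dB or lower)")
```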



Thanks for the congrats. Right back at you, @Dan_Tucker… Those files you are working on appear to be a real mission.
I’ve just finished chapter 4. Edited and mastered, I believe, is the word. I know one isn’t supposed to advertise on these boards - and I’m not, but if you want to have a look at the book, you can even get a sample read on Amazon. Look for “The Innamincka Affair” by Robert Chalmers. I’m not bold enough yet to put up any chapters :slight_smile: I’m sure I have to do a lot more editing. And anyway, you may not like a Romance in the good old Mills & Boon style. My next audiobook is actually my first fiction book in print, The Dragons of Sara Sara: 186,000 words, and Edition 2 is out later this year with about another 75 to 100k words in it. I have never been entirely happy with where I left off the first edition. … if I stop playing around. Now that’s a mission. This one, The Innamincka Affair, is only 75,000 words.

I must say, Audacity is the bee’s knees for this. I’ve tried a lot of other programs on the Mac, even Adobe Audition. OMG, how complex is that sucker!!! Talk about overkill; although it does a good job, it’s just way too big and expensive. There are a few others, but they all seem geared toward making music, mixing tracks and so on. Audacity is excellent for recording what I’m doing: clean interface, simple controls, and pretty rock solid. I can’t see how, in the others I’ve tried, I could monitor noise floor, RMS and peaks all in one go like that.

Like you, it was actually @DL Voice’s questions and Steve’s answers that pointed me in the right direction. Thanks, guys. I’d still be lurching around in the dark otherwise.
So far, all I use - and almost never all together - are Normalise, Limit, and Amplify. Chapter 4, just completed: Normalised, then Limited. Done.
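For what it’s worth, a minimal sketch of that “Normalise, then Limit” idea is below, just to show the two steps in order. It is not Audacity’s Normalize or Limiter code; the file names, the -20 dB RMS target and the -3.5 dB ceiling are purely illustrative numbers in line with the ranges discussed in this thread.

```python
# A minimal sketch of "Normalise, then Limit": push the RMS toward a target
# inside the ACX window, then clamp any peaks that overshoot. File names and
# the -20 dB / -3.5 dB figures are illustrative, not Audacity's own defaults.
import numpy as np
import soundfile as sf

audio, rate = sf.read("chapter04_edited.wav")    # hypothetical input file

rms = np.sqrt(np.mean(audio ** 2))
gain = 10 ** (-20 / 20) / (rms + 1e-12)          # aim for roughly -20 dB RMS
audio = audio * gain

ceiling = 10 ** (-3.5 / 20)                      # keep peaks under -3.5 dB
audio = np.clip(audio, -ceiling, ceiling)        # (a real limiter is gentler)

sf.write("chapter04_mastered.wav", audio, rate)
```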

Thanks for all the help, folks.
Time for a glass of red, methinks.

Man, did you write all those books, or are you competing with another author with the same name? I put a video up on my YT channel going over what this thread has been talking about. I gave you and @DL_Voice props at the 8 min mark.

I have submitted a formal proposal to ACX to provide them with updated videos covering their requirements. One of them is extremely confusing and caused me nothing but heartache when I first started this adventure two weeks ago. I hope to hear back by the end of next week.

I will be doing another video tonight, covering how to quickly edit your audio from start to finish using Audacity. I hope to have it up by 3 am. It is a pretty slick way of doing it, and you don’t need to worry about time stamps or audio markers, per se; just two clicks of the mouse to locate all your edit points after you import your audio file.

Now, methinks it’s time for another pot of coffee. :laughing:

In the process of pulling my hair out by the fistfuls. I’ve been meaning to come back to this thread and post responses to various comments, but I’ve been so busy working on recording this project (and trying to get the audio processing figured out for SURE) that I haven’t had time.

And just when I think I’m breezing along, happily as can be, I run head-on into a wall.

It’s the weirdest thing. After playing around with a step-by-step procedure that I thought was the ticket for me (starting off with a low-pass filter, followed by 6,6,6 noise reduction, compressor, then normalize), I noticed that parts of my tracks were disappearing. Poof! Just GONE!!

I thought it had something to do with trying to remove DC offset, so I held off on the low-pass filter and just did noise reduction, compressor, then normalize (with the ‘Remove DC Offset’ option checked).

The problem continued.

So then I UNchecked the ‘Remove DC Offset’ during normalization.

And STILL the problem continues.

I’ve searched for answers to this head-exploding issue and can’t pinpoint a situation exactly like mine (or if I have, I just didn’t realize that’s what I was reading).

What the #@&)(&%$#@!&(%)@&%(&#@@#$%(@ is HAPPENING?!!!

I feel SO inept!

stomps away to wash my face and calm down

First of all, take a breath. I was in the same boat as you. This is going to be real simple. I just made a video today on how to “Meet the ACX Requirement, EVERY SINGLE TIME” - that is the title of the video. If you do as I say, I will fix your problem (99.9% sure) within 10 min after you reply back to this thread, if I am still online. All I need to know is this:

  1. What are your input levels for your dynamic mic, according to Audacity?
  2. How far are you away from your mic, as you record?
  3. What is your max noise floor, when you start to record?

If you cannot provide me with this information, there is no way I can help you. Watch the video, answer my questions, and believe it or not, you should be able to solve your own problem. If not, please post your answers to my questions and we will go from there. You can start the video at 8 min 52 sec and stop at 12 min 30 sec to get the answers I need from you. At this point the video is only 4 min 30 sec long. https://goo.gl/i9J9Lv

@Dan_Tucker. Not all my books. Just the two I mentioned, plus a couple of textbooks on English and one on currency trading. :slight_smile:

@DLVoice. Sorry to hear about your problems. I have heard back from the ACX sound engineer, and the 25-minute sample I sent is good to go as soon as I remove the mouth noises. Just a bit of editing.

Seriously, I think you may be over-correcting. I have a noise floor of around -70 dB to start with, RMS in the -20s, and peaks below 0. All I ever need is Normalise + Limit, or just Normalise. Rarely maybe Amplify, but really only once, I think. I now have 4 chapters done, 13 to go.

I’ll maybe post the Sound Engineers comments later.

Wish I knew a way to get rid of mouth noises in production :slight_smile: but from Googling, it’s a common problem.

A lot of mouth noise is due to “Dry Mouth”. Here is a great article from the Mayo Clinic that deals with this subject. I am breaking about 10 of their rules as we speak! :stuck_out_tongue:

Not dry mouth in this case. Here is a copy of a post on Stack Exchange - an interesting post, actually - taken from this discussion: narration - How do you lessen mouth noise in VO recordings? - Sound Design Stack Exchange

Okay. Pardon me if I sound ranty.

I promised myself I would not ask this question, but frankly, I am fed up with it.

I record narration as a main part of my job in sound.

I have recorded possibly over 3,000 hours (final product, edited down) of narration, ADR, overdubs, etc. etc. etc. in my career.

I have still not found how to lessen someone’s mouth-noise in the recording.

I have searched and searched and searched for a remedy to this.

I have read Randy Thom’s article about it.

Many people’s answer is MIC POSITION. This is utter rubbish as advice. I know well that the moment you add back the top end on a voice that you lose by going off-axis, those clicks are just clear as day in the recording and have to be edited out, so I know for a FACT that doesn’t work. Not to mention the recording sounds horrible in the end because you have a U87 pointed at your ear. Sure, you can even move the mic 4 feet back from the talent; you’re going to have one thin recording and a lot of room to battle then - especially if it’s supposed to be narration.

Lemon water. This has had mediocre results for me…

I’ve tried grapefruit juice,

I’ve tried having the guy suck on lemons,

I’ve had the guy put Vaseline on his teeth,

I’ve had the guy eat so many green apples he was hypnotized into thinking he was Johnny Appleseed,

I have forbidden the consumption of all coffee,

I have forbidden the consumption of sugar,

I have forbidden the use of honey and other saliva-producing foods,

I have told the guy to drink water - funny, everyone asks the talent to do this when he gets mouthy and IT JUST MAKES HIM MORE MOUTHY, Surprise!! You’re just putting more wetness in his mouth!!!

I have also tried iZotope RX and I personally think it adds digital artifacts to the recordings and makes the voices sound dull and processed…

I have tried everything I could possibly find on the internet or from other professionals about this, all to no avail.

I personally think it’s an awareness thing. I think that the talent just has to know what it is and learn for himself how to fix it.

But, what have you used in the past that has actually worked?

Is there a “magic pill” that someone can take and MAGICALLY he has NO mouth noise and won’t need ANY editing at all?

I highly doubt it, but I’m working on 20 seconds of narration right now and getting it clean as a whistle and I’ve spent the last hour on it.

One hour for 20 seconds of voice…

My standards are pretty high for this sort of thing as you can probably tell…

But besides that, what have you found that has worked for you?

Has it ever been a problem with your production and has a project ever been rejected back to you saying “It’s got too much mouth noise in it”?

Sorry for ranting but I just don’t think 20 seconds an hour is very viable.

Thanks - Ryan

The only way I’ve found to fix it so far is the hunt-and-delete method.
Robert

Now that I’ve had a night to calm down, I started playing around with it again. What I did this time was to go through the longest track and delete a lot of the dead space in between my talking.

Let me back up and explain that days ago, I went through several of my first tracks and inserted room-silence noise in between my lines, deleting the extraneous noises in between (such as when I had to stop speaking because my family was walking around upstairs, or when I had to swallow, or clear my throat, or whatever). Some of these pauses were long, and I used Audacity’s punch copy/paste feature to insert normal room silence in between. I wasn’t deleting these long pauses, just inserting silence. I’m thinking that those long stretches of silence were somehow responsible for what happened when I would run various effects. So today I deleted a lot of those long, empty spaces, leaving a more uniform-looking track (like how it would typically look when it’s properly edited). Then I ran the low-pass filter, noise reduction at 6,6,6, compression (this time at 2:1), and normalize, and nothing vanished from my track (from what I can tell so FAR); I also passed ACX.

I’m itching to go through and make my edits NOW because that’s going to be the longest, most tedious part of this whole process, and I’m running out of time. But I think I’m still not fully understanding something. I believe it was Steve who said that if I needed to remove DC offset or reduce low frequency noise, I needed to do that first or it could cause clicks at edit points. What if I don’t need to remove DC offset? How would I know if it’s necessary? I’ve been doing it automatically with the Normalize feature, but what if it’s not needed in my case? Or what if low frequency noise isn’t an issue for me either? Would removing these things when it’s not necessary cause issues? BTW, my 8K whistle problem is no longer visible in my tracks, so I haven’t even been running the 8KNotch fix anymore.

Also, what is involved with “post editing processing and mastering”? Would this be things like click removal or de-esser or limiter?

I’ve written down what I think the rough process should look like for my situation (I have to keep making lists to keep my head straight about this), and here’s what I’ve been referring back to lately, with a rough code sketch of the chain after the list (please let me know if I’m WAY off base on this):

  1. Remove DC offset/run low-pass filter
  2. Noise reduction at 6,6,6 because I was told (I think by Steve) that NR is supposed to be done before using the compressor (note - this may not even be necessary; my room silence noise isn’t that bad … perhaps I’m just being picky)
  3. Compress 2:1
  4. Amplify/normalize (I prefer to use normalize) to -3.2 (with DC offset UNchecked)
  5. Run ACX check
  6. Edits (cut/delete, fix clicks, plosives, and loud esses)
  7. Post editing processing and mastering (does this mean just running the limiter in my case? Or is there something else I’m missing?)
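Assuming the “low-pass filter” in step 1 refers to rolling off low-frequency rumble (which is really a low-cut/high-pass filter), here is a rough Python sketch of that chain, just to make the order concrete. These are not Audacity’s own effect implementations; the cutoff, ratio, targets and file names are illustrative, and noise reduction is left as a placeholder because it needs a noise profile.

```python
# Illustrative sketch of the processing order listed above. Not Audacity's
# effects; numbers and file names are made up for the example.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, rate = sf.read("raw_chapter.wav")            # hypothetical input file
if audio.ndim > 1:
    audio = audio.mean(axis=1)                      # treat as a mono narration

# 1. Remove DC offset / roll off low-frequency rumble with a low-cut filter.
sos = butter(2, 80, btype="highpass", fs=rate, output="sos")
audio = sosfilt(sos, audio)

# 2. Noise reduction would go here (skipped: it needs a noise profile).

# 3. Gentle 2:1 compression above a -20 dB threshold (per-sample, simplified).
thresh = 10 ** (-20 / 20)
loud = np.abs(audio) > thresh
audio[loud] = np.sign(audio[loud]) * thresh * (np.abs(audio[loud]) / thresh) ** 0.5

# 4. Normalize the peak to -3.2 dB.
audio *= 10 ** (-3.2 / 20) / (np.max(np.abs(audio)) + 1e-12)

# 5. Save, then run ACX Check, make the edits, and finish the mastering.
sf.write("processed_chapter.wav", audio, rate)
```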

ROFL :smiley:

Another slightly less drastic, but no less arduous, technique is “hunt and filter”.
One of our developers is an audiobook narrator / producer, with a fastidious approach to mouth noises. He became so fed up with the problem that he developed some tools to help deal with it.

One of the tools he developed is the “Spectral Edit Multi-Tool” (see the Audacity Manual). When used with a “Spectral Selection” (a time and frequency selection; see the Audacity Manual) in which both the upper and lower frequencies are selected, the multi-tool acts as a notch filter, but the effect also fades in at the start of the selection and fades out at the end of the selection.

In the track spectrogram view, mouth ticks are often visible as bright specks. To use the multi-tool, spectral editing must be enabled and you must be viewing the track as a spectrogram; then you can select the bright speck and apply the multi-tool. The speck will then become less bright or disappear altogether. A keyboard shortcut may be set to activate this effect (see the Audacity Manual).
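To give a feel for what that amounts to, here is a minimal sketch of the idea: notch-filter just the selected stretch and crossfade the filtered audio in and out so the selection boundaries don’t click. It is not the actual plug-in; the centre frequency, selection times and file names are made up, and it assumes a mono file.

```python
# Sketch of a "notch filter with fades" over a short selection. Not the
# Spectral Edit Multi-Tool itself; frequency, times and names are examples.
import numpy as np
import soundfile as sf
from scipy.signal import iirnotch, filtfilt

audio, rate = sf.read("narration.wav")              # assumes a mono file

start, end = int(2.00 * rate), int(2.30 * rate)     # the selected mouth tick
segment = audio[start:end].copy()

b, a = iirnotch(w0=4000, Q=10, fs=rate)             # notch centred on the tick
filtered = filtfilt(b, a, segment)

fade_len = max(1, len(segment) // 10)               # short fade at each end
mix = np.ones(len(segment))
mix[:fade_len] = np.linspace(0, 1, fade_len)        # effect fades in...
mix[-fade_len:] = np.linspace(1, 0, fade_len)       # ...and fades out again

audio[start:end] = segment * (1 - mix) + filtered * mix
sf.write("narration_fixed.wav", audio, rate)
```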

There is another tool he made called “De-clicker for speech”. It takes a more automated approach to the task. I don’t know how well it works, as I’ve never used it, but you can find it here: Updated De-Clicker and new De-esser for speech. If you have questions about that effect, please ask in that topic.

Another thing that confuses me is that ACX recommends doing such things as low-pass filter, limiter, amplify/normalize after making edits, not before.

Well, you’re welcome … I don’t know how my jumbled musings and questions could possibly help anyone, but I’m glad they did. :smiley:

How on earth? lol It feels like I’m just floundering around with basically no clue. At any rate, thank you.

Simple :slight_smile:

If that’s what ACX suggests, that’s what I do. They also say, down in the fine print… somewhere, to avoid overworking your track. As I say elsewhere, I have refined it down to either Normalise, and maybe Limit, if it needs it.

But yes, after edits, always. Take out silences, long breaks, obvious pauses. These will affect your RMS readings, which are an average, right? Think on it: 1111222345611111 - what’s the average? Now try 123456 - what’s the average?
Example 1 is your unedited track; example 2 is edited. The RMS is your average.
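Treating those two digit strings as sample values, a quick check of the arithmetic shows the point (just an illustration of why editing out the quiet stretches raises RMS):

```python
# Quick arithmetic check of the RMS point above: dropping the long runs of
# low-level "1"s (the quiet gaps) raises the average, so the edited take
# measures louder even though the speech itself hasn't changed.
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

unedited = [int(d) for d in "1111222345611111"]   # track with long quiet gaps
edited   = [int(d) for d in "123456"]             # same speech, gaps removed

print(round(rms(unedited), 2))   # ~2.59
print(round(rms(edited), 2))     # ~3.89 - higher RMS after editing out silence
```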

Save yourself heartache. If ACX say ‘do it this way’ …

:slight_smile:

Thank you for responding so quickly after my panic attack, Dana. I was too brain-weary to reply last night. I look forward to studying the video you mentioned.

In answer to your questions:

  1. I believe my gain is all the way up on the Solo and recording volume is all the way up in Audacity. I’d have to double-check to be 100% certain, but IIRC, that’s where it’s at.

  2. I’m about 4 closed fingers away from the pop filter, and the filter might be about … 2 inches or less away from the mic. I haven’t measured exactly, but I’m very close to the mic when I speak - closer than I was when I first started posting questions here (when I was a full open hand away from the mic). Keep in mind, though, my hands are small, so my ‘measurements’ won’t necessarily be the same as others’. If I move any closer than I already am, my voice becomes overpowering/boomy, so I don’t think I can - or should - get any closer.

  3. Isn’t max noise floor that number I’m given when I run ACX check? If so, it’s pretty much always in the -70s.


But, like I mentioned in a previous post today, I THINK I might have stumbled on the problem. At least I really, REALLY hope I have. From what I can discern, the long pauses that I had in some areas were contributing to the strange deletion of portions of my track. When I went back and removed a lot of the extra space that was there, the problem seemed to go away. I’ve been throwing all kinds of effects at it since then to test my theory, and every time I use the effects on the track (or tracks) where there were very long gaps in between talking, the problem shows up; but when I remove the gaps, it goes away. I’ve done this on all the tracks merged together into one long one, as well as testing the tracks individually. Same thing each time (unless I’m somehow missing something, which is entirely possible - I think my brain is fried at this point).

Now the question that remains for me (at least in this particular moment) is how to resolve the difference between what ACX suggests as an order of operations, and what’s been suggested here. ACX says to do edits first, then master:

Record & Edit
https://www.acx.com/help/producing-your-audiobook-2/201986260

Mastering
https://www.acx.com/help/producing-your-audiobook-2/201986260

Steve mentioned the importance of removing DC offset (or reducing low frequency noise - I guess that means running a low-pass filter?) BEFORE doing edits, because doing it afterwards can cause clicks at the edit points. If I don’t run the low-pass filter and just want to remove DC offset, I can do that by using the Normalize effect (with the ‘normalize maximum amplitude’ option UNchecked), correct?
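For what it’s worth, “remove DC offset” just means re-centring the waveform on zero. A minimal illustration is below; the file name is made up, and this is not Audacity’s Normalize code, which does the same job when that box is ticked.

```python
# What "remove DC offset" amounts to: subtract the mean of the waveform so it
# is centred on zero. A low-cut filter achieves this too. File name is made up.
import numpy as np
import soundfile as sf

audio, rate = sf.read("take01.wav")                  # hypothetical file
print("offset before:", np.mean(audio, axis=0))

audio = audio - np.mean(audio, axis=0)               # centre the waveform on zero
print("offset after: ", np.mean(audio, axis=0))

sf.write("take01_no_offset.wav", audio, rate)
```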

Also, I thought I read somewhere (either here or elsewhere) about the importance of a certain sequence when running effects because if it’s done out of order, you can ‘undo’ what you’ve previously done. But I can’t remember now what, exactly, that was talking about. Maybe it was here and I should go back and re-read the last few pages carefully. There’s so much to absorb, I know I’m missing stuff along the way as I try to process everything in my mind.

Thanks SO much for your patience with me. :blush:

Yes, I’ve definitely started to understand RMS a lot better than when I began this. And I’ve gotten to where I can often tell just by looking at a track before and even after normalizing whether it’ll pass ACX.



@Dana Tucker

Dana wrote:
OK, that is your first problem. You should never have to set your recording gear to the max level.

On the SOLO Mini, with a dynamic mic, the Gain knob has to be right up, otherwise the mic isn’t working at its best. Having it all the way up doesn’t affect the signal of the dynamic mic, which is “weak” naturally. It does in fact allow the mic to work at the level that it’s designed to - its “natural” level. Controlling the gain on the SOLO is really only effective with a condenser mic, which uses the phantom power. The dynamic mic, of course, doesn’t use phantom power.
If you were in fact to then put an inline gain amp in between the mic and the SOLO, you would overdrive the signal. Bad signal.

Stand-alone mic-pres and converters such as this are chronically low on gain. I note that they provide, on average, 40 dB of gain, which is simply not enough for dynamic (moving-coil) or unamplified microphones. I have never used my Shure X2U or Behringer UM2 at anything other than full up. I complained to Shure, since an X2U is inadequate for use with their own popular SM-58 and SM-57 microphones. They wrote back, “That’s the way it is.”

McElroy’s video for ACX shows him adjusting his interface for appropriate volume and I wondered how he was getting away with that. The answer is his microphone is a high-end and hot Rode NT1a.

Koz