I’m using it with a Blue Yeti for animation voiceover and audiobooks.
The image below has two graphs from the Analyze > Plot Spectrum tool. The top graph is the spectrum of my raw noise floor with no effects applied. The bottom graph is the same noise floor after Effect > Filter Curve is applied, using the Low Rolloff settings from the ACX Check suite.
I don’t know how to interpret these results to tell if I have an issue that should be fixed. Any guidance would be appreciated!
Edit to add: I guess my question is: Is my raw noise floor adequate as a starting point, or do I need to make adjustments to the recording space?
The 50Hz level has gone down from -67dB (inaudible) to -85dB (even more inaudible).
The main benefit of Low Rolloff is to remove infrasound, which is also inaudible*,
but which can disrupt effects that have a threshold setting, such as the compressor, noise gate, and limiter.
[*If you use a larger window size in the frequency analysis, you may be able to see the infrasound before the rolloff is applied.]
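Here’s a quick way to see why: an inaudible rumble tone still shows up in the level numbers that threshold effects react to. A minimal Python sketch with synthetic signals and made-up amplitudes, not anything from Audacity’s own code:

```python
import math

RATE = 44100          # sample rate in Hz
N = RATE              # one second of audio

def rms_db(samples):
    """RMS level in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# "Voice": a 200 Hz tone at a modest amplitude
voice = [0.05 * math.sin(2 * math.pi * 200 * t / RATE) for t in range(N)]

# Same voice plus 10 Hz infrasound you can't hear but the meter can
rumble = [v + 0.05 * math.sin(2 * math.pi * 10 * t / RATE)
          for t, v in enumerate(voice)]

print(f"voice only:     {rms_db(voice):.1f} dB")
print(f"voice + rumble: {rms_db(rumble):.1f} dB")   # about 3 dB hotter
```

The rumble raises the measured RMS by about 3 dB even though you’d hear no difference, which is exactly the kind of shift that moves a gate or compressor threshold.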
I guess my question is: do I need to make any adjustments to the recording space before proceeding with production for audiobooks? Is there a fundamental latent defect in the room? (Other than me and my voice, but that’s a different discussion!)
Yetis do that. They produce “rumble” (tones to the left of 100Hz) that isn’t really there in real life. The 100Hz Rolloff for Speech filter gently drops the tones to the left of 100Hz. That’s earthquakes, thunder, and very large trucks shaking the house as they drive by. One more: flute stop pedals on very large church organs. That’s what lives down there. If you have a voice in the normal tonal range, nobody will miss those tones. As above, rumble messes with other Audacity tools and effects if you leave it in.
It’s a little rough to see the correction from a voice print, so this is what it’s doing with a special music test with all tones in it (White Noise).
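If you want to poke at the shape yourself, a simple one-pole high-pass shows the same general behavior in miniature. This is an illustrative filter, not the actual 100Hz Rolloff for Speech curve, and the numbers are made up for the demo:

```python
import math

RATE = 44100

def highpass(x, fc=100.0, rate=RATE):
    """One-pole high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2 * math.pi * fc)
    a = rc / (rc + 1.0 / rate)
    y, xp, yp = [], 0.0, 0.0
    for s in x:
        yp = a * (yp + s - xp)
        xp = s
        y.append(yp)
    return y

def gain_db(freq):
    """Measure attenuation at one frequency, skipping the filter's settle-in."""
    x = [math.sin(2 * math.pi * freq * t / RATE) for t in range(2 * RATE)]
    y = highpass(x)[RATE:]          # keep the steady-state second
    rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))
    return 20 * math.log10(rms(y) / rms(x[RATE:]))

print(f"20 Hz rumble: {gain_db(20):6.1f} dB")    # strongly cut
print(f"200 Hz voice: {gain_db(200):6.1f} dB")   # barely touched
```

Everything well below the 100Hz corner gets pushed down hard while the voice range is left essentially alone, which is the same story the white-noise plot tells.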
I was just wondering if the graphs might reveal something that a listen does not.
Yes, but 99% of audio production is done by ear.
ACX has strict technical/measurement requirements in addition to “good sound”, so you’ll have to do some measurements. It’s not always necessary to look at the spectrum, but it can sometimes be helpful for “diagnosis” if a file fails or just doesn’t sound right.
It’s common “studio practice” to roll-off the deep bass on everything except bass guitar and kick drum because voices and most instruments don’t produce any very-low frequencies and it’s just noise/garbage. It’s also part of the [u]Recommended Audiobook Mastering Process[/u].
That can be done without looking at the spectrum and it’s one of the few things that can be done without even listening!
As DVDDoug said above, you have to hit two specifications, not just one. ACX Check is our best guess at ACX’s Automated Robot. Peak, Loudness, and Noise are straightforward. ACX Check is a condensed collection of existing Audacity tools that were long, complicated, and boring to use on their own.
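For reference, those three numbers boil down to simple measurements. A sketch with synthetic audio; the limits in the comments are ACX’s published submission specs, but the code is just an illustration, not what ACX Check actually runs:

```python
import math

RATE = 44100

def peak_db(samples):
    """Highest instantaneous level, in dB relative to full scale."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """Average (RMS) level, in dB relative to full scale."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in samples) / len(samples)))

# Stand-in "narration": a tone whose RMS lands at -20 dB
amp = 0.1 * math.sqrt(2)
tone = [amp * math.sin(2 * math.pi * 440 * t / RATE) for t in range(RATE)]
# Stand-in "room tone": the same waveform, much quieter
hiss = [s * 0.0005 / amp for s in tone]

print(f"Peak:        {peak_db(tone):6.1f} dB  (ACX: -3 dB or lower)")
print(f"RMS:         {rms_db(tone):6.1f} dB  (ACX: between -23 and -18 dB)")
print(f"Noise floor: {rms_db(hiss):6.1f} dB  (ACX: -60 dB or lower)")
```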
If you make it that far, it goes on to Human Quality Control for a theater check. That’s where you fall over if people hear you and run with hands over their ears. All the theatrical errors live here. Tongue ticks, stuttering, lip smacking, p-popping, slurring, etc. Those are rough for machines to test, but humans are a natural.
A word on the Mastering Suite. With the sole exception of Low Rolloff, if the tools aren’t needed, they don’t do anything. If you’re already loud enough, RMS Normalize has little or no effect. If your peaks are already quieter than -3dB, nothing happens. Low Rolloff, though, will apply a second time if you run the suite again, and you could get odd low-pitched tones in your show.
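The “does nothing when it’s not needed” behavior is easy to see for RMS Normalize: it just scales the track by (target ÷ measured RMS), and that ratio is 1.0 for a track already on target. A toy sketch with a hypothetical -20 dB target, not Audacity’s plug-in code:

```python
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_normalize(samples, target_db=-20.0):
    """Scale the track so its RMS hits target_db (hypothetical target)."""
    gain = 10 ** (target_db / 20) / rms(samples)
    return [s * gain for s in samples]

RATE = 44100
quiet = [0.01 * math.sin(2 * math.pi * 100 * t / RATE) for t in range(RATE)]

once = rms_normalize(quiet)     # boosted up to -20 dB RMS
twice = rms_normalize(once)     # second pass: gain is ~1.0, nothing changes

print(round(20 * math.log10(rms(once)), 1))                 # -20.0
print(max(abs(a - b) for a, b in zip(once, twice)) < 1e-9)  # True
```

Low Rolloff has no such self-limiting measurement, which is why running the whole suite twice stacks the filter.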
“…lush green mid-Hudson valley.” I think I remember this. I accused you of sounding like somebody had a gun and was forcing you to read. You can read any way you want, but if you don’t hit enough of the norms, you’ll put people to sleep.
The goal is to get people to pay money to sit in hard chairs and listen to you perform in real life. Take out the ACX step in the middle. Don’t hide behind the technology.
I still think it’s a good idea to read to kids in the library on Saturday. There’s just nothing like a direct, unfiltered connection to your audience to polish your technique. Or go listen to one and compare your work.
As you change your studio around, it’s also good to know that once you start a book, you shouldn’t change anything. There is no getting a new microphone in the middle of a book. We had two posters who moved houses in the middle of long books. It was not pretty.
But I can’t find anything with a direct link on the ACX website. Is that the right place?
That’s a great idea to read at a library. I’m creating some short animations (<60 seconds) for TikTok, Instagram, and some other social media websites, so those are voice practice too.
I write in what is called cinematic present tense, sometimes called objective third person. Since my earlier writing was in screenplay format, I like the immediacy of this POV. I’m writing it so it can be performed like a radio play, but with just enough dialogue attributions that it still works in text. I searched for guidance on this new (but actually very old) form of storytelling, and found Jules Horne’s book Writing for Audiobooks.
There’s not a lot of good advice out there on writing for audiobooks that I’ve found.
Audiobooks are a viable format because of smartphones and their ability to carry a multitude of files and to access the cloud. The oral tradition of storytelling is thousands of years older than print. So audiobooks are both new in their technology and ancient in their method of storytelling.