I’ve been recording my band to .WAV files with a Zoom H4n recently, and I’d like to know how to improve the recordings after the fact. The bass is basically non-existent in the files I have. The kick is there, but not the electric bass. Also the vocals are very “mid-dy” and I’d like to clarify them too if possible.
I’m currently using Windows 7 64-bit and Audacity 2.0.3.
I’ve tried looking at some tutorials on YouTube, but none seem to deal with live recordings. One that seemed pretty in depth (http://www.youtube.com/watch?v=nB2yliUMTBg) didn’t do a thing for me. My skills at this are non-existent at this point, but I’m a very fast learner.
What’s the best way to tackle this? I’ve read about analyzing the track and trying to work with the frequencies, but I’m not sure how to read the analysis that I’m seeing.
Thanks for any help you can provide!
The effect to use is the Equalization effect: http://manual.audacityteam.org/o/man/equalization.html
I’d recommend that you ignore that tutorial. As one of the comments says “Never never NEVER clip music, thats asking for [damaged] speakers”.
Yep. Seemed like it was meant for playing tunes in cars with giant subs…
Anyway, I’d like to learn how to use the EQ, and I’ve seen info about using the Frequency Analyzer, but I’m not sure how to make the two features work together.
“Plot Spectrum” (http://manual.audacityteam.org/o/man/plot_spectrum.html) displays a graph that plots amplitude against frequency. For use with music it is generally better to set the frequency scale to “logarithmic”.
On the graph, low frequencies are on the left and high frequencies on the right.
Typically for music the level decreases progressively from left to right.
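To make the graph concrete: “Plot Spectrum” is essentially the magnitude of an FFT plotted against frequency. Here is a minimal sketch of that idea in Python with NumPy; the function name and the windowing/dB choices are mine, not Audacity’s exact implementation.

```python
# Rough sketch of what a "Plot Spectrum"-style analysis computes:
# FFT magnitude (in dB) against frequency for a mono signal.
import numpy as np

def magnitude_spectrum(samples, sample_rate):
    """Return (frequencies_hz, magnitude_db) for a 1-D array of samples."""
    windowed = samples * np.hanning(len(samples))       # reduce spectral leakage
    magnitude = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    mag_db = 20 * np.log10(np.maximum(magnitude, 1e-12))  # avoid log(0)
    return freqs, mag_db

if __name__ == "__main__":
    rate = 44100
    t = np.arange(rate) / rate                 # one second of audio
    tone = np.sin(2 * np.pi * 440 * t)         # a 440 Hz sine
    freqs, mag_db = magnitude_spectrum(tone, rate)
    print(round(freqs[np.argmax(mag_db)]))     # the peak lands at 440 Hz
```

If you ran this on a bass-light recording, you would see the same thing the Audacity graph shows: low magnitudes at the left (low-frequency) end of the plot.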
Here are the plots for a piece of Rock music, then an R&B track and then a classical music track:
Note that you generally need pretty big speakers to reproduce frequencies below 50 Hz, and adult human hearing fades rapidly above about 15 kHz. The region that we are most interested in is roughly between 60 Hz and 6000 Hz.
It is a little awkward to directly compare the different graphs because Audacity automatically adjusts the vertical scale, but you can get the general idea from the “shape” of the graphs.
You will notice that for both the rock and the classical tracks the graph bulges up a little in the “mid range” (toward the middle of the graph), whereas the R&B track bulges up at the very low and very high ends.
If I apply the Equalization effect to the rock track with settings like this:
then the resulting frequency spectrum becomes similar to the R&B track:
(which sounds too bass heavy for rock and lacks much of the mid-range interest)
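For anyone curious what a bass boost like that does under the hood: one simple way to sketch it outside Audacity is to low-pass the signal and mix a scaled copy of the lows back in. This is only an illustration of the principle, not how Audacity’s Equalization effect is implemented, and the 200 Hz cutoff and +6 dB amount are arbitrary example values.

```python
# Sketch of a bass boost: low-pass the signal, then add a scaled copy of
# the lows back to the original. Cutoff and gain values are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def bass_boost(samples, sample_rate, cutoff_hz=200.0, gain_db=6.0):
    """Boost content below cutoff_hz by roughly gain_db decibels."""
    sos = butter(2, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    lows = sosfilt(sos, samples)
    extra = 10 ** (gain_db / 20.0) - 1.0   # extra linear gain for the lows
    return samples + extra * lows
```

In the 60 Hz region this raises the level by close to 6 dB while leaving the mids and highs essentially untouched, which is the same shape as the low-shelf boost drawn in the Equalization curve above.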
OK. I like that description. Thanks so much. It makes this seem much easier to do than I thought.
I’ll try out a couple of things and report back here.