My wife and I are amateur musicians. My day job is business analyst/programmer, and she is a visual artist. I began using Audacity last year to help someone at my church create weekly virtual worship service videos during the pandemic, mainly just for occasional noise reduction of videos that people send us from their cell phones. I also produced a video of myself playing a trombone piece and accompanying myself on the piano, but the audio part of that was pretty simple, mainly just combining the two tracks.
Now we are attempting a more complicated project for a virtual choir dinner: an a cappella version of the Beach Boys’ “Kokomo”, with about a dozen vocal tracks of my wife and me singing all the parts, plus a few miscellaneous percussion tracks. (And kazoo and ukulele for the instrumental break!) We’ve finished all the recording, thanks to a USB mic/mixing box on loan from my son, so the tracks are about as good as we are capable of as amateur singers.
Now it’s clear to me that I’ll need to spend some time working with the tracks, to adjust relative volumes and fix irregularities in timing, perhaps doing some frequency mixing too. My general question for this forum is basically, where to start? In what order should I attack problems, and what other general advice do people have? I’m especially in the dark in terms of frequency mixing. I understand how to work the control, but I’m not sure how to approach thinking about it.
My most specific question relates to relative volume. Some of the tracks had different sections recorded at different times, so they vary in volume even within the track. So I’m supposing I’ll first have to make each track uniform within itself using amplitude adjustment, and then I’ll be able to adjust the entire track relative to the others with the track gain control. Is that the right approach? Any other hints about that?
That leads me to a followup, which is that a lot of our tracks look under-recorded compared to the examples. The manual page on waveforms mentions that peaks of -0.5 to +0.5 are ideal to aim for when recording, but unfortunately I only read this tonight. Would it be helpful to use the Amplify effect first to bring them up to that level, then fine-tune with the waveform editor?
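(If it helps to see the arithmetic: the Amplify step is just a linear rescale of the samples. Here is a minimal Python sketch of the idea — not Audacity’s actual code, and the sample values are made up for illustration.)

```python
import numpy as np

def amplify_to_peak(samples, target_peak=0.5):
    """Scale a track so its loudest sample lands on target_peak
    (0.5 linear is about -6 dB), roughly what Audacity's Amplify
    effect does when you give it a new peak amplitude."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silent track, nothing to do
    return samples * (target_peak / peak)

# An under-recorded track peaking at 0.25:
quiet = np.array([0.125, -0.25, 0.1875])
boosted = amplify_to_peak(quiet)
print(np.max(np.abs(boosted)))  # 0.5
```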
Mixing is done by summation, so with more tracks to mix you might end up having to reduce the levels. (In practice it’s more like a weighted average, with the individual track levels adjusted by ear.)
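To put “mixing is summation” in concrete terms, here is a toy Python sketch (the sample values and gains are invented for illustration, not anything Audacity-specific):

```python
import numpy as np

def mix(tracks, gains):
    # Each track is an array of samples; apply a per-track gain,
    # then add everything sample by sample.
    return sum(g * t for g, t in zip(gains, tracks))

t1 = np.array([0.5, 0.5])
t2 = np.array([0.5, -0.5])
t3 = np.array([0.5, 0.25])

raw = mix([t1, t2, t3], [1.0, 1.0, 1.0])
print(raw)   # first sample sums to 1.5 -- over full scale (1.0), would clip
safe = mix([t1, t2, t3], [0.5, 0.5, 0.5])  # per-track levels reduced "by ear"
print(safe)  # now everything stays under full scale
```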
The 50% (-6 dB) target is just a recommendation to leave enough headroom so you don’t clip (distort) by trying to go over 0 dB. Vocals are not very predictable, so you may need to record lower and leave more headroom for unexpected peaks. Pros record a LOT lower (at 24 bits, with good equipment).
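For reference, the relationship between those linear peak values and dB is just a logarithm, with 0 dBFS meaning “full scale” (a peak of 1.0):

```python
import math

def to_dbfs(peak):
    # dB relative to full scale: a peak of 1.0 is 0 dBFS
    return 20 * math.log10(peak)

print(round(to_dbfs(0.5), 1))   # -6.0  (the 50% recommendation)
print(round(to_dbfs(0.25), 1))  # -12.0 (recording lower = more headroom)
```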
My general question for this forum is basically, where to start? In what order should I attack problems, and what other general advice do people have? I’m especially in the dark in terms of frequency mixing.
Mixing is mostly adjusting levels. I’m not sure what you mean by “frequency mixing”. In a professional recording studio the mixing engineer will usually do some editing and add some effects.
But mixing classical or choral recordings will mostly be level adjustments with very few “artificial” effects. With a “normal” choir recording you typically use only 2 or 3 mics, plus maybe a solo mic, with most of the mixing done naturally/acoustically in the air. It’s not much different from running a mixer (“sound board”) live at your church. Digital or electronic mixing is the same as mixing in the air, except you have control of every mic/track, and there may be only one singer on each track. This kind of music is often recorded in a music hall with natural reverb.
With rock or “popular” music there are 3 very common effects:
- Equalization: mostly minor “corrective” tweaks to correct for microphone variations, etc. The sound guy at your church is probably doing the same thing.
- Reverb: also common, since most recordings are made in “dead sounding” studios.
- Dynamic compression (and limiting): used to bring up the “loudness” or “intensity”. The tracks are often compressed separately before mixing (compressing the vocals can “bring them out” in the mix), and more compression is often applied after mixing, often as a separate mastering step.
“Frequency mixing” is maybe the wrong term. I don’t know the terminology very well yet. I meant that maybe I’d have to adjust the frequencies to get a brighter sound, such as with the Graphic EQ effect.
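(For the curious: “brightening” with EQ boils down to boosting the high band. Below is a Python sketch of a high-shelf biquad filter using the standard Audio EQ Cookbook coefficient formulas. This only illustrates the math — it is not what Audacity’s Graphic EQ actually runs, and the sample rate, frequency, and gain values are arbitrary.)

```python
import numpy as np

def high_shelf(fs, f0, gain_db, S=1.0):
    """Biquad high-shelf coefficients (Audio EQ Cookbook).
    Boosts everything above f0 by roughly gain_db."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw = np.cos(w0)
    b = np.array([
        A * ((A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
        -2 * A * ((A - 1) + (A + 1) * cosw),
        A * ((A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha),
    ])
    a = np.array([
        (A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
        2 * ((A - 1) - (A + 1) * cosw),
        (A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha,
    ])
    return b / a[0], a / a[0]

# A +6 dB shelf at 4 kHz, 44.1 kHz sample rate:
b, a = high_shelf(fs=44100, f0=4000, gain_db=6)
dc = sum(b) / sum(a)                               # response at 0 Hz (z = 1)
nyq = (b[0] - b[1] + b[2]) / (a[0] - a[1] + a[2])  # response at Nyquist (z = -1)
print(20 * np.log10(dc), 20 * np.log10(nyq))       # ~0 dB low end, ~6 dB top end
```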
But I think I did pretty well with just Reverb for the solo lead-in just now. We recorded in a small room in our house with pretty dead acoustics, but by duplicating the mono track to make a stereo track, then applying Reverb to the stereo track, I think it came out okay. Certainly much better than the raw track. The Reverb factory presets were very helpful, since all those parameters were rather overwhelming. We ended up going with the Vocal I preset, with the Room Size cranked down from the initial 70% to 50%.
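(In case it demystifies those Reverb parameters a bit: reverb is essentially a wash of decaying echoes. A single feedback comb filter — one building block of classic Schroeder-style digital reverbs, which combine many comb and allpass stages — shows the core idea. A toy Python sketch, with made-up values:)

```python
import numpy as np

def comb(dry, delay_samples, feedback=0.5):
    # Feed a delayed, attenuated copy of the output back in:
    # every echo spawns another, quieter echo.
    out = np.copy(dry).astype(float)
    for n in range(delay_samples, len(out)):
        out[n] += feedback * out[n - delay_samples]
    return out

impulse = np.zeros(10)
impulse[0] = 1.0          # a single "clap"
print(comb(impulse, 3))   # echoes at n = 3, 6, 9, each half as loud
```

A bigger delay behaves like a bigger room, and a higher feedback like more reverberant walls, which is roughly the intuition behind the Room Size and decay controls.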