Mixing Tutorial - General Methods

It’s hard to find a concise tutorial on the web about mixing music. So let me list some basic steps here and perhaps those of you who REALLY know what you’re doing can elaborate on or correct this list and we can all learn how to produce decent mixes.

For simplicity, let’s assume we’re making actual recordings and that everything is mono.

  1. Record clean tracks at as high a level as possible without peaking (and never triggering the limiter).
  2. In each track, if there are a few unusually loud peaks that are higher than the overall level of the track, use the Amplify effect to drop them down closer to the rest of the track.
  3. In each track, apply the Compressor effect.
  4. Now we want to adjust the levels to suit our ear. In each track, use the Gain slider, the Amplify effect, or the Envelope tool wherever appropriate to move individual tracks forward or back in the mix. Equalization can also help with this, but beginners should stick with simple levels. This may take trial and error, and burning test copies to listen to on headphones, a stereo, or in the car, until you arrive at the best mix.
  5. Export to a high-quality WAV or OGG file.
  6. Open the high-quality mix and apply Normalization.
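
For concreteness, the Normalize step in 6 just scales the whole file so its loudest sample lands at a chosen peak. Audacity’s Normalize effect does this for you; here is a minimal sketch of the same idea in Python, assuming a 16-bit mono WAV and a hypothetical file name:

```python
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("final_mix.wav")      # hypothetical file name
x = data.astype(np.float64) / 32768.0           # 16-bit ints -> -1.0..1.0

peak = np.max(np.abs(x))
target = 10 ** (-1.0 / 20.0)                    # -1 dBFS target peak
y = np.clip(x * (target / peak), -1.0, 1.0)     # one gain applied to everything

wavfile.write("final_mix_normalized.wav", rate, (y * 32767).astype(np.int16))
```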

How would you do it? What did I leave out? What should I have left out? Which steps are in the wrong order?
Please be as general as possible, so us beginners can learn a baseline. There will be exceptions to every rule.
D

The reason you can’t find any concise tutorials is because there are no rules. Every engineer will mix differently, it’s up to you to do what you think sounds best. Mixing is 75% art, 25% science. You can’t really teach the art of mixing by reading something. Experimenting or mentoring are the best ways to learn the art.

For simplicity, let’s assume we’re making actual recordings and that everything is mono.

  1. Record clean tracks at as high a level as possible without peaking (and never triggering the limiter).

Good advice, though if you’re recording at 24- or 32-bit, it’s not really that necessary to record as loud as you can. It’s still good practice though, especially if you have lousy equipment.

  2. In each track, if there are a few unusually loud peaks that are higher than the overall level of the track, use the Amplify effect to drop them down closer to the rest of the track.

I disagree with this. Certainly some people will do this with vocals, but I would never do this with regular instruments. It kills the playing dynamics. Also, it’s better to use the Fast Lookahead Limiter to do this (available in the LADSPA plugins pack here). That way you avoid any clicking from disjointed waveforms.
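
To make the “lookahead” part concrete, here’s a toy sketch of the idea (illustrative only, not the LADSPA plugin’s actual algorithm, and the threshold and ramp rates are invented): the gain is computed from samples a little ahead of the output, so the reduction can ease in before a peak arrives instead of jumping at it.

```python
import numpy as np

def lookahead_limit(x, threshold=0.7, lookahead=64):
    """x: float samples in -1..1; returns a gain-limited copy."""
    y = np.empty_like(x)
    gain = 1.0
    for n in range(len(x)):
        ahead = np.max(np.abs(x[n:n + lookahead])) + 1e-12   # loudest upcoming sample
        needed = min(1.0, threshold / ahead)                 # gain that just avoids overshoot
        rate = 0.05 if needed < gain else 0.001              # ramp down fast, recover slowly
        gain += (needed - gain) * rate
        y[n] = np.clip(x[n] * gain, -threshold, threshold)   # safety clip for the sketch
    return y
```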

  3. In each track, apply the Compressor effect.

Ahhhhhhh!! I strongly disagree with this.

Compression should be used when it’s the effect you desire. It often ends up on vocals, bass, and drums, but it’s never necessary. I often use very light compression on vocals (and every once in a while as an effect on the drums), but that’s about it (I’m in the minority here, most music sounds highly over-compressed to me).
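
For anyone unsure what the effect actually does: above a threshold, a compressor scales level increases down by a fixed ratio. A sketch of that static curve (the threshold and ratio here are illustrative values, not Audacity’s Compressor defaults):

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=3.0):
    """Output level (dB) for a given input level (dB)."""
    if level_db <= threshold_db:
        return level_db                          # below threshold: unchanged
    return threshold_db + (level_db - threshold_db) / ratio

for lvl in (-30.0, -20.0, -10.0, 0.0):
    print(f"{lvl:6.1f} dB in -> {compressor_gain_db(lvl):6.1f} dB out")
# -30 -> -30.0, -20 -> -20.0, -10 -> -16.7, 0 -> -13.3
```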

  4. Now we want to adjust the levels to suit our ear. In each track, use the Gain slider, the Amplify effect, or the Envelope tool wherever appropriate to move individual tracks forward or back in the mix. Equalization can also help with this, but beginners should stick with simple levels. This may take trial and error, and burning test copies to listen to on headphones, a stereo, or in the car, until you arrive at the best mix.

Personally I hardly ever use the Amplify plugin. It’s better to use the Gain sliders or Envelope Tool in almost every situation. And you’re right about EQ, it takes quite a bit of practice to use it effectively (it would be much easier if Audacity had a real-time EQ, but that’s not the case at the moment).

But the most important piece of equipment in any studio is the Monitor Speakers. If you don’t have a good set of near-field monitors, then you’ll have to work 10 times harder to get a mix that sounds good on a number of different setups.

  5. Export to a high-quality WAV or OGG file.

Why OGG? It’s a lossy format.

  6. Open the high-quality mix and apply Normalization.

Careful here. Where are you putting this file? Is it going onto an album with a bunch of other tunes? If so, I prefer to load them all into a new Audacity project and mix all the songs down into one long track. Then I amplify the whole thing as much as possible and apply the Fast Lookahead Limiter to the whole thing at a setting of about -7dB. At this point, I use a combination of very careful listening and the envelope tool to bring each song down to about the same level (and to fix any dynamics that the Fast Lookahead Limiter killed).
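
Ears should make the final call, but a quick numeric check can show which songs sit hotter than the rest before you start riding the envelope tool. A rough sketch (file names are hypothetical, 16-bit WAVs assumed):

```python
import numpy as np
from scipy.io import wavfile

for name in ("song1.wav", "song2.wav", "song3.wav"):
    rate, data = wavfile.read(name)
    x = data.astype(np.float64) / 32768.0        # 16-bit ints -> -1.0..1.0
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    print(f"{name}: RMS {rms_db:.1f} dBFS")
# Songs several dB above the rest are candidates for pulling down.
```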

How would you do it? What did I leave out? What should I have left out? Which steps are in the wrong order?

The only thing you seem to be leaving out is the artistry involved. Mixing is not something you can get “right.” Everyone is going to have different opinions. The most important thing besides a good set of Near-Field Monitors (I use passive Tannoy 6’s) is to make yourself happy with it.

Thanks, alatham! I don’t at all mind your disagreeing, because I admit I don’t know much and I’m trying to get some guidance. An update after reading alatham’s comments:

I like these steps better because they keep the tracks raw and unprocessed wherever possible!

  1. Record clean tracks at as high a level as possible without peaking (and never triggering the limiter).
  2. In each track, if there are a few unusually loud peaks that are higher than the overall level of the track, use the Fast Lookahead Limiter to drop them down closer to the rest of the track. Question: Use the default settings?
  3. Now we want to adjust the levels to suit our ear. In each track, use the Gain slider or the Envelope tool wherever appropriate to move individual tracks forward or back in the mix. Equalization can also help with this, but beginners should stick with simple levels. This may take trial and error, and burning test copies to listen to on headphones, a stereo, or in the car, until you arrive at the best mix.
  4. Export to a high-quality WAV file.
  5. If you’re putting multiple tracks on a CD, load them all into a new Audacity project and mix them down to one long track so that the volume levels are consistent.

Question: When and where do you usually put reverb (yes, I know it depends)? Should it be applied before step 3 to individual tracks or do you sometimes want to reverb the whole mix?

RE: question about limiting each track:
I avoid doing this as much as possible, but if I have a track that’s just too peaky, I set the threshold setting just slightly above the “average” of the track. That way, only the extra loud peaks get altered.

The reason I don’t recommend going in with the Amplify function and adjusting each peak individually is that it’s likely to introduce discontinuities in the waveform, since the borders of your selections are almost never on a zero crossing. The Fast Lookahead Limiter neatly avoids that by ramping the gain smoothly instead of imposing a sudden step.
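
A tiny sketch of the zero-crossing point: a gain change that begins where the waveform passes through zero leaves (almost) no step behind, which is why hand edits click when the selection border lands mid-swing. Purely illustrative:

```python
import numpy as np

def nearest_zero_crossing(x, i):
    """Index of the sign change in x closest to sample i."""
    crossings = np.where(x[:-1] * x[1:] < 0)[0]
    return crossings[np.argmin(np.abs(crossings - i))]

t = np.arange(441) / 44100.0                     # 10 ms at 44.1 kHz
x = np.sin(2 * np.pi * 440 * t)                  # a 440 Hz test tone
edit = nearest_zero_crossing(x, 100)             # snap the chosen edit point
x[edit:] *= 0.5                                  # gain drop begins at ~zero,
                                                 # so the waveform stays continuous
```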

Re: Reverb:
Reverb is tricky. It’s the easiest effect to over-do.

You asked if I apply reverb to individual tracks, or if I apply it after mix-down. I do both, but not always.

Generally, once I’ve got all the other work done on a song, I’ll make a few decisions about what I want the reverb to do. Reverb is usually my last step before exporting a song.

I should start by saying that my electronic drum set has a nice reverb that I like on the drums, so when I record them they already have reverb applied.

It’s usually a 3-step process:

  1. Make a Quick Mix of all of the tracks except the drums (for the reason above). Then I highlight this new track and Copy it. Click Edit → Undo and Paste the track back into the project. Now I have a copy of all the instruments except the drums that will play along with the rest of the tracks.

  2. I apply an EQ to the track by gradually cutting out all the bass, starting at 300Hz and working my way down. Now I apply GVerb to that track with settings similar to this:

Roomsize:                15 - 25 m²
Reverb time:             4 - 8 s
Damping:                 0.50 (default)
Input bandwidth:         0.75 (default)
Dry signal level:        -70 dB (default)
Early reflection level:  -10 dB
Tail level:              -30 dB

Now, I turn the gain all the way down on the Reverb track, and start playing the song. I turn the gain up on the reverb until it just starts to become noticeable when listening to speakers (reverb is much easier to hear wearing headphones, so make sure you’re using good speakers when doing this). This is usually around -12dB for me.

  3. If I want some individual tracks to stand out, I make a copy of each of those tracks. Usually this is the vocals, keyboards, and guitars. Then I apply that same EQ curve, and then GVerb to each of these new tracks with slightly more noticeable settings:
Roomsize:                same as above
Reverb time:             6 - 15 s (longer than above)
Damping:                 0.50 (default)
Input bandwidth:         0.75 (default)
Dry signal level:        -70 dB (default)
Early reflection level:  -10 dB
Tail level:              -40 dB (usually lower than above)

That reverb is a little more punchy and noticeable than the one above because the early reflections stand out more against the quieter tail and the reverb time is longer, though the difference is subtle. I’ll go ahead and adjust the gain on each track like I did in step 2, though I often go slightly higher (especially with lead guitar; sometimes I like Neil Young’s reverb-drenched leads).

At this point, I play around with the levels of all the reverb tracks until it sounds as good as I can get it.

The reason I cut the bass before applying reverb is that reverbed bass often sounds muddy or overwhelming, and I like to keep the low end nice and clean. My theory on applying reverb is to use it to make the higher frequencies more noticeable, not to make it sound like your head is inside the bass cabinet.
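
For reference, the “cut everything below ~300 Hz before reverb” move can be expressed as a plain high-pass filter. alatham does it with Audacity’s EQ; this sketch does the same thing in Python, where the filter order and the zero-phase filtering are my own illustrative choices:

```python
from scipy.signal import butter, filtfilt

def highpass_300(x, rate):
    """4th-order Butterworth high-pass at 300 Hz, applied zero-phase."""
    b, a = butter(4, 300.0 / (rate / 2.0), btype="highpass")
    return filtfilt(b, a, x)

# The filtered copy, not the original, is what feeds the reverb, e.g.:
# rate, dry = wavfile.read("instruments_mix.wav")   # hypothetical file
# to_reverb = highpass_300(dry.astype(float), rate)
```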

Andy, can you expand on this a bit, please? Does it mean that if you are recording/editing in 32-bit, you can record at a low, safe level well below clipping, and then amplify/normalize the track at the end of the process?

WC

alatham, great tips on cutting the EQ before reverb, and on copying the track, applying reverb, then gradually adding the reverb track to the mix. This is why I started this thread; I would never have thought of that.

The reason it’s best to record as high as possible (without clipping) is the concept of a noise floor. But its importance is often overstated.

First, I need to explain dynamic range. The range of human hearing is about 120 dB for people with excellent hearing; anything below that can’t be distinguished. In a digital signal, every bit you use to record gives you an extra ~6 dB of dynamic range: a 16-bit signal has 96 dB of range, and a 24-bit signal has 144 dB. I think 32-bit float allows for 192 dB of range, but I can’t remember if the different format throws that off.
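
The ~6 dB-per-bit figure falls straight out of the math: each extra bit doubles the number of amplitude steps, and 20·log10(2) ≈ 6.02 dB. Checking the two fixed-point cases (32-bit float is a different animal because of its floating exponent, which is presumably what “throws that off”):

```python
import math

for bits in (16, 24):
    print(f"{bits}-bit: {20 * math.log10(2 ** bits):.0f} dB")
# 16-bit: 96 dB, 24-bit: 144 dB
```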

But all of that is just the digital domain. What happens when we enter the analog world (amplifiers, speakers, and recording)? Well, there are no perfect amplifiers or recorders. They all have a noise floor, and any signal smaller than that noise floor is lost completely. Generally, this noise floor is about -90 dB or so for good quality equipment; for studio equipment it’s often about -105 dB. So most off-the-shelf amplifiers are unable to perfectly reproduce a 16-bit signal, let alone a 24-bit signal. This is why 16 bits was considered good enough for the CD standard. Even now, few amps will benefit from playing a 24-bit signal. All this assumes a very loud signal (peaking at the threshold of pain, about 130 dB), with perfect speakers, in a perfectly quiet room (which can’t actually exist unless you don’t mind suffocating). If we assume less-than-perfect conditions, then 16 bits is good enough for anyone.

So, exporting to 16 bits is good enough. Audiophiles can hem and haw all they want, but they still can’t reliably pick out the 24-bit signal in a proper A/B test.

However, things are a little different for recording. Any time you record something, you’re really recording two signals: the signal you want and the noise floor of either the analog input circuit or the digital format (whichever is higher). If you were to record “silence” (nothing plugged in), then you’d really just be recording the noise floor (this is actually how you measure the noise floor of your equipment).

Now, when you amplify a signal, you’re amplifying everything in it, including the noise floor. This is why it’s technically best to record as loud as you can, to keep the noise floor from coming up into the audible territory. For every dB you are below full volume, the noise floor will be raised that much if you amplify the signal all the way (I’ll explain why this isn’t a big deal later).

If you have good equipment, the noise floor of an input circuit might be way down in the -105 dB range. As long as you’re recording at 24 or 32 bits, that’s where the noise floor will be (if you record at 16-bit, then the noise floor is stuck at -96 dB no matter what; that’s why I record at 24 bits). This is why you should use good quality equipment: it gives you a noise floor low enough to leave some headroom when recording. Note that none of this applies if you have an audibly noisy input; in that case you need to record as loud as you can.

Now let’s take an “average” multi-track project recorded with good equipment. I’ll assume 8 tracks, each recorded to peak at -10 dB, with no gain change. I’ll also assume the noise floor of the sound card is -100 dB (this is what my Delta 1010LT is rated at).

Where will the noise floor of the final track be? It won’t be much higher than -90dB (if it even reaches there). Adding noise signals is tricky when working in dB, since they’re random (so someone correct me if my math is wrong).

A noise floor of -90 dB is not perfect, but it’s damn good. There is one caveat with that example, though: none of those tracks had any gain adjustments. In a real-world mix, some of those tracks would be turned down a bit, lowering the noise floor even further. This example can be played on most equipment without the noise floor being audible (even at a very high volume), even though we recorded each track at -10 dB.
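
For anyone who wants to check the arithmetic: uncorrelated noise floors add on a power basis, not a voltage basis, so eight tracks at -100 dB each land around -91 dB.

```python
import math

per_track_db = -100.0
tracks = 8
total = tracks * 10 ** (per_track_db / 10)          # sum the noise powers
print(f"summed noise floor: {10 * math.log10(total):.1f} dB")
# ≈ -91.0 dB, so "not much higher than -90 dB" checks out
```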

So that’s what I mean, if you have excellent equipment, don’t worry too much about recording as loud as you can. Chances are it won’t be audible in the end anyway.

Interesting info! I think the noise floor on my old Teac PortaStudio must have been about -2 dB! (In other words, enough noise to make it a piece of junk!)

I am very happy with my M-Audio Fast Track Pro. I can lay track after track and there’s none of that hiss you get with tape. I read a review (http://acousticsoftwarereview.com/HardwareReviews/M-Audio_FastrackPro_Review.htm) that tested the Fast Track Pro and found a noise floor of -100 dB at 20kHz and even lower at 20Hz.

Try listening to the recorded noise floor on Electric Ladyland (vinyl) - where Jimi has the amp turned up to 12 not 11 …

WC

Or the noise floor on “The Diamond Sea” from Washing Machine by Sonic Youth during the first second or so.

One of the effects on the guitars can be heard right away before they even start playing. But there’s also a very clear hissing static until the band actually starts playing (and then the music makes my head explode and I can no longer grasp concepts like “noise floor”, “volume level”, and “time”).

Have I ever mentioned I like Sonic Youth?

Based on this conversation, I’ve mixed a couple of projects in the last few days and they didn’t sound awful. One was a loud punk-type number and the other was a quieter acoustic number. I applied this general method:

  1. Record each track at adequate levels that, most importantly, never peak. This is my RAW Audacity project and I give it a name like “My Song RAW.aup”. If I mess up any track during mixing, I can always come back here and get it. If I add another track later, I’ll add it to the RAW project first.
  2. Save the RAW project under another name such as “My Song MIX 01.aup”. Now I’ll work in this MIX Audacity project.
  3. If there are some instruments to which you know you’ll want to apply Equalization, do this now. For example, my harmonica sounds a little shrill unless I apply the acoustic EQ preset and also pull it down around the 3kHz band.
  4. Adjust the levels on each track to suit your ear. Start with all the Gain Controls in the middle and adjust with either the Gain Control or the Envelope Tool. If you have to raise any tracks to a level that will possibly peak, raise the level using the Fast Lookahead Limiter.
  5. Save the MIX project.
  6. Now I will export 2 or 3 high-quality WAV files of the major components of my MIX that I’ll import later into a MASTER project. Generally, I’ll export a file with the mix of everything EXCEPT vocals and another file of the vocals only. If there’s another instrument such as lead guitar that I want to stand out, I’ll export it to a file as well (removing that instrument from the mix).
  7. Create a new Audacity project with a name like “My Song MASTER 01.aup”. This is my mastering project. Import each WAV file you’ve exported in step 6.
  8. Duplicate each track. These duplicates will be the tracks to which I apply reverb.
  9. Apply equalization to each duplicate track to gradually remove the bands below 300Hz.
  10. Apply reverb to each duplicate track. I use the ANWIDA Soft DX Reverb Light with the default settings.
  11. Move the Gain on all tracks except the mix track to zero. Play the song and gradually move the duplicate/reverb mix track’s gain up until you can just hear the reverb.
  12. Move the non-reverb vocal track’s gain to the middle. Play the song and gradually move the vocal duplicate/reverb track’s gain up until you can just hear the reverb. If you want the vocals to stand out a little more, bring the duplicate/reverb track up a little more.
  13. Once you’re satisfied, export to a WAV file. This is your master.
  14. If you’re making a multi-song CD, import your masters into one long Audacity project. Amplify the whole thing as much as possible and apply the Fast Lookahead Limiter to the whole thing at a Limit setting of about -7dB. Listen carefully and make any fine adjustments needed.

Dave

Sounds pretty similar to what I do, though I work much more haphazardly (because I’m crazy like that).

At this point I can’t really say anything else unless I get a chance to listen. Any chance you could post those tracks somewhere (like MySpace)?

I have to do some legal stuff to release my full-band recordings, but my solo stuff is at Stationary Dave.
The first song on the list is a live recording, but the others are new at-home Audacity projects. None have been normalized and one (I’ll let you guess which) has one track that’s way too hot.
Dave

It sounds pretty good, but I do have some criticisms.

The stereo balance is right-heavy due to the guitar being panned that way (and it being the main instrument). I would pan the guitar about 10% right and the vocals about 10% left to even things out. I’m sure many would balk at putting the vocals off-center, but I don’t care.

Since you have so few instruments, you might think about stereo miking that guitar (if you have two mics).

“You Can Turn Around” seems to have a guitar track that is a few dB louder than the others; is that the one you mean? I think the harmonica could come down a tiny bit too.

Other than that, it all sounds pretty clean and decently recorded. My tastes in production run pretty far from your style (I like lots of studio tricks), so I don’t know how much of my criticisms will help, but they’re there all the same.

I don’t know if I linked you to my MySpace page or not, so I’ll do it here:
http://www.myspace.com/andylatham

George Martin didn’t care either when he was recording the Beatles in early stereo - and he’s one of the best …

WC

Sweet George Martin backs me up (or maybe I back him up).

But putting the vocals off-center certainly seems to be an unwritten faux pas for some reason.

This is my first post of many, I’m sure. I’ve been reading through the boards and finding a lot of very helpful info, so thanks to all of you w/expertise and experience who are helping out the people (like me) who are scaling a steep learning curve.

I looked into the ANWIDA free plug-in mentioned in this thread and downloaded/installed it in the plug-ins folder of Audacity. It appears in the correct folder when I look in Windows Explorer, but even after restarting the computer it’s not showing up in my Effects dropdown when I have my project open in Audacity. I’m wondering if someone could help me troubleshoot why the effect doesn’t seem to be available.

It should be noted that I’m running Audacity 1.3 on Windows XP, in case that makes a difference.

Thanks again!

LilSpider,

That’s a VST plugin, you need to install the VST plugin enabler available here:
http://audacityteam.org/download/plugins

It won’t look the same (due to licensing restrictions, Audacity can’t use the VST graphical display code), but hopefully it will function. VST support is buggy at best, but simple things like reverb tend to work.

Got it. Thanks!