Learning beginner mixing/equalization?

Hi all! I want to start a YT channel to record some covers I am doing, but I also want to mix them so they don’t sound awful. Trouble is I am a much better singer than audio engineer. I’ve Googled things but man is it a complex field…is there like a 101 that I can learn from maybe?

I’m fairly confident my voice is ok, using a Steinberg Cubase as a preamp to record. Just would like to tweak it a little.

What kind of mixing are you doing? How many instruments/vocals/channels are you recording simultaneously and how many channels are you mixing?

The biggest challenge with “home recording” is getting a good, quiet recording environment to approximate a soundproof and sound-absorbing studio.

It can be a “big topic,” and I’m pretty sure you can get a book on mixing… A book might be better (more organized) than randomly looking around on the Internet.

using a Steinberg Cubase as a preamp to record. Just would like to tweak it a little.

Cubase is software, and it’s actually “better” for multi-track mixing, but it’s a LOT more complicated than Audacity.

I assume you have a Steinberg USB audio interface? Do you have a good microphone?

Equalization is mostly a corrective effect, i.e. if you’ve got too much bass or the highs are too strong, you can EQ it. The exception is that the deep bass is usually filtered out from almost everything except bass guitar and the kick drum, because with most sources the only deep-bass sounds are noise. Then of course, it’s just human nature to tweak everything!
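To make that deep-bass filtering concrete, here’s a rough Python/numpy sketch of a one-pole high-pass filter (the 80 Hz cutoff and test tones are just invented example values, not a recommendation):

```python
import numpy as np

def highpass(x, cutoff_hz, sample_rate):
    """Simple one-pole high-pass filter: attenuates content below cutoff_hz."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    a = rc / (rc + dt)  # coefficient closer to 1 = lower cutoff
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

sr = 44100
t = np.arange(sr) / sr
low = highpass(np.sin(2 * np.pi * 20 * t), 80, sr)    # 20 Hz rumble: mostly removed
high = highpass(np.sin(2 * np.pi * 1000 * t), 80, sr) # 1 kHz tone: passes almost unchanged
```

A real EQ plugin uses steeper, better-behaved filters, but the idea is the same: everything well below the cutoff gets thrown away.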

I like to start with the philosophy that a good recording doesn’t need ANY effects or processing, but in many cases that’s an unrealistic ideal and the only professional recordings that are made that way are classical music and maybe some other “acoustic” music. And in the real world, even pro classical recordings are processed.

Then of course there can be special effects (unnatural effects) such as echo, or other effects used “creatively”.

Mixing is mostly done by ear. For example, if you’ve got a good guitar track and a good vocal track you simply adjust the relative volumes 'till it sounds good and you’re done!

When sound guys mix live they mostly just set the mixer levels to get a good balance of the sounds and then they let the band “play together”. (In small venues a lot of mixing happens acoustically. Even if there is a mixing board, a lot of the drum & electric guitar sounds, etc. comes directly off the stage. Often drums & amplified guitar don’t go through the mixer & PA system.)

With modern studio recordings it gets more involved and the engineer uses [u]automation[/u] to adjust the levels of all the tracks throughout the recording (as needed). I’m sure Cubase has automation and Audacity has something similar called the Envelope Tool.
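An automation envelope is basically a time-varying gain curve multiplied onto the track. Here’s a minimal numpy sketch (the breakpoint times and gain values are invented purely for illustration):

```python
import numpy as np

def apply_envelope(track, sample_rate, times, gains):
    """Multiply a track by a gain envelope defined at breakpoints.
    times: breakpoint positions in seconds; gains: gain at each breakpoint.
    Between breakpoints the gain is interpolated linearly."""
    t = np.arange(len(track)) / sample_rate
    envelope = np.interp(t, times, gains)
    return track * envelope

# e.g. duck a track to half volume between 1 s and 2 s:
sr = 44100
track = np.ones(3 * sr)  # dummy constant signal standing in for audio
out = apply_envelope(track, sr, times=[0.0, 1.0, 2.0, 3.0],
                     gains=[1.0, 0.5, 0.5, 1.0])
```

Audacity’s Envelope Tool and Cubase’s volume automation both boil down to drawing that gain curve by hand.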

And if you have enough tracks, the instruments are panned across the stereo “soundstage” with lead vocals & bass toward the center and the other instruments “located” left to right across the stage. (Bass is “hard” to reproduce, so it goes to the center and you’re taking full advantage of both woofers.)

If you just have a guitar & vocal, mono (or both tracks centered) is usually the most natural. Or if you want stereo you can double-track the guitar (record it twice) and pan one track to the left and the other to the right. (For natural-sounding double-tracking it’s important to make two separate performances/recordings.)
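One common recipe for that kind of panning is a constant-power pan law: the left/right gains follow a quarter circle so perceived loudness stays roughly even wherever the track sits. A sketch (pan convention here is just one common choice):

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Place a mono track in the stereo field.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left, right). The gains satisfy L^2 + R^2 = 1, so total
    power is the same at every pan position."""
    angle = (pan + 1.0) * np.pi / 4.0   # maps pan to 0 .. pi/2
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return left, right

mono = np.ones(4)                          # dummy mono signal
left, right = constant_power_pan(mono, 0.0)  # dead center: ~0.707 each side
```

At center each channel gets about -3 dB, which is why a centered track doesn’t jump out louder than a panned one.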

It’s also common to use some compression & limiting to even-up the volume and to bring-up the overall loudness (as necessary) without [u]clipping[/u].
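To make “evening up the volume without clipping” concrete, here’s a toy static compression curve (the threshold and ratio are arbitrary example numbers; real compressors also add attack/release smoothing rather than acting per-sample):

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Static compression curve: samples below the threshold pass
    unchanged; above it, the overshoot is divided by the ratio."""
    mag = np.abs(x)
    over = mag > threshold
    out = mag.copy()
    out[over] = threshold + (mag[over] - threshold) / ratio
    return np.sign(x) * out

x = np.array([0.2, 0.9, -0.9])
y = compress(x)   # quiet sample untouched; loud peaks pulled down
```

After compression the peaks sit lower, so you can raise the overall level (makeup gain) and the quiet parts come up with it.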

The last of the most common effects is reverb, to simulate the sound of a “good room” (when the recording is made in a small room or a sound-absorbing studio).
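Real reverb algorithms get elaborate, but the basic building block is a feedback delay: copies of the signal fed back at ever-lower volume. A toy echo (the delay time and feedback amount are arbitrary examples, nowhere near a full reverb):

```python
import numpy as np

def echo(x, sample_rate, delay_s=0.25, feedback=0.4):
    """Single feedback delay line: each repeat arrives delay_s later
    at `feedback` times the previous level."""
    d = int(delay_s * sample_rate)
    y = np.array(x, dtype=float)
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]
    return y

x = np.zeros(44100)
x[0] = 1.0            # an impulse, like a single clap
y = echo(x, 44100)    # repeats at 0.25 s, 0.5 s, ... fading by 0.4x each time
```

A reverb plugin runs many of these in parallel with different delay times, which smears the echoes into a wash instead of distinct repeats.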

Note that mixing is done by summation. Hardware mixers are built around a summing amplifier. That means you normally have to reduce the levels to prevent clipping. Hardware mixers and DAWs (such as Cubase) have level controls for each track plus a master level control. Audacity doesn’t have a master-mix level control. With Audacity you can reduce the levels before mixing and/or export as floating-point (which will not clip), then re-open the floating-point mix, normalize (to bring down the level) and then export again in your final desired format.
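That sum-then-scale step is easy to show in numpy (the track contents and gains are invented; the point is that a floating-point sum can exceed full scale without damage, and normalizing afterwards brings the peak back down):

```python
import numpy as np

def mix_and_normalize(tracks, gains, peak=0.9):
    """Sum tracks (after per-track gains), then scale the mix so its
    loudest sample sits at `peak` instead of clipping."""
    mix = sum(g * t for g, t in zip(gains, tracks))  # float: may exceed 1.0
    loudest = np.max(np.abs(mix))
    if loudest > peak:
        mix = mix * (peak / loudest)                 # normalize down
    return mix

# two full-scale "tracks" whose raw sum would clip a fixed-point file:
t = np.linspace(0, 1, 1000)
tracks = [np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 3 * t)]
mix = mix_and_normalize(tracks, gains=[1.0, 1.0])
```

This is exactly the float-export-then-normalize workaround described above, just done in one pass.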

Automated equalization is possible. The cheapest plug-in I’ve seen which can do that is Hornet31 (~$20).
It will automatically correct any resonance on individual tracks, and can even out the frequency content of the final mix.

Thanks so much for the info! It’s honestly more than I am able to absorb at once. Maybe I should just focus on recording first?

As for the plugin, how does it work exactly? I just run it and it does all the work by itself?

Install & apply the plugin like any other VST effect in Audacity.
It will automatically remove any frequency-bias, (resonance), in the recording from the microphone / instrument / room / voice … HoRNet Plugins - #2 by Trebor
You can also use it on the final mix in continuous-mode to dynamically change the equalization with time.

You can try the Hornet31 plugin before you buy, the free demo version of the plugin will insert spoiler silence every 30 seconds.

[NB: Currently only 32-bit plug-ins are compatible with Audacity on Windows, even if you have a 64-bit computer].

I still can’t seem to get it to work? I checked in the menu and it doesn’t display there. I followed the instructions in the manual and on YT but no dice.

Sorry to be such a doofus but this is all totally new to me.

In Audacity you have to enable plug-ins before they appear in the effect menu, see …
https://manual.audacityteam.org/man/manage_effects_generators_and_analyzers.html

It doesn’t show up in the menu. Is it called Autoequalizer or something?

“HoRNet Thirty One” …

Still doesn’t show up. Am I putting the files in the wrong folder or something? It’s supposed to go into VST or VSTplugins right?

As can be seen in my previous post, I put plug-ins in the Audacity plug-ins folder, for me that’s here …


https://manual.audacityteam.org/man/installing_effect_generator_and_analyzer_plug_ins_on_windows.html


I wouldn’t get too hung-up or bogged-down with any particular effect/plug-in. “Automatic EQ” is unlikely to give the best results, and it could end-up making the sound worse!

FYI - There is also “matching EQ” that tries to match the frequency balance to a known-good recording of your choice. Izotope Ozone has matching EQ (not free and not compatible with Audacity). But again, it’s NOT something to be used blindly (without careful listening).

(without careful listening)

And that brings us to: How are you listening? Good quality speakers or headphones are handy. The object is to catch errors before anybody else can hear them.

Hollywood loves the Sony MDR7506 headphones. They’re not that good for settling in to watch a movie, but they will show you errors in your show.

This is NPR-West David Greene with his.

Good speakers are harder. I have an oddball collection of models I like to listen to, but I don’t think they’re made any more. I have used the KRK Rokit series; I have a pair of Rokit 5s. If I had to do it again, I would have bought the larger versions.

They’re handy because they have the amplifier and equalizer—all the peripherals—built-in. Just connect an analog cable to your system, plug them in and go. I’ve seen these in use for other performances and systems. I’m pretty sure that’s them in this Pomplamoose production.

It’s a little concerning this didn’t come up in your postings or questions. Mixing and equalization make your production sound like something. There is a goal. You have to be able to hear that something. It’s not push the equalize button and go home.

There are audiobook readers who post on the forum with a long list of corrections and changes they made to their performance.

“What did it sound like before you changed it?”


You should be able to answer that.


I have been reading up a bit on EQ and it’s a deep field! I can adjust my mix a bit but it’s highly subjective as to what sounds good?