Musings and questions on cross channel bleed removal

While I’ve been into music (both playing and listening) and audio (both high-end consumer and pro) for quite some time… I’m fairly new to mastering/processing/mixing.

At this time, I’m in the middle of starting up a band. We’re still finding/replacing personnel, developing a set list, etc.

I’ve been recording our practices using an M-Audio card with 8 analog inputs, and doing a quick and dirty mixdown so that we can hear what we sound like.

Unlike mastering… I’m not trying to improve the sound of the music or mask any flaws… I want us to hear every flaw.

However… at some point in time I’m going to want to mix a demo CD… and I’ve become fascinated with the mastering process.

Not having near-field monitors (yet)… and because my fastest, least-burdened computer is the one I just built in my music room to record with… I was doing my quick and dirty mix listening through my 15" PA speakers. (lots of bass)

I was playing with the gain of my bass track, and realized that even when turned down all the way, I could still hear the bass really well… bleeding through the drum mics (two medium-diaphragm overhead condensers and a dynamic on the kick drum, mixed down to stereo before the computer) and the 4 vocal mics.

While I was able to turn the bass track’s gain down to a proper mix level… I realized that stereo placement in the mix would be very muddy, with significant bass coming through all these other tracks.

I found a portion of the project where almost everyone was taking a brief break, but I was playing a bass riff… then put my bass down, and it started feeding back before I hit the off switch on my amp.

I could see the bass signal on various other tracks… most prominently on one of the drum channels. (I later found that the drummer had inadvertently moved the overheads, so that the front of the mic was facing forward rather than down.)

I made duplicates of this track and the bass track, and compared the amplitude of the portion I had identified above.

Using the Amplify effect, I reduced the gain of the duplicate bass track so that it matched that of the drum channel. I then inverted the bass track, and listened to just those two tracks.
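That gain-match-and-invert step can be sketched in numpy (a toy stand-in for the Amplify and Invert effects; the `make_cancel_track` helper and the signals here are mine, not anything from the actual session):

```python
import numpy as np

def make_cancel_track(bass, bleed):
    """Scale the clean bass (DI) track to match the overall RMS level of
    the bleed on the drum mic, then flip its polarity ("invert")."""
    gain = np.sqrt(np.mean(bleed ** 2)) / np.sqrt(np.mean(bass ** 2))
    return -gain * bass

# toy example: pretend the bleed is just the bass at about -12 dB
bass = np.sin(2 * np.pi * 60 * np.arange(44100) / 44100)
bleed = 0.25 * bass
cancel = make_cancel_track(bass, bleed)
print(np.max(np.abs(bleed + cancel)))  # ~0: the bleed cancels completely
```

In a real recording the bleed also passes through the room and the mic’s frequency response, so the match will never be this exact.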

It didn’t sound much better. I zoomed in on the most abrupt portion of this selection, i.e. where I turned my amp off, stopping the feedback.

I realized that they were slightly out of sync. The bass track source was a DI off my amp, and the sound took a while to travel the approximately 15 feet to the drum mics. I slid the inverted bass track to the right and tried to visually match the two tracks, trying to make sure the peak of one matched the trough of the other (BOY – GRIDLINES SURE WOULD BE NICE!!!).
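Instead of sliding and eyeballing, the offset can be estimated with a cross-correlation; a sketch, assuming 44.1 kHz material (the `find_bleed_delay` helper and the toy signals are made up for illustration):

```python
import numpy as np

def find_bleed_delay(bass, mic, sr=44100):
    """Estimate how many samples later the bass arrives at the drum mic
    by locating the peak of the cross-correlation (instead of sliding
    the track by eye)."""
    corr = np.correlate(mic, bass, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(bass) - 1)
    return lag, 1000.0 * lag / sr  # (samples, milliseconds)

# toy check on a 0.2 s snippet: delay a signal by 593 samples (~13.4 ms)
sr = 44100
bass = np.random.default_rng(0).standard_normal(int(0.2 * sr))
mic = np.concatenate([np.zeros(593), 0.2 * bass])[: len(bass)]
lag, ms = find_bleed_delay(bass, mic, sr)
print(lag, round(ms, 1))  # 593 samples, ~13.4 ms
```

The recovered lag is the amount to slide the canceling track before inverting and mixing.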

After several tries… Eureka! Listening to both tracks together resulted in very little sound at all!

At this point I abandoned the experiment. The recording was nothing special.

So… is this technique or something like it common in cleaning-up/mastering live recordings? Are there any problems with it?

Positives? Negatives? Alternatives?

Comments please!

Thanks

I did a Google search (the best calculator on the internet) for:
“15 feet / speed of sound”.

I got this back:

(15 feet) / speed of sound at sea level = 13.435599 milliseconds
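As a sanity check, the same arithmetic in Python (340.29 m/s is the sea-level figure Google uses; warm room air is closer to 343 m/s, and 44.1 kHz is an assumption about the project’s sample rate):

```python
FEET_TO_M = 0.3048
c = 340.29           # m/s, speed of sound at sea level (standard atmosphere)
d = 15 * FEET_TO_M   # 4.572 m

delay_ms = 1000 * d / c
print(round(delay_ms, 2), "ms")                         # ~13.44 ms
print(round(delay_ms / 1000 * 44100), "samples at 44.1 kHz")  # ~593 samples
```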

Is that about how far off the two tracks were?

Your method probably won’t work well in the long run. If you do this for all the tracks, you’ll be able to remove specific frequencies, but other frequencies will add together (not all frequencies travel equally well through the atmosphere). You’ll also be shifting tracks around too much and throwing off the timing of the music. Even if that’s not a big deal, you’ll have different delays between each mic for each sound source, so it will be impossible to cancel out all the bleed-through in a project with more than 2 mics or 2 sources. Track bleed is a big problem in a live studio environment. There’s a reason all the mics are heavily EQ’d during live performance; part of it is to reduce bleed-through (especially of the bass frequencies).

I have a book at home called “Recording the Beatles.” It’s an incredible reference and I highly recommend it for anyone interested in recording, even if you don’t like the Beatles (assuming people like that exist). They go over lots of little problems like this. When the Beatles initially began recording, they did everything live with no overdubs. The studio wasn’t worried much about bleed-through because all the mics were very directional and there wasn’t much bass in the final master (practically none, if you ask me). But as fidelity got higher, they had to come up with new techniques: using isolating panels around the drum set, recording all the vocals and guitar parts as overdubs, using a DI on the bass track mixed with an amped signal, and heavily EQ’ing Paul’s bass so it didn’t interfere with the bass drum. In the end, they would record the bass and drums to a single track without any other instruments and then add everything on top of that track. So they were really only dealing with 2 instruments at once.

To me, the easiest way to record music with many instruments is to have the whole band make a “scratch” track and then overdub each part on top of that individually. That way you can re-use your favorite mics and you avoid all possible bleed-through. I’m sure it’s possible to record several instruments at once, but you’ll need some very good acoustic isolation, band members who can keep time without looking at each other, and plenty of headphone channels to run monitor mixes all over the studio.

Also, this process is still called Mixing. Mastering is only ever done to a final mix once all the instruments are in place and everything is sequenced as it should be.

Yes… I didn’t write it down… but that’s somewhere in the ballpark.

Hmm… I’m not shifting any of the “real” tracks time-wise, only what I’ll call my “bass-canceling track”: it’s processed to match the approximate amplitude of the “real” track and synced to it, then inverted and combined with that track to create my “de-bassed real track”.

Different frequencies don’t travel at different speeds of sound. I can see atmospheric (and other) factors acting as a natural equalizer, which would mean that at different frequencies the relative amplitude of my “bass-canceling track” wouldn’t always match the amplitude of the bass signal in the drum mic (and the bass in the drum mic is also subject to the frequency response of the mic)… but still, an inverted bass signal of approximately the same amplitude, once synced time-wise with the drum track, should always match peak for trough and trough for peak.

If at some frequencies the amplitude doesn’t quite match up due to atmospheric and other reasons… shouldn’t the resulting “de-bassed” drum track still have significantly less bass, while having minimal effect on the drum sounds the mic is supposed to record?

Not that I plan to, or consider it a good idea… But couldn’t I theoretically repeat this process with a separate “bass-canceling track,” processed to match amplitudes and synced with any significantly affected mic, and “de-bass” each track separately?

Excellent… I’ve been wondering where to roll off the vocal tracks on the low end without affecting the richness of the vocals. Suggestions?

Damn Heathens!

As the primary culprits are the overhead drum mics (we’re not playing in a “stage” setup, and the dynamic vocal mics, mostly Shure SM58s or similar, aren’t facing the instrument amps), this could work. However, that’s why I was mixing my three drum mics down to a stereo signal rather than a mono source: kick dead center, and the two overhead condensers (approximately 5 feet apart, 4 feet above the cymbals, equidistant from the snare) panned far left and right, to give the kit a little more presence in the mix.

Thinking about creating a demo in the future (not to be a released CD… just as a tool to get gigs), I had planned on taking the best tracks I could, and then overdubbing the drums and vocals individually. This whole thread is basically a mind exercise to make sure I’m thinking right – understanding concepts correctly, rather than a proposed plan of action.

:-/ Sorry… I know that… the two are still linked in my mind, however, and I wrote my original post after working all night and being a bit punchy.

Thanks for your time, insights, and valuable experience Andy!

Hmm… I’m not shifting any of the “real” tracks time-wise, only what I’ll call my “bass-canceling track”: it’s processed to match the approximate amplitude of the “real” track and synced to it, then inverted and combined with that track to create my “de-bassed real track”.

Fair enough, if it works it works.

Different frequencies don’t travel at different speeds of sound. I can see atmospheric (and other) factors acting as a natural equalizer, which would mean that at different frequencies the relative amplitude of my “bass-canceling track” wouldn’t always match the amplitude of the bass signal in the drum mic (and the bass in the drum mic is also subject to the frequency response of the mic)… but still, an inverted bass signal of approximately the same amplitude, once synced time-wise with the drum track, should always match peak for trough and trough for peak.

If at some frequencies the amplitude doesn’t quite match up due to atmospheric and other reasons… shouldn’t the resulting “de-bassed” drum track still have significantly less bass, while having minimal effect on the drum sounds the mic is supposed to record?

Yes, correct. All the sounds travel at the same speed. But you’re also right that the delayed cross-feed signal will have a different spectrum, so you’re also affecting the weighting of each frequency. How much, I’m not sure. There’s a lot of uncertainty here.

couldn’t I theoretically repeat this process with a separate “bass canceling track” processed to match amplitudes and synced with any significantly affected mic, and “de-bass” each track separately?

Maybe, I’m not sure. If bass is the only thing you’re worried about then it might work. But if there’s bleed-through of other sources then they might start overlapping strangely. I suppose the only way to know for sure is to try it.

Excellent… I’ve been wondering where to roll off the vocal tracks on the low end without affecting the richness of the vocals. Suggestions?

If we’re still talking about recording, I’ll continue to recommend that you record vocals separately. They’re lower in overall volume, so they suffer from “extra” sounds more than other sources. But if we’re talking live, it really depends on the music and the environment. There is no correct answer. I would suggest starting around 300 Hz or so and rolling off sharply below that. When I record vocals, I use the Audacity Equalization effect and draw a straight line from 400 Hz @ 0 dB down to 0 Hz @ -30 dB. But I don’t want much bass in my voice (I’m a baritone, so there’s quite a bit to begin with), and my new mic (an MXL V69 large-diaphragm condenser) is quite warm in the mid-range.
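A plain high-pass filter does a similar job to that drawn curve; a scipy sketch (the 120 Hz corner and 2nd-order slope are illustrative choices of mine, not the exact curve described above):

```python
import numpy as np
from scipy.signal import butter, sosfreqz

sr = 44100
# 2nd-order Butterworth high-pass: roughly 12 dB/octave below the corner
sos = butter(2, 120, btype="highpass", fs=sr, output="sos")

# inspect the response rather than trusting the slope figure
w, h = sosfreqz(sos, worN=8192, fs=sr)

def db_at(freq):
    return 20 * np.log10(np.abs(h[np.argmin(np.abs(w - freq))]))

print(round(db_at(30), 1), "dB at 30 Hz")    # roughly -24 dB
print(round(db_at(1000), 2), "dB at 1 kHz")  # essentially flat
```

Moving the corner up toward 300 Hz and raising the order gets closer to the aggressive roll-off described for vocals.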

As the primary culprits are the overhead drum mics (we’re not playing in a “stage” setup, and the dynamic vocal mics, mostly Shure SM58s or similar, aren’t facing the instrument amps), this could work. However, that’s why I was mixing my three drum mics down to a stereo signal rather than a mono source: kick dead center, and the two overhead condensers (approximately 5 feet apart, 4 feet above the cymbals, equidistant from the snare) panned far left and right, to give the kit a little more presence in the mix.

To be honest, I have no experience recording an acoustic drum kit. I use a Roland V-drums kit. Your setup sounds like it should work alright, but I’ve always heard that a set of room mics can do wonders for a drum kit. I don’t know if you have room for another pair.

If you have the vocal mics facing a wall (with the vocalists facing the drum kit), then I would recommend hanging some heavy fabric behind the vocal mics; it sounds like they’ll pick up a lot of the reflection off the wall. But I really don’t know much about how the room is set up, and I can’t pretend to be an expert when it comes to room acoustics.

Good luck making your demos. If you’re ever in Chicago let me know, I’ll come check it out.

Here’s the room layout

You overestimate my ambitions :slight_smile: If we get to the point where we play out once a month or so at local pubs, etc., I’ll be happy.

Thanks for all your help.

If I’m reading that correctly, it sounds like you’ve got about half reflective material and half damping material. I’m told that’s a good ratio. I have a feeling you’d benefit from some bass traps in the corners of the room, but don’t forget that a lot of this is guesswork on my part.

It also looks like 1 or 2 of the mics will have a lot of drum bleed, is that something you’re experiencing? You might benefit from moving them around a bit. Although I get the feeling that this room gets really loud when you rehearse, so it might not really matter in the long run.

I do see one little bit that might interest you. You might get a really great “room tone” if you place a stereo pair of mics in that little side room, since it has no damping material. On the other hand, it would be a great place to put up some temporary damping material and set up a vocal recording booth that can fit 2 or 3 people comfortably.

Forgive me if I don’t check back for a response soon. I’ve been busy working on music at home lately and haven’t been as active as I was a few months ago on this board.