Removing background vocals from Audacity recording?

Hello,

I’m trying to make YouTube videos with my girlfriend. We sat down last week and recorded two hours of gameplay, capturing the game plus two separate audio files: one with her voice and one with mine. My audio is fine, but her audio also picked up my voice. When the two audio files are played over the top of each other, my voice creates an annoying echo. I have tried Noise Removal: I created a noise profile from a section of her audio where you can hear me talking in the background and tried to remove my voice that way, but it causes a considerable drop in the quality of her audio.

Is there any way I can edit her audio file to remove my voice from the background without causing a significant drop in quality?

My audio was recorded on her PC, using Windows 7 Home Premium 64-Bit and Audacity 2.0.5. Her audio was recorded on her laptop using Windows 7 Home Premium 64-Bit and Audacity 2.0.5. The audio is being edited on my PC running Windows 7 Professional 64-Bit and Audacity 2.0.5. I cannot remember if Audacity was installed from the .exe or the .zip on any of the 3 systems used.

We can’t split a mixed sound file into individual performers or instruments.

In your case, what you may be able to do is time-shift your recording so that your voice lines up with your voice on her bad recording. Use the Time Shift Tool (the two sideways black arrows). You may get a little bubbling as the internet delays come and go, but I bet it sounds better than what you have now.

Gameplay audio will need to be separate for this to work.

This is why headphones are a really big deal on most live recordings.

Koz

I’m not entirely sure this will make any difference. The audio was recorded at the same time, in the same place. We were sat next to each other while we recorded on two separate devices, so headphones and internet delays shouldn’t be a factor. When playing the audio back, I’ve already synced it up and my voice still echoes.

Then I think we’re out.

It’s a rare microphone that can pick up one person and exclude a person sitting right next door and as before, we can’t split a performance into individual voices.

Noise Removal failed because it can’t be used on moving “noises.” Whatever you used for the profile step is the exact sound that will be removed from the show. This works fine for fan noises or other constant sounds, but it doesn’t work at all for speech, and depending on settings, will also try to remove similar sounds unrelated to the profile.

Koz

So I’m either going to have to put up with her audio sounding bad, or mine echoing?

The problem is that your voice doesn’t have the same phase (different reflections and arrival times) in both recordings, so a single alignment won’t work.
You can try the following:

  • Duplicate your track.
  • Align the duplicate with your girlfriend’s track.
  • Make a single stereo track out of the two.
  • Use a stereo vocal-removal tool to reduce the centre-panned vocals (your voice is what the two channels have in common: loud in your channel, quieter in your girlfriend’s).
  • If not enough has been removed, you can split the tracks up again, shift one slightly, and repeat the last steps.

The female channel should become progressively cleaner; at the end you can split the stereo track to mono and delete your (duplicated) track.

The normal vocal removal in Audacity won’t work here; it has to be something like:
https://www.dropbox.com/s/tkonxx1njg1lzcu/rjh-stereo-tool.ny
(Put the file in Audacity’s plug-in folder and restart the application; it appears in the Effect menu under “2D Stereo Toolkit”.)
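
If you want to hear the underlying idea before installing the plug-in, here is a very crude sketch you can paste into the Nyquist Prompt with the combined stereo track selected (your aligned duplicate in one channel, her track in the other). This is not the plug-in above, just the basic subtraction principle; it assumes “s” is the stereo selection and it returns a mono result:

;; crude centre cancellation (sketch only):
;; whatever is identical in both channels - the time-aligned male voice -
;; cancels out; everything else survives, mixed down to mono
(mult 0.5 (sum (aref s 0) (mult -1 (aref s 1))))

The linked plug-in is still the recommended route; this one-liner only demonstrates why aligning the duplicate first matters.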

I’m not entirely sure I understand the procedure there. I’m a complete noob when it comes to audio editing. Outside of a little cleaning up, I have no idea what I’m doing. Could you run me through the process in a little more detail, please?

For now, we opted to take the long route and manually edit the audio to cut out anywhere you can hear my voice on her audio track. It sounds okay in the video, but in future we’d like to avoid doing this, as combing through two one-hour recordings is an extremely tedious and time-consuming process.

The other thing that I don’t understand is that we are both using the exact same type of microphone. Mine doesn’t pick her up, but hers does pick me up and I can’t work out why.

Describe the recording system in detail. Part numbers. Where are you sitting and in what kind of room?

It could be something as oddball as her microphone is picking up your voice reflected from a hard wall behind her. Aggressively directional microphones pick up the performer and anything behind them in a straight line.

Pay attention to the response patterns if your microphones can switch them. I have a microphone with five different response patterns and they all do different jobs (pix).

In addition to that, make sure you are both sitting in a null, or rejection area, of each other’s microphones.

Koz
Screen shot 2014-07-08 at 2.58.09 PM.png

I’m unsure of part numbers, but we were both wearing Creative Fatal1ty headsets. We were sat in a narrow bedroom-turned-storage room with a sloped ceiling. I was sat at the desk by the door; she was sat next to me, to my left, at about 90 degrees. There was a buzzing fish tank behind us, its noise muffled by a blanket. Neither microphone picked that up.

I was going to ask if you were using headsets. Highly recommended. What’s the possibility the method you picked to record has cross-talk or leakage?

You may be one of those insanely lucky people, like me, whose voice goes right through soundproof walls. Next time you go for a recording, or maybe just a test, swap headsets and/or seats. I bet the microphones really like you.

Koz

There’s another test. Does your recorder have sound meters? Fire the whole thing up and instead of speaking, scratch and tap each microphone. If you have crosstalk (sound where it shouldn’t be), your microphone scratching will appear as a tiny signal on the “wrong” channel.

Koz

You could also use a Noise gate on your track to attenuate the audio that is leaking in.

Here’s an example sound with 4 parts.

  1. The male track with some female crossover.
  2. The same track after noise gating.
  3. The first mix-down, with audible echo.
  4. The final mix-down of the noise-gated male track and the already clean female track.

In your case, the channels would be the other way round (the female track has the crossover) and you would noise-gate the female track.

I’ve used the following code in the Nyquist Prompt:

(noise-gate s
  1.2                 ; look-ahead (seconds)
  0.15                ; attack time (seconds)
  0.85                ; release time (seconds)
  (db-to-linear -40)  ; reduction
  (db-to-linear -30)) ; threshold

What this does:
Any audio that stays below the threshold for a certain period of time is reduced by a further 40 dB.
If, for example, the release time is too short, you’ll only hear “backgrou”, because “nd” falls below the -30 dB threshold.
The release time stops the gate from reacting too quickly.
The threshold has to be adapted to your audio.
To measure the leakage, select a portion of your voice in your girlfriend’s track and open “Amplify”. The value shown there is your threshold; press Escape afterwards, because you don’t want to actually amplify that selection.
For instance, if the Amplify effect shows 38, the threshold would be -38 (or a little above that, e.g. -35).
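
Plugged into the same call as above (a sketch only, using that assumed example reading of 38 dB, i.e. a threshold of about -35 dB):

(noise-gate s
  1.2                 ; look-ahead (seconds)
  0.15                ; attack time (seconds)
  0.85                ; release time (seconds)
  (db-to-linear -40)  ; reduction
  (db-to-linear -35)) ; threshold from the Amplify reading, relaxed a little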

This is similar to silencing all those parts by hand.

If the two of you frequently speak at the same time, this method won’t work and you’ll have to try the procedure I proposed first, above.
Please tell me where you got stuck. Have you managed to install the plug-in?

You still haven’t said how you recorded this, given that Audacity only records from one audio device at a time. I suppose you were playing the mics through the headset and recording Stereo Mix?

If so, the mics will always be a little late (and late by different amounts on different computers), and both recordings will be at risk of drifting apart due to different clock speeds.

You may be better off buying a mixer: plug the audio output of one computer into it, then your two mics, and there will be no sync problem. You’ll still have to solve the crosstalk problem, if it is still a problem once the tracks are synced.

Gale

Apologies for the late reply.

I won’t be back at her house until next month, so we’ll be unable to swap mics and test settings etc until then.

I did say how it was recorded. I was playing on her PC, with my headset connected to it. She was sat next to me with her laptop and her headset connected to that. We had tried to use Virtual Audio Cable to combine the two mics into one track, but because the headsets are exactly the same, the PC only detected one mic instead of two.

I had managed to install the plugin, but when I duplicated my track and played the two copies of my track, there was no echo. When I played my girlfriend’s track too, there was. From this, I figured that her track was ever so slightly out of sync, and I couldn’t for the life of me get it to a point where it would not echo. I think that’s where it all fell down for me. I know next to nothing about Audacity. The most advanced I’ve ever got with it is simple recording and occasional noise reduction. I feel I got stuck when your instructions assumed a level of knowledge that I just do not have, and I had no idea if what I was doing was correct, or where to find the options you’d listed.

EDIT: I have tried the Nyquist Prompt option, selected a section of my voice and used amplify to get the threshold of 18.9. I’ve then set the threshold in the Nyquist Prompt to -18.9 and let the effect run. It immediately greets me with an error stating “Nyquist did not return audio”.

Noise Gate can be a little rough to use and hard to make sound natural. In its simplest form, it takes the quiet sounds in a performance and reduces them. The problem is figuring out which sounds are “quiet.” Obviously, if you have someone talking in the background facing the wrong way, their voice will be quieter than yours. However, if you have very expressive speech, Noise Gate may get it in its craw to slice off the gentle parts of your expression, too.

If you make Noise Gate sloppy to avoid that, it gives you The Rachel Maddow Show, where each of her words has a live “tail” on it and you get to hear what the audience is talking about behind her.
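
To put rough numbers on that trade-off, here is what a deliberately “sloppy” gate might look like using the Nyquist call from earlier in the thread (the values are only illustrative assumptions, not recommendations): a long release and gentle reduction make the gating less obvious, but they also let the word tails and the background through.

(noise-gate s
  1.2                 ; look-ahead (seconds)
  0.15                ; attack time (seconds)
  2.0                 ; long release (seconds), so word "tails" stay audible
  (db-to-linear -12)  ; only 12 dB of reduction
  (db-to-linear -30)) ; threshold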

I’m all for figuring out where the leakage is coming from at the live voice stage, not in post production.

If you’re sitting next to each other with identical headset microphones, my bet is that the microphones have a “live” portion of their pickup pattern, and in your room hers happens to point toward you. The other possibility is that one of them is broken; you can rule that out by swapping the headsets and testing. I don’t think you’re ever going to get rid of this problem using post-production tools. Noise Gate and Noise Removal look good in the retail box, but rarely perform like you want them to.

“I can’t get Noise Removal to turn my trashed performance into a perfect studio presentation. What’s wrong?”

Take time for testing; do what everybody else does and keep shifting things around and changing the environment until you improve the performance or find out what’s actually wrong and what’s causing the leakage. I know you can’t play a game like this, but try one test pass where you’re sitting across a table from each other. I bet there is no problem.

Koz

This actually makes a lot of sense. We both sat next to each other with the mic on the left hand side, bent round slightly to reach our mouths.
This means mine was pointing away from her, and hers was pointing towards me. Unfortunately, we’re unable to reverse the microphones as our headsets come with a detachable microphone that only goes in on one side. The side the mic connects to also cannot be rotated in any way. So short of wearing the headset upside down and backwards, there’s no way to reverse it.

I have used a boom microphone that allowed a little bit of “tuning.” If you pull off the foam cover, you discover that the microphone has a tiny scoop arrangement, and you can point the scoop at your nose, mouth, or chin. Then put the foam back on for pop and blast suppression.

http://www.sweetwater.com/store/detail/C555L

It’s very seriously advantageous to solve this in the real world by moving around things you can touch, taste, and feel. If you don’t do that, you need to become good at Nyquist code management. See:

(noise-gate s 1.2; Look Ahead
0.15; Attack Time
0.85; Release Time
(db-to-linear -40); Reduction …
etc.

With no guarantee it’s going to sound any good at the end. One of the post production bad jokes is our ability to make your show sound terrible in a variety of different ways.

Koz

Thanks, but what I meant was: what input source were you recording from in Audacity? I thought you were trying to record game audio as well as your two voices. Recording the game as well adds complexity and the possibility of echo, because I don’t see how you could record the game without playing at least one voice through the headset and then recording from Stereo Mix.

And any leakage will be a problem if you are recording this on two different devices or computers because the two recordings won’t be synchronised.


Gale

The input source was just set to the microphone on the headset on both the PC and the laptop. The game was being captured by FRAPS, which includes audio, and was being played at a high enough volume for me to hear it, but low enough not to be picked up by either microphone. Stereo Mix was not part of the setup we used.

Thanks.

Trying to make a single aggregate device with Virtual Audio Cable was a good idea, but I don’t see how the one computer would see the other computer’s mic. It’s easy to rename an input in Windows so the two appear as different devices: just right-click > Properties over the device in Windows Sound.

Did you try a splitter cable (like the ones sold on Amazon.com), assuming these aren’t USB headsets?


Gale