Polarity using two recordings

Scenario: I’ve got two identical Olympus digital audio recorders sitting side by side and recording at the same time. I want to subtract one from the other to see if there is any noise/sound that only one recorder may have picked up. I’ve played with inverting one of the samples, but when I play the resulting audio, it sounds much the same as the original. I’ve read that it could be a phase issue between the two files, so I’ve tried to phase-align them as best as possible before applying the inversion. I used a loud, sharp sound to act as a phase alignment point.

I just can’t seem to get the result I’m looking for. Can anyone suggest a method using Audacity by which I can effectively remove the audio that is present on both recordings and be left with only the sounds that are present on one but not the other? Hopefully that makes sense. Any ideas are greatly appreciated.

Join the two (mono?) recordings into a stereo pair, then remove the centre (i.e. remove what is common to both).
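For anyone curious what that subtraction amounts to numerically, here is a minimal NumPy sketch of the invert-and-add idea. The signals are synthetic stand-ins (a shared tone plus noise unique to one “recorder”), not real recordings:

```python
import numpy as np

# Two hypothetical recordings, already time-aligned (mono, same length).
# We fake them: a shared "room" signal, plus noise unique to recorder B.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 44100, endpoint=False)
common = 0.5 * np.sin(2 * np.pi * 440 * t)    # sound both recorders hear
unique_b = 0.1 * rng.standard_normal(t.size)  # noise only recorder B picks up

rec_a = common
rec_b = common + unique_b

# "Invert and add": flip the polarity of one track and mix them.
# Anything common to both cancels; only the difference remains.
difference = rec_a + (-rec_b)   # equivalently rec_a - rec_b

print(np.allclose(difference, -unique_b))  # True: the residue is B's noise
```

With perfectly aligned, identical signals the residue is exact digital silence, which is why the thread suggests testing with a duplicated track first.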

I will try that at my first opportunity! Thanks.

I used a loud, sharp sound to act as a phase alignment point.

Zoom in to the point where you can see the individual samples so you can time-align exactly. You should also be able to see that they are out of phase.
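If manual sample-level alignment gets tedious, cross-correlation can find the best offset automatically. This is a rough NumPy sketch, not an Audacity feature; the `find_offset` helper and the synthetic signals are illustrative:

```python
import numpy as np

def find_offset(a: np.ndarray, b: np.ndarray) -> int:
    """Return the sample lag that best aligns b to a via cross-correlation."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Fake example: b is a copy of a delayed by 100 samples.
rng = np.random.default_rng(1)
a = rng.standard_normal(1000)
b = np.concatenate([np.zeros(100), a])[:1000]

print(find_offset(a, b))  # -100: b is 100 samples behind a
```

A negative result means the second signal lags the first, so it would need to be shifted earlier by that many samples before inverting and mixing.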

For best results you should match the volumes too (unless a difference in volume is part of the experiment). The Loudness Normalization effect is probably the best way to do that. If it’s stereo you might have to volume-match the left & right channels separately.
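As a rough stand-in for the volume matching described above (Audacity’s Loudness Normalization works on perceived loudness in LUFS, which is more involved), a simple RMS match can be sketched like this; `match_rms` is a made-up helper name:

```python
import numpy as np

def match_rms(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Scale `target` so its RMS level equals that of `reference`."""
    rms_t = np.sqrt(np.mean(target ** 2))
    rms_r = np.sqrt(np.mean(reference ** 2))
    return target * (rms_r / rms_t)

rng = np.random.default_rng(2)
ref = rng.standard_normal(44100)
quiet = 0.25 * rng.standard_normal(44100)  # same signal class, lower level

matched = match_rms(quiet, ref)
print(np.isclose(np.sqrt(np.mean(matched ** 2)),
                 np.sqrt(np.mean(ref ** 2))))  # True: levels now match
```

Matching levels before inverting matters because even a small gain mismatch leaves a scaled copy of the common audio in the residue.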

I’d expect pretty good cancellation. But if the recordings are long, they might drift out of phase alignment over time.

…Just to make sure you’re doing the experiment correctly, make an exact copy and invert/subtract that. You should get complete cancellation and dead digital silence.

In the “Vocal Reduction and Isolation” effect, the “Remove Centre” option may be best as it is less reliant on phase correlation than the simple “invert and add” method.

As a test I took a single mono track and copied it. I then merged the two copies into one stereo track. Running “Remove Center” worked as expected and I had complete cancellation.

I then took two 20 second stereo clips each from a different recorder and made them both mono just for testing to make things easier. I took the two resulting mono clips and used the Time shift tool to manually move one clip so that the phase alignment looks very close.
[Attached screenshot: audio.png]
I then merged the two clips into one stereo clip and tried the Remove center effect. The amplitude of the resulting recording was lowered but still was easily heard.

This must mean that there are significant differences between the original recordings even though the recorders are the same make, model, and version, with the same settings used. That, or the phase difference is still not quite close enough.

I can see differences!


P.S.
Thinking more about this… In a “normal room” with reflections (or reverb) you’ll get differences in the higher frequencies because the mics aren’t both in exactly the same spot.

I was doing some experiments with an SPL meter in my home office once. With white noise and ~5kHz tones I could hear the level change drastically just by moving my head slightly. That’s because the reflections (and the different distances to the left & right speakers) cause standing waves, with nodes (where the waves cancel out) and anti-nodes (where the waves combine in phase and reinforce) at different points in the room. That’s an easy experiment to repeat, and of course you don’t need the SPL meter.

The other thing I noticed (not related to your experiment but related to reflections) was that when I moved around behind the SPL meter (on a mic stand), the high-frequency readings would change.

With low frequencies the wavelength is longer so the nodes & antinodes are farther apart. If there is any bass in your recording I assume it canceled pretty well?

I knew there were differences in the waveforms, but I was hopeful that they would be minimal for this experiment. The two recorders are literally side by side, so no, they are not in the exact same location. You make a good point on that; I should test how much better/worse it is with different orientations of the recorders. Maybe head to head so that the mics are as close as possible. Interesting…

Upon further review, the method you all pointed out is definitely sound and does work. I looked hard at the audio and there is a drastic reduction once “remove center” is used. So all in all, this method answered my initial question and gives me plenty of testing to do to maximize the reduction as much as possible.

Thanks to you all for the help! I appreciate it.

I just did another test with the mics as close as possible (without blocking exterior sounds) and the difference is astounding! There was significant reduction and artifacts present only on one recorder were very apparent.

Thanks to everyone again!

At 1kHz the wavelength is a little more than 1 foot and at 10kHz it’s a little more than 1 inch. (You can look it up and put the formula into a spreadsheet if you’re interested.)
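The formula in question is wavelength = speed of sound / frequency. A quick sketch of the suggested spreadsheet, assuming roughly 1125 ft/s for the speed of sound at room temperature:

```python
# Wavelength = speed of sound / frequency.
SPEED_OF_SOUND_FT = 1125.0  # feet per second, approximate at room temperature

for freq_hz in (100, 500, 1000, 5000, 10000):
    wavelength_ft = SPEED_OF_SOUND_FT / freq_hz
    print(f"{freq_hz:>5} Hz -> {wavelength_ft:6.2f} ft")
```

This reproduces the figures above: about 1.13 ft at 1 kHz (a little more than a foot) and about 0.11 ft, or 1.35 inches, at 10 kHz (a little more than an inch).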

…Take a simple case where there are no reflections: if one microphone is 1 foot farther from the sound source (but both are far enough away that loudness is about equal), they are exactly in phase at 1 kHz, delayed by exactly one cycle. At 6 inches they are 180 degrees out of phase. Or at 500 Hz and 1 foot, that’s 1/2 cycle, so you’re 180 degrees out of phase again. At 100 Hz, 1 foot is not a significant phase difference.
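These phase figures can be checked with a few lines of arithmetic. Note that using the real speed of sound gives values close to, but not exactly, the round numbers above, since the 1 kHz wavelength is a bit more than a foot (`phase_diff_deg` is an illustrative helper):

```python
# Phase difference (in degrees) between two mics when one is `extra_ft`
# farther from the source, for a tone at `freq_hz`.
SPEED_OF_SOUND_FT = 1125.0  # feet per second, approximate

def phase_diff_deg(freq_hz: float, extra_ft: float) -> float:
    cycles = extra_ft * freq_hz / SPEED_OF_SOUND_FT
    return (cycles % 1.0) * 360.0

print(phase_diff_deg(1000, 1.0))  # 320.0 degrees: close to a full cycle
print(phase_diff_deg(500, 1.0))   # 160.0 degrees: roughly out of phase
print(phase_diff_deg(100, 1.0))   # 32.0 degrees: not a big difference
```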

You get similar addition/subtraction with stereo speakers, because the sound from the left speaker hits your left ear a few milliseconds before the sound from the right speaker, and it’s reversed for your right ear. We don’t notice the effects with regular program material, but you can hear “weird things happening” with pure test tones. Or, if you’ve ever reversed the polarity of one stereo speaker, you get a “spacey” sound with a lot of cancellation (of the sound waves in the air), and the bass gets almost completely canceled. Some people use that as a “stereo widening” effect.

There is a fourteenth century microphone trick about this. You can record in remarkably noisy environments by jamming two Electro-Voice microphones against each other and into the same microphone connection—out of phase.

Note that one of them has a phase-reversing adapter (XLR pins 2 and 3 swapped).

The rest of it is a plain “Y” cable and a standard sound mixer. It’s used by jamming one of the two microphones against your chin and speaking only directly into that one. The microphones are close enough together that time and echo displacement are zero, or close enough to it.

It’s not Glen Glenn Sound Studios, but you can stand in an airplane prop blast or a storm and do a voice presentation with that.

It uses the “Dynamic” 635a microphone’s ability to never overload and Electro-Voice’s insistence that all their microphones match.

Microphones: Design and Application - Lou Burroughs - 1974

Koz