Stereo Acoustics: What is phase (in phase, out of phase)?

Excerpt from therecordingrevolution.com,
"Some Tips For Mixing Acoustic Guitar", September 9, 2013:

Be Careful With Stereo Acoustics

One final word of warning. If you are dealing with stereo recorded acoustic guitar tracks, spend some time making sure they are actually in phase. Collapse your mix to mono and see if the tone and fullness of the guitars goes away. If so, you might need to zoom into the waveforms and do some aligning.

One of the saddest things that can happen is that you have a nice sounding (seemingly) stereo acoustic guitar track in your mix when listening in stereo, but in mono (i.e. just about everywhere else except for in headphones) they become thin and harsh. This is why I usually avoid stereo guitars in the first place. But if you have them, make sure they are phase coherent in mono, not just in stereo.

Short of following these recommendations (from the quote above), can one simply zoom in and see whether the ‘waves’ are aligned between left and right, i.e. that there is no offset? Is this what in phase / out of phase refers to?

My Tascam DR05 portable recorder has two built-in microphones permanently positioned, and I don’t think there is any delay/latency between them; nothing I can visually notice, anyway. Of course they come in at different volumes/RMS, which is easily balanced, but this “phase” issue/question: is it simply timing being off between channels (or tracks of a song)? A recording problem, depending on the equipment/setup, of different latency values, which I assume would not be a problem anyway (up to a point); it would just create an echo, I would think.

Also, I’m not sure what the excerpt means when it suggests that stereo only really happens on headphones?

You only need to worry about phase when recording with multiple mics at the same time from the same source. Depending on how far apart the mics are, you might get a phase problem. You need to find a placement that doesn’t produce that effect, as you can’t compensate for it later.

A phase problem sounds like a comb filter: a filter that cuts certain narrow frequency bands. As it lets most of the signal through, it takes a trained ear to spot.
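
A rough sketch of what that comb looks like numerically (assuming Python with numpy is available; the 1 ms delay is just an arbitrary stand-in for two mics roughly 34 cm apart, not a measured value):

import numpy as np

# Summing a signal with a delayed copy of itself is what two spaced mics do.
# Frequencies whose half-period matches the delay cancel; others add up.
sr = 44100                          # sample rate in Hz
delay = int(round(0.001 * sr))      # ~1 ms path difference, arbitrary example value

for freq in (250, 500, 1000, 1500, 2000):
    t = np.arange(sr) / sr
    close_mic = np.sin(2 * np.pi * freq * t)
    far_mic = np.roll(close_mic, delay)          # same signal, arriving ~1 ms later
    mono = 0.5 * (close_mic + far_mic)           # what the mono fold-down hears
    print(f"{freq:5d} Hz  summed RMS = {np.sqrt(np.mean(mono ** 2)):.3f}")

# With a 1 ms delay, the first notch lands at 500 Hz (and again at 1500 Hz),
# while 1000 Hz and 2000 Hz come through at full strength: a comb.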

Sometimes, “phase” is also used for polarity. When dealing with classic mics and possibly mis-wired cables, it’s not always clear which side of the balanced signal is “hot”. That’s why mixers have a polarity reverse switch, sometimes labelled as “phase”. In this case, it’s not completely wrong, as inverse polarity is a 180° phase change.

Your recorder will never show either a phase or polarity problem, unless you use it with external mics and don’t place these carefully.

And, yes, if you zoom in you can align tracks if needed. It’s not always easy and you always need to listen to the result.

The hardest part is recognizing phase problems by listening. The mono test is a good check, but it doesn’t reveal all problems. It will reveal polarity problems.

Recording acoustic guitar in particular is a good way to learn how to spot phase problems. It takes some experimenting with mic placement, and while you’re learning it pays to find the bad spots first and try to understand why the setup sounds so bad there.

It’s a good idea to read up on the most common stereo mic setups, like AB, XY, ORTF and M/S. There are dozens of others, but these four should get you going. Even if you don’t use them for your guitar recordings, they will give you a basis to start from.

Oww, headphones.

When listening on headphones, reverse polarity in one channel will give you a dramatic loss of signal, usually most noticeable in the low end.

If your signal is inverted on one channel and you set it to mono, theoretically nothing is left. Basically you’re hearing L-R or R-L, and if both channels are equal, the outcome is zero.
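
A quick numerical illustration of that cancellation (a sketch in Python/numpy, nothing Audacity-specific):

import numpy as np

sr = 44100
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 220 * t)      # whatever is in the left channel
right_good = left.copy()                # right channel wired correctly
right_flipped = -left                   # right channel with reversed polarity

def rms(x):
    return np.sqrt(np.mean(x ** 2))

print("mono fold-down, correct polarity :", round(rms(0.5 * (left + right_good)), 3))     # ~0.707, nothing lost
print("mono fold-down, one side inverted:", round(rms(0.5 * (left + right_flipped)), 3))  # 0.0, everything cancels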

The same mechanism is at work with phase problems, but the sound won’t cancel completely; it will just sound a bit strange.

All of this also happens with speakers, if they are perfectly set up and you and your ears are in the sweet spot. But there’s always some sound that “escapes”, so the effect will be much harder to hear. That’s why you always need cans…

You should be careful about the difference between phase and delay. A while back I did a definitive microphone phase test (attached). I turned on the recording system, held up a sheet of typing paper between me and the microphone, and thumped it sharply with a pencil, toward the microphone.

When you inspect the blue waves later, the microphone, if it was doing its job, should have produced a marked and probably overloading single positive wave followed by other trash.
[attached screenshot: Screen Shot 2016-06-10 at 6.00.42 PM.png]
I expect that to work for each microphone in my system. That means all the microphones are delivering a positive wave with an increase in air pressure. Doesn’t matter what it actually is, but they should all match. Similarly in a stereo system, I expect a single impulse sound like that to produce roughly the same wave in each microphone.
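
If you’d rather let the computer read the blue waves, here is a rough sketch of the same check (Python with scipy; “impulse_test.wav” is a hypothetical file name standing in for your own recorded thump):

import numpy as np
from scipy.io import wavfile

# For each channel, find the loudest excursion of the thump and report its sign.
# All the mics in a rig should agree; a mismatch suggests a mis-wired cable
# or an inverted microphone.
rate, data = wavfile.read("impulse_test.wav")   # hypothetical file name
if data.ndim == 1:
    data = data[:, None]                        # treat mono as one column
data = data.astype(np.float64)

for ch in range(data.shape[1]):
    x = data[:, ch]
    peak = np.argmax(np.abs(x))                 # the loudest sample = the thump
    print(f"channel {ch}: biggest excursion is "
          f"{'positive' if x[peak] > 0 else 'negative'}")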

I also artificially produced a sound test of what would happen if you had one microphone in your system which was wired wrong or broken such that it gave an out-of-phase sound.

http://www.kozco.com/tech/LRMonoPhase4.wav

You can pull that 39 second clip apart and see what happens to the blue waves.

So that’s phase. The most common two are IN and OUT, but some microphones can play games a bit in order to achieve some effect or correction. A Figure Of Eight microphone is naturally both in and out of phase at the same time.

~~

Simple Delay is what happens when sound takes time to get there. If you have a short and long pathway, even on one microphone, some strange and wondrous things can happen.

Strip away all the towels, books, felt and soundproofing. If you just plunked that microphone down on your shiny desk, the sound would arrive twice: once direct from your lips and once, later, reflected from the desk. Because of the way sound works, the reflected copy arrives slightly after the direct one, so some pitch tones will add and some will cancel. No, it doesn’t sound like a very good idea, and in general, desk microphones have problems with this. If you have a choice, put soundproofing on the desk. That’s a furniture-moving blanket in the illustration.

This is exactly the problem when you multi-mic a guitar. Some tones may reach one microphone before the other and cause odd “talking into a wine glass” sounds. Yes, it’s highly recommended you play everything mixed down to mono at least once, to avoid odd things happening on simple sound players.

It could be asked why you’re multi-micing a guitar anyway. I’ve always been able to find a sweet spot, press record and go. Or if you do insist on it, record each microphone separately.

You can always add special effects later. You can’t take them out.

Koz

Your questions are always interesting.

Attached is a sample file that has four different phases.

The original guitar is in mono and I’ve put a pseudo-stereo effect on it; that’s the first part of the file. This by itself produces some phase cancellation/exaggeration (what one might call “sounding artificial”).
However, we’ll still regard it as in phase; reasons later.
The third part has one channel inverted (= 180° polarity flip). The remaining parts, 2 and 4, have a phase shift of +90° and +270° (i.e. -90°) respectively.
You will probably notice that parts 2 and 3 sound the weirdest, especially on headphones.
This can also be shown numerically:
Select one part and go to Effects–>Vocal Reduction and Isolation. Take the last entry “Analyze” and look for the percentage for correlation.
The different parts read something like “correlated by about…”

  1. 79 %
  2. -13 %
  3. -80 %
  4. 13 %

100 % would mean that you have exactly the same signal in both channels, whereas -100 % indicates two identical channels with one of them flipped upside down.
About 50 % is likely a good value for a proper stereo track. The main rule, however, is that the reading should never go negative; if it does, invert one channel.
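
For what it’s worth, a reading like that can be approximated with a plain normalised correlation between the two channels. This is only a sketch of the idea in Python/numpy, not necessarily the exact formula that Audacity’s “Analyze” uses:

import numpy as np

def channel_correlation(left, right):
    # Normalised correlation, from -1 (one channel inverted) to +1 (identical).
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return float(np.sum(left * right) / denom) if denom > 0 else 0.0

sr = 44100
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 440 * t)
cosine = np.cos(2 * np.pi * 440 * t)

print("identical channels :", round(channel_correlation(sine, sine), 2))    #  1.0
print("one side inverted  :", round(channel_correlation(sine, -sine), 2))   # -1.0
print("90 degree shift    :", round(channel_correlation(sine, cosine), 2))  #  0.0
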
For myself, I’ve written a plug-in that also cycles through +/-90° phase shifts (see attached).
[attachment: hilbert.ny (984 bytes)]
Note that those “odd” phase shifts give the impression that the sound comes more from the left or the right, and you’ll probably have to use the pan slider to compensate.
The effect is called “Hilbert” after the inventor of the 90° transformation. It has no parameters. After applying the effect four times in a row, you’re back to the original phase (apart from some numerical errors and the fact that frequencies near DC can’t be shifted by 90°).
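
I haven’t looked inside hilbert.ny, but the general idea of a 90° shifter can be sketched with scipy’s Hilbert transform; this is an assumption about the technique, not Robert’s actual plug-in code:

import numpy as np
from scipy.signal import hilbert

def shift_90(x):
    # The imaginary part of the analytic signal is the input with every
    # frequency component shifted by 90 degrees (sign convention aside).
    return np.imag(hilbert(x))

sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)

twice = shift_90(shift_90(x))               # 180 degrees = polarity flip
four_times = shift_90(shift_90(twice))      # 360 degrees = back where we started

print("two passes give the inverted signal:", np.allclose(twice, -x, atol=1e-6))
print("four passes give the original back :", np.allclose(four_times, x, atol=1e-6))

On real material the edges and the very lowest frequencies won’t come back perfectly, which matches the caveat above about frequencies near DC.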

There are other possibilities to tackle a phase shift problem but first, you have to encounter a specific one…

Robert

Thanks Robert, I experimented with this plugin and tried to put this information to use. Intense info. Are you saying that phase (and polarity) can be determined with Effects–>Vocal Reduction and Isolation “Analyze”, with a reading of about 50% being where a stereo track should be?

My results for my stereo master tracks (3-4 songs) came to 97% and one at 92%.

I then tried the Hilbert effect to see if it might edge things closer to 50%. It did not; applying it several times, the reading just cycled back and forth between roughly the same 97% and about 1%. Sound-wise it was nothing I could imagine using, as the original, un-Hilbertized version sounded better.

I then tried the Invert effect, and even split the stereo track and applied it to only one channel; either way, the readings stayed the same.

So, again, is this 50% where a stereo track should be and if so how does one get there?

Again, I record in stereo, one track, no overdubs, no mixing in of any other tracks. I balance it with wave stats, getting the RMS the same for left and right. I do not adjust the pan.

To get 50% I even tried mixing a mono version in with the stereo, no difference; then a mono with a stereo split and panned all the way out left and right, no difference.

The recommended 50 % is for a fully built-up song with drums, keys, acoustic/electric guitars and so on.
Your singer/songwriter style will never reach that, unless you double either the guitar or the vocals.
One neat way to do this, if you don’t want to physically overdub, is to apply a delay with different times for the left and right channels.
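
For example, here is a minimal sketch of that idea in Python/numpy (the delay times and mix level are arbitrary starting points, not a recipe from this thread):

import numpy as np

def widen(mono, sr, delay_left_ms=11.0, delay_right_ms=17.0, wet=0.35):
    # Add a quiet, differently delayed copy of the mono track to each channel.
    d_l = int(sr * delay_left_ms / 1000)
    d_r = int(sr * delay_right_ms / 1000)
    left, right = mono.copy(), mono.copy()
    left[d_l:] += wet * mono[:-d_l]
    right[d_r:] += wet * mono[:-d_r]
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))    # keep it out of clipping

sr = 44100
t = np.arange(2 * sr) / sr
mono = np.sin(2 * np.pi * 196 * t) * np.exp(-t)   # a decaying test tone
print(widen(mono, sr).shape)                      # (88200, 2): left and right now differ

Because each side now contains a delayed copy, check the mono fold-down afterwards: each delay is also a small comb filter.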

The original of this song has 82 % correlation, and I’ve used Steve’s Channel Mixer (preset “Wide Stereo”), which brought it to 49 %.
The Hilbert filter is just a phase repair tool, and the more mono a sound is, the more blurred the outcome will appear. On this song, the correlation ends up at 6 %.
Its main purpose is to decorrelate two sound sources that are fighting one another.
For example, a kick drum and a bass might sit in the same tonal region yet cancel one another slightly. This can only happen if they are both panned to the centre, which is the usual case.
Thus, I can combine them into one stereo track, analyse them and use the Hilbert filter to reach a medium positive correlation. Afterwards, I have to split it back to mono.
Note that listening to the stereo track will be misleading; you can only judge once the tracks are mono again.
Therefore, for two mono tracks (only combined for analysis’ sake):

  • high correlation: the tracks add up (and you might lose headroom)
  • negative correlation: the tracks partly cancel
  • medium correlation (90° shift): they share the frequency range without too much summation or subtraction

Let’s illustrate this with a sine wave, sampled at 45° steps:
0 0.7 1 0.7 0 -0.7 -1 -0.7

Adding a second, identical sine wave gives a peak of 2 (+6 dB), whereas inverting one gives full cancellation.
If the second wave is shifted by 90°, the sum of the two becomes:
1 1.4 1 0 -1 -1.4 -1 0
Thus the peak is only about 1.4, i.e. +3 dB instead of +6 dB.
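
The same arithmetic, done by the computer (a quick check in Python/numpy):

import numpy as np

t = np.arange(44100) / 44100
a = np.sin(2 * np.pi * 440 * t)                    # the original sine
cases = {
    "in phase": a,                                 # identical copy
    "inverted": -a,                                # polarity flipped
    "90 deg shift": np.sin(2 * np.pi * 440 * t + np.pi / 2),
}
for name, b in cases.items():
    peak = np.max(np.abs(a + b))
    level = 20 * np.log10(peak) if peak > 0 else float("-inf")
    print(f"{name:12s} peak {peak:.2f}  ({level:+.1f} dB)")

# in phase     peak 2.00  (+6.0 dB)
# inverted     peak 0.00  (-inf dB)
# 90 deg shift peak 1.41  (+3.0 dB)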

In general, tracks that share the same pan position should have a correlation of about 0 (but not negative). That way you can stack more tracks without extensive use of compression and limiting, and therefore with less distortion.
It’s nearly the same for stereo:
if the left and right sides are barely correlated, there will be more room for the middle, e.g. the vocals.
However, we don’t want the song to fall apart entirely, hence the rule of 50+ % correlation in the end.
What about negative correlation; is it ever useful?
Indeed it is, if you want a very wide sound, reaching beyond the physical speaker positions.
You may have noticed that a stereo track with identical channels but one channel inverted (= 2 × Hilbert transformations) has practically no position information; it cannot be localised. In fact, it seems to reach beyond the span from left to right, as if it were 200 % of that base line.
In other words, the width of a sound source is determined by its negative portion in one channel.
The trick is to pan a sound hard left or right (0.9) and then shift it by 90° or 180° in order to push it beyond the physical speaker position. Always check with headphones too; we don’t want an “in-head” sound, and the source should stay locatable.
I’ll stop here, although I’ve not yet gone into detail about how to spread the stereo field with tools other than stereo wideners. (Try e.g. “Remove Center” in “Vocal Reduction and Isolation” with a strength value of 0.25 on one of your songs, and normalize it afterwards.)

Robert

I tried the “pseudo stereo” plugin and have experimented with delay before, though not with stereo using a separate delay for left and right; assuming that’s what you meant, varying the delay per channel (I will try that ASAP, thanks). Anyway, all the delay, echo and offsetting (pseudo stereo) I’ve ever tried I never liked; it only seemed to distort, which I guess it does. I’ll even venture to say that all analog echo and such experimentation could never be reproduced satisfactorily, digitally. The exception would be digitally recording either a natural echo or some analog-generated echo. Probably this is the same reason why clipping/overdriving does not work with digital, while in analog it was the reason why much rock and blues rocked! Digital kind of sucks… look how things have changed since… Another conspiracy?

all analog echo and such experimentation could never be reproduced satisfactorily, digitally.

Right. Echoes are an infinite number of waves striking the microphone after the main event. The best we can do is simulate the main ones until we run out of processing power.
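
As an aside, here is what “simulating the main ones” typically boils down to: a toy delay line in Python/numpy that only produces a regular series of decaying repeats, nothing like the endless tangle of reflections in a real room:

import numpy as np

def echo(x, sr, delay_ms=250.0, feedback=0.5, repeats=8):
    # Each repeat is the previous one, quieter and one delay time later.
    d = int(sr * delay_ms / 1000)
    out = np.concatenate([x, np.zeros(d * repeats)])
    tap = x.copy()
    for i in range(1, repeats + 1):
        tap = tap * feedback
        out[i * d : i * d + len(x)] += tap
    return out / np.max(np.abs(out))

sr = 44100
t = np.arange(sr // 4) / sr
blip = np.sin(2 * np.pi * 880 * t) * np.exp(-t * 40)   # a short percussive test sound
print(len(echo(blip, sr)))                              # the blip plus 8 fading repeats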

Koz

unless you use it correctly :wink:

The problem with clipping / overdriving in digital audio is that it can cause aliasing distortion in addition to the clipping / overdrive distortion. The solution is to ensure that the clipping is not entirely “hard”, and to oversample. Hard clipping will create an infinite number of harmonics, whereas soft clipping produces a limited number. The harder the clipping the more harmonics, but as long as the clipping is not entirely hard the number will be finite. Provided that you oversample enough to avoid aliasing distortion, there is no reason why digital distortion should not be as “good” as analog. Of course, if you want analog noise in there as well, you will need to add it, because digital amplification does not add noise.
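
A bare-bones sketch of that recipe (Python with scipy; tanh is just one arbitrary choice of soft clipper, and 8× oversampling is an illustration, not a recommendation):

import numpy as np
from scipy.signal import resample_poly

def soft_clip_oversampled(x, drive=4.0, oversample=8):
    up = resample_poly(x, oversample, 1)              # raise the sample rate
    clipped = np.tanh(drive * up) / np.tanh(drive)    # smooth clipping: finite harmonics
    # The low-pass filtering inside the downsampler removes the harmonics that
    # ended up above the original Nyquist frequency, so they can't fold back
    # down as aliasing.
    return resample_poly(clipped, 1, oversample)

sr = 44100
t = np.arange(sr) / sr
tone = 0.9 * np.sin(2 * np.pi * 1000 * t)
out = soft_clip_oversampled(tone)
print("peak in :", round(float(np.max(np.abs(tone))), 3))
print("peak out:", round(float(np.max(np.abs(out))), 3))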

In the days before digital delay / digital echo, we had “bucket brigade” delay and “tape echo”. Sure they could be used “creatively”, but the actual sound quality was crap by modern standards :wink: