I’ve just started using my Zoom Handy H2n for surround recording and have some questions about decoding/matching/phasing/panning the channels the right way. I’ve been reading around and so far have just gotten more confused.
The H2n has a 5-mic configuration. The front of the device has a Mid/Side microphone set (3 mics) and the back has a 90deg XY configuration (2 mics).
This is a short description of what I’ve done:
- Front Left: Mid + Side
- Front Right: Mid + Side (Side phase-inverted)
- Center Channel: Mid only
- Left surround: Left XY
- Right surround: Right XY
- Sub/Low Freq: A mono mix of all five above run through a low-pass filter.
These are then multiplexed into a suitable multichannel format, such as AAC.
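For what it’s worth, the first three bullets amount to the standard M/S decode. A minimal pure-Python sketch (the sample values are made up, and depending on mode the H2n may already hand you a decoded stereo pair):

```python
def ms_decode(mid, side):
    """Standard Mid/Side decode: Left = M + S, Right = M - S.
    The 'phase-inverted' Side on the right channel is just this sign flip."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

# Hypothetical sample values, just to show the arithmetic:
mid = [0.5, 0.25]
side = [0.25, -0.25]
front_left, front_right = ms_decode(mid, side)
# The Center feed in the list above is then simply the mid signal on its own.
```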
- will this be enough for a true 5.1 setup?
- am I missing anything obvious?
- are there any useful plugins around for Audacity? (So far I haven’t found any)
- I’ve read that the center channel should be stripped of low frequencies. Doing this somewhat reduces the overall sound picture in my opinion (ears). Any advice here would be great.
- Panning the Side channel (front) widens/narrows the front channels. I haven’t found any good settings here just yet. I’ve seen recommendations that the front setup should be about 50% wider than the back channels, so in my case 90 deg in back should lead to 120 deg in front. Does this sound right? Any advice here would be lovely as well.
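On widening/narrowing: in M/S, “panning the Side channel” amounts to scaling S before the decode. A hedged sketch (the width factor and any mapping from it to a pickup angle are illustration only, not something the H2n documents):

```python
def ms_decode_width(mid, side, width=1.0):
    """M/S decode with a width control.
    width = 0.0 collapses to mono (Mid only, like the Center feed),
    width = 1.0 is the plain decode, width > 1.0 exaggerates the Side
    signal and widens the apparent front image."""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right
```

There is no exact formula tying a width factor to “120 deg in front”; in practice it gets tuned by ear against the rear pair.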
Thanks in advance folks.
will this be enough for a true 5.1 setup
True 5.1 has four microphones, one in each of the four principal quadrants (Front-Left, Front-Right…etc), a (0.1) microphone carrying the rumble track associated with, but not necessarily generated from, the others, and a Center track generally carrying the dialog — again, you can’t generate Center from the others.
If you do the four quadrant microphones correctly, you can whip ghost sounds around the audience for a terrifying immersion effect while the earthquake rumble and dialog (screaming) don’t change.
Most of the postings have to do with faking it, creating a “false” 5.1 setup. Yes, that you can certainly do. You can generate the rumble track by filtering the surround tracks, do vocal isolation to get the center track, etc. But if you do that, you lose all the production tools that real 5.1 gives you.
And don’t forget Dolby 5.1 also has Dial Norm which boosts and dips overall volume as needed for theatrical effect. Like having a jet take off in the middle of the theater with the real loudness of a real jet in real life, but still have the rest of the show in normal volume.
Did not know that there was a rumble mic. So that’s something I might need to do something with.
I did know of the center mic; that’s why I wondered if an MS mid mono track would be enough, since it’s picking up direct dialog the same way a center mic would.
I’m calling it the rumble track, but it’s really the 0.1 track in the 5.1 universe. Many microphones roll off the low frequencies, particularly if you’re trying to shoot voices. The Low Cut Filter gets serious wear in Hollywood, but that doesn’t do you any good when you’re trying to simulate this:
I’ve been trying to get a good track of that for several months now. Nothing so far.
But that’s the idea. Remember the very early “Earthquake” movie, which came with special floor speakers and a cue track in the movie. All that really did was feed filtered pink noise to the speakers on the cue of a switch track. If you were paying attention (which all my friends were) it sounded like a wall switch labeled “earthquake ON/OFF”. The 0.1 rumble track was Dolby’s provision to do real sound instead of acoustic tricks.
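A faked 0.1 feed of the kind described earlier is just a mono sum run through a low-pass filter. A minimal first-order sketch (the 120 Hz cutoff is an assumption, and real LFE work usually uses much steeper filters than this):

```python
import math

def one_pole_lowpass(samples, cutoff_hz=120.0, sample_rate=48000):
    """First-order RC-style low-pass, applied to a mono sum of the
    other channels to fake an LFE/0.1 feed."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)  # smoothing coefficient
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # y tracks slow changes, rejects fast ones
        out.append(y)
    return out
```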
I wonder how you’re going to get a mid-only recording. That sound is going to be present in the other tracks, and non-isolated dialog can be disorienting in a surround production. Anybody who is not seated in the theater “sweet spot” may have the picture of the actor and the actor’s voice in different locations. And that’s if you’re recording on a sound stage…
Rolling off the low frequencies is entirely personal preference. It’s there on a live sound shoot because as a general, fuzzy rule, there’s nothing valuable below about 80Hz to 100Hz on a sound set, and there are a scary number of evil problems down there.
So that’s a pre-production consideration, not post. If you have a ballsy presenter/announcer in the track, leave him alone.
“In a world…”
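The low cut described above is the mirror image of the LFE filter: a high-pass around 80 Hz to 100 Hz. A hedged first-order sketch (assumed 90 Hz cutoff; hardware low-cut switches are typically steeper):

```python
import math

def one_pole_highpass(samples, cutoff_hz=90.0, sample_rate=48000):
    """First-order high-pass (low cut); attenuates rumble below roughly
    cutoff_hz while leaving voice content largely untouched."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)  # passes changes, blocks DC/rumble
        out.append(y)
        prev_x, prev_y = x, y
    return out
```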
Try it. Set up the recorder on sticks say, outside, and walk around it speaking. What happens to all the various components when you do that? Announce where you are as you go. After you mix the tracks down, do you get a good theatrical version of real life, or do you get oddities and artifacts like Center-Rear sounds like you’re talking down a hallway instead of speaking from the wall where the projector is?
Unfortunately, Audacity has no provision to play multi-track sound live, so it’s multi-pass time.
Write back when you get one to work. This is a forum, not a help desk.
Thanks again Koz
Was just asking for advice from someone who might have some experience. Trying it out with a room and the mic is a good idea. Will do just that when I get home from holidays. I can of course file the examples somewhere as well.
The Mid mic of the MS setup will pick up everything in front, but not if the vocal is moved to one of the sides. So there will be a problem there if one doesn’t add more vocal mics or take that into consideration during production.
Was thinking of that plane you were trying to catch the sound from. It’s an array of sounds from a large object, actually with its own resonance box (the hull). I started thinking of a rock thrown into water: a big rock starts a big wave, and to catch its whole height, the instrument that catches the wave needs to be of a certain size to cover it all. One would need some kind of reverse umbrella to concentrate the sound into a smaller mic, or a big mic element. Another issue would be the secondary sound coming from the objects around you; they’re part of the sound array as well. Tough one you got there.
The Mid mic of the MS setup will pick up everything in front, but not if the vocal is moved to one of the sides.
And it can’t. As a practical matter, the lead dialog(s) always happen in front. It’s very disorienting to have speech move around, as all the FM stations discovered when they tried stereo microphones on first going stereo. It didn’t work. Back to centered, mono dialog.
It’s an array of sounds from a large object, actually with its own resonance box (the hull)
Fortunately, there are no holes in the box, so there’s no way for the resonances to get out, and it all happens outdoors, so there are no reflections. The actual capture happens on the grass in the park, which is in the glidepath. We got lucky.
That man with the white t-shirt toward the right (magnify the pix). He’s standing in the city park, not the restaurant.
What we didn’t get was immunity from the beach breeze which has a lot of the same characteristics as Pratt and Whitney engines.
Meeeh. When put in “surround” mode, the front pair is converted from MS to a standard stereo signal. I do know the angle at which the two are merged, but reverse engineering the soundtrack would probably result in a loss of quality. I’ll give it a shot anyway.
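For what it’s worth, a plain sum/difference decode is invertible in exact arithmetic, so recovering M/S from the stereo pair costs nothing beyond rounding/quantization (this assumes the recorder really does use a simple sum/difference decode with no extra processing — an assumption, not something the H2n documents):

```python
def stereo_to_ms(left, right):
    """Recover Mid/Side from a sum/difference-decoded stereo pair:
    M = (L + R) / 2, S = (L - R) / 2."""
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return mid, side
```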
I would do the walk-around test outdoors (since it’s summertime as I write this). Even with traffic or other noises, you won’t get wall or ceiling reflections to confuse the test. Also, if you do it right, you can tell exactly where all the cars and trucks are in your mix-down.
“Why does that metrobus appear to be driving through my left shoulder?”