analyzing/reproducing an effect

I’m creating some new voice material for a videogame, and I’m trying to get the sound to match the original. Not the actor’s voice itself (different character, different actor), but the sound effect applied to the original voice files: the dialog happens over the radio.

Now I can certainly get a passable radio effect by amplifying/clipping and using a high-pass and a low-pass filter. I even put in some white-noise background. The problem is that it sounds like a different radio. :slight_smile:
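
For the record, that chain sketched outside Audacity might look something like this in Python/scipy (the file names and numbers are just placeholders):

```python
# Rough sketch of the chain above: band-pass, amplify/clip, noise bed.
# Assumes a mono 16-bit WAV; file names and numbers are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, voice = wavfile.read("bare_voice.wav")
voice = voice.astype(np.float32) / 32768.0          # PCM -> [-1, 1]

# Band-pass roughly 300 Hz - 5 kHz (the usual "radio" band)
sos = butter(4, [300, 5000], btype="bandpass", fs=rate, output="sos")
radio = sosfilt(sos, voice)

# Amplify into hard clipping to mimic an overdriven receiver
radio = np.clip(radio * 4.0, -0.5, 0.5)

# Low-level white-noise background
radio = radio + 0.01 * np.random.randn(len(radio)).astype(np.float32)

wavfile.write("radio_voice.wav", rate, (radio * 32767).astype(np.int16))
```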

I’ve tried messing with the equalizer, but it feels like I need a multiple-pass process, something like repeated amp/clip passes mixed in with repeated EQ passes. However, I’m a little out of my depth in trying to figure out the right order of operations; I’m reduced to almost-blind experimentation. I don’t have a reasonable methodology for this.

Here’s a link to a clip from the original, which I am trying to match:

http://www.acsu.buffalo.edu/~breslin/sample_voice.wav

Thanks in advance for any help you can offer!

Use Audacity 1.3

Analyze menu > Plot Spectrum.
You can see that the whole thing is band-limited to the range 300Hz to about 5kHz (high-pass/low-pass filters).
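
If you’d rather check this outside Audacity, a Welch power spectrum is the rough equivalent of Plot Spectrum. A minimal sketch, assuming the sample linked above has been downloaded locally:

```python
# Rough equivalent of Analyze > Plot Spectrum: Welch power spectrum
# on a log-frequency axis. Assumes a mono 16-bit WAV saved locally.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import welch

rate, data = wavfile.read("sample_voice.wav")
data = data.astype(np.float32) / 32768.0

freqs, psd = welch(data, fs=rate, nperseg=4096)
plt.semilogx(freqs, 10 * np.log10(psd + 1e-12))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Level (dB)")
plt.show()   # band limiting shows up as steep roll-off at both ends
```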

Switch to “Log frequency” and you can see that the low-end roll-off starts at about 500Hz, so that is the point to set your high-pass filter to.

Below about 100Hz there is virtually nothing but noise, so you will probably need to do a couple of passes with the high-pass filter - one at about 500Hz and a second pass at perhaps 300Hz.
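
As a sketch of those two passes outside Audacity (the Butterworth order here is a guess; Audacity’s high-pass effect has its own rolloff settings):

```python
# Two cascaded high-pass passes, as suggested above. The filter
# order is a guess; adjust it to match the reference roll-off slope.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, voice = wavfile.read("bare_voice.wav")   # placeholder, mono
voice = voice.astype(np.float32) / 32768.0

def high_pass(x, cutoff_hz, order=2):
    sos = butter(order, cutoff_hz, btype="highpass", fs=rate, output="sos")
    return sosfilt(sos, x)

y = high_pass(voice, 500)   # first pass at ~500 Hz
y = high_pass(y, 300)       # second pass at ~300 Hz steepens the slope
```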

There is also a noticeable notch at 1.3kHz (Equalizer effect) and a sharp notch at about 1680Hz (notch filter plug-in, or Equalizer with a custom curve and “Length of Filter” set to maximum).
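
Those two notches, sketched outside Audacity with scipy’s iirnotch (the Q values are guesses - lower Q for the wider dip, higher Q for the sharp one):

```python
# The ~1.3 kHz and ~1680 Hz notches, using scipy's iirnotch.
# Q values are guesses: lower Q gives a wider dip, higher Q a sharp one.
import numpy as np
from scipy.io import wavfile
from scipy.signal import iirnotch, filtfilt

rate, voice = wavfile.read("bare_voice.wav")   # placeholder, mono
voice = voice.astype(np.float32) / 32768.0

for freq_hz, q in [(1300, 4), (1680, 30)]:
    b, a = iirnotch(freq_hz, q, fs=rate)
    voice = filtfilt(b, a, voice)
```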

You are unlikely to be able to create exactly the same distortion, but getting the EQ right will put you in the right ballpark.

Thanks, Steve. It worked out. I picked up the general strategy you recommended. Here’s a postmortem:

I did a frequency analysis on the original “reference” soundfile (the one I linked upthread), looking at both linear and log frequency. I took a screenshot of each so I could refer to them easily. Then I did a frequency analysis on the bare voice file (the new one that I wanted to make sound like the old one).

I picked one difference between the frequency analyses of the reference soundfile and the new soundfile, and made an adjustment in the EQ effect. I executed the EQ and did a frequency analysis on the result. From that I figured out the next change I needed to make to the EQ (normally focusing on just one frequency or a small range).

Then I hit [ctrl-Z] (undo) to back out the EQ change I’d made to the new file (so I wasn’t stacking multiple EQ passes). I made the further changes to the EQ, ran it again, considered the result (so… what else needs changing?), backed out of the change, and repeated.

Repeated about 30-50 times, in fact, until I got the EQ graph correct (or close enough, anyway).

Sometimes I had to make a big bend in the EQ graph to make a small change in the frequency profile. Sometimes a much smaller adjustment to the EQ graph made a much larger change to the profile. This depended on the particular frequency in question.
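
In hindsight, some of that guesswork could be scripted: compute the dB difference between the two smoothed spectra and read the boosts and cuts straight off it. A sketch, with placeholder file names, and both files assumed to be mono and to share a sample rate:

```python
# Compare the reference and new-file spectra numerically: peaks in
# the difference mean "boost here", dips mean "cut here".
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def spectrum_db(path, nperseg=4096):
    rate, x = wavfile.read(path)               # assumes mono 16-bit PCM
    x = x.astype(np.float32) / 32768.0
    freqs, psd = welch(x, fs=rate, nperseg=nperseg)
    return freqs, 10 * np.log10(psd + 1e-12)

freqs, ref_db = spectrum_db("sample_voice.wav")       # the reference
_, new_db = spectrum_db("my_voice_processed.wav")     # the new file

diff_db = ref_db - new_db   # positive: reference is hotter -> boost
for f, d in zip(freqs[::64], diff_db[::64]):
    print(f"{f:7.0f} Hz  {d:+6.1f} dB")
```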

The logarithmic frequency analysis showed a slope on the low end, which suggested that the original sound had been put through a high-pass filter. So I applied one with approximately the same cutoff and dip. I tried this a couple of times, comparing back and forth with the reference profile. (This back-and-forth was basically the same process as getting the EQ correct: tweak the numbers, execute, compare profiles, reverse, tweak the numbers.) The high-pass changed the EQ profile, so towards the end of the effort you have to consider both changes at the same time.

This process took care of getting the distortion (clipping) more-or-less correct without any extra effort. Differences from the original are probably a result of a different microphone and original recording volume, but these turned out to be only slightly detectable – not really noticeable at all.

Finally, there was a background hum in the original, which I needed to reproduce. It extended a couple of tenths of a second beyond the beginning and end of the actual voice content of the reference file, and it plays clean where the voice pauses for breath in the middle. I picked out some of these pieces and copied them together, removing any slices that gave the patchwork an audible pattern. Then, once it was a one-second hum with no beats in it, I copied and spliced random pieces of it within itself until it was long enough for my purposes.
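
That splice-and-extend step could also be scripted. Here’s a sketch that extends a clean one-second hum by overlap-adding random slices with short crossfades, so no repeating pattern is audible (the file name and lengths are placeholders):

```python
# Extend a short clean hum to arbitrary length by overlap-adding
# randomly chosen slices with short linear crossfades.
import numpy as np
from scipy.io import wavfile

rate, hum = wavfile.read("clean_hum_1s.wav")   # placeholder 1 s hum, mono
hum = hum.astype(np.float32) / 32768.0

def extend_hum(hum, target_len, slice_len, fade_len, rng):
    out = np.zeros(target_len, dtype=np.float32)
    fade = np.linspace(0.0, 1.0, fade_len, dtype=np.float32)
    pos = 0
    while pos < target_len:
        start = rng.integers(0, len(hum) - slice_len)
        piece = hum[start:start + slice_len].copy()
        piece[:fade_len] *= fade                # fade in
        piece[-fade_len:] *= fade[::-1]         # fade out
        end = min(pos + slice_len, target_len)
        out[pos:end] += piece[:end - pos]       # overlap-add on the fades
        pos += slice_len - fade_len
    return out

rng = np.random.default_rng(0)
long_hum = extend_hum(hum, target_len=rate * 30,
                      slice_len=rate // 4, fade_len=rate // 50, rng=rng)
wavfile.write("long_hum.wav", rate,
              (np.clip(long_hum, -1, 1) * 32767).astype(np.int16))
```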

I mixed the hum with the voice, and found that I needed to adjust the EQ a bit more. The reference file had the hum, and my new file did not, so that threw off the frequency analysis a bit. But only a bit – it was all sorted in two more minutes.

Of course I saved the EQ profile, in case I need to add any further voice to this project. Now I’ve got a fair emulation of the effect used by the original sound engineer. :slight_smile:

A nice detailed description, breslin. You’ve noticed that it is necessary to be quite obsessive about the detail :wink:

One small detail that it looks like you may have missed: the EQ in between the words is different from the EQ during the words.

Not surprisingly, the sound effect that you are copying is a “sound effect” and not a real recording. The “engine noise”(?) in the background between the words is present to give a sense of place - If this was recorded directly from the radio that noise would not be present and there would be virtual silence between the words due to noise gating (squelch). However, when you are listening to a radio in a noisy environment, then you hear the background noise between the words, but you tune into the words when they are spoken and disregard much of the environmental noise. (Ever noticed that when you listen to a real life recording, the background noise seems much louder than it appeared at the time?)

So the sound-effect creator has EQ’d the voice with a 500-5000Hz band-pass, but the noise between the words is filtered with a 5000Hz low-pass (no high-pass filter).

With the amount of distortion on the voice it is difficult to tell if the noise is present (but band-pass filtered) during the words, or if they have used “auto-ducking” to suppress the noise while the voice is sounding. You could try both and see which sounds better.

Auto Duck is included in Audacity 1.3.
Work on duplicate copies of the tracks - extra tracks can be deleted or muted later.

To use it, place the voice recording (almost complete, with the filtering done) below the (unfiltered) background noise; the voice will be the control track.

Select the background noise track and select the Auto Duck effect.

Set the outer fade times and the maximum pause to zero (or near zero), the inner fade times to around 0.2 seconds and the duck amount to (probably) around -24dB (threshold about -20dB if the voice is normalised).

When you click OK, the noise amplitude will be modulated by the amplitude of the voice track.

You now have a noise track that exists between the words, but is much lower during the words.

To recover the noise while the words are sounding (this will be band-pass filtered, if used at all), make a copy of the auto-ducked noise track and invert it, then mix it with another copy of the original noise track. You can then band-pass filter this the same as the voice and mix it in as required with the auto-ducked noise and the voice track.
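
For anyone working outside Audacity, the whole recipe can be sketched as simple envelope ducking - note that (original - ducked) is exactly the invert-and-mix trick above. Audacity’s Auto Duck works differently internally; the numbers follow the settings suggested earlier, and the file names are placeholders:

```python
# Duck the noise against the voice envelope, then recover the
# during-words noise as (original - ducked), band-pass it like the
# voice, and mix. Both files assumed mono, equal length, same rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt, fftconvolve

rate, voice = wavfile.read("radio_voice.wav")   # placeholder inputs
_, noise = wavfile.read("long_hum.wav")
voice = voice.astype(np.float32) / 32768.0
noise = noise.astype(np.float32) / 32768.0

# Smoothed voice envelope; the ~0.2 s window doubles as the fade time
win = np.ones(int(0.2 * rate), dtype=np.float32)
win /= len(win)
env = fftconvolve(np.abs(voice), win, mode="same")

duck_gain = 10 ** (-24 / 20)                # -24 dB duck amount
threshold = 10 ** (-20 / 20)                # -20 dB threshold
gate = np.where(env > threshold, duck_gain, 1.0).astype(np.float32)
gain = fftconvolve(gate, win, mode="same")  # smooth the gain steps

ducked = noise * gain                       # noise between the words
during = noise - ducked                     # noise during the words

# Band-pass the during-words noise 500 - 5000 Hz, same as the voice
sos = butter(4, [500, 5000], btype="bandpass", fs=rate, output="sos")
during = sosfilt(sos, during)

mix = voice + ducked + 0.5 * during         # 0.5: blend to taste
wavfile.write("final_mix.wav", rate,
              (np.clip(mix, -1, 1) * 32767).astype(np.int16))
```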