Hello everyone, I’m new to this forum and have been playing around with different effects in Audacity, and I’ve been enjoying it very much. Recently there’s a voice effect that I’ve been trying to learn how to recreate, but I haven’t found any tutorials or anything similar to what I want. It’s the sound effect used for a Kantus in Gears of War. Here’s a link so you can all hear what I’m trying to make in Audacity.
There’s always more than one effect involved.
Here is something similar:
It’s made with the vocoder plug-in (distance = 0). You need a control signal in the left channel and the voice in the right channel.
The control signal was a sine tone that sweeps randomly over a frequency region; I generated it from the Nyquist prompt. It’s best to duplicate the voice track and generate the control signal into the copy (so both tracks have the same length).
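A randomly sweeping sine like the one described above can be sketched in plain Python (this is an illustrative stand-in, not the actual Nyquist code from the thread; the frequency region and sweep speed are assumptions):

```python
import math
import random

def random_sweep_sine(duration=2.0, sr=44100, f_lo=200.0, f_hi=2000.0,
                      sweep_rate=4000.0, seed=0):
    """Sine tone whose frequency wanders randomly between f_lo and f_hi.

    All parameter values are guesses; sweep_rate is the maximum drift
    of the instantaneous frequency in Hz per second.
    """
    rng = random.Random(seed)            # reproducible sketch
    freq = 0.5 * (f_lo + f_hi)
    phase = 0.0
    out = []
    for _ in range(int(duration * sr)):
        # Random walk of the instantaneous frequency, clamped to the region.
        freq += rng.uniform(-sweep_rate, sweep_rate) / sr
        freq = min(max(freq, f_lo), f_hi)
        phase += 2.0 * math.pi * freq / sr
        out.append(math.sin(phase))
    return out
```

The phase accumulator (rather than `sin(2*pi*f*t)`) keeps the waveform click-free while the frequency moves around.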
Afterwards, I used Steve’s Random Pitch Modulation plug-in to modify the frequencies even more. Finally, one should apply a suitable equalisation to bring the higher frequencies up again. Feel free to experiment.
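“Bringing the higher frequencies up again” is essentially a treble shelving boost. A toy stdlib-Python version of that idea (the cutoff and gain values are invented for illustration, not taken from the thread):

```python
import math

def treble_boost(samples, sr=44100, cutoff=2000.0, gain=2.0):
    """First-order shelving boost: split off the band above `cutoff`
    with a one-pole low-pass and amplify the remainder by `gain`.
    Cutoff and gain are illustrative guesses -- tune by ear."""
    coef = math.exp(-2.0 * math.pi * cutoff / sr)
    low = 0.0
    out = []
    for s in samples:
        low = coef * low + (1.0 - coef) * s   # low-frequency part
        out.append(low + gain * (s - low))    # boost everything above it
    return out
```

In Audacity itself you would of course draw the same boost as a curve in the equalisation effect rather than write code.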
Thanks for the help! I only have one question about your audio sample: how did you make it sound so clear? When I use the vocoder with a sound sample in the left channel and my voice in the right, the result is roughly the same but sounds warbled and muted. Could it be what I’m recording my own voice with, or maybe options that I need to mess around with?
Have you tried Trebor’s proposed plug-in? It presumably doesn’t produce the same result as the built-in vocoder, with which my sound also came out quite “muffled”.
I am not sure why this is the case; I’ll have to experiment myself. My impression is that it’s best to use a very small “distance”. Maybe I’ve simply chosen the wrong control sound (a sine wave). A sawtooth has many more harmonics.
It also can’t hurt to examine the spectra of the original sounds and of the result.
You can see where the frequencies are “washed out” the most and bring them up again with the EQ.
With the “mda talkbox” (which is not identical to Audacity’s built-in vocoder), it matters which way round the tracks are (I’m on a Linux machine at the moment, so I can’t check which). In other words, if you don’t like the effect, swap the left and right tracks and try the talkbox effect again.
For the voice to be understandable in the chimera, the animal (or other) sound has to contain a reasonably wide range of frequencies. You can add a little white noise* to the animal sound to fill any gaps in its sound spectrum before using it in the talkbox effect.
[ * see the “Generate” drop-down menu for white-noise generation ]
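The “fill the gaps with a little noise” step can be sketched in Python like this (the 5% noise level is an assumption; in Audacity you would generate the noise on a separate track and mix it in quietly):

```python
import random

def add_white_noise(samples, amount=0.05, seed=0):
    """Mix a small amount of white noise into a carrier sound so its
    spectrum has at least some energy in every band.  `amount` is a
    guess -- just enough to fill spectral gaps without being audible
    as hiss.  Output is clipped to the usual [-1, 1] sample range."""
    rng = random.Random(seed)
    return [max(-1.0, min(1.0, s + amount * rng.uniform(-1.0, 1.0)))
            for s in samples]
```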
I have and it helped a lot. You also helped me tremendously, because the random pitch modulation adds a really nice finishing touch. I’m still new to equalization so I don’t know very much about bringing the higher frequencies up or down. But thank you very much for your advice as well!
I used the MDA talkbox and instantly got the results you and Robert said I could achieve. The vocoder did something similar, but it was so warbled it couldn’t be understood. The talkbox plug-in did miracles, as did the combination with the random pitch modulation. Thank you so much!
Glad we could be of some assistance. Trebor has huge experience with various effects.
A remark on talk box and vocoder:
In principle, a talk box can’t be reproduced as a plug-in, since it needs a horn to produce a sound (coming from a guitar, for example) that is transferred via a tube into the mouth/throat. A live performer then mouths the words without generating any sound himself. The result is then recorded via a microphone.
A vocoder, on the other hand, takes prerecorded speech to alter the (e.g. guitar) sound, applying the speech formants to the harmonically rich input sound. It is therefore clear that the MDA Talkbox is actually a vocoder, since it uses recorded speech and not a physical resonating cavity.
In contrast, the Audacity vocoder performs no formant analysis (which is hard to do) but works with a number of frequency bands that are boosted or cut according to the modulating sound.
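That band-by-band principle can be shown as a toy channel vocoder (a sketch only, and certainly not Audacity’s actual implementation; the band centres, Q, and envelope smoothing here are arbitrary choices):

```python
import math

def bandpass(x, f0, sr, q=5.0):
    """RBJ-cookbook biquad band-pass centred on f0."""
    w = 2.0 * math.pi * f0 / sr
    alpha = math.sin(w) / (2.0 * q)
    b0, b2 = alpha, -alpha
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w), 1.0 - alpha
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for s in x:
        v = (b0 * s + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, s
        y2, y1 = y1, v
        y.append(v)
    return y

def envelope(x, sr, cutoff=30.0):
    """Rectify and smooth: tracks how loud a band is over time."""
    coef = math.exp(-2.0 * math.pi * cutoff / sr)
    e, env = 0.0, []
    for s in x:
        e = coef * e + (1.0 - coef) * abs(s)
        env.append(e)
    return env

def channel_vocoder(modulator, carrier, sr=44100,
                    bands=(300.0, 600.0, 1200.0, 2400.0)):
    """Per band: measure the voice's loudness, impose it on the carrier."""
    n = min(len(modulator), len(carrier))
    out = [0.0] * n
    for f0 in bands:
        env = envelope(bandpass(modulator[:n], f0, sr), sr)
        c = bandpass(carrier[:n], f0, sr)
        for i in range(n):
            out[i] += env[i] * c[i]
    return out
```

The asymmetry is visible in the code: the modulator only contributes loudness envelopes, the carrier contributes the actual waveform, which is why swapping the two channels changes the result.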
It now also becomes clear why it makes a difference which sound is in the left and which in the right channel.
This topic has shown that interesting effects can be achieved by the usage of classic effects in another fashion.
I wanted to thank you guys once again for your help with this thread and this effect in Audacity.
I came back because there is another effect that I’m curious about. I’ve tried playing with different effects to get it, but I have no idea how to achieve it (I’m sorry to bump this thread randomly, but I thought I’d post a reply instead of making a new thread for another voice effect).
From the pronounced effect on the prolonged syllables that have a constant pitch (e.g. the “a” of “ancient”, the “n” of “frozen”),
it sounds like a component* of the speech is being used to frequency-modulate it.
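One crude way to approximate “the speech modulates itself” is to drive a short variable delay line with the signal’s own smoothed envelope; a moving delay shifts pitch, so loud sustained syllables get the strongest wobble. This is a speculative sketch of the idea (delay/phase modulation rather than true FM, and every parameter value is invented):

```python
import math

def self_fm(x, sr=44100, depth_ms=5.0, env_cutoff=8.0):
    """Variable delay line whose delay follows the input's own envelope.
    depth_ms (maximum delay) and env_cutoff (envelope smoothing in Hz)
    are guesses, not values from the thread."""
    coef = math.exp(-2.0 * math.pi * env_cutoff / sr)
    depth = depth_ms * 1e-3 * sr          # max delay in samples
    env = 0.0
    out = []
    for i, s in enumerate(x):
        env = coef * env + (1.0 - coef) * abs(s)
        pos = i - depth * env             # read behind the write position
        j = int(math.floor(pos))
        frac = pos - j
        a = x[j] if 0 <= j < len(x) else 0.0
        b = x[j + 1] if 0 <= j + 1 < len(x) else 0.0
        out.append(a * (1.0 - frac) + b * frac)  # linear interpolation
    return out
```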
I see. What plug-in did you use to get the frequency modulation? I can’t seem to find it in Audacity or under downloads. I think I can play around with it to get a similar effect. If all else fails, I’d accept something that sounds close to the Crypt Fiend (from the same game, Warcraft 3: http://www.youtube.com/watch?v=JbWzs5DjcDY)
As for the sounds, maybe it’s different tracks with different pitch amounts and a delay?