Related to Recording Vocals: I am quickly learning that there is no substitute for good vocals. I have tried adding...

reverb, echo, delay, and equalization. I have also tried some free autotune software (KeroVee - added a lot of unwanted noise; GSnap - didn’t make much difference; Graillon 2 - was the best but didn’t beat the raw vocal track). I have also tried double-tracking, which I thought made the vocals sound robotic. It just seems that the more you add, the more you take away, if that makes sense.

Does anyone else feel the same way?

(Audacity 2.3.0 and Windows 10)
yeto

I am quickly learning that there is no substitute for good vocals.

Very true.
There’s a saying: “You can’t make a silk purse from a pig’s ear”. My voice is the proverbial pig’s ear, which is why I’m a violinist and not a singer :wink:

We arrived in the middle of the movie. What’s the goal?

It’s really popular for New Users to try and rescue a ratty recording by layering on effects, filters, corrections, and other sound management. It’s also popular to throw money at microphones when what’s really needed is to improve the room.

Why are we doing this and with what?

Koz

What’s wrong? Is it your singing or the recording quality (or both)? There’s not much you can do if you don’t like the timbre of your voice (although EQ can help a little) or if you have limited range, and I don’t know how far you can go with pitch correction if your pitch is totally off.

It just seems that the more you add the more you take away if that makes sense.

You can certainly “over-process” your audio. These are tools, and “It’s a lot easier to break something with a hammer than to build something with a hammer.”

And if you have recording problems (or performance problems), it’s easier to prevent problems than to fix them, and a lot of things can’t be fixed.

I’ve never used pitch correction (I haven’t done “live” recording in a long time). I wouldn’t use it unless you need it, and you might get away with applying it only to the “bad notes” so hopefully most of the recording remains untouched.

From what I’ve read, Auto Tune or Melodyne are used “routinely” in the pro world (even when not really needed) so I assume those two applications are transparent (when used normally and not as a special effect). But of course, plenty of recordings were made before pitch correction existed, and sometimes a singer will intentionally go slightly off-pitch.

There is another common technique called “comping” (compositing) where two or more tracks are lined up and the best is taken from each track (or maybe just one part is replaced, etc.). If you can sometimes hit the note(s), that might be a solution for you. Of course they have to be perfectly time-aligned, but I assume you’re recording to a backing track, so that shouldn’t be a problem.

Reverb can be used as a subtle effect where it’s barely noticeable, or you can create the sound of a concert hall. One subtle technique is to add a little “too much” reverb and then back off until you don’t really notice it, but then if you remove it completely it sounds like “something’s missing”. (I think most commercial music is like that… I don’t notice reverb on most songs, but there probably is some if you listen carefully.) There are lots of reverb plug-ins and (sometimes) lots of settings, so it’s something you have to play with.
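If you’re curious what a reverb is actually doing under the hood, here’s a very rough Python/numpy sketch, completely outside Audacity: it fakes a room’s impulse response with a decaying burst of noise and convolves it with the signal. The decay time and mix amount are made-up starting points, and any real reverb plug-in is far more sophisticated than this.

```python
import numpy as np
from scipy.signal import fftconvolve

def simple_reverb(signal, sample_rate, decay_s=1.5, mix=0.2, seed=0):
    # Fake a room impulse response with an exponentially decaying noise burst,
    # then convolve it with the dry signal and blend wet/dry.
    rng = np.random.default_rng(seed)
    n = int(decay_s * sample_rate)
    impulse = rng.standard_normal(n) * np.exp(-5.0 * np.arange(n) / n)
    impulse /= np.sqrt(np.sum(impulse ** 2))          # normalize the reverb energy
    wet = fftconvolve(signal, impulse)[: len(signal)]
    return (1.0 - mix) * signal + mix * wet

# The "too much, then back off" trick is just raising and then lowering the mix value.
if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    dry = np.sin(2 * np.pi * 330 * t) * np.exp(-4 * t)   # decaying tone as a stand-in for a vocal note
    subtle = simple_reverb(dry, sr, mix=0.15)
    obvious = simple_reverb(dry, sr, mix=0.5)
```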

Echo (usually called “delay”) is a “special effect”. It doesn’t occur naturally in music except when one singer (or instrument) “echoes” the other. (Reverb is the result of shorter indistinct echoes.) If you are going to use it, try setting the delay time to sync with the tempo (maybe a 1-beat or 1-measure delay).
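To make the tempo arithmetic concrete, here’s a small Python/numpy sketch of a feedback delay synced to a beat. The 90 BPM, feedback, and mix values are placeholders I picked for the example, not anything from your project.

```python
import numpy as np

def tempo_delay_seconds(bpm, beats=1.0):
    # One beat lasts 60/BPM seconds; use beats=4 for a one-measure delay in 4/4.
    return 60.0 / bpm * beats

def simple_echo(signal, sample_rate, delay_s, feedback=0.4, mix=0.3):
    # Feedback delay: each repeat is a delayed, quieter copy of the previous one.
    delay = int(round(delay_s * sample_rate))
    wet = signal.astype(np.float64).copy()
    for i in range(delay, len(wet)):
        wet[i] += feedback * wet[i - delay]
    return (1.0 - mix) * signal + mix * wet

# Example: a 1-beat delay at 90 BPM is 60/90 ≈ 0.667 seconds.
if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    dry = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)   # decaying tone as a stand-in for a vocal
    echoed = simple_echo(dry, sr, tempo_delay_seconds(90, beats=1.0))
```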

Equalization is mostly a “corrective” effect, usually to compensate for an imperfect microphone. For example, if your microphone is too “bright” (over-pronounced highs and “T” & “S” sounds) you can correct that with EQ, or similarly if your mic is too “dull”, etc. Generally the adjustments are subtle. But it can also be used in the lower-mid frequency range to “enhance” the tone/timbre of a voice.

It’s good practice to filter-out (or “EQ out”) frequencies below 100Hz in the vocals (or anything else that’s not specifically bass) because anything below the vocal range is just noise.
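You can do that step inside Audacity with EQ or a high-pass filter, but just to show what it means, here’s a Python/scipy sketch of a 100 Hz high-pass. The cutoff and filter order here are common starting points, not magic numbers.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def highpass(signal, sample_rate, cutoff_hz=100.0, order=4):
    # Butterworth high-pass: rolls off rumble, mic-handling noise, and plosive
    # "thumps" below the vocal range while leaving the voice itself alone.
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    # Filtering forwards and backwards avoids any phase shift.
    return sosfiltfilt(sos, signal)

# Example with a made-up test signal: 50 Hz hum mixed with a 440 Hz tone.
if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 1.0, sr, endpoint=False)
    noisy = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
    cleaned = highpass(noisy, sr)   # the 50 Hz hum is heavily attenuated, the tone passes
```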

double-tracking, which I thought made the vocals sound robotic.

I assume that was a double-tracking effect (ADT = automatic double tracking)? True double tracking (recording twice and mixing) usually sounds more natural. (It’s not really natural unless you have a twin with an identical voice. :wink: )
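For what it’s worth, here’s a toy Python/numpy version of the ADT idea: a slightly delayed, slightly detuned copy mixed under the original. The 20 ms delay and 10-cent detune are just guesses at typical values, and a real ADT effect modulates them over time, which is part of why a static copy can sound artificial.

```python
import numpy as np

def fake_double_track(signal, sample_rate, delay_ms=20.0, detune_cents=10.0):
    # Detune by resampling: playing the copy back slightly faster raises its pitch
    # (and shortens it slightly, which is acceptable for a sketch like this).
    ratio = 2.0 ** (detune_cents / 1200.0)
    positions = np.arange(len(signal)) * ratio
    detuned = np.interp(positions, np.arange(len(signal)), signal, right=0.0)
    # Delay the copy by padding the front, then mix it under the original.
    pad = int(delay_ms / 1000.0 * sample_rate)
    delayed = np.concatenate([np.zeros(pad), detuned])[: len(signal)]
    return 0.6 * signal + 0.4 * delayed
```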

…Since you mentioned 2 of the 3 most common effects (EQ and reverb), Compression (including limiting) is also very common. It can be used to get more “loudness” or “intensity” or “grit” and it tends to even out the loudness. Compression (like reverb) can get complicated with lots of different plug-ins and lots of settings. Limiting is a kind of fast compression; it’s more straightforward and you are less likely to get unwanted side effects (when used in moderation).
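If it helps to see the difference as math, here’s a bare-bones Python/numpy sketch of a limiter versus a compressor. It works sample by sample on linear amplitude and ignores attack and release entirely, so it only shows the idea; the threshold and ratio values are arbitrary.

```python
import numpy as np

def simple_limiter(signal, threshold=0.5):
    # Hard limiting: any sample louder than the threshold is clamped to it.
    return np.clip(signal, -threshold, threshold)

def simple_compressor(signal, threshold=0.5, ratio=4.0):
    # Static compression on linear amplitude: above the threshold, the overshoot
    # is divided by the ratio instead of being chopped off, which evens out
    # loudness more gently than the limiter. (Real compressors add attack/release.)
    magnitude = np.abs(signal)
    over = magnitude > threshold
    out = magnitude.copy()
    out[over] = threshold + (magnitude[over] - threshold) / ratio
    return np.sign(signal) * out

if __name__ == "__main__":
    x = np.array([0.1, 0.4, 0.8, -0.9, 0.3])
    print(simple_limiter(x))      # peaks clamped to ±0.5
    print(simple_compressor(x))   # peaks reduced, quieter samples untouched
```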

You are a singer. Your violin is your voice.

Congratulations on being able to play a wonderful instrument,
yeto

Have you ever heard the saying “you look like what you look like”? Well, although my equipment is not the best, “I sound like what I sound like”. In other words, when I play back my vocal tracks they sound like me. When I try to improve my vocals by using effects, people tell me the vocals sound okay, but by adding effects the songs start to lose their identity and are no longer associated/linked with me. They would rather “my” songs (songs that I write) sound like me singing them. I like my voice/timbre and for the most part I sing in tune, but I would rate my singing as good but not great.

I don’t really have any questions at this time, as Audacity has a great manual, and along with searching this forum I am usually able to find answers to any questions I may have.

I was just wondering if anyone would agree with me that adding effects is not always the answer to a vocal problem.

Thank you for taking time to reply,
yeto

But of course, plenty of recordings were made before pitch correction existed

Maybe I am either old-fashioned or behind the times when it comes to pitch correction, but I think I like my vocals to be natural.

One subtle technique is to add a little “too much” reverb and then back off until you don’t really notice it, but then if you remove it completely it sounds like “something’s missing”.

I am going to try this.

It’s good practice to filter-out (or “EQ out”) frequencies below 100Hz in the vocals (or anything else that’s not specifically bass) because anything below the vocal range is just noise.

I will give this a try as well.

I assume that was a double-tracking effect (ADT = automatic double tracking)?

No, I sang both parts.

…Since you mentioned 2 of the 3 most common effects (EQ and reverb), Compression (including limiting) is also very common. It can be used to get more “loudness” or “intensity” or “grit” and it tends to even out the loudness. Compression (like reverb) can get complicated with lots of different plug-ins and lots of settings. Limiting is a kind of fast compression; it’s more straightforward and you are less likely to get unwanted side effects (when used in moderation).

I will try compression/limiting to see if it delivers a positive effect to my vocals. Thank you for the information.

Thank you for taking time to reply with all the great information,
yeto