Waveform and Draw Tool?

I can find no explanation of the blue waveform itself. Is the light blue the vocal itself, or is it a specific frequency range or something else? If it is the vocal, why can’t
there be a tool like the draw pencil to simply right-click those frequencies out of the file?

I understand the theory that some mixed frequencies may be instrumental, etc., but if it is a captured separate stream, then it seems to me something could be programmed to remove it.

BTW: The Audacity vocal removal procedure (V2) doesn’t work. It lowers the volume a little and changes the quality somewhat, that’s all.

Thanks for the info.

The blue waves represent graphically the vibration of air when somebody sings or plays an instrument – or a jet flies over. If you hold an empty styrofoam or paper coffee cup up to your mouth and yell into it, you can feel the vibrations of the cup on your hand. Those are the back and forth vibrations represented by the blue waves, more air up and less air down.

The shades of blue more or less represent peak air movement (the way the air actually moves) and average energy (the way your ear actually hears). Neither of these characteristics has anything to do with identifying the flute sounds in an orchestra.

Vocal Removal fails more often than it succeeds. It’s a pure arithmetic tool that subtracts anything that appears exactly the same in both left and right of a stereo show. Since the lead singer is usually in the left-right middle of a song, the lead singer goes. Also, the bass and drums tend to be in the middle and they go, too.
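The subtraction the tool performs can be sketched in a few lines of NumPy. This is an illustrative toy example, not Audacity’s actual code: a made-up “vocal” tone is panned dead center (identical in both channels) and a “guitar” tone is panned hard left.

```python
import numpy as np

# Toy stereo signal: one second at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
vocal = 0.5 * np.sin(2 * np.pi * 440 * t)   # identical in both channels (centered)
guitar = 0.5 * np.sin(2 * np.pi * 220 * t)  # left channel only

left = vocal + guitar
right = vocal.copy()

# The whole "trick": subtract one channel from the other.
# Anything identical in both channels (the centered vocal) cancels out,
# which is also why centered bass and drums disappear along with it.
removed = left - right

print(np.max(np.abs(removed - guitar)))  # ~0: only the hard-panned guitar survives
```

In a real song nothing is this clean: reverb, stereo effects, and lossy encoding mean the “centered” parts are never exactly identical in both channels, which is why the result usually sounds hollow.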

This is a before and after clip of what happens to most songs.


The tool only works at all if you have a good quality stereo show, so mono MP3 downloads don’t work at all, etc.

Splitting a song into individual, good quality instruments and voices is a tool we haven’t got yet.


There is a page in the manual about this. See: http://manual.audacityteam.org/o/man/audacity_waveform.html


Thanks all for the response.

Yes, I know about the frequencies, but in reference to the graphic representation itself: if the program is capable of retrieving the frequencies
(light blue) of the vocal segment and showing them to us, then it should equally allow for removal of those segments. Then we can follow through with saving to
a new file as per normal, and see whether what is left over is acceptable for our use.

"Remove Vocal" doesn’t remove the light blue area.

As an aside, a playback option where we can reference the waveform (in zoom) while playing the section really slowly, with break points and an option to remove the sine wave of that sound, would allow this. It would be tedious, maybe days of work, but I feel it is possible. But I realize this feature could be out of range for a free product.

You misunderstand what the two shades of blue represent.

The darker blue represents the peak level of the waveform.
The lighter blue represents the RMS level of the waveform.

“Peak” level is a simple “magnitude” of the sample values.
“RMS” level is a kind of average value.
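As a sketch of the difference, assuming NumPy and a full-scale 440 Hz sine standing in for real audio samples:

```python
import numpy as np

# One second of a full-scale 440 Hz sine, a stand-in for audio samples.
sr = 44100
t = np.arange(sr) / sr
samples = np.sin(2 * np.pi * 440 * t)

peak = np.max(np.abs(samples))        # largest sample magnitude (the darker blue)
rms = np.sqrt(np.mean(samples ** 2))  # root-mean-square average (the lighter blue)

print(peak)  # very close to 1.0
print(rms)   # ~0.707 for a sine: peak / sqrt(2)
```

Note that both numbers come straight from the sample values over time; no frequency analysis is involved anywhere.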

This has little to do with frequency.
There is an article here about the basics of digital audio: Audacity Manual

The basic problem is that vocals and instruments contain overlapping frequencies. All real-world sounds contain a fundamental frequency (the note or pitch you perceive) plus harmonics and overtones. At any moment in time, there are many frequencies coming from one voice or one instrument. The harmonics and overtones are what make two singers sound different when they are singing the same notes, and they are why a piano sounds different from a guitar. Complicating matters, in music the singer and instrument are often singing/playing exactly the same note!
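A toy NumPy sketch shows the overlap. The “voice” and “instrument” here are made-up sine mixtures, not real recordings: both play the same 220 Hz note with different harmonic balances, so they put energy into exactly the same frequency bins and no band contains only the voice.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
freqs = [220, 440, 660]  # a fundamental plus two harmonics

# Made-up "voice" and "instrument": same note, different harmonic balance.
voice = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip([1.0, 0.6, 0.3], freqs))
instr = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip([1.0, 0.2, 0.7], freqs))

spec_v = np.abs(np.fft.rfft(voice))
spec_i = np.abs(np.fft.rfft(instr))
bins = np.fft.rfftfreq(len(t), d=1 / sr)

for f in freqs:
    b = np.argmin(np.abs(bins - f))
    # Both sounds have strong energy in the very same bins.
    print(f, spec_v[b] > 1000, spec_i[b] > 1000)
```

Cutting any of those bins out of the mix would damage both sounds at once.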

You can isolate the left & right and do some “tricks” like center channel removal, but you can’t un-bake a cake and you can’t un-mix a song!

The waveform you see is represented in the time domain. Time is on the X-axis (across) and amplitude is on the Y-axis (up and down). You can think of it as representing the position of the speaker cone as it moves in and out at any point in time. There is no “frequency” axis, although the graph does contain the frequency information and you can get a rough idea of frequency from it.

You can convert from the time domain to the frequency domain using [u]FFT[/u] (and there are many audio effects that use FFT). You can make a graph (frequency spectrum display) in the frequency domain, but the spectrum changes from moment to moment. So… you would need thousands of separate spectral graphs to represent an entire song.
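A minimal sketch of one such analysis window, assuming NumPy and a made-up two-tone signal in place of a real song:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
# A made-up "song": two tones mixed together.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# FFT of one short "moment" (one 4096-sample analysis window) of the signal.
window = signal[:4096]
spectrum = np.abs(np.fft.rfft(window))
freqs = np.fft.rfftfreq(len(window), d=1 / sr)

# The strongest bin lands near 440 Hz. A whole song would need
# thousands of these windows, one per moment in time.
print(freqs[np.argmax(spectrum)])
```

Audacity’s spectrogram view does essentially this, drawing one such spectrum per slice of time and stacking them left to right.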