Do you mean “frequency bands”? Most “real world” sounds contain thousands of distinct frequencies. You can duplicate the sound onto as many tracks as you want, then use “Equalization” to isolate a specific band in each (or, better, use the low-pass and high-pass filters).
If you want 6 frequency bands, make 5 duplicates of the track (select the track and click “Edit->Duplicate” five times) so that you have 6 copies in total.
Now select the first track and use the Equaliser (“Effects->Equalization”) to filter the audio so that you have just the frequencies that you want.
Hint: start off with the settings “flat”, select “draw curve”, and set the filter length to maximum. Then use the lower-left slider to increase the range to -120 dB. If you click on the blue line, it will create a movable “node” so that you can edit the filter. I’ve included a couple of screenshots to give you the idea. Depending on what plug-ins you have installed, there may be better filters for this job, such as the low-pass and high-pass filters, but the general scheme is the same.
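For anyone who wants to do the same band-splitting outside Audacity, here is a rough sketch of the idea in Python with SciPy (that library choice, the band edges, and the function name are my own assumptions; the thread itself only uses Audacity’s GUI):

```python
# Sketch of the duplicate-then-filter scheme: one band-pass filter per
# "track". Assumes SciPy is available; band edges below are arbitrary.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_into_bands(audio, fs, edges):
    """Split `audio` into bands whose boundaries are given in `edges` (Hz)."""
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # 4th-order Butterworth band-pass; second-order sections are
        # numerically safer than (b, a) coefficients for narrow bands.
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfiltfilt(sos, audio))
    return bands

fs = 44100
t = np.arange(fs) / fs  # 1 second
audio = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 2000 * t)

# 6 bands need 7 edge values (5 interior boundaries plus the two ends).
edges = [20, 100, 300, 800, 2000, 5000, 12000]
bands = split_into_bands(audio, fs, edges)
print(len(bands))  # 6 "tracks", one per band
```

Summing all six bands back together gets you approximately the original signal, which is the point of making the duplicates in the first place.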
I’m not sure exactly what you mean, but if you use v.1.3.4 there are some analysis tools (“Analyze” menu) that may show you what you want. If you haven’t got v.1.3.4, you can download it from the main Audacity website. It’s fine to have both 1.2.x and 1.3.x installed at the same time as long as they are in different directories.
You seem to be looking for a way to separate a sound file into several different tracks, each containing one of the harmonics of the note. This is a cool idea and it would be a great way to make a unique timbre for any instrument quickly and easily. Unfortunately, I don’t think Audacity is going to be able to do this easily. In fact, I’ve never seen software that will, though it should be possible.
There are several problems:
You’d have to use a whole bunch of very narrow bandpass filters (like what Steve is describing, only many more of them). These would each have to be centered on one of the harmonics of the note. The issue here is that Audacity does not currently have a way of recognizing each harmonic, so you’d have to enter them manually. Since “real world” notes are made up of thousands of these, you’d either have to make thousands of tracks, or bunch several harmonics together into each track. You’d be pulling your hair out before long.
Even then, this would only work for an individual monophonic note. As soon as the note changes, or the sound becomes polyphonic, this method breaks down. The number of harmonics and the position of each harmonic changes. Even the relative spacing between them isn’t necessarily constant along an instrument’s range. So you could only process one note at a time this way.
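To make the limitation concrete: for a single monophonic note whose fundamental you already know, isolating one harmonic is easy; the hard part is that the fundamental has to be supplied by hand for every note. Here is a minimal NumPy sketch (the function name, the assumed fundamental `f0`, and the mask width are all illustrative assumptions):

```python
# Isolate harmonic n of ONE monophonic note with a KNOWN fundamental f0,
# by masking FFT bins -- the "enter them manually" step from the post.
import numpy as np

def extract_harmonic(audio, fs, f0, n, width_hz=20.0):
    """Keep only the FFT bins within +/- width_hz of harmonic n * f0."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    mask = np.abs(freqs - n * f0) <= width_hz
    return np.fft.irfft(spectrum * mask, n=len(audio))

fs = 8000
t = np.arange(fs) / fs            # 1 second of audio
f0 = 220.0                        # assumed fundamental (entered by hand)
# A toy "note": fundamental plus two weaker harmonics.
note = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3))

second = extract_harmonic(note, fs, f0, 2)  # only the 440 Hz partial
```

As soon as the pitch changes, `f0` is wrong and the mask lands between harmonics, which is exactly why this only works one note at a time.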
I’m not sure how to go about doing what you’re trying to do. I know for a fact that the KYMA system can do things like this (I used to work for that company), but it costs about $3000 to get your hands on a basic system. That’s $3000 well spent, in my opinion, but I’m biased. I don’t know of any pure software methods at this time.
And here all I want to do is to play a sound, then record it and find out the difference in volume from playback to recording.
Not according to your older posts.
If you want to know the “volume” difference between a signal that you’re playing and one that you’re recording you’ll have to define exactly what you mean by “volume” (it’s a poorly defined word in the audio world). Do you mean SPL-A? SPL-C? Perceived volume level? Digital Amplitude?
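If what you mean is digital amplitude, one common concrete definition is RMS level in dB. A quick sketch of comparing playback to recording under that definition (the variable names and the half-amplitude “recording” are made up for illustration):

```python
# "Volume" difference defined as the difference in RMS level, in dB.
# Assumes both signals are float arrays scaled to the range -1..1.
import numpy as np

def rms_db(signal):
    return 20 * np.log10(np.sqrt(np.mean(np.square(signal))))

fs = 44100
t = np.arange(fs) / fs
playback = 0.5 * np.sin(2 * np.pi * 440 * t)    # what you played
recording = 0.25 * np.sin(2 * np.pi * 440 * t)  # what came back, quieter

diff = rms_db(recording) - rms_db(playback)
print(round(diff, 2))  # about -6.02 dB: half the amplitude
```

Note that SPL-A, SPL-C, and perceived loudness would each give a different number for the same pair of signals, which is why the definition matters.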
If you want to see which frequencies are strongest you can use the Analyze → Plot Spectrum (which averages the signal and shows the amplitude of each frequency), or click the track name and select Spectrum to show you a spectrogram of the audio (which allows you to see how each frequency changes over time).
If you use the Spectrogram view, I highly recommend dragging the bottom edge of the track so that it’s as big as the Audacity window; you get more detail that way.
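For the curious, what Plot Spectrum does is essentially a windowed FFT of the selection; a rough NumPy equivalent that reports the strongest frequency looks like this (the test tone and window choice are my assumptions, not what Audacity uses internally):

```python
# Rough equivalent of Analyze -> Plot Spectrum: window the signal,
# take its spectrum, and report the strongest frequency.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
# A 440 Hz tone plus a weaker 880 Hz partial and a little noise.
rng = np.random.default_rng(0)
audio = (np.sin(2 * np.pi * 440 * t)
         + 0.3 * np.sin(2 * np.pi * 880 * t)
         + 0.01 * rng.standard_normal(fs))

# Hann window reduces spectral leakage before the FFT.
spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # strongest frequency, about 440 Hz
```

The Spectrogram view is the same idea applied to many short, overlapping windows in a row, which is what lets you see the frequencies change over time.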