Hi, I’m new to Audacity and have been reading some of the threads, and I’m confused. My questions are quite elementary, I’m sure… These are the things I want to do with Audacity and somehow can’t get done.
Analyse the audio clip.
I want to know the average loudness (in decibels) and pitch (frequency). When I click on Plot Spectrum, I don’t quite understand the Frequency Analysis window (I use the Hanning window, as recommended). Do I just read the value at the end of the window? Each time I move the cursor, a different number appears. When I export it to a txt file, I get a list of frequencies (Hz) and levels (dB). Does this mean I can just average them out?
Create a matching white noise clip to the audio.
I’ve been creating a new track and generating white noise. So far, I’ve been using the Envelope Tool to mimic the pattern. However, I can’t get the right pitch. When I change the pitch using effects (changing it to match the previous clip), it doesn’t give the same results. I still get lower decibels, despite changing the pitch numerous times.
I’d appreciate it if someone could answer my questions. I hope I am clear about my predicament.
You’re fighting three methods of measuring audio “size.”
Audacity and most other programs measure +/- peak values because that’s easy, fast and cheap to do. The actual energy in the air is the RMS value – the horsepower needed to move the air around – and you hear things on the dBSPL “A” curve, which takes into account the human ear’s tendency to hear certain pitches of sound better than others.
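To illustrate the difference between the first two of those, here is a rough Python sketch of peak vs. RMS measurement on raw sample values. This is just the basic arithmetic, not how Audacity’s meters are actually implemented:

```python
import math

def peak_db(samples):
    """Peak level in dBFS: the single largest absolute sample value."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """RMS level in dBFS: reflects the average power (energy) in the signal."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# One second of a full-scale 440 Hz sine at 44.1 kHz:
# its peak is essentially 0 dBFS, but its RMS is about -3 dBFS,
# so the two "sizes" of the same sound differ by roughly 3 dB.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
```

For noise-like material the gap between peak and RMS is usually even larger, which is one reason a noise clip can look as “tall” as a voice clip but still sound quieter.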
Analyze > Plot Spectrum changes depending on what you need and the settings.
The arithmetic works out really well on a linear frequency scale, but you hear in log. 400 Hz sounds like it’s about in the middle (440 Hz is the oboe “A” at the beginning of a concert), but it’s clearly not halfway between 20 Hz and 20,000 Hz. Increasing the “Size” of the analysis improves the frequency accuracy as you go up, but sometimes it actually tells you less, because of the need to analyze all the data.
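The “Size” setting is the length of the analysis window, and the spacing between adjacent frequency bins is simply the sample rate divided by that size. A quick sketch, assuming a 44100 Hz sample rate:

```python
sample_rate = 44100
for size in (512, 4096, 65536):
    resolution = sample_rate / size  # Hz between adjacent frequency bins
    print(size, round(resolution, 2))
# 512   -> 86.13 Hz per bin (coarse, but each bin averages a lot of signal)
# 4096  -> 10.77 Hz per bin
# 65536 -> 0.67 Hz per bin (fine detail, including every harmonic and overtone)
```

That trade-off is why a very large Size can show so much detail that the overall picture gets harder to read.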
This is an analysis of one single piano note. Note the “size” of the sample is really high, so that’s the graph of every pitch, harmonic, and overtone in the note. The fundamental almost gets lost in the forest of junk.
The default view shows the relative levels of the frequencies present in the selection that is being analysed.
If you analyse a pitched note (for example a note from a musical instrument) there will probably be a spike in the graph that is higher than the others, and that will probably correspond to the “pitch” that you hear. It is usually helpful to switch from the Linear to Logarithmic scale (Axis: Log Frequency).
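As a rough sketch of what “finding the spike” means, here is a naive Python DFT that picks the strongest bin out of a test tone. Audacity uses a proper FFT with windowing; this brute-force version is only meant to show the idea:

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (fine for short clips; real tools use an FFT)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):  # bins up to the Nyquist frequency
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# A 440 Hz test tone, 1024 samples at an 8192 Hz sample rate
# (chosen so 440 Hz lands exactly on a bin: 8192 / 1024 = 8 Hz spacing).
sample_rate = 8192
samples = [math.sin(2 * math.pi * 440 * t / sample_rate) for t in range(1024)]

mags = dft_magnitudes(samples)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])
peak_freq = peak_bin * sample_rate / len(samples)  # the "spike" frequency
```

With a real instrument note the tallest spike is usually (though not always) the fundamental, which is why reading the pitch off the graph mostly works.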
See here for detailed information: http://manual.audacityteam.org/man/Analyze_Menu
“White noise” does not have a “pitch”. It should be a random distribution, such that (over an average) there is an equal amplitude of all frequencies within the test range. http://en.wikipedia.org/wiki/White_noise
Due to the random nature, each time that you generate white noise it will be a bit different.
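A minimal Python sketch of that idea, assuming samples uniformly distributed in [-1, 1] (real noise generators vary in the distribution they use):

```python
import math
import random

random.seed(0)  # seeded only so this demo is repeatable; drop the seed for fresh noise
noise = [random.uniform(-1.0, 1.0) for _ in range(44100)]  # one second at 44.1 kHz

# Over a long run the mean sits near zero and the RMS near 1/sqrt(3) ~ 0.577
# for uniform samples, but any single generation (or any short selection of it)
# wobbles around those values - hence "each time it will be a bit different".
mean = sum(noise) / len(noise)
noise_rms = math.sqrt(sum(s * s for s in noise) / len(noise))
```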
I should have provided some background to what I am trying to achieve. Sorry about that.
I currently have 5 audio clips each of a baby crying and a baby babbling. I also have a recording of white noise (generated from an amplifier with a mic). For my research, I need to create a few versions of these audio clips for pseudoreplication purposes. Anyway… I want to create a matching white noise pattern for each bab/crying clip. I do understand that white noise is relatively equal in terms of amplitude.
Back to my two questions in my first post:
I exported the analysis to a text file and obtained an average (with the help of Excel). I used the log scale (somehow it made more sense visually) and it seems OK to me so far. Having the analyses of my 10 clips (5 each of bab/cry), I now have the average frequency and decibel level for each variable. These will be used as a comparison to the white noise clip later.
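One caveat when averaging the exported dB values directly: dB is a logarithmic unit, so the arithmetic mean of the dB numbers is not the same as the average power. A small Python sketch of the difference (the three levels are made-up example values, not from the actual clips):

```python
import math

levels_db = [-30.0, -40.0, -50.0]  # example values, as Plot Spectrum might export

# Plain Excel-style average of the dB numbers:
arithmetic_mean = sum(levels_db) / len(levels_db)  # -40.0 dB

# Power average: convert each dB value to linear power, average, convert back.
# The loud components dominate, so the result is higher than the plain mean.
powers = [10 ** (db / 10) for db in levels_db]
power_mean_db = 10 * math.log10(sum(powers) / len(powers))  # about -34.3 dB
```

Either average can be a legitimate summary statistic; the point is just to use the same one consistently across all ten clips and the noise clip.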
This question is resolved. Thanks!
Instead of generating white noise from Audacity, I will use my recording of white noise and try to mimic the pattern of a corresponding cry/bab clip. I plan to use the “envelope tool” to mimic the audio pattern of the clips.
I found this topic very interesting.
It is quite old, I know, but a similar(?) problem has prompted me to bring it up.
I am not sure the chimera effect is what MinHooi wanted.
What brings me here is that I have two tracks, two different human dialogues. The first track has some noise (white-noise-like) in the background; the second is cleaner. After trying to mix the first track (EQing and volume) to lower the noise without altering the speech, I still have noticeable noise on it, so it creates a sort of muting effect when the first track fades into the second. My approach is to add some ambient noise, recorded independently in the same place, to my second (quieter) track. I tried to match the log spectrum analysis of the first track’s noise, but I wonder if there is a better solution. I don’t think the chimera effect is what I am looking for here, since there is not much dynamic variation on either track – it’s just something like white noise.
Also, I read in this thread that white noise does not have a pitch. Is that so only when the white noise is theoretical or ‘pure’ white noise, that is, with all frequencies at the same amplitude? In my case there is a pitch – moreover, a note – on both the first track and the dubbed track (obviously a different one on each). I am not sure I am good enough to read a note off the spectral analysis, nor able to EQ the dubbed track to match it.
Any help would be appreciated on how to approach this.