Hi, I'm very green to Audacity, so bear with me (hope I'm posting this in the correct section).
I'm working on a YouTube video and basically trying to detect what note a singer is hitting. I know of the Plot Spectrum analysis that Audacity has, but I'm not sure if I'm using it right, or if there is a better way to utilize it.
http://www.youtube.com/watch?v=P3Nx8qkQJtw This is one of the videos; I'm trying to measure the note of the tenor's voice. (At the 2:34 mark and the 4:00 mark; I think both are the same note.)
What I've done is rip the video, convert it to MP3, then import it into Audacity. I highlight the small part of the song where he hits the high note, then go to Plot Spectrum. I'm not sure if I'm doing it correctly, but from what I can tell, the loudest peak is a G#7. Is this right? (It doesn't sound right, because that's extremely high.) Can I be sure that is the correct note?
I’ve uploaded a snapshot of what I’m seeing.
I guess it depends how you define a G#7.
You can compare the note (it is the long, held one, isn't it?) with a sine tone created with the Nyquist Prompt:
- Select the long note
- Press Ctrl+Shift+D to duplicate this region (the duplicate will be overwritten, so deselect the first track)
- Open the "Nyquist Prompt" from the Effect menu
Enter (or copy):
(mult 0.3 (osc gs7))
Listen to the two tracks and judge for yourself.
The frequency for this note is 3322.44 Hz.
That’s impossibly high, but I am no expert…
a4 (a') = 440 Hz is the normal upper limit for a tenor. The well-known high C (c5, or c'') is 523.25 Hz.
Soprano singers can go up to about 1400 Hz.
However, it sounds like a g#5, which is really high for a man. Compare for yourself (just change the number or the note name; the postfix s is for sharp and f for flat).
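To sanity-check these numbers yourself, the equal-temperament frequency of any note can be worked out from a4 = 440 Hz. Here is a small Python sketch; the `note_to_freq` helper and its spelling convention (borrowing Nyquist's s/f postfix) are my own, not anything built into Audacity:

```python
# Equal-temperament frequency from a note name, relative to a4 = 440 Hz.
A4_MIDI = 69  # MIDI note number of a4

NOTE_OFFSETS = {"c": 0, "d": 2, "e": 4, "f": 5, "g": 7, "a": 9, "b": 11}

def note_to_freq(name: str) -> float:
    """'gs5' -> 830.61 Hz; postfix 's' = sharp, 'f' = flat (as in Nyquist)."""
    letter, rest = name[0].lower(), name[1:].lower()
    accidental = 0
    if rest and rest[0] in "sf":
        accidental = 1 if rest[0] == "s" else -1
        rest = rest[1:]
    octave = int(rest)
    midi = 12 * (octave + 1) + NOTE_OFFSETS[letter] + accidental
    return 440.0 * 2 ** ((midi - A4_MIDI) / 12)

print(round(note_to_freq("gs7"), 2))  # 3322.44
print(round(note_to_freq("gs5"), 2))  # 830.61
print(round(note_to_freq("a4"), 2))   # 440.0
```

That confirms the 3322.44 Hz figure for g#7, and puts g#5 at 830.61 Hz.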
If I am not mistaken (and my guitar not out of tune), the whole piece starts in the key of A, modulates to A#, changes to the subdominant D# (for the tenor solo), and modulates a third time to E.
The g# would be the third of this chord. The reprise goes back to D# and ends again with E.
People trying to analyze sounds routinely get killed by overtones and harmonics. Anybody can pick a tuning fork out of a performance, but a sung note is a challenge.
This is the analysis of one single piano note:
If it was in the middle of an orchestra, you’d never find it in the haystack of other sounds.
So yes, generating a set tone and comparing it to the singer may be your best bet; and even then, if the singer's voice is rich with overtones and harmonics, it may be hard.
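One concrete reason the spectrum can mislead here: the harmonics of a sung note land on exact multiples of its fundamental. A quick Python sketch (assuming equal temperament, a4 = 440 Hz) shows that the 4th harmonic of a g#5 is exactly g#7, so the loudest peak in Plot Spectrum can sit two octaves above the pitch you actually hear:

```python
# Harmonic series of a sung g#5 (equal temperament, a4 = 440 Hz).
fundamental = 440.0 * 2 ** (11 / 12)   # g#5 is 11 semitones above a4

for n in range(1, 6):
    print(f"harmonic {n}: {fundamental * n:7.2f} Hz")
# harmonic 4 lands on 3322.44 Hz -- exactly g#7, the "loudest note"
# that Plot Spectrum reported.
```

So a loud G#7 peak is entirely consistent with the singer actually holding a g#5.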
There’s a pitch detection plug-in here that you can try: https://forum.audacityteam.org/t/pitch-detection-plug-in/29126/1
Installation instructions are here: http://wiki.audacityteam.org/wiki/Download_Nyquist_Plug-ins#Installing_Plug-ins
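For a feel of how such a plug-in works: many simple pitch detectors use autocorrelation, i.e. they find the lag at which the waveform best matches a shifted copy of itself. Here is an illustrative toy in plain Python (`detect_pitch` and all its parameters are my own sketch, not the code of the linked plug-in; it assumes a zero-mean signal):

```python
import math

def detect_pitch(samples, rate, min_lag=20, max_lag=400):
    """Toy autocorrelation pitch detector: return rate / best_lag, where
    best_lag maximizes the correlation of the signal with itself shifted.
    min_lag/max_lag bound the search (here roughly 110 Hz to 2205 Hz)."""
    best_lag, best = min_lag, float("-inf")
    for lag in range(min_lag, max_lag):
        c = sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
        if c > best:
            best, best_lag = c, lag
    return rate / best_lag

# Synthetic test tone: a pure g#5 sine at 830.61 Hz.
rate = 44100
tone = [math.sin(2 * math.pi * 830.61 * i / rate) for i in range(4096)]
print(round(detect_pitch(tone, rate), 1))  # close to 830.61 Hz
# (The result is quantized to an integer sample lag, so it won't be exact.)
```

A real detector adds interpolation between lags and windowing, but the idea is the same: it reports the repetition rate of the waveform, i.e. the fundamental, which is why it handles overtone-rich voices better than picking the tallest spectral peak.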
I tried what Robert suggested (extremely helpful, by the way). I see what y'all are saying about there being no way to use the frequency analyzer to pick one voice out of an entire audio file. I don't have the best ear for music, but comparing a g#7 and a g#5, it's really hard to tell what he's hitting; the g#6 sounds like it could be close. I'll try the pitch detection add-on.
Thanks for the help!
Yes, it is a very high tone.
Our Swiss yodelling choirs do that stuff all the time. But I wonder if they could do it without their hands in their trouser pockets…