Calculating average voice pitch

Is there a way to use Audacity to calculate the average voice pitch in a recording that is, say, 30 seconds long? I am trying to train my voice to raise the average pitch, and one of the things I do is read a short paragraph immediately after playing the note I am trying to achieve (as an average); in my case an A (220 Hz). If I record this with Audacity, is there a way to then analyze the clip to see how well I did?

Probably not by pushing a button and reading a number, no. You can use Analyze > Plot Spectrum, or switch the track to Spectrogram view in the left-hand drop-down menu, but please know that when you sing or play a note, there’s a lot more going on than just one note. Do you remember how in music class they threw around the terms “Harmonics” and “Overtones”? Lots of overtones make an instrument rich and fun to listen to, but they also make it rough to measure.
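If you want to see what “lots of overtones” looks like in numbers, here’s a rough Python sketch (outside Audacity, and assuming you have NumPy) that builds a fake 220 Hz note with a couple of overtones and looks at its spectrum. The tallest peak is the fundamental, but there is plenty of energy at 440 Hz and 660 Hz as well, and a real voice has far more.

```python
import numpy as np

rate = 44100                          # CD-ish sample rate
t = np.arange(rate) / rate            # exactly one second of samples
f0 = 220.0                            # the A the original poster is aiming for

# Fundamental plus a couple of overtones, like a (very crude) voice
tone = (1.0 * np.sin(2 * np.pi * f0 * t)
        + 0.6 * np.sin(2 * np.pi * 2 * f0 * t)
        + 0.4 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.abs(np.fft.rfft(tone * np.hanning(len(tone))))
freqs = np.fft.rfftfreq(len(tone), d=1.0 / rate)   # 1 Hz per bin for a 1 s clip

print(f"tallest peak: {freqs[np.argmax(spectrum)]:.1f} Hz")   # ~220 Hz
print(f"energy at 440 Hz: {spectrum[440]:.0f}")               # overtone, clearly not zero
print(f"energy at 660 Hz: {spectrum[660]:.0f}")
```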

This is a measurement of just one piano note.

http://kozco.com/tech/audacity/piano_G1.jpg

Turns out the tallest peak on the left corresponds to G1 on the piano.

If you tried that trick with a song, you would get a solid forest of peaks and not be able to tell or measure anything.

What you can do is sing or talk your test phrase, record it, and save it. Then record the improved one and see whether the blue peaks move to the right (higher pitch).
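If you want something less eyeball-y than watching the peaks, here’s a rough sketch, assuming you export the two takes from Audacity as WAV files. The names before.wav and after.wav are just placeholders, and the number it computes is an energy-weighted mean frequency, so it tracks overall brightness (overtones included) rather than the true fundamental.

```python
import numpy as np
from scipy.io import wavfile

def centroid_hz(path):
    """Energy-weighted mean frequency of a whole WAV file, in Hz."""
    rate, data = wavfile.read(path)
    data = data.astype(np.float64)
    if data.ndim > 1:                      # fold stereo down to mono
        data = data.mean(axis=1)
    mag = np.abs(np.fft.rfft(data * np.hanning(len(data))))
    freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)
    return float(np.sum(freqs * mag) / np.sum(mag))

before = centroid_hz("before.wav")         # placeholder file names
after = centroid_hz("after.wav")
print(f"before: {before:.0f} Hz  after: {after:.0f} Hz")
print("moved up" if after > before else "did not move up")
```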

I know you were hoping for the one-button solution, but I don’t know of any.

Koz

Here’s a similar Topic:
https://forum.audacityteam.org/t/mean-wave-frequency-of-a-selected-audio-track/28885/7
There’s a little Nyquist plug-in included that returns the mean frequency.
Put the code into a text editor and save it as name.ny in your Plug-in folder.
And there’s Steve’s Pitch Detection plug-in available (new plug-ins section in the Nyquist forum).
But I am not sure if it rounds the pitches.
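For context, one very simple thing a “mean frequency” plug-in could do (this is a guess; the plug-in in the linked topic may well work differently) is count how often the waveform crosses zero and convert that to Hz. Here is that idea sketched in Python rather than Nyquist, with a placeholder file name:

```python
import numpy as np
from scipy.io import wavfile

def zero_crossing_hz(path):
    """Very rough 'mean frequency': zero crossings per second divided by two."""
    rate, data = wavfile.read(path)
    data = data.astype(np.float64)
    if data.ndim > 1:
        data = data.mean(axis=1)                     # mono mix-down
    signs = np.signbit(data).astype(np.int8)
    crossings = np.count_nonzero(np.diff(signs))
    return crossings / 2.0 / (len(data) / rate)      # two crossings per cycle

print(f"{zero_crossing_hz('reading.wav'):.1f} Hz")   # placeholder file name
```

Crossing counts get pulled upward by hiss and consonants, which is part of why a single number like this can mislead.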

It seems too simple to do what the poster wants. What happens if the duration of the same song performed twice varies, as it almost certainly will? How does it handle harmonics? If the poster gets a head cold, do all the numbers go off? Get too close to the microphone and do the readings go nuts?
Koz

The YIN algorithm (which is the basis of both plug-ins) was developed to find the fundamental frequency of a voice. There are a lot of other algorithms as well, and each claims to find the fundamental most accurately.
But they all have one thing in common: each is only suited to a certain context, e.g. polyphony or plucked strings.
The first formant of a voice determines its pitch, and its frequency can even rise above the other formants. That’s why pitch shifting sounds so strange: all the formants get shifted to higher frequencies, which isn’t natural.
I think that the results should be quite good, independent of the harmonic overtone structure.
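To illustrate the idea only (this is not Steve’s plug-in, just a toy Python sketch): an autocorrelation detector, which is roughly the starting point that YIN refines, looks for the lag at which the signal best matches a delayed copy of itself and reports the inverse of that lag as the fundamental.

```python
import numpy as np

def fundamental_hz(frame, rate, fmin=70.0, fmax=400.0):
    """Estimate the fundamental of one short frame by autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(rate / fmax)                 # shortest period we accept
    lag_max = int(rate / fmin)                 # longest period we accept
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return rate / best_lag

# Quick self-test on a synthetic 220 Hz tone with an overtone
rate = 44100
t = np.arange(int(0.05 * rate)) / rate          # one 50 ms frame
frame = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(f"{fundamental_hz(frame, rate):.1f} Hz")  # close to 220, not 440
```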
I can’t test it thoroughly - my voice is far too bad for that…
Here’s the link to Steve’s plug-in:

Do you mean a “singing voice” or “talking voice”?