Just wondering if there is a way to compare 2 Plot Spectrum results? Like objectively say these two spectra are different by X%? I’ve looked into pitch detection but it’s not what I’m after.
For context I’m trying to build a robot to play the violin and repeatability is important. Sorry if a similar topic has been covered, I’m just a desperate novice.
I don’t know of one. I cheat and screen capture one and put the other up live.
Even if you could compare two directly, I don’t think you’d get what you want. Spectrum analysis doesn’t measure sound quality, and the display itself involves a lot of interpolation and guesswork.
See the Size setting. That controls the analysis resolution, but it’s limited by how much data you have selected and how fast you want it to work.
You may be able to make good use of a different tool: drop-down menu on the left > Spectrogram view. That’s time left to right, frequency on the vertical axis, and I think loudness or strength is shown as color. And you can put two different ones up at the same time.
You can export the data from Plot Spectrum (see: http://manual.audacityteam.org/man/plot_spectrum.html)
Export creates a plain text file with a list of “frequency (Hz) / level (dB)” pairs. For example, this is a sample of pink noise:
You could use Excel or any other program to apply whatever form of analysis that you want on the raw data - it depends what kind of analysis you want to perform.
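As a sketch of that idea, here is one way the exported data could be compared in Python instead of Excel. It assumes the usual export layout (one header line, then tab-separated "frequency / level" pairs) and that both exports were made with identical Plot Spectrum settings so the frequency bins line up; the file names and the choice of an RMS-difference metric are my own illustration, not anything from the thread.

```python
import numpy as np

def load_spectrum(path):
    """Load a Plot Spectrum text export: a header line followed by
    tab-separated 'frequency (Hz) <tab> level (dB)' pairs."""
    data = np.loadtxt(path, skiprows=1)
    return data[:, 0], data[:, 1]  # frequencies, levels in dB

def spectrum_difference_db(levels_a, levels_b):
    """One possible single-number comparison: the RMS difference in dB
    between two spectra measured with identical settings."""
    return float(np.sqrt(np.mean((levels_a - levels_b) ** 2)))

# Hypothetical usage with two exported files:
# _, a = load_spectrum("take1.txt")
# _, b = load_spectrum("take2.txt")
# print(f"RMS spectral difference: {spectrum_difference_db(a, b):.2f} dB")
```

A small RMS difference would suggest the two takes are spectrally similar; whether that maps onto "different by X%" in a musically meaningful way is a separate question, as the replies below point out.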
For context I’m trying to build a robot to play the violin and repeatability is important.
That’s not going to work… The spectra will be very similar if the song is played in a different key, or even if a different song is played. You’re mostly going to see the spectrum of a violin (which probably looks similar to the spectrum of an acoustic guitar).
It’s virtually impossible to judge the quality of a recording by looking at the spectrum, waveform, or spectrogram. Pink noise can make a “nice spectrum” but you wouldn’t want to listen to it. If you play a recording backwards it’s going to have the same spectrogram, etc., etc.
A [u]spectrogram[/u] (a series of [u]FFT[/u] analyses) adds the dimension of time, so theoretically you could tell if the right notes were played at the right time with the right intensity. But in practice it’s not that easy, and the difference between two recordings is also 3-dimensional.
The best solution is probably to put one recording in the left channel and the other in the right channel and listen. That’s not going to give you a numerical value but differences should be obvious.
I should’ve mentioned that we’re just observing the repeatability of open strings, no songs. Thanks for the reply, I’ll try your suggested approach.
In effect, the spectrum analysis splits the audio signal into multiple “frequency bins”. The number of bins is half the “Size” setting in the Plot Spectrum interface, so a “Size” of 128 produces 64 frequency bins (the first and last bins are actually only “half bins”, which are for 0 Hz and the Nyquist frequency, and are not included in the output - for normal audio these would usually both be zero). For a smaller “Size” setting, the bandwidth of the bins is larger, so they will tend to catch more of the signal in each bin than if the “Size” is large (more, smaller bins).
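The bin arithmetic above can be sketched with NumPy's FFT helpers. This is only an illustration of the Size/bandwidth relationship described in the post (it doesn't reproduce Plot Spectrum's windowing or its exact bin bookkeeping); the 44100 Hz sample rate is an assumption.

```python
import numpy as np

rate = 44100   # assumed sample rate
size = 128    # the "Size" setting in Plot Spectrum

# Centre frequencies of the FFT bins, from 0 Hz up to Nyquist (rate/2).
# An FFT of length `size` yields size/2 + 1 bins here, including the
# 0 Hz and Nyquist "half bins" mentioned above.
freqs = np.fft.rfftfreq(size, d=1.0 / rate)

bandwidth = rate / size  # spacing between adjacent bins
print(len(freqs))        # 65
print(bandwidth)         # 344.53125 Hz per bin
```

Doubling Size halves the bandwidth of each bin: finer frequency resolution, but each bin catches less of the signal.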
The amplitude (dB) measurement is normalized such that a sine wave of amplitude 0 dBFS will measure as 0 dB in the spectrum, and this provides the 0 dB reference level.
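That normalization can be checked numerically. The sketch below generates a full-scale (0 dBFS) sine at an exact bin frequency and scales the FFT magnitude by size/2, which makes the peak come out at 0 dB; no window is applied here, so it sidesteps the window-gain correction a real analyzer would also need. The sample rate and FFT size are assumptions for the demonstration.

```python
import numpy as np

rate, size = 44100, 1024
t = np.arange(size) / rate

# Full-scale sine placed exactly on bin 10 to avoid spectral leakage.
freq = 10 * rate / size
x = np.sin(2 * np.pi * freq * t)

# Scale by size/2 so a 0 dBFS sine gives a peak magnitude of 1.0,
# i.e. 0 dB on the spectrum's reference scale.
mag = np.abs(np.fft.rfft(x)) / (size / 2)
db = 20 * np.log10(np.maximum(mag, 1e-12))
print(round(db.max(), 1))  # 0.0
```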