Comparing sound / Comparing Plot Spectrum

Hey everyone,

Just wondering if there is a way to compare two Plot Spectrum results? Like objectively say these two spectrums are different by X%? I’ve looked into pitch detection but it’s not what I’m after.
For context I’m trying to build a robot to play the violin and repeatability is important. Sorry if a similar topic has been covered, I’m just a desperate novice.

Thanks

I don’t know of one. I cheat and screen capture one and put the other up live.

Even if you could compare two, I don’t think you’d get what you want. Spectrum analysis doesn’t give you sound quality, and the display itself involves a lot of interpolation and guesswork.

See the Size setting. That’s the display accuracy, but it’s limited by how much data you have selected and how fast you want it to work.

You may be able to make good use of a different tool. Drop-down menu on the left > Spectrogram view. That shows time left to right, frequency on the vertical axis, and I think loudness or strength as color. And you can put two different ones up at the same time.

Koz

You can export the data from Plot Spectrum (see: Plot Spectrum - Audacity Manual)
Export creates a plain text file with a list of “frequency (Hz) / level (dB)” pairs. For example, this is a sample of pink noise:

Frequency (Hz)	Level (dB)
344.531250	-15.942205
689.062500	-22.243946
1033.593750	-24.248299
1378.125000	-25.565374
1722.656250	-26.610218
2067.187500	-27.472189
2411.718750	-28.155912
2756.250000	-28.728533
3100.781250	-29.256420
3445.312500	-29.711157
3789.843750	-30.156023
4134.375000	-30.528477
4478.906250	-30.836014
4823.437500	-31.141737
5167.968750	-31.429012
5512.500000	-31.702862
5857.031250	-31.962402
6201.562500	-32.213570
6546.093750	-32.461357
6890.625000	-32.667683
7235.156250	-32.901005
7579.687500	-33.148071
7924.218750	-33.327564
8268.750000	-33.452477
8613.281250	-33.678936
8957.812500	-33.853203
9302.343750	-33.995693
9646.875000	-34.135601
9991.406250	-34.317173
10335.937500	-34.432312
10680.468750	-34.583138
11025.000000	-34.771980
11369.531250	-34.919205
11714.062500	-35.064316
12058.593750	-35.152679
12403.125000	-35.273205
12747.656250	-35.393723
13092.187500	-35.506931
13436.718750	-35.600372
13781.250000	-35.689335
14125.781250	-35.865261
14470.312500	-35.985210
14814.843750	-36.049957
15159.375000	-36.154995
15503.906250	-36.242512
15848.437500	-36.303635
16192.968750	-36.389198
16537.500000	-36.482349
16882.031250	-36.586998
17226.562500	-36.667168
17571.093750	-36.755085
17915.625000	-36.876217
18260.156250	-36.899033
18604.687500	-36.990051
18949.218750	-37.103344
19293.750000	-37.181652
19638.281250	-37.213474
19982.812500	-37.296928
20327.343750	-37.401596
20671.875000	-37.449711
21016.406250	-37.547451
21360.937500	-37.652111
21705.468750	-37.721127

You could use Excel or any other program to apply whatever form of analysis you want to the raw data - it just depends on what kind of analysis you want to perform.
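
For example, something like this (a rough Python sketch - the file names are just placeholders, and it assumes both exports were made with the same Size setting so that the bins line up):

[code]
def load_spectrum(path):
    """Read a Plot Spectrum export into a list of (frequency, level) pairs."""
    pairs = []
    with open(path) as f:
        next(f)  # skip the "Frequency (Hz)  Level (dB)" header line
        for line in f:
            freq, level = line.split()
            pairs.append((float(freq), float(level)))
    return pairs

a = load_spectrum("take1.txt")  # placeholder file names
b = load_spectrum("take2.txt")

# The bins only line up if both exports used the same Size setting.
assert len(a) == len(b), "exports used different Size settings"

diffs = [abs(level_a - level_b) for (_, level_a), (_, level_b) in zip(a, b)]
print("Mean level difference: %.2f dB" % (sum(diffs) / len(diffs)))
print("Largest difference:    %.2f dB" % max(diffs))
[/code]

You could of course replace the mean absolute difference with whatever statistic suits your test.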

Topics merged. Please avoid double posting.

For context I’m trying to build a robot to play the violin and repeatability is important.

:frowning: That’s not going to work… The spectrums will be very similar if the song is played in a different key, or even if a different song is played. You’re mostly going to see the spectrum of a violin (which probably looks similar to the spectrum of an acoustic guitar).

It’s virtually impossible to judge the quality of a recording by looking at the spectrum, waveform, or spectrogram. Pink noise can make a “nice spectrum” but you wouldn’t want to listen to it. If you play a recording backwards it’s going to have the same spectrum, etc., etc.

A [u]spectrogram[/u] (a series of short [u]FFT[/u] analyses) adds the dimension of time, so theoretically you could tell if the right notes were played at the right time with the right intensity. But in practice it’s not that easy, and the difference between two recordings is also 3-dimensional.

The best solution is probably to put one recording in the left channel and the other in the right channel and listen. That’s not going to give you a numerical value but differences should be obvious.
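
If you’d rather build that left/right comparison file outside of Audacity, a quick Python sketch (the file names are placeholders, and it assumes two mono recordings at the same sample rate) might look like this:

[code]
import numpy as np
import soundfile as sf

take1, rate1 = sf.read("take1.wav")   # placeholder file names, mono recordings
take2, rate2 = sf.read("take2.wav")
assert rate1 == rate2, "recordings must share a sample rate"

# Trim to the shorter take, then put one take in each channel.
n = min(len(take1), len(take2))
stereo = np.column_stack([take1[:n], take2[:n]])

sf.write("compare.wav", stereo, rate1)   # left = take 1, right = take 2
[/code]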

Sorry about that Steve.

Thank you, i’ll give that a go.

I should’ve mentioned that we’re only observing the repeatability of open strings, no songs. Thanks for the reply, I’ll try your suggested approach.

This worked quite well, I think. I can get a percentage difference between the levels (dB). It’s definitely a start. Thanks again

A note about the spectrum dB scale:

In effect, the spectrum analysis splits the audio signal into multiple “frequency bins”. The number of bins is half the “Size” setting in the Plot Spectrum interface, so a “Size” of 128 produces 64 frequency bins. (The first and last bins are actually only “half bins”, for 0 Hz and the Nyquist frequency, and are not included in the output - for normal audio these would usually both be zero.) For a smaller “Size” setting the bandwidth of each bin is larger, so each bin will tend to catch more of the signal than if the “Size” is large (more, smaller bins).
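
As a rough worked example (my own arithmetic - it assumes a 44100 Hz sample rate, which is what the pink-noise listing above appears to use with Size = 128):

[code]
sample_rate = 44100              # assumed sample rate of the track
size = 128                       # the Plot Spectrum "Size" setting
bin_width = sample_rate / size   # spacing between frequency bins

print(bin_width)        # 344.53125 Hz - matches the first frequency listed above
print(size // 2 - 1)    # 63 rows exported (the 0 Hz and Nyquist bins are omitted)
[/code]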

The amplitude (dB) measurement is normalized such that a sine wave of amplitude 0 dBFS will measure as 0 dB in the spectrum, and this provides the 0 dB reference level.
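
To illustrate that convention, here is a generic Python/NumPy sketch of the usual sine normalisation (not Audacity’s actual code), using a test frequency that falls exactly on a bin so there is no leakage:

[code]
import numpy as np

N = 1024
fs = 44100.0
k = 100                            # pick a frequency that falls exactly on a bin
freq = k * fs / N                  # about 4306.6 Hz
t = np.arange(N) / fs
x = np.sin(2 * np.pi * freq * t)   # amplitude 1.0 sine = 0 dBFS

# Scale the FFT magnitude so a full-scale sine comes out at 0 dB.
spectrum = np.abs(np.fft.rfft(x)) / (N / 2)
print(20 * np.log10(spectrum.max()))   # approximately 0.0 dB
[/code]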