Apologies if this is the incorrect place to post this query.
My company has created a sound recorder and we are trying to assess its performance, in particular the system’s noise floor.
The sound recorder outputs WAV files, which I then load into Audacity, using Plot Spectrum to view the noise across the frequencies of interest.
For my analysis, dB is not a useful measure; I would rather have pascals (Pa).
Looking at the following page: https://manual.audacityteam.org/man/plot_spectrum.html
It notes: “Plot Spectrum takes the selected audio (which is a set of sound pressure values at points in time) and converts it to a graph of frequencies (the horizontal scale in Hz) against amplitudes (the vertical scale in dB)”, which I understand.
However, I am having a hard time getting my head around: “Spectrum (default) - Plots the FFT of the data as described above. The amplitudes are normalized such that a 0 dB sine (pure tone) will be (approximately) 0 dB on the graph.” In particular the “normalized” part - what does this mean, and how would you un-normalise? Or is that a silly question?
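For reference, here is my current reading of that normalization, sketched in Python. I am assuming (this may not be exactly what Audacity does internally) that the FFT magnitudes are scaled by 2/N so that a full-scale sine comes out at amplitude 1, i.e. 0 dB:

```python
import numpy as np

fs = 44100
n = 4096
f = fs / n * 100                 # tone placed exactly on an FFT bin
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f * t)    # full-scale (0 dBFS) sine

spec = np.fft.rfft(x)
amp = 2.0 * np.abs(spec) / n     # normalize so a unit-amplitude sine -> 1.0
db = 20 * np.log10(np.maximum(amp, 1e-12))
print(round(db.max(), 2))        # peak of the tone, in dB
```

With this scaling the peak reads approximately 0 dB, matching the manual's description. Is that roughly what the normalization means?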
Also, as we are in the in-air audio domain, would the dB values presented in Plot Spectrum be relative to 20 µPa, as discussed in the following: https://ec.europa.eu/health/opinions/en/hearing-loss-personal-music-player-mp3/l-3/2-sound-measurement-decibel.htm
WAV files, as I understand it, contain the raw sampled audio stream as LPCM values, ranging over the range of the ADC that sampled the signal - 16-bit in our case. Plot Spectrum then applies an FFT to the signal, as noted in the manual, to arrive at the amount of energy in each frequency bin. What I fail to see is how this is then converted to dB. I know that typically dB = 20 × log10(v1/v2), where v2 is a reference voltage (and likewise 20 × log10(p1/p2) for a reference pressure p2, or 10 × log10 for a ratio of powers). What is the reference pressure used?
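To make my goal concrete: if the dB values in Plot Spectrum are actually dBFS (relative to digital full scale) rather than dB SPL, then I would need a calibration constant to get pascals. Here is a sketch of the conversion I have in mind - the full-scale figure of 1 Pa is a made-up example calibration (e.g. from recording a 94 dB SPL = 1 Pa calibrator), not a known property of our recorder:

```python
import math

P_REF = 20e-6            # 20 uPa, standard reference pressure for sound in air
FULL_SCALE_PA = 1.0      # ASSUMED calibration: 0 dBFS corresponds to 1 Pa

def dbfs_to_pascal(dbfs):
    # Undo the dB: pressure in Pa, given the full-scale calibration above.
    return FULL_SCALE_PA * 10 ** (dbfs / 20)

def pascal_to_db_spl(p):
    # Sound pressure level re 20 uPa.
    return 20 * math.log10(p / P_REF)

print(round(pascal_to_db_spl(dbfs_to_pascal(0.0)), 1))   # -> 94.0
```

Does that match how the Plot Spectrum values should be interpreted, i.e. is the missing piece simply the per-device calibration from full scale to pascals?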