Would it be possible to use Audacity in conjunction with a computer microphone to monitor dB levels over long periods of time? For a school project I need to model the change in sound levels in various locations, and need an accurate (and preferably affordable) way of making measurements that I can then model using regression tools.

You’re looking for a Sound Pressure Level meter. You can force Audacity to approximate one of those, but you will never get “real” dB SPL numbers, just relative ones: this sound is 3 dB louder than that one. How loud are they? No idea.

This is a newer version of the ones we use and there isn’t a lot of information on it, but this is the idea.

Thanks for your quick response. I do not think it is necessary for us to have absolute readings; what we need to measure is the relative changes. Is there an easy way of doing this?

Does that Sound Pressure Level meter take measurements over time? Could it interface with a computer program to do so?

You can use this instrument to calibrate your computer. When this instrument reads 0 dB SPL (for example), then the computer reads – whatever – and that’s your calibration point. Also remember that electronic sound has a maximum of 0 dB (equipment failure) and always goes down from there; Real Life has a zero point and goes both ways, up and down. Broadcast tone in the US is -20 dB. In Europe, it’s -17 dB.

You could choose your loudest sound and note where the sound meter reads in Audacity. All the other samples will be lower than that. Of course, you can’t change anything between readings. You probably shouldn’t use the computer for anything else. Making a Skype call in the middle of the job would be deadly. Skype resets sound levels.

You also need to turn off all the Windows Conferencing Services. They like to reset levels, too.
Scroll down to Windows Conferencing…

You need to pay attention to the microphone as directional mics can have serious repeatability problems. Also recording for a very long time (10 hours) demands a very well behaved computer.

That sounds like a good plan. In terms of the actual process, how would I do the loudness measurement in Audacity? I need numerical data, not just the simple amplitude graph.

You will need to decide what exactly it is that you want to measure; for example, a 30 kHz air vibration could have very high energy, but it would be “silent” because it is beyond the range of human hearing.

Our project is not a physics project, but a math project. We are, therefore, not concerned with any particular sort of loudness, but interested in seeing relative changes over time so as to model them mathematically. We’re measuring the sound inside classrooms, which we expect to model sinusoidally, and the sound in hallways, which we expect to model with a cubic, quartic, etc. function. So what I’m looking for specifically is audible sound over time: time on the x-axis, loudness on the y.

Any measurement of “loudness” requires that the frequency bandwidth (range of frequencies) is defined; however, for the sake of a math project I doubt that your teacher will really be expecting any kind of calibrated results. Simply defining the bandwidth limit by the frequency sensitivity of your microphone will probably suffice, though it may be worth commenting on this in your write-up.

If you run the following code in the Nyquist Prompt effect and use the “Debug” button (not the “OK” button), then there will be an output of the RMS values for 1-minute time intervals. This code is for mono tracks only. If you require a different time interval, change the “60” in line 3 to the required time window (in seconds).

(format t "~a"
  (do* ((datalist ())
        (period (float 60)) ; RMS window in seconds
        (start 0 (setq start (+ period start)))
        (stop (+ start period) (setq stop (+ period stop))))
       ((>= stop (get-duration 1)) (reverse datalist))
    (setf test (extract-abs start stop s))
    (setf test (integrate (mult test test)))
    (setf datalist (cons
      (round (linear-to-db (sqrt (/ (peak test ny:all) period))))
      datalist))))
(format nil "If you pressed the Debug button,\nthe RMS values ~
             per minute will\nbe displayed.")
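If you would rather process an exported recording outside Audacity, the same per-window RMS calculation can be sketched in Python. This is only a rough equivalent of the Nyquist code above, not the code itself; the function name and the toy test tone are made up for illustration, and the samples are assumed to be floats in the range -1.0 to 1.0:

```python
import math

def rms_db_per_window(samples, rate, window_seconds=60):
    """Per-window RMS level of float samples (range -1.0 to 1.0),
    rounded to the nearest dB relative to full scale."""
    n = int(rate * window_seconds)
    levels = []
    for i in range(0, len(samples) - n + 1, n):
        window = samples[i:i + n]
        mean_square = sum(x * x for x in window) / n  # mean of the squares
        # (an all-zero window would need a guard before log10)
        levels.append(round(20 * math.log10(math.sqrt(mean_square))))
    return levels

# Example: one second of a full-scale 440 Hz sine at 8000 samples/second.
# A full-scale sine has an RMS of 1/sqrt(2), i.e. about -3 dBFS.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
print(rms_db_per_window(tone, rate=8000, window_seconds=1))  # → [-3]
```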

I can see a practical problem. The noise level in a quiet room can be very low and can compete with the internal noise of the sound card and microphone. In real life you hear the air conditioner, footsteps, muted conversation, but the recording is twice as loud as real life because of the constant fffffffffffff in the background. Not very many Windows sound cards win awards for high quality.

So in the interests of scientific accuracy, you also need to run the capture equipment in a dead quiet room to see where the lower limit of recording accuracy is. You will never get an experiment sample lower than that and it can seriously bias the job.

Remembering back, I think the “real” total noise number is the square root of the sum of the squares of the two values, the reading and the noise. You have to add them in quadrature, but you will need to run the formula backwards, since you have the sum and only one value. It’s been a while. Koz
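For what it’s worth, that quadrature arithmetic can be sketched like this. The values here are made-up linear RMS amplitudes, so any dB readings would need converting to linear before using it:

```python
import math

def add_in_quadrature(signal, noise):
    """Combined RMS when two uncorrelated sounds add."""
    return math.sqrt(signal ** 2 + noise ** 2)

def subtract_in_quadrature(total, noise):
    """Run the formula backwards: estimate the signal
    given the measured total and the known noise floor."""
    return math.sqrt(max(total ** 2 - noise ** 2, 0.0))

print(add_in_quadrature(3.0, 4.0))       # → 5.0
print(subtract_in_quadrature(5.0, 4.0))  # → 3.0
```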

We’ve gathered our data and, with some effort, have managed to analyze some of it in Audacity. We’ve had trouble analyzing large files (anything significantly over 150 minutes), and so have done our best to select representative portions of our day-long recordings. We’ve then taken the root-mean-square values, plugged them into Excel, graphed them, and done polynomial regression.

One of our lingering questions is this: How exactly does a root-mean-square value compare to a decibel? Could one be converted into the other?

Excuse my ignorance, but could someone help translate this code into English? I see a few key points, but do not understand exactly where the root mean square is calculated.

This takes a section of the sound “s” between “start” and “stop”.

“start” and “stop” are calculated from iterating “period” (60 seconds).

(setf test (integrate (mult test test)))

sums the samples of the square of “test” (test x test) where “test” is the extracted section of the signal.


(round (linear-to-db (sqrt (/ (peak test ny:all) period))))

starting from the inside, out:
(peak test ny:all) finds the result of the integration (the integral of the squared signal only ever increases, so its peak value is the sum over the whole test period).

(/ (peak test ny:all) period)
The sum of the squares is divided by the test period to give the “mean” of the “square”.

(sqrt (/ (peak test ny:all) period))
gives the square root of the above.

(linear-to-db )
converts the answer from a linear measure to dB

(round )
rounds the “answer” to the nearest integer value.

So what you end up with is the square root of the average (mean) of the squared sample values (RMS) over a 1-minute window, which is then converted to dB.
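The same inside-out chain can be written as a sketch in Python with a toy four-sample window (the numbers are invented for illustration; in the real Nyquist code, “period” is in seconds and the integration runs over the extracted audio):

```python
import math

window = [0.5, -0.5, 0.5, -0.5]              # toy extracted section ("test")
period = len(window)                          # window length in samples
sum_of_squares = sum(x * x for x in window)   # integrate (mult test test)
mean = sum_of_squares / period                # (/ ... period)
rms = math.sqrt(mean)                         # (sqrt ...)
db = round(20 * math.log10(rms))              # (round (linear-to-db ...))
print(db)  # rms is 0.5, so → -6
```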

Does that help?

There is probably a much easier way of doing this, but you always catch me late at night.

Thank you! That is enormously helpful! Though the math is easily accessible, the Nyquist language is rather opaque.

The final step you describe is a conversion to decibels. Does that mean that this whole time we’ve been analyzing the loudness of our recordings not just in RMS, but in decibels? Does this mean that we can make comparisons to measurements made by others, and that our analysis isn’t entirely relative?

dB in this case is relative to “full scale” (track height), which is a linear range of +/- 1.0.
This does not relate directly to any “real world” measurement unless your amplifier, speakers and listening environment are all calibrated to some standard.
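As a sketch of what the earlier SPL-meter calibration would look like in practice: you note one simultaneous pair of readings and apply the offset ever after. The 94 dB SPL / -20 dBFS pair below is a made-up example reading, not a standard:

```python
import math

def dbfs(rms):
    """Level in dB relative to full scale (rms in the range 0..1)."""
    return 20 * math.log10(rms)

# Hypothetical calibration point: the SPL meter read 94 dB SPL
# at the same moment Audacity measured -20 dBFS.
offset = 94 - (-20)  # 114 dB

def dbspl(rms):
    """Estimated real-world level, valid only while nothing in
    the recording chain changes."""
    return dbfs(rms) + offset

print(dbspl(0.1))  # dbfs(0.1) is -20, so → 94.0
```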

Nyquist is based on LISP, or more precisely on XLISP which is a dialect of LISP.
The general syntax of LISP is

(function arguments)

, so whereas in other languages (including English) you might write something like (1 + 2 + 3), in LISP the function comes first, so it would be (+ 1 2 3).
Functions are always between parentheses ( … ).