Measuring SNR?

Hi all,
For a school project I used Audacity (2.3.2) to trim some speech recordings for analysis in Praat. Now, as I’m writing up the report, I want to include the signal-to-noise ratio of the audio I used (downloaded from an interview on YouTube) to compare the quality and clarity of the recordings. Is there an easy way to do this in Audacity? Thanks
(P.S. using Windows 8.1, if that’s relevant)

The [u]ACX Check Plug-In[/u] will give you some information you may be able to use depending on the file and how you want to define S/N.

In any case, you’ll need a noise-only section, and you may need to select and analyze the noise separately.

Audacity has a “Contrast” tool specifically for this task.
It was originally developed for testing speech recordings against Web Content Accessibility Guidelines (WCAG) 2.0.

How it works: you select part of the background noise and measure its RMS level, then compare it to the RMS level of the foreground speech.
Details in the manual: Contrast - Audacity Manual
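In case it helps your write-up, here is a minimal, self-contained sketch of what that measurement amounts to (this is not Audacity’s own code; the foreground and background signals are synthesised stand-ins for the selections you would make in the track):

```python
import math
import random

def rms_db(samples):
    """RMS level of a sample block in dBFS (flat, i.e. Z-weighted)."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(mean_square) if mean_square > 0 else float("-inf")

# Stand-ins for the two selections: a "speech" region (a tone here) and a
# quiet noise-only region, synthesised so the sketch runs without a file.
rate = 44100
foreground = [0.3 * math.sin(2 * math.pi * 220 * t / rate) for t in range(rate)]
random.seed(0)
background = [random.uniform(-0.003, 0.003) for _ in range(rate)]

# The Contrast figure is simply the difference of the two RMS levels in dB.
contrast = rms_db(foreground) - rms_db(background)
print("Contrast (S/N estimate): %.1f dB" % contrast)
```

With real audio you would load the file, slice out a speech region and a noise-only region, and pass those slices to rms_db instead.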

Is the Audacity Contrast tool weighted for human hearing?

No. As specified in WCAG 2.0, it measures plain RMS (Z-weighted, i.e. flat).

Weighting appropriately for human hearing would be tricky (and likely to be unrepresentative of actual hearing) because the equal-loudness contours are very different at “foreground levels” compared to “background noise levels”.

compare the quality and clarity of the recordings.

That may be stickier than you think. It’s possible to prepare a voice work with a good signal-to-noise ratio and trashed voice quality, if the artist got the noise figure down by aggressive Noise Reduction in post-production. Noise Reduction affects everything, and only if you’re in good standing with the sound-processing angels does it affect the background noise more than the show. This kind of processing is the reason cellphones sound like they do.

“What?!? I can’t understand you!”

If you analyze that conversation, I bet it has terrific signal to noise.

downloaded from an interview on youtube

That only works if you use one conversation, prepare it several different ways, and then analyze those. YouTube applies its own post-production processing and compression. The experiment falls apart the first time someone reads your methodology. It would be much more stable if you had a known good, clear voice performance (say, one you recorded yourself) and worked from there.


No. As specified in WCAG 2.0, it measures RMS (z-weighted).

As a cousin to that: which noise are you using? Noise comes in colours (pink, white, brown, etc.), and each one has a different effect on the clarity of the voice. And that’s only if you’re using all-frequency noises (spring rain in the trees, shshshssh). Is that the goal?
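To make the colour idea concrete, here is a hedged sketch (using NumPy; the function name and exponents are my own, not from any standard library) of synthesising coloured noise by shaping a white-noise spectrum, so you could test each colour against the same speech sample:

```python
import numpy as np

def coloured_noise(n, exponent, seed=0):
    """Noise whose power spectrum falls off as 1/f**exponent.
    exponent 0 -> white (flat), 1 -> pink (-3 dB/octave), 2 -> brown (-6 dB/octave)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                   # avoid dividing by zero at DC
    spectrum /= freqs ** (exponent / 2)   # amplitude scales as 1/f^(exponent/2)
    noise = np.fft.irfft(spectrum, n)
    return noise / np.max(np.abs(noise))  # normalise to +/-1

white = coloured_noise(44100, 0)
pink = coloured_noise(44100, 1)
brown = coloured_noise(44100, 2)
```

Mixing each of these with the same clean recording at the same RMS level would let you compare clarity with only the noise colour changing.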

What is the formal title of the experiment? We’re just guessing at it.