How is RMS Value Determined?


The waveform display in Audacity shows an RMS value (the light blue part of the waveform).

I understand that the value is derived by squaring the sample values, averaging them, and then taking the square root, but what I’m not clear about is the time period over which this is done. Is there a standard for this?
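For reference, the square-average-square-root computation described above can be sketched in a few lines of Python (this is just an illustration of the formula, not Audacity's actual implementation):

```python
import math

def rms(samples):
    """Root-mean-square: square each sample, average the squares,
    then take the square root of that average."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A full-scale square wave has an RMS value of 1.0,
# while a full-scale sine wave has an RMS value of about 0.707.
print(rms([1.0, -1.0, 1.0, -1.0]))
```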



There is no standard. Or, looking at it another way, there are lots of standards depending on what is being measured and the context, but none that apply to waveform displays in audio editors. The “time period” is referred to as the “window size”. I don’t know what window size is used for the waveform display, but my guess is that it varies with the zoom level. If you really need to know, you would have to dig into the source code to find where it is calculated. Why do you ask?
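To illustrate what the “window size” means in practice, here is a hedged sketch (names like `windowed_rms` are made up for this example, and this is not how Audacity itself does it) that computes one RMS value per non-overlapping window. A larger window smooths out level changes; a smaller one tracks them more closely:

```python
import math

def windowed_rms(samples, window):
    """Compute one RMS value per consecutive, non-overlapping window
    of `window` samples. The choice of window size changes how
    quickly the RMS trace responds to level changes."""
    out = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        out.append(math.sqrt(sum(s * s for s in chunk) / window))
    return out

# A 1 kHz sine sampled at 8 kHz: one cycle spans exactly 8 samples,
# so an 8-sample window already yields the steady-state RMS (~0.707).
signal = [math.sin(2 * math.pi * 1000 * n / 8000) for n in range(800)]
print(windowed_rms(signal, 8)[:3])
print(windowed_rms(signal, 80)[:3])
```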

I’m curious!

Also, I’m doing some audio editing work for a community radio station and there are lots of problems with levels (mics too loud, too soft, etc.).

There is a relatively new standard (ITU-R BS.1771) designed to handle issues of “standard loudness”.