RMS measurement for large files

When working with long selections in Nyquist, there is often a risk of running out of memory, which may lead to Audacity freezing or crashing.
More precisely, the problem is running out of “address space”.

Nyquist is a 32-bit library, so it has a 2 GB limit for address space. When processing long selections (roughly, selections of more than an hour at a 44100 Hz sample rate), if the address space for the audio data is not released as Nyquist processes the data, there is a very real danger of using up all valid addresses.
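To see why, here is some rough arithmetic (this assumes Nyquist stores samples as 4-byte 32-bit floats, one buffer per channel, which is my understanding rather than something stated in the documentation above):

```lisp
;; Rough arithmetic, assuming 4 bytes per sample (32-bit float):
;; one hour of mono audio at 44100 Hz is
;;   3600 * 44100 = 158,760,000 samples, about 635 MB per channel,
;; so a few hours of stereo audio held in memory all at once would
;; exhaust a 2 GB address space.
(print (* 3600 44100 4)) ; bytes for one hour of one channel
```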

For analyzing audio, it is usually possible to release memory as the audio is processed. The code snippet below is an example of measuring the RMS level of a mono or stereo track while avoiding excessive memory use. (It has been tested successfully on an 8-hour selection in Audacity 2.1.2.)

More information about this issue can be found in this article by Roger Dannenberg: http://www.cs.cmu.edu/~music/nyquist/debug-plugin.html

;; Measure RMS

(defun get-signal (in)
  ;; Pass the signal through a local so that the caller can drop its
  ;; own reference to the track audio.
  (let ((sig in))
    (setf in nil)
    sig))

(defun myrms (ch)
  ;; Return the mean of the squared samples (power) for one channel.
  ;; ch is -1 for a mono track, or 0 / 1 for a stereo channel.
  (let* ((sig (if (= ch -1)
                  (get-signal *track*)
                  (get-signal (aref *track* ch))))
         ;; Reduce to one-second averages of the squared signal.
         (tmp (snd-avg (mult sig sig) 44100 44100 op-average))
         (ln 0))
    ;; Release the references to the track audio so that sample memory
    ;; can be reclaimed as snd-avg consumes the data.
    (if (= ch -1)
        (setf *track* nil)
        (setf (aref *track* ch) nil))
    (setf sig nil)
    (setf ln (truncate (snd-length tmp ny:all)))
    ;; Average the one-second averages over the whole selection.
    (peak (snd-avg tmp ln ln op-average) ny:all)))

(defun stereo-rms ()
  ;; RMS is the square root of the mean square; for stereo, average
  ;; the per-channel power before taking the square root.
  (sqrt (if (arrayp *track*)
            (/ (sum (myrms 0) (myrms 1)) 2.0)
            (myrms -1))))

(print (stereo-rms))
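If a reading in dB is preferred, the linear result can be converted; this wrapper is just a suggestion, not part of the tested snippet (linear-to-db is a standard Nyquist function):

```lisp
;; Optional: report the RMS result in dB rather than as a linear value.
(print (linear-to-db (stereo-rms)))
```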

Is that true on Linux and Mac as well as Windows? I have not tested on Linux or Mac (I guess you have), but my understanding was that on Linux and Mac, Audacity would be subject to something like a 4 GB limit rather than the default 2 GB limit imposed on Windows (the other 2 GB of the possible 4 GB maximum being reserved for the Windows system).

So this would not be a solution to processing audio, as in Normalize?


On Linux, the split between the kernel and the user address range may be configured in different ways. On my 32-bit Linux machine, which has 3 GB of RAM, the user address range is 2 GB. I guess that it could be more on a machine with more physical RAM (but I don’t have a machine to test that guess).

I don’t know how it works on Mac OS X, but, on a 64-bit Mac with 16 GB of RAM, the 2 GB limit still applies.

No, this is not a solution for normalizing, because in order to release RAM we need to delete the audio data as we go while analyzing (looking for the peak level). But if we delete the audio data, it is no longer available to be amplified; Nyquist can't go back and fetch it again.

In some cases we can work around the problem. For example, if we just want to normalize the peak level, we can get the peak level of the selection from Audacity without needing to analyze it with Nyquist. This code snippet normalizes to 0 dBFS and has no problem with long selections:

(setf peak (get '*selection* 'peak-level))
(if (> peak 0)
    (mult *track* (/ 1.0 peak))
    "Nothing to do - the selection is silent.")
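A variant of the same idea normalizes to a target level below full scale rather than to 0 dBFS; the -1 dB target here is just an example value (db-to-linear is a standard Nyquist function):

```lisp
;; Sketch: normalize to -1 dBFS instead of 0 dBFS.
;; db-to-linear converts the target dB level to a linear gain reference.
(setf target (db-to-linear -1.0))
(setf peak (get '*selection* 'peak-level))
(if (> peak 0)
    (mult *track* (/ target peak))
    "Nothing to do - the selection is silent.")
```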

Is that a Mac Mini?

My understanding was that many early Intel Macs were in a similar position to Windows, because some of the 4 GB physical address space was taken up by system I/O memory requirements. However, as I understand it, most higher-end and later Macs use a chipset that allows them to put system I/O in a separate address space, and can give up to 4 GB of physical RAM to a 32-bit app if the machine has sufficient RAM to do that.


Yes it’s a new Mac Mini.

If you’d like to test your Mac, try this snippet on a 4 hour mono track (Use the “Activity Monitor” to watch the RAM usage).

(print (peak *track* ny:all))

Does Audacity crash?

I think you’re confusing two things:

The first Intel Macs had a limit of 4 GB of RAM and weren't completely 64-bit clean; some of this was in memory addressing. It wasn't until the Core 2 Duos that the 4 GB limit disappeared, even though earlier chipsets should have been capable of using more RAM. These early machines behave oddly: some can address 6 GB of RAM with a set of 4 GB + 2 GB SODIMMs. That's why Apple officially supports only 4 GB of RAM on Core Duos.

The 3 GB limit on Windows comes from an old memory mapping problem IIRC. Run any other OS on the same hardware and it’s gone. MS could fix this, but refuses, because of backwards compatibility.

No. On El Capitan I was able to process somewhat less than four hours with 2.4 GB memory used by Audacity out of 4 GB total, 500 MB remaining. After processing (click OK on the message displaying the peak), I had to wait over 30 seconds for the memory to be released before Audacity could be used again.

On Ubuntu 16.04 64-bit I processed more than four hours, with 3 GB of memory used by Audacity out of 6 GB total, 1.8 GB remaining. Again there was a long unresponsive wait after processing, but notably the memory was not released. :astonished: I have not yet checked whether that happens only with this snippet, or only when memory use is large. Do you see Nyquist effects releasing memory after use on your Linux system?


With this code:

(print (peak *track* ny:all))

yes, memory is released after clicking “OK” for the returned value. (The first time the code is run, memory use after release is about 10 MB higher, which I guess is because Nyquist has been loaded. On subsequent runs, memory is released to the same level each time.)

The problem on Windows still stems from a Bill Gates proclamation: “640 KB should be enough for everyone”. Hence the memory controllers, which had a 1 MB physical address space, left a 384 KB upper block reserved for other hardware use, like the GPU. And Windows is still unable to use that block.

Fast forward to today, and that same memory arrangement means Windows XP cannot address more than 3 GB of RAM for one app.

Years ago, third-party memory managers existed to get around these limitations. Quarterdeck Expanded Memory Manager (QEMM) is the only one I still remember, but several others were around, and you needed those once you had more than 1 MB of memory. Some brands, like Philips, had a little trickery in the BIOS/firmware that allowed them to use a block of 64 KB that went unused on most other PCs, so they touted 704 KB of user memory without a separate memory manager.

The Mac has never had such a memory hole and has never needed software to work around it. A 64-bit application (most of them today) should be able to address as much memory as the system allows. Some small apps don't, for reasons of simplicity; what would a bare-bones text editor do with 32 GB of RAM, after all? But any decent video editor has no problem eating 32 GB or more of RAM.

The strange thing is, most efficient DAWs don't seem to benefit from more than 1 to 2 GB of their own RAM*. You'll notice no speed increase when allowing them much more RAM. I've been told that is for the same reason most DAWs don't benefit from more than 4 cores: many audio tasks can't be split over more than one core.

  • That’s without taking some very inefficient VSTs into account. But those face other limits, like one thread per plugin.

I ran your snippet on Ubuntu 16.04 64-bit on a selection within an hour of mono audio. The extra RAM used by the snippet was not released. When I chose a different selection in the track, almost no additional RAM was used, but none was released. Even when I created audio in a new track and ran the snippet on that, it did not use much more memory and released none.

ACX-check behaves in the same way. No release of RAM after OK on the analysis.

This is the same in 2.1.2 release and 2.1.3-alpha.

So although I have access to more RAM than you thought (maybe up to 4 GB), I will use it up more quickly if I use several memory-hungry Nyquist effects.

Do you regard not releasing RAM as a bug? I can’t really understand the behaviour I described. Why do the effects not take more RAM when working on audio they have not seen before?

I have made no tweaks to share of kernel and application RAM, so the behaviour is whatever Ubuntu 16.04 does by default.


Could you give some numbers for the amount of RAM used at each stage?

Select 15 minutes in a 1 hour mono sine tone 44100 Hz.

Open Nyquist Prompt and paste your snippet. Audacity is using 24.2 MB of RAM.

Apply the snippet. By the time I get the dialogue for the returned value, Audacity is using 186.7 MB.

OK the dialogue. Audacity RAM use is now 186.6 MB.

Create new empty track and generate noise into the selection in that track. Open Nyquist Prompt. Audacity memory use still 186.6 MB. Apply the snippet. When the returned dialogue appears, RAM use is 190.8 MB. After OK’ing the dialogue, RAM use is still 190.8 MB.


That’s a lot lower than my machine. I’m seeing 125 MiB

OK, so that’s about 159 MB of data

So that’s about right for Audacity + 15 minutes of audio.

On my machine, the memory is then released back down to 139 MiB.

So there’s not actually a memory leak. The memory is being managed and reused, but it seems that Ubuntu is reserving that used memory after “OK” rather than releasing it. It could be worth minimizing Audacity at that point and doing something else for an hour or two, then looking to see if the memory is eventually released.

I don’t see that we can draw much in the way of meaningful conclusions, other than that memory management is different in Ubuntu 64-bit from Debian 32-bit.

It’s also different to Ubuntu 32-bit, which releases the memory used by Nyquist plugins after processing.

To make clear one point, when I run a new effect after your script, such as ACX-Check, that new effect does not reuse the memory that Nyquist Prompt was using, but uses new memory that is also not released after OK’ing the result.

I draw the conclusion that crashes are more likely on 64-bit Linux, despite having more than 2 GB of RAM available, if you use multiple effects that load all the audio data into memory.


What about the code in the first post of this thread? Does 64-bit Ubuntu release memory correctly there?