Some of my experiments in Nyquist programming are being frustrated by what appear to be memory leaks.
I don’t have a small code example to demonstrate it, but let me describe what I’m doing.
I iterate over FFT frames (typically a 2048-sample window with a skip of 32) and calculate a certain value from each pair of successive frames, using snd-from-array and other Nyquist functions to avoid inner loops in Lisp. The calculated values become the samples of another sound via snd-fromobject. A variable is needed to remember the "sound" derived from the current frame, so that it can serve as the previous frame on the next iteration.
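In sketch form, the structure looks roughly like this (the iterator class follows the FFT tutorial pattern in the Nyquist manual; the names, and compute-pair-value in particular, are placeholders, not my actual code):

```lisp
;; FFT frame iterator, as in the Nyquist FFT tutorial.
(setf fft-class (send class :new '(sound length skip)))
(send fft-class :answer :isnew '(snd len skp)
      '((setf sound snd) (setf length len) (setf skip skp)))
(send fft-class :answer :next '()
      '((snd-fft sound length skip nil)))

(defun make-fft-iterator (snd length skip)
  (send fft-class :new (snd-copy snd) length skip))

;; The previous frame's derived "sound" must be remembered across
;; iterations so each output value can come from a frame pair.
(setf *prev* nil)

(defun next-value (iter)
  (let ((frame (send iter :next)))          ; array of FFT coefficients, or NIL
    (cond ((null frame) nil)                ; no more frames: end of output sound
          (t (let* ((cur (snd-from-array 0 1 frame))
                    (val (if *prev*
                             (compute-pair-value *prev* cur) ; placeholder
                             0.0)))
               (setf *prev* cur)            ; current frame becomes previous
               val)))))
```

An object whose :next method calls next-value can then be handed to snd-fromobject to produce the output sound one sample per frame pair.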
If I try this on a big sound over 8 minutes long, I watch memory usage climb until it exhausts my computer and Audacity just crashes.
Inserting calls to (gc) does not help. If I cancel the calculation of the effect midway, memory use does not drop down again nor does it if I call (gc) from the Nyquist prompt.
I have written similar code that makes calculations from only one frame at a time, with no need to remember the previous frame or the "sound" derived from it, and I do not see similar problems, even though it makes similar use of snd-prod and snd-avg in the calculations.
Memory management for Nyquist in Audacity is far from ideal. One of the major problems is that Nyquist does not read track audio data directly from disk. A well-known example is normalizing an audio track:
(setq newpeak 0.8)
(mult s (/ newpeak (peak s ny:all)))
The problem here is that (peak s ny:all) reads all of the selected audio "s", which is not released from memory until the multiplication completes, so normalizing audio in one pass is limited by the amount of free memory. There's some discussion about this issue here: http://sourceforge.net/mailarchive/message.php?msg_id=18401605 (note that this is dated 2008 and some of the information no longer applies).
There is also a longer exposition of the normalization issue and a two-pass workaround, written by Roger Dannenberg, here: http://www.cs.cmu.edu/~music/nyquist/debug-plugin.html
A one pass workaround for the Normalization issue is:
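The workaround code itself did not come through above. One possible one-pass approach, assuming a recent Audacity in which the *SELECTION* property list exposes a PEAK-LEVEL value computed by Audacity itself (so Nyquist never has to buffer all of the samples just to find the peak), is a sketch along these lines:

```lisp
;; Assumption: Audacity provides the selection's peak as a property,
;; so the peak is obtained without reading the audio into Nyquist.
(setq newpeak 0.8)
(mult s (/ newpeak (get '*selection* 'peak-level)))
```

With the peak known up front, the multiplication is the only pass that touches the samples, and they can be released as they are processed.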
Yes, I tried with just one minute of audio and watched Audacity's memory usage in Task Manager climb from about 46 MB to 683 MB, without dropping back even after the computation completed.
Is the process taking more memory but not really leaking it? Does its memory footprint not change if I undo and repeat?
I tried that, and memory usage climbs at once from that plateau to a higher one.
Now, some of my complicated calculations do this and others don't; I haven't isolated what the difference is. All these experiments involve lots of tiny, temporary "sound" objects, typically 1024 or 2048 samples long.
I know you can write leaky code in Lisp that fails to remove references to variables and so fails to make them collectible. I don’t believe I do that. Furthermore, if the entire Lisp environment is destroyed when the effect completes, I would think there would be delayed reclamation then, but I don’t observe that.
I have done more experiments, selectively commenting out parts of the complicated calculation I am attempting.
The surprising answer is that the culprit is the innocent-looking snd-add, but not just any use of it. It seems to be a necessary condition for the excessive memory use.
What I’m doing: Use snd-fromobject to calculate a “sound” of values derived from fft frames.
For each frame, use snd-from-array to make a “sound” with start time 1 and rate 1. Add (snd-const 0 0 1 1) to that.
Then make a power spectrum by multiplying the sum sound by itself, and using snd-avg to average successive pairs of samples.
Then, other blah blah blah using snd-avg and diff and rms.
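In sketch form, the per-frame computation looks like this (illustrative names; frame is the array returned by snd-fft for one analysis window):

```lisp
;; The one-sample constant, lifted out of the per-frame calculation:
;; value 0, starting at time 0, rate 1, duration 1 sample.
(setf *one-sample* (snd-const 0 0 1 1))

(defun frame-power (frame)
  (let* ((fsound (snd-from-array 1 1 frame))     ; start time 1, rate 1
         (padded (snd-add fsound *one-sample*))  ; the step implicated in the leak
         (squared (mult padded padded)))         ; square each coefficient
    ;; average successive pairs of squared coefficients into one
    ;; power value per frequency bin
    (snd-avg squared 2 2 op-average)))
```

The snd-add pads one sample in front of the frame-sound so that the cos/sin coefficient pairs line up for the pairwise snd-avg.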
The surprising thing is that removing the snd-add step, though it makes my calculation meaningless, makes the huge memory leak go away.
This is so even though the little one-sample sound I am adding is a constant, lifted out of the repeated calculation rather than reconstructed each time.
I don’t yet have a simplified piece of code demonstrating it.
Now perhaps I can work around this by using snd-xform to eliminate the DC component of the FFT frame, rather than trying to pair it with another sample inserted before it. I probably don't care about the DC component in nicely behaved sounds without an offset.
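A sketch of that idea, assuming the snd-xform argument order sound, srate, time, start, stop, scale, and with len standing for the frame length (an assumption, not a real variable here):

```lisp
;; Drop the first (DC) sample of a frame-sound that starts at time 1
;; with rate 1, instead of prepending a sample before it.
(defun drop-dc (fsound len)
  ;; keep times [2, 1 + len), shifted back to start at time 1
  (snd-xform fsound 1 1 2 (+ 1 len) 1.0))
```

This avoids snd-add entirely, at the cost of discarding the DC bin.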
Memory usage with that code is not massive, but much bigger than it should be, and importantly it is not released when the code completes.
A simple workaround for this example is to add an initial silence starting at t0, with a duration equal to or greater than that of the expected output. All subsequent additions will then overlap this "dummy" silence in the time domain, and the memory leak does not occur.
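For example (a sketch; dur, sr, and short-sound stand for the expected output duration, the sample rate, and one of the short sounds being summed, all assumptions here):

```lisp
;; A "dummy" silence spanning the whole expected output; every later
;; snd-add then overlaps it in time and the leak does not occur.
(setf dummy (snd-const 0 0 sr dur))        ; silence from time 0 for dur seconds
(setf result (snd-add dummy short-sound))  ; short-sound lies inside [0, dur)
```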
I’d like to get this bug on the Audacity bug tracking system so that it is tracked and not forgotten about. Is there anything that you’d like to add or clarify before I do?