Memory problems with code

Hi!

I’m probably as much of a beginner as one can be, but yesterday I managed to code a very simple and primitive silence trimmer. It just searches for the first point at which the sound reaches a certain level and then trims away the audio before that point. So, basically, it trims the beginnings. The problem is that when I ran this plugin in a chain over a lot of files, the memory usage rose considerably (from about 30 MB to 70 MB after processing 36 files).

This is the code:

;control threshold "Threshold" real "" 0.5 0 1
;control maxlen "Max" real "seconds" 1 0 2

(setq dur (truncate (* maxlen *sound-srate*)))
(setq samples (snd-samples s dur))
(setq dur (length samples)) ; snd-samples may return fewer than dur samples
(setq y 0)                  ; most recently read sample value

; Find the index of the first sample whose absolute value
; reaches the threshold.
(setq i
  (do ((i 0 (1+ i)))
      ((or (>= i dur) (>= y threshold)) (1- i))
    (setq y (abs (aref samples i)))))

; Convert from samples to seconds, trim, and fade in
; over 3 ms to avoid a click.
(setq start (/ i *sound-srate*))
(setq end (/ len *sound-srate*))
(mult (env 0.003 0 0 1 1 1) (extract-abs start end s))

As you can see, I read the samples into an array and then loop through the array until a sample with an absolute level above the threshold is found. The loop returns the position in the array (i.e. the sample position) and, after converting this number to a value in seconds, I trim away all the audio up to this point using extract-abs. I also apply an envelope that fades in the first 3 ms to avoid clicks.
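Looking at it again, I suppose one way to avoid building the whole array, while still scanning sample by sample, might be snd-fetch, which reads one sample at a time from a sound. Here is a rough sketch of what I mean (snd-fetch is destructive, so it works on a copy; threshold and dur are assumed to be set as above):

(setq snd (snd-copy s)) ; snd-fetch consumes samples, so work on a copy
(setq i
  (do ((y (snd-fetch snd) (snd-fetch snd)) ; next sample, or NIL at the end
       (i 0 (1+ i)))                       ; index of the current sample
      ((or (null y) (>= i dur) (>= (abs y) threshold)) i)))

That way only one sample is held at a time instead of the whole array, though I have no idea whether it would make any difference to the leak.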

I read in a post here that it is preferable to work with sounds instead of samples, so I started to look for a way to do that. After looking through the Nyquist reference I came up with this code:

;control threshold "Threshold" real "" 0.5 0 1

; Find the first time the absolute level reaches the threshold,
; then keep everything from there to the end, fading in over 3 ms.
(mult (env 0.003 0 0 1 1 1)
      (extract-abs
        (sref-inverse (s-abs s) threshold)
        (get-duration 1) s))

I thought this was going to be the solution to my problem - but alas - the memory usage was the same. Does anyone have any ideas on how to improve this code?

The second piece of code is better, particularly if you were removing long sections of silence. Iterating through millions of samples is very slow and may crash unless you are very careful. (I formatted your code to make it more readable - I prefer not to count parentheses to see where a do loop ends :wink:)

After each pass memory “should” be released, but unfortunately there are some memory leaks, so it is not completely released until you exit Audacity. If you are able to identify exactly where those memory leaks occur, please let us know and we can raise a bug report.

If your main concern is in trimming a very large number of files, then a better option would be to use SoX (http://sox.sourceforge.net/).
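For example, SoX’s “silence” effect can trim leading silence with a one-liner along these lines (untested here, and the duration and threshold values would need tuning for your material):

sox input.wav output.wav silence 1 0.1 1%

which trims from the start until it finds 0.1 seconds of audio above 1% amplitude.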

If your main concern is working with Nyquist, then you will just need to limit the number of files that you process in one session and restart Audacity occasionally. I tried running the second code example in the current alpha version of Audacity 30 times, and memory use went up from about 30 MB to about 60 MB. At that rate of leakage you should be able to cope with several hundred files before you need to restart Audacity.

Thanks for the input! I was hoping this had more to do with my coding than an actual memory leak, but I tried running the same chain on the same files with only the function (sref-inverse s 0.5), and it used the same amount of memory (which didn’t decrease until quitting Audacity).

My thought is actually to run this on a whole lot of files; I design a lot of my own sample libraries, and the most tiresome task is trimming away the silence at the start of each and every sound file. I have tried SoX, but I felt that by designing my own Nyquist script I could fine-tune the behavior to fit my exact needs :slight_smile:

But I’ll probably lower the sample rate to make it easier on the RAM.
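Something like this is what I have in mind - doing the threshold search on a downsampled copy and then trimming the original (just a sketch, and I’m assuming that 8000 Hz still gives enough precision for finding the trim point):

;control threshold "Threshold" real "" 0.5 0 1

; Search a downsampled copy; the time returned is in seconds,
; so it applies directly to the original sound.
(setq start (sref-inverse (s-abs (force-srate 8000 s)) threshold))
(mult (env 0.003 0 0 1 1 1)
      (extract-abs start (get-duration 1) s))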

Btw, I also tried using another function, (s-max s 0.5), and that pushed every sample below an amplitude of 0.5 up to that level. I first thought I would be able to use that somehow, but I realized I had no clue how to retrieve the position of the first sound above 0.5. I guess in order to do that, one must bring the samples in for calculation - and that would increase the memory usage again.

Audacity can run into other problems when batch processing “thousands” of files. I’m not sure what the actual limit is, but “hundreds” of files should be OK.

Note that sref-inverse will generate an error if no inverse exists (see “Nyquist Functions” in the manual), so you would probably want to check for that first. For example, if you are expecting the threshold to be exceeded within the first 1 million samples:

(peak s 1000000)

will return the peak level of the first 1 million samples.
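Putting that together with your second example, you could guard the sref-inverse call along these lines (just a sketch - returning a string makes Audacity display it as a message instead of modifying the audio):

;control threshold "Threshold" real "" 0.5 0 1

; Only call sref-inverse if the threshold is actually reached
; within the first million samples.
(if (>= (peak s 1000000) threshold)
    (mult (env 0.003 0 0 1 1 1)
          (extract-abs (sref-inverse (s-abs s) threshold)
                       (get-duration 1)
                       s))
    "Threshold not reached in the first million samples.")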