How to apply a large number of equalization curves

Hi! I have a large number of equalization curves (400+) that I want to apply to a single audio file. I want to apply each equalization curve independently to produce 400+ audio files, each filtered with one curve – is there an automated way to accomplish this? I see that batch processing is possible using chains, but as far as I can tell, this method applies the equalization filters in an additive way. In terms of commands, I want to do something like:

01 Import wav
02 Apply equalization curve1
03 Export wav
04 Import wav
05 Apply equalization curve2
06 Export wav

Does anyone have suggestions? I’d like to avoid doing it manually if possible! Thanks for your help!

P.S. I’m running Audacity 2.1.3 on Windows 8.1 64-bit OS obtained via .exe installer

Could you fake out Chains? Make hundreds of copies of the show, each with a slightly different filename? Ummmmm. No, because Chains will probably try to apply one effect to them all sequentially.

What’s the show? Sometimes we can figure a different way to get the same result.


Thanks for the reply! The audio file has 30 examples of birdsong in a sequence, and I’m using equalization filters to simulate the change in sound as it travels through the environment. Each filter corresponds to a distance traveled, so I need to apply each filter individually to produce examples of birdsong traveling 20m, 21m, 22m, etc.

Even if Chains had an Import command that let you select a file, you would still have to spend a lot of time inserting Equalization commands into the Chain for the different EQ curves.

So I think the answer is no, unless you are skilled at AutoHotkey or similar and wrote some kind of macro that iterated the curve to be used and the file name to be written to.


A gradual low-pass filter will do that part; for example, run this variant of Steve’s code in Audacity’s Nyquist Prompt:

    ;version 4
    (setq start-freq 22000) ; cannot be greater than 1/2 the sample rate
    (setq end-freq 200)
    (setq passes 2)         ; more passes for a steeper filter cut-off
    (setq sweep-type 0)     ; 1 for a linear sweep, 0 for an exponential sweep

    (let* ((nyq (/ *sound-srate* 2.0))
           (f0 (max 0 (min nyq start-freq)))
           (f1 (max 0 (min nyq end-freq))))
      (if (= sweep-type 0)
          (setf lpfreq (pwev f0 1 f1))
          (setf lpfreq (pwlv f0 1 f1)))
      (dotimes (i passes *track*)
        (setf *track* (lp *track* lpfreq))))

To get the full illusion of movement you’ll also need to manipulate the stereo-image, as well as the equalization.

Thank you so much for the helpful suggestions!

Gale – I’ll need to learn the syntax, but AutoHotKey looks like it could help me automate this and various other aspects of my research. It’s definitely worth exploring!

Trebor – this wouldn’t quite accomplish my goal, but I might be able to make it work if I can use the Nyquist prompt to apply a filter that changes in discrete steps.

For a simple case, let’s say I have two examples of birdsong that I want to simulate at 20m, 40m, and 60m. My current method is to string the examples of birdsong together into one audio file, then apply each filter independently:
[song1…song2…] – apply curve20m → all songs at 20m
[song1…song2…] – apply curve40m → all songs at 40m
[song1…song2…] – apply curve60m → all songs at 60m

If I could use the Nyquist prompt to apply filters in discrete steps, I could change my methods such that the filter changes over time rather than the example song:
[song 1…song1…song1…] – apply curve20m…curve40m…curve60m → song1 at all distances
[song 2…song2…song2…] – apply curve20m…curve40m…curve60m → song2 at all distances

Would it be possible to specify different curves to be applied every X seconds using the Nyquist prompt? The equalization curves themselves are fairly complex – our model for attenuation takes into account atmospheric absorption and ground reflection. As a result, the change in dB depends on the frequency of the sound (see attached example). I have 400+ of these curves to apply, so assuming this is possible using the Nyquist prompt, would it be nightmarish to code?

Thanks again for the help!
example_curve.xml (3.4 KB)
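(For anyone who wants to read those breakpoints programmatically: assuming the attachment follows the layout of Audacity’s EQCurves.xml, with `point` elements carrying `f` (frequency in Hz) and `d` (gain in dB) attributes, the coordinates can be pulled out with a few lines of Python. That format is my assumption, so check it against the actual file.)

```python
import xml.etree.ElementTree as ET

def read_curve_points(xml_text):
    # Collect (frequency_Hz, gain_dB) pairs from every <point f="..." d="..."/>
    # element, wherever it sits in the tree.
    root = ET.fromstring(xml_text)
    return [(float(p.get("f")), float(p.get("d"))) for p in root.iter("point")]
```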

That looks like a combination of low-pass and comb filtering. How were the curves generated?

A simple example:

;version 4

(defun filter (sig ctrl)
; Filter code goes here.
; As an example, a simple low pass filter that
; takes a sound "ctrl" as a control signal, and applies
; a first order low-pass filter 4 times to the sound "sig".
  (dotimes (i 4 sig)
    (setf sig (lp sig ctrl))))

; A control signal whose value (amplitude) rises
; from 1000 to 8000 in four equal-length steps.
; The total length is the length of the selected audio.
(setf param (pwl  0     1000  0.25  1000
                  0.25  2000  0.5   2000
                  0.5   4000  0.75  4000
                  0.75  8000  1.0   8000))

; Call the function "filter", passing the selected audio "*track*"
; and the control signal "param" to the function.
(filter *track* param)
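For comparison, here is the same stepped idea translated out of Nyquist into plain Python (a sketch only, written for this thread: a first-order low-pass whose cutoff jumps between equal-length segments, mirroring what the `pwl` control signal does above):

```python
import math

def stepped_lowpass(samples, rate, cutoffs):
    """Apply a first-order low-pass whose cutoff frequency steps through
    `cutoffs`, one equal-length segment of the input per cutoff value."""
    out = []
    y = 0.0
    seg = max(1, len(samples) // len(cutoffs))
    for i, x in enumerate(samples):
        fc = cutoffs[min(i // seg, len(cutoffs) - 1)]
        # Smoothing coefficient for a one-pole low-pass at cutoff fc.
        a = 1.0 - math.exp(-2.0 * math.pi * fc / rate)
        y += a * (x - y)
        out.append(y)
    return out
```

Each segment here plays the role of one of the `pwl` steps; swapping the list of cutoffs for a list of per-distance filters is the same pattern.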

We generated the curves using a physical model of attenuation parameterized specifically for my study species at my field site in South Australia – the low-pass filtering is due to atmospheric absorption and the comb filtering is due to ground reflection.

Thanks for the code, Steve! I don’t have any prior experience with Nyquist, and I have very limited experience with sound processing, so I have a few potentially dumb questions: Rather than having a single cutoff value for each step of the control signal, would it be possible to substitute in more complex functions (i.e. my equalization curves)? If I wanted to specify one of my curves rather than a simple low-pass filter, would eq-band be my best option (specifying center frequency and width)? Or is there another way to specify a curve that is closer to the xml format (specific frequency and gain coordinates)?


Yes, of course. As indicated in the example code, the function “filter” could be any filter code.

EQ-BAND would be one approach, but it’s rather a round-about one: you would be taking your filter specification from the physical model, representing it as a frequency curve, and then interpreting that curve as a bunch of band filters. A more direct approach would be to design the filter from the model itself. If, as it appears, the response can be produced by combining a comb filter and a low-pass filter, then rather than using dozens of EQ bands you could use exactly that: a comb filter and a low-pass filter.
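As a sketch of that “build it from the model” idea (in Python rather than Nyquist, and with parameter names and values that are purely illustrative, not taken from the attachment): one delayed, attenuated copy of the signal gives the ground-reflection comb, and a first-order low-pass stands in for atmospheric absorption.

```python
import math

def comb_plus_lowpass(samples, rate, delay_s, reflect_gain, cutoff):
    """Direct sound plus one delayed, attenuated ground reflection
    (a feedforward comb), followed by a one-pole low-pass that
    stands in for atmospheric absorption."""
    d = max(1, int(delay_s * rate))
    combed = [x + reflect_gain * (samples[i - d] if i >= d else 0.0)
              for i, x in enumerate(samples)]
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / rate)
    y, out = 0.0, []
    for x in combed:
        y += a * (x - y)
        out.append(y)
    return out
```

In a real model the delay and reflection gain would come from the geometry (source height, receiver height, distance), and the cutoff from the absorption coefficients, so each distance gets its own parameter set.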

This isn’t an easy project :wink:

The example that I gave was built around a first order low-pass filter, because that was an easy example. The “LP” filter can accept a “sound” as a control signal. Some filter types have fixed (numeric) parameters, in which case a different approach is required.

Taking another example, say we use a simple comb filter. The “hz” parameter of COMB must be a number, so we can’t use a control signal (a sound) as easily as we did for the LP filter. However, there is another approach that we could take (which may or may not be practical for what you want to do, but I’m just giving examples of what can be done):

If multiple tracks are selected, the Nyquist plug-ins process them one at a time, starting with the topmost selected track.
If you import say 40 tracks, then the tracks will appear one above the other in the Audacity project. We could step through a list of parameters as we move from one track to the next.

Example (with just 4 tracks to save typing):

(setf decay (list 0.1 0.08 0.06 0.04))  ;list of 'decay' values
(setf hz (list 100 200 300 400))        ;list of 'hz' values

(let ((n (- (get '*track* 'index) 1)))  ;zero-based track number
  ;Check that we have data for this track.
  (if (or (>= n (length decay)) (>= n (length hz)))
      (format nil "Error.~%No data for track ~a." (1+ n))  ;Oops
      (comb *track* (nth n decay) (nth n hz))))            ;or apply comb filter

Thanks for your help, Steve!

I think Audacity might not be the right tool for this particular project, but knowing these techniques for batch processing will likely be useful in future projects!