Can this code be converted to the nyquist notation?

Hi all,

I don’t know the notation for the Nyquist Prompt, but I do know programming, and was wondering if I could have this code converted.

int counter1, counter2, sumOfAmplitudesOfSamples;
const int samplesPerSecond = /* I don't know what this value is */;

for (counter1 = 0; counter1 < tracklength; counter1++) {
    sumOfAmplitudesOfSamples = 0;

    /* poring over track */

    for (counter2 = counter1; counter2 < 2 * samplesPerSecond + counter1; counter2++) {

        /* Pore over next two seconds of track, add current amplitude to sum */

        sumOfAmplitudesOfSamples += track1[counter2];
    }

    if (sumOfAmplitudesOfSamples / (2 * samplesPerSecond) < /* specify a value here in dB, depends on how sensitive you want it */) {

        /* If the average amplitude is less than a value, do this loop: */

        for (counter2 = counter1; counter2 < 2 * samplesPerSecond + counter1; counter2++) {

            /* Pore over the next two seconds of track, set track1's current sample's amplitude equal to track2's current sample's amplitude */

            track1[counter2] = track2[counter2];
        }
    }
}

I didn’t see anything about loops when I looked over the notation so I was just wondering if this could even be done. What I intend the code to do is: go over track1, check if the average amplitude in the next two seconds of track1 is less than some value, and if it is, replace those two seconds with the samples from track2. Or if somebody could refer me to the notation for loops and such that would also be helpful.

Also, I don’t know much about how sound is treated in Audacity, and assume here that it’s just an array of amplitudes. I may be totally off here, so if you could correct me I would be very appreciative :-)

Hi Aphilentus

Your code could of course be translated into Nyquist code.
However, I don’t think that you would be happy with it.

The code takes a brute-force approach and does a lot of redundant work:

  • The first loop advances one sample at a time.
  • The second loop adds up the samples within 2 seconds.
  • The third loop copies the samples from the second track into the first, if needed.

For a start, the first loop should advance by 2 seconds after a replacement, or better, keep a running sum (subtract the oldest sample and add the newest) instead of re-summing 2 * samplesPerSecond values at every position.
Suppose the track is 100 seconds long at a 10000 Hz sample rate: the code has to execute 1.0E02 * 1.0E04 * 1.0E04 = 10,000,000,000 inner blocks (up to 20,000,000,000 if the threshold is always higher than the average, so the copy loop runs as well).

But you can do better than that with only one loop: 1.0E02 * 1.0E04 = 1,000,000 inner blocks.
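For what it’s worth, here is what that one-loop idea could look like in plain C. This is only a sketch under the assumptions of the original post: track1, track2, tracklength and the threshold are placeholder names, not a real Audacity API, and the average is taken over absolute sample values so that positive and negative halves don’t cancel.

```c
/* Sketch: one-pass version of the triple loop above, keeping a running
   sum of absolute amplitudes instead of re-summing every 2 s window.
   track1, track2, tracklength, threshold: placeholders from the post. */
#include <assert.h>
#include <math.h>
#include <stddef.h>

void replace_quiet_windows(float *track1, const float *track2,
                           size_t tracklength, size_t samplesPerSecond,
                           float threshold) /* linear amplitude, not dB */
{
    size_t window = 2 * samplesPerSecond;
    if (tracklength < window)
        return;

    /* Sum of absolute amplitudes over the first 2 seconds. */
    double sum = 0.0;
    for (size_t i = 0; i < window; i++)
        sum += fabsf(track1[i]);

    size_t start = 0;
    while (start + window <= tracklength) {
        if (sum / window < threshold) {
            /* Quiet window: copy 2 seconds from track2 ... */
            for (size_t i = start; i < start + window; i++)
                track1[i] = track2[i];
            /* ... then jump past it and rebuild the sum. */
            start += window;
            if (start + window > tracklength)
                break;
            sum = 0.0;
            for (size_t i = start; i < start + window; i++)
                sum += fabsf(track1[i]);
        } else {
            /* Slide by one sample: drop the oldest, add the newest. */
            sum -= fabsf(track1[start]);
            if (start + window < tracklength)
                sum += fabsf(track1[start + window]);
            start++;
        }
    }
}
```

The total work is now proportional to the track length: a single pass instead of 10,000,000,000 inner blocks.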

However, Nyquist does not work like that. Sounds are not arrays; they take a more abstract form.
Of course, you could request single samples or an array of 2 seconds’ length, but this will always be slow.

Even with an actual C implementation, the code would be very slow (it obviously also uses a brute-force method).
Create a 2 s chirp, paste the following code into the Nyquist Prompt, and fetch a cup of coffee:

(snd-avg *track* (truncate (* 2 *sound-srate*)) 1 op-average)

The code averages 2 seconds and advances 1 sample.
If you’re using an older Audacity version, use S instead of *track*.

You can amplify the result in order to hear it.
(In your code, we would sum the absolute values rather than the raw sample values, otherwise positive and negative samples cancel out in the average.)

That’s all for now.


Nyquist is a separate thing from Audacity.
Audacity is an application that is (mostly) written in C++.
Nyquist is a programming language (based on XLISP).

Audacity usually handles sounds by loading a sequence of sample values into an array buffer and passing that buffer to the processing code, and on success, passing another buffer. The audio data is PCM (Pulse Code Modulation), meaning that it is a series of values that represent the amplitude at intervals of 1/sample-rate.
For processing, Audacity always uses 32-bit float values for the samples, where 0 dB (full-scale) is the numeric range +1.0 to -1.0.

Nyquist usually handles sounds as a specific data type. Just as you may have floats, integers, characters, strings… you can have “sounds”.
It is not normally necessary to worry about the internal structure of a “sound”, but since you asked:
Sounds have 5 components:

  • sample rate
  • samples
  • signal start time
  • signal stop time
  • logical stop time

Multichannel sounds are represented as an array of sounds, so a stereo sound is an array with two elements - and each element is a sound (one for each channel).
Nyquist receives the sound from Audacity as the value of the global variable *track* (older versions of Audacity used the global variable S).

In Nyquist, sounds can be converted to individual sample values, but this is a horribly inefficient way of working. Nyquist is an interpreted language and looping through millions of sample values will be (as Robert indicated) extremely slow.

A faster (though still not ideal) way to handle sounds is to fetch sample values as an array (grabbing a whole load of samples in one go rather than sample by sample).

The most efficient way to work with sounds is to work with the sound data type directly. Many of Nyquist’s functions operate on “sounds”.

Nyquist has several loop structures.
Two of the most common are DO (and DO*) and DOTIMES, both of which appear in the examples below.
(Note that these are standard XLISP commands.)

XLISP (hence Nyquist) also has loop structures that work with “lists”; they can all be found in the XLISP reference index.
XLISP (hence Nyquist) makes much use of lists (“LISP” derives from “LISt Processing”), and lists are powerful and versatile structures in all LISP-based languages.

Nyquist also has additional loop structures for dealing with sounds and behaviours.

SND-FETCH and SND-FETCH-ARRAY work sequentially, so they can easily be incorporated into loop structures.

When working with “sounds” (sound objects), looping through the samples is handled internally by the Nyquist functions (which are written in highly optimised C code). This is very much faster than looping through sample values in interpreted LISP loops.

Here’s an example for looping through samples and halving the value of each sample.
It is limited to a maximum of 10,000 samples so that it does not hang your machine for hours.
This code can be run in the Nyquist Prompt effect (Audacity 2.1.0) and requires a mono track selection:
(it will probably take about 30 seconds to process 10,000 samples)

(let* ((out (s-rest 0))     ;initialise a 'null' sound
       (sr *sound-srate*)   ;the sample rate
       (a1 (make-array 1))) ;an array with one element
  ; fetch sample values one at a time
  (do* ((val (snd-fetch *track*)(snd-fetch *track*))
        (count 0 (1+ count))
        (t0 0 (/ count sr))) ;advance time from 0 by sample periods
      ;until 10000 or no more samples, then return 'out'
      ((or (= count 10000)(not val)) out)
    ;; body of loop
    (setf (aref a1 0) (* val 0.5))
    ;; push the sample onto end of the 'out' sound
    (setf out
      (sim out
        (snd-from-array t0 sr a1)))))

Here’s an example where we grab an array full of samples. This is a lot faster and can be safely used on mono tracks several minutes long:

(let ((ln (truncate len))   ;length of the selection in samples
      (out (s-rest 0))      ;initialise a 'null' sound
      (max-step 100000))    ;max size of array
  ;;outer loop
  (do* ((processed 0 (+ processed step))
        (step (min ln max-step)(min ln max-step))
        (ln (- ln step)(- ln step))
        (t0 0 (/ processed *sound-srate*)))
       ;keep going 'till step size = 0, then return 'out'
       ((= step 0) out)
    ;;loop body
    ;grab an array of sample values
    (setf snd-array (snd-fetch-array *track* step step))
    ;;inner loop
    (dotimes (j step snd-array)
      ;multiply array values by 0.5
      (setf (aref snd-array j)(* 0.5 (aref snd-array j))))
    ;;convert array to sound and add to 'out'
    (setf out
      (sim out
           (snd-from-array t0 *sound-srate* snd-array)))))

And here’s the best way to halve the value of each sample. This uses the sound object directly and lets Nyquist handle the looping in C code.
This also works on stereo tracks:

(mult *track* 0.5)

Much of the art of Nyquist programming is working out how to abstract a DSP idea so that it can be accomplished efficiently using sound objects and efficient higher level functions rather than literal and laborious sample by sample processing.