Audacity Nyquist tutorial anywhere?

I have a cool stereo audio effect that I’d like to implement as a Nyquist plugin, but I don’t know the Nyquist concepts very well. I’ve made the effect as a stand-alone application written in C++. What I need is a good tutorial that explains how to do this.
The plugin would work as follows:

  • Mark a section in a stereo track
  • Run the plugin effect
  • The plugin needs to access each sample in the left and right channels and calculate a new left and right sample, using the original left and right samples as well as the time offset as arguments
  • The plugin of course returns a stereo sound with the processed samples

I have the Nyquist reference manual, but it really is not enough. I’d need some Audacity Nyquist tutorial.

Some helpful links: (you’ve probably already got this one).

Note that sample-by-sample processing in Nyquist is rather slow and unless carefully designed it can eat large amounts of memory.

Thanks, steve. The first link seems to be what I needed. I just tested some things, but I seem to run into some problems. I might have something outdated. Better update everything before I make more of these ugly head-sized holes in my wall.

You’ve not mentioned which version of Audacity you are using, but I’d highly recommend that you use Audacity 1.3.12 as it has a much better (and more recent) implementation of Nyquist. Audacity 1.3 will eventually become the new Audacity 2.0.

The “left / right channel” part of your project is simple enough. Multi-channel (stereo) sounds are handled by Nyquist as an array.
Audacity passes the sound from the selection to Nyquist in the global variable “s”, which in the case of a stereo sound is an array with two elements (aref s 0) and (aref s 1).
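As a minimal illustration of that (my own sketch, not from any tutorial), this snippet swaps the two channels of a stereo selection; paste it into “Effect > Nyquist Prompt”:

```lisp
;; "s" holds the selection; for a stereo track it is a
;; two-element array of sounds.
(if (arrayp s)
    (vector (aref s 1)   ; new left  = old right
            (aref s 0))  ; new right = old left
    "Please select a stereo track")
```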

The much harder part is dealing with time offset values.
Read the section in the Nyquist manual about “behavioural abstraction” and the “environment” a dozen times.
When dealing with “relative” time, in Audacity, the length of the selection is “1” and the start of the selection is “0”.
The documentation is not very clear regarding this because Nyquist in Audacity is different from standalone Nyquist with regard to this. This is one of the most confusing aspects of Nyquist in Audacity and I still can’t explain clearly how it works - probably just best to experiment with it and post specific (practical) questions if you get stuck.
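As a small experiment along those lines (this assumes Audacity’s default environment, where 0 marks the start of the selection and 1 its end), the following fades the selection in over its full length, whatever its duration in seconds:

```lisp
;; pwl builds a piece-wise linear envelope; with breakpoints
;; (0,0) and (1,1) it ramps from silence to full level across
;; the whole selection, because time "1" means "end of selection".
(mult s (pwl 0 0 1 1 1))
```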

Well, actually I don’t know if I need access to individual samples and their time offset. By time offset I mean the following:
Say that my marked sound section is 200,000 samples long. For my stereo effect I really don’t care about seconds or milliseconds, only the range from 0 to 199,999. And in C/C++ I would do something like…

for (i = 0; i < 200000; i++) {
    new_left[i]  = left(i, old_left[i], old_right[i]);
    new_right[i] = right(i, old_left[i], old_right[i]);
}

My effect is about rotating the stereo scene. Each individual instrument will make a full “rotation” from its pan position to the far left, then to the far right and then back to its original position. The fun comes from the fact that each instrument moves individually, not all together like when you just turn the balance knob.

Here’s an example. Original sound:
And the rotating stereo:

And I seem to have 1.2.6, which might explain some oddities.

Hi jotti,

Nyquist controls sounds via functions and return values, not with arrays like C or C++, so with Nyquist you must think like an audio engineer, not like a C programmer.

Below is some code containing a “pan” function (simulating the “pan” knob of the Audacity audio track), and some code where the “pan” function is controlled via an LFO (low-frequency oscillator).

Load a stereo song into Audacity, mark the whole audio track, then go to “Effect > Nyquist Prompt”, copy the code into the “Nyquist Prompt” text window and click “OK” (if it doesn’t work then Audacity_1.2.x is too old, try it again with Audacity_1.3.x-beta).

(setq *lfo-frequency* 1.0)  ; modulation = 1.0 Hertz

;; panning:  +1.0 = right
;;            0.0 = center
;;           -1.0 = left
(defun pan (sound panning)
  (vector (mult (aref sound 0) (sum 0.5 (mult -0.5 panning)))
          (mult (aref sound 1) (sum 0.5 (mult  0.5 panning)))))

(if (arrayp s)
    (pan s (lfo *lfo-frequency*))
    "This effect does not work with mono tracks")

If you control the “pan” function with a self-made envelope (search the Nyquist manual for the “pwl” function), then you can create your own arbitrary panning behaviours.
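For example (a hypothetical sketch, reusing the “pan” function above and “pwlv”, the variant of “pwl” that allows non-zero start and end values), this sweeps the image from hard left to hard right and back once over the selection:

```lisp
(defun pan (sound panning)
  (vector (mult (aref sound 0) (sum 0.5 (mult -0.5 panning)))
          (mult (aref sound 1) (sum 0.5 (mult  0.5 panning)))))

;; pwlv: start at -1 (left), reach +1 (right) at the middle
;; of the selection, return to -1 (left) at the end.
(if (arrayp s)
    (pan s (pwlv -1 0.5 1 1 -1))
    "This effect does not work with mono tracks")
```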

Hope that helps (a bit),

  • edgar

I hope this is not bad news for me. I really think I need to access each sample value as it appears in, say, a WAV file. My algorithm is very mathematical and I guess it’s far from anything one would consider normal audio processing. If Nyquist/Lisp can’t provide me with the individual samples (two times 44,100 samples per second), I wonder if a C/C++ plugin can do it for me. But going the C/C++ way seems a bit difficult, since I mainly work with Windows and OS X, not Linux. Does anyone know if the Audacity interface to C/C++ plugins could pass the sound as an array of 16-bit ints?

Edgar didn’t say that you can’t access each sample. He said that … well you’ve already quoted what he said. :slight_smile:

Normally it is preferable to operate directly on “sounds”. I think I’m correct in saying that Nyquist treats “sounds” as a data type, so as you have integers, floats, characters and strings in other computer languages, in Nyquist you also have “sounds”. Operating on “sounds” is reasonably quick as it is handled by “primitives” that are written in efficient C code (may be C++ or some other variant - I don’t get involved with Nyquist at that level).
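To give one trivial example of treating a “sound” as a value (my own illustration): halving the level of the whole selection is a single expression, with no per-sample loop:

```lisp
;; "s" is the selected audio; mult scales every sample by 0.5,
;; with the actual work done by the efficient C primitives.
(mult s 0.5)
```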

Samples from a sound may be converted into an array with the function (snd-fetch-array) or (snd-samples). This can be useful for experimental and demonstration code, but in practical plug-ins it is usually avoided in favour of dealing with sounds directly, as applying a function to a sound is much quicker and more efficient than sample-by-sample processing in Nyquist. Sounds can be created from an array with (snd-from-array).
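For completeness, here is a rough sketch of the array route using the functions named above (illustration only, and slow for long selections): it reads one block of the left channel, inverts each sample, and turns the block back into a sound.

```lisp
(let* ((left    (aref s 0))        ; left channel of the selection
       (sr      (snd-srate left))  ; its sample rate
       ;; fetch 100 samples into a Lisp array of floats;
       ;; note that snd-fetch-array advances the sound it reads:
       (samples (snd-fetch-array left 100 100)))
  ;; invert each sample in place:
  (dotimes (i 100)
    (setf (aref samples i) (- (aref samples i))))
  ;; rebuild a (very short) sound from the array:
  (snd-from-array 0 sr samples))
```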

The details of how Nyquist works internally, and the history of how and why Nyquist was created, can be found under Functional Languages for Real-Time Control.

The particular problem with arrays is that a simple array is not enough. With a simple array the processed sound cannot be bigger than the computer’s memory, and in 99% of all cases the free memory does not suffice, so a framework is needed where the array is used as a “window” containing only a part of the sound, plus administration functions to fill and empty the array as needed. This is in principle how all audio plugin frameworks, and also Nyquist internally, work.

Nyquist has the “snd-fetch-array” (converts a sound into an array of floats) and “snd-from-array” (converts an array of floats into a sound) functions, but sample-by-sample processing with arrays is rather slow with Nyquist.

AFAIK the only audio plugin framework (using C and arrays) known to work on Windows, Mac, and Linux is

But probably the best place to ask is audacity-devel, because the Audacity developers are C/C++ programmers who know this much better than me.