Beginner's question

Hello. Among the example scripts written in the Nyquist programming language there is the chapter "Voice Synthesis" (Nyquist examples and tutorials - Voice tutorial). My question is the following: is it possible (by adding lines to the script) to turn it into a script that appears in the "Generate" menu of Audacity 2?
Thank you in advance for the information.

Do you mean: can a plug-in be created from this tutorial, or something else?
Have you tried testing the code in the Nyquist prompt? I had some problems because there were some unidentified characters in the code. Here is a slightly modified version that produces a random semi-melodic line from the Nyquist prompt.

(defmacro form1 (p h r)
  `(+ (* (+ (* (+ (- 392) (* 392 ,r)) (expt ,h 2))
            (* (- 596 (* 668 ,r)) ,h)
            (+ (- 146) (* 166 ,r)))
         (expt ,p 2))
      (* (+ (* (- 348 (* 348 ,r)) (expt ,h 2))
            (* (+ (- 494) (* 606 ,r)) ,h)
            (- 141 (* 175 ,r)))
         ,p)
      (+ (* (- 340 (* 72 ,r)) (expt ,h 2))
         (* (+ (- 796) (* 108 ,r)) ,h)
         (- 708 (* 38 ,r)))))

(defmacro form2 (p h r)
  `(+ (* (+ (* (+ (- 1200) (* 1208 ,r)) (expt ,h 2))
            (* (- 1320 (* 1328 ,r)) ,h)
            (- 118 (* 158 ,r)))
         (expt ,p 2))
      (* (+ (* (- 1864 (* 1488 ,r)) (expt ,h 2))
            (* (+ (- 2644) (* 1510 ,r)) ,h)
            (+ (- 561) (* 221 ,r)))
         ,p)
      (+ (* (+ (- 670) (* 490 ,r)) (expt ,h 2))
         (* (- 1355 (* 697 ,r)) ,h)
         (- 1517 (* 117 ,r)))))

(defmacro form3 (p h r)
  `(+ (* (+ (* (- 604 (* 604 ,r)) (expt ,h 2))
            (* (- 1038 (* 1178 ,r)) ,h)
            (+ 246 (* 566 ,r)))
         (expt ,p 2))
      (* (+ (* (+ (- 1150) (* 1262 ,r)) (expt ,h 2))
            (* (+ (- 1443) (* 1313 ,r)) ,h)
            (- (- 317) (* 483 ,r)))
         ,p)
      (+ (* (- 1130 (* 836 ,r)) (expt ,h 2))
         (* (+ (- 315) (* 44 ,r)) ,h)
         (- 2427 (* 127 ,r)))))

(defmacro form4 (p h r)
  `(+ (* (+ (* (+ (- 1120) (* 16 ,r)) (expt ,h 2))
            (* (- 1696 (* 180 ,r)) ,h)
            (+ 500 (* 522 ,r)))
         (expt ,p 2))
      (* (+ (* (+ (- 140) (* 240 ,r)) (expt ,h 2))
            (* (+ (- 578) (* 214 ,r)) ,h)
            (- (- 692) (* 419 ,r)))
         ,p)
      (+ (* (- 1480 (* 602 ,r)) (expt ,h 2))
         (* (+ (- 1220) (* 289 ,r)) ,h)
         (- 3678 (* 178 ,r)))))

; ADSR-SMOOTH: a standard ADSR envelope
(defun adsr-smooth (signal dur)
     (mult signal (env 0.1 0.2 0.5  1.0  0.8  0.4 dur)))
; VIBRATO: generates vibrato
; vib-rate = vibrato rate in Hz
; dur = duration in seconds
(defun vibrato (vib-rate dur)
    (osc (hz-to-step vib-rate) dur))

; PULSE-TABLE: build table for generating a pulse signal
; harm = number of harmonics
(defun pulse-table (harm)
  (abs-env ; prevent any time warping in the following
    (let ((table (build-harmonic 1 2048)))
      (cond ((> harm 1) ; sum the remaining harmonics 2 .. harm
             (dotimes (i (- harm 1))
               (setf table (sum table (build-harmonic (+ i 2) 2048))))))
      table))) ; return the finished table

; PULSE-WITH-VIBRATO: generate pulse with vibrato
; step = pitch in steps
; duration = duration in seconds
; vib-rate = vibrato rate in Hz
(defun pulse-with-vibrato (step duration vib-rate)
  (let (harm freq table)
    (setf freq (step-to-hz step))
    (setf harm (truncate (/ 22050 (* 2 freq)))) ; harmonics below the Nyquist frequency
    (setf table (scale (/ 1.0 harm) (pulse-table harm)))
    (fmosc step (vibrato vib-rate duration) (list table (hz-to-step 1) t))))

; VOICING-SOURCE: generate voicing source: pulse with vibrato + LPFs
; step = pitch in steps
; duration = duration in seconds
; vib-rate = vibrato rate in Hz
(defun voicing-source (step duration vib-rate)
  (lp (lp (pulse-with-vibrato step duration vib-rate)
          (* 1.414 (* 2 (step-to-hz step))))
      (* 1.414 (* 4 (step-to-hz step)))))

; NOISE-SOURCE: generate noise source: noise + offset oscillator + LPF
; step = pitch in steps
; duration = duration in seconds
; vib-rate = vibrato rate in Hz
(defun noise-source (step duration vib-rate)
  (lp (sum (noise duration) ; noise plus the offset oscillator, per the comment above
           (fmosc step (vibrato vib-rate duration)))
      8000))

; SOURCE: generate source signal: voicing + noise sources
; freq = fundamental frequency in Hz
; duration = duration in seconds
; vib-rate = vibrato rate in Hz
; voicing-scale = percentage of voicing in the resulting signal (0.0 -> 1.0)
; noise-scale = percentage of noise in the resulting signal (0.0 -> 1.0)
(defun source (freq duration vib-rate voicing-scale noise-scale)
  (sim (scale voicing-scale (voicing-source (hz-to-step freq) duration vib-rate))
       (scale noise-scale (noise-source (hz-to-step freq) duration vib-rate))))

; MAKE-SPECTRUM: formant filters
; freq = fundamental frequency in Hz
; dur = duration in seconds
; vib-rate = vibrato rate in Hz
; v-scale = amplitude scaling for the voicing source
; n-scale = amplitude scaling for the noise source 
; p = horizontal position of the tongue (0 = front -> 1 = back) 
; h = vertical position of the tongue (0.0 = low -> 1.0 = high)
; r = rouding of the lips (0.0 = spread -> 1.0 = rounded)
(defun make-spectrum (freq dur vib-rate v-scale n-scale p h r)
  (let ((src (source freq dur vib-rate v-scale n-scale)))
    (setf spectrum
          (sim (reson src (form1 p h r) 50 1)
               (reson (scale-db (- 10) src) (form2 p h r) 70 1)
               (reson (scale-db (- 14) src) (form3 p h r) 110 1)
               (reson (scale-db (- 20) src) (form4 p h r) 250 1)))))

; SYNTHESISE: the synthesise function
; Simplified version of the instrument used by the agents discussed in Chapter 6.
; f0 = pitch frequency
; w1 = amplitude of voicing source (min = 0.0 max = 1.0)
; w2 = amplitude of noise source (min = 0.0 max = 1.0)
; a = horizontal position of the tongue (0.0 = front -> 1.0 = back) 
; b = vertical position of the tongue (0.0 = low -> 1.0 = high)
; c = rouding of the lips (0.0 = spread -> 1.0 = rounded)
; fm = vibrato rate (in Hz)
; h = duration in seconds
(defun synthesise (f0 w1 w2 a b c fm h)
    (adsr-smooth (make-spectrum f0 h fm w1 w2 a b c) h))
;=== The code for the instrument ends here ===

; Test the SYNTHESISE function with different 
; positions of the articulators
(defun vowel-1 (f0)
    (synthesise f0 1.0 0.005 0.0 0.0 0.0 5.6 1.0))
(defun vowel-2 (f0)
    (synthesise f0 1.0 0.005 0.0 0.0 1.0 5.6 1.0))
(defun vowel-3 (f0)
    (synthesise f0 1.0 0.005 0.5 0.0 0.0 5.6 1.0))
(defun vowel-4 (f0)
    (synthesise f0 1.0 0.005 0.5 0.0 1.0 5.6 1.0))
(defun vowel-5 (f0)
    (synthesise f0 1.0 0.005 1.0 0.0 0.0 5.6 1.0))
(defun vowel-6 (f0)
    (synthesise f0 1.0 0.005 1.0 0.0 1.0 5.6 1.0))
(defun vowel-7 (f0)
    (synthesise f0 1.0 0.005 0.0 0.5 0.0 5.6 1.0))
(defun vowel-8 (f0)
    (synthesise f0 1.0 0.005 0.0 0.5 1.0 5.6 1.0))
(defun vowel-9 (f0)
    (synthesise f0 1.0 0.005 0.5 0.5 0.0 5.6 1.0))
(defun vowel-10 (f0)
    (synthesise f0 1.0 0.005 0.5 0.5 1.0 5.6 1.0))
(defun vowel-11 (f0)
    (synthesise f0 1.0 0.005 1.0 0.5 0.0 5.6 1.0))
(defun vowel-12 (f0)
    (synthesise f0 1.0 0.005 1.0 0.5 1.0 5.6 1.0))
(defun vowel-13 (f0)
    (synthesise f0 1.0 0.005 0.0 1.0 0.0 5.6 1.0))
(defun vowel-14 (f0)
    (synthesise f0 1.0 0.005 0.0 1.0 1.0 5.6 1.0))
(defun vowel-15 (f0)
    (synthesise f0 1.0 0.005 0.5 1.0 0.0 5.6 1.0))
(defun vowel-16 (f0)
    (synthesise f0 1.0 0.005 0.5 1.0 1.0 5.6 1.0))
(defun vowel-17 (f0)
    (synthesise f0 1.0 0.005 1.0 1.0 0.0 5.6 1.0))
(defun vowel-18 (f0)
    (synthesise f0 1.0 0.005 1.0 1.0 1.0 5.6 1.0))

;; play everything
(defun vowel-n (n f0) (eval (read (make-string-input-stream (format nil "(VOWEL-~A ~a)" n f0)))))

(defun play-all-vowels ()
  (seqrep (i 18)
    (scale 20 (vowel-n (1+ i) (real-random 130.0 290.0)))))
(abs-env (play-all-vowels))

The big question is which parameters you want to adjust in the GUI, and what sound should be output after processing.

To make a plug-in appear in the Generate menu you need to add the appropriate header code: Missing features - Audacity Support
In particular,

;type generate

An example of the header for a generate type plug-in:

;nyquist plug-in
;version 3
;type generate
;categories ""
;name "Risset Drum..."
;action "Generating Risset Drum..."

I do not speak English well. I thank the moderators for their answers. I am confronted with several problems. I am trying to use the Nyquist prompt in the Effect menu. I use Audacity 2.0.2 under Windows XP.
If I understood correctly, the Nyquist prompt allows trying out simple scripts written in the LISP language. If the script is bad, Audacity displays: "Nyquist did not return audio". After clicking "OK", we can reopen the Nyquist prompt and use the "Debug" command. In my case, most of the time the "Nyquist did not return audio" window opens again. In that case, if I open the Nyquist prompt a third time, I can read a message like:

Warning: appending "" to default-sf-dir
Saving sound file to C:\Program Files\Audacity 2\Audacity\frederic-temp.wav

total samples: 36960
AutoNorm: peak was 1,
peak after normalization was 0.9,
suggested normalization factor is 0.9

If I look in the Audacity folder, I do indeed find this audio file. My questions are the following:
_ Why does Audacity not place this file in the open project?
_ Should we leave this ".wav" file isolated in the Audacity folder?
Thank you in advance for the answer.

Please post your Nyquist code so we can see what you are doing.

If you want Nyquist to give your sound back to Audacity (to the selected track), just omit the "play" function, e.g. instead of

(play <whatever-mysound>)

simply return your calculated sound, e.g.

(abs-env (osc c6)); sine tone 1 sec

or the variable that holds the sound without parenthesis.
The tutorials often use "(play ...)" because they are written for the stand-alone version of Nyquist, where it is the normal way to play a sound. In Audacity, you play the returned sound like any other track with the usual buttons.
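For example, a minimal sketch for the Nyquist prompt (the variable name mysound is invented):

```lisp
;; Compute a sound, store it in a variable, and make that variable
;; the last expression so that Audacity receives the audio.
(setf mysound (abs-env (osc 60 1.0))) ; one second of a sine tone at middle C
mysound
```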

I am trying to understand how to write a script in LISP, but it is very difficult because I am using a poor translation of the Nyquist 2.37 manual. To start, I worked on the first examples.
I thus have three questions to ask. They concern the following script:

(defun mkwave ()
  (setf *table* (sim (scale 0.5 (build-harmonic 1.0 2048))
                     (scale 0.25 (build-harmonic 2.0 2048))
                     (scale 0.125 (build-harmonic 3.0 2048))
                     (scale 0.062 (build-harmonic 4.0 2048))))
  (setf *table* (list *table* (hz-to-step 1) T)))

(defun note (pitch dur)
  (osc pitch dur *table*))

(mkwave) ; build the wave table before NOTE uses it

(play (seq (note c4 i)
           (note d4 w)
           (note f4 i)
           (note g4 i)
           (note d4 q)))

My questions are the following:
_ Is this script really adapted to the Nyquist prompt? This prompt is placed in the Effect menu, not in the "Generate" menu. This could explain the other problem.
_ Why does Audacity display "Nyquist did not return audio"? I heard the sequence before Audacity displayed this error message. If I look in C:\Program Files\Audacity 2\Audacity\ I find a ".wav" file that corresponds to the script.
Thank you in advance for this information.
Beforehand thank you for this information.

As I said before, omit the "(play ... )" that surrounds the "seq" command. The sound will replace the current selection. If you want to ensure that the sound always has the same length, replace play with "abs-env".
This command makes Nyquist ignore the length of the current selection.


  1. No, it is not adapted for the Nyquist prompt: omit play (which writes the sound to a file) and use abs-env (if you want real seconds for your produced notes).
  2. Don’t confuse the Nyquist prompt with the different plug-in menus (Effect, Analyse and Generate). The prompt is only for testing code. If you want to make a plug-in that appears permanently in one of the menus, you have to use an editor and put some special header lines at the top. There’s an introduction available on the wiki.
  3. The last expression in the code produced no sound in the track, but played and saved it instead, hence the message.
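For example, a minimal sketch of a complete generate-type plug-in file (the plug-in name and the slider are invented for illustration; save it with a ".ny" extension in the Plug-Ins folder):

```lisp
;nyquist plug-in
;version 1
;type generate
;name "Simple Tone..."
;action "Generating a tone..."
;control dur "Duration (seconds)" real "" 1.0 0.1 10.0
;; Return a sine tone of the requested length, ignoring any selection length.
(abs-env (osc 60 dur))
```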

Link in my previous post: Beginner's question - #3 by steve

Hello Sirs. I have a very simple question to ask: why does a Nyquist script installed in Program Files/Audacity2/Audacity/Plug-Ins/ not appear in the "Generate" menu of Audacity 2.0.2?
Rayuredevinyle.ny (1.38 KB)

Nyquist requires standard ANSI characters.
The accented characters and the “quote” characters in that file are not part of the standard ANSI character set.
What program did you use to write it?

I think it is a problem with the character sets used. Read the log file from the Help menu; presumably the fault will be listed there, something like "unicode conversion failed" or similar.
Open the file in your favorite editor and try to save it in another format.
Replace all guillemets («») with double quotes (").

Hello Sirs.
(My computer works under Windows XP.)
I am posting again about a problem I mentioned earlier. To try out a script that I wish to see appear among the Nyquist plug-ins of the "Generate" menu, I wrote one.
I use the OpenOffice.org 3.4 software. In OpenOffice I use the font "Courier New" at size 11. I deleted all the accents and replaced the guillemets with other characters. I saved my script by selecting the "text (.txt)" option. Then I copied this .txt file into the "Nyquist" subfolder of the Audacity folder and renamed it, changing the extension from ".txt" to ".ny". Finally, I copied the ".ny" file so created into the "Plug-Ins" subfolder of the "Audacity" folder of Audacity 2.
The problem is that this script still does not appear in the list of plug-ins in the "Generate" menu.
Do you see where my beginner's blunder lies?
Thank you in advance for your answer.

Don’t use Open Office for this. The Nyquist script must use standard ANSI characters and be "plain text".
You could use Windows NotePad, but I would highly recommend NotePad++

NotePad++ has automatic parentheses matching and you can enable syntax highlighting for LISP (which is quite similar to Nyquist). It can also show line numbers. These (and other features in NotePad++) make it a pretty good program for writing Nyquist scripts.

Hello Sirs.
I thank Steve for his help. I began to work with the Nyquist filters presented in the chapter "More examples - Filter Examples" (in the "Nyquist Reference Manual 2.37"). Among the filters presented, there is this one:

(play (reson (scale 0.005 (noise))
             (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0))

I have two questions to ask:
_ Why do all the small scripts presented in this chapter generate audio, even though they are presented as filters? Beginner as I am, when I use a filter I select some audio and apply a filter that modifies that audio.
_ Then, I would like to understand why, when I listen to the sound produced by the script I quoted above, the sound lasts as long as the selection in the audio track, whereas if I write the script below, the generated sound lasts only 1 second. Besides, the intensity of the generated sound is not at all the same, although the values did not change!

(abs-env (reson (scale 0.005 (noise))
                (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0))

Again, thank you for your help

Your example uses "reson" to filter white noise (which contains all frequencies, randomly distributed).
A band 20 Hz wide is separated from all other frequencies, gradually rising from 100 to 1000 Hz.
If you replace (noise) with "normal" audio (= the variable s), you'll get a wah-wah-like effect.
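For instance, a quick sketch for the Nyquist prompt with some audio selected (the scale factor is only a guess to keep the level sensible):

```lisp
;; Sweep a 20 Hz wide resonance band from 100 Hz to 1000 Hz
;; across the selected audio held in the variable s.
(reson (scale 0.5 s)
       (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0)
```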
The function "abs-env" resets the environment to absolute values. This means that the environment variables are restored to their defaults. This is most apparent for the duration (the stretch factor), which gets the value 1 (second) instead of the duration of the selected area in the audio track. The sounds produced are in both cases equally "intensive", but can vary due to the randomness of "noise". The following code gives the peaks of the resulting sounds:

(print (s-save (reson (scale 0.025 (noise))
                      (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0)
               ny:all "" :play t))
(print (s-save (abs-env (reson (scale 0.025 (noise))
                               (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0))
               ny:all "" :play t))

The "s-save" function is a good replacement for "play" because it can play a sound without writing the result to the hard drive ("" is used to set the file name to nil).
I've increased the scale factor because the result was hardly audible with the original code.

Hello Sir.
There is something that I find strange here. I would like to talk about the following script (script 1):

(abs-env (reson (scale 0.025 (osc 84))
                (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0))

As Robert J.H said, we obtain a kind of wah-wah effect.
If I change the waveform ((osc 84) becomes white noise):

(abs-env (reson (scale 0.025 (noise))
                (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0))

With the first waveform we have the impression that it is the volume which keeps increasing (script 1), while with the second waveform we have the impression that it is the pitch of the sound that changes: the sound goes higher and higher.

There is also something that I do not understand. If the function abs-env restores the default values of the parameters, is there a means of knowing what result we obtain with the following values: (pwl 0.0 100.0 1.0 1000.0 1.0)?

The wah-wah effect puts an accent on certain frequencies, depending on how open the pedal is. In our example, only the center frequency of the resonance filter is moved, whereas mechanical pedals also modify the bandwidth (that's the constant value of 20.0 Hz after the pwl). Our sweep goes from 100 up to 1000 Hz. Imagine the frequency band as a line from left to right (low to high). The resonance filter creates a little bump on this line, which is gradually moved to the right. The steepness of the slope on both sides depends on the bandwidth (20). Your sine tone at step 84 has the narrowest bandwidth imaginable, therefore you only hear an increase of the volume as the center frequency gets closer. Besides, the frequency of step 84 (= 1046.5 Hz) is never reached by the filter, otherwise the volume would drop again after the crest (try 72 or 60, for example).
Guitar sounds have a lot of partials, thus the wah-wah effect sounds more interesting - some tones get louder whereas others decrease in volume. Since white noise has all frequencies present, in this case you get the impression of an upwards-sliding "glissando" tone.
To simulate an actual wah-wah, you should replace the bandwidth of 20 with another pwl function, e.g. (pwl 0.0 50.0 1.0 500.0 1.0). Perhaps it is also necessary to change the volume of the whole sound accordingly. I once adapted an existing physical wah-wah in Nyquist; here is the code:

;; wah-wah2
(defun wah-wah (sound pedal &optional (start-fr 450))
  (let* ((g (diff 2 (mult 0.2 (s-exp (mult (s-log 4.0) pedal)))))
         (fr (mult start-fr
                   (s-exp (mult (s-log 2.0) (mult 2.3 pedal)))))
         (q (s-exp
              (mult (s-log 2.0) (sum 1 (mult 2 (diff 1 pedal))))))
         (bw (mult fr 2 (recip q))))
    (if (numberp pedal)
        (format t "~a: ~a, ~a, ~a, ~a~%" pedal g fr q bw))
    (mult g (reson sound fr bw 1))))

(dotimes (i 11)
  (wah-wah s (* 0.1 i))) ; print the filter values for pedal = 0.0 .. 1.0

(setf pedal-move (abs-env (s-exp (mult 0.4 (s-log (ramp))))))
(s-save (wah-wah (seqrep (i 15)
                   (abs-env (mult 0.9 (pluck (+ 48 (random 24))))))
                 pedal-move)
        ny:all "" :play t)

The wah-wah function takes a pedal position from 0 to 1 and calculates the values for the reson filter accordingly.
It is really not necessary to understand the code. You can implement a wah-wah a lot more simply, and it may sound even better than this version.

Your second question:
If you enclose a function or behaviour in "abs-env", the result will be as written. For (pwl 0.0 100.0 1.0 1000.0 1.0): the curve starts at time 0 with a value of 100; the next point is at 1 s with a value of 1000. The last breakpoint is at 1 s again, but since that time is already occupied, pwl moves one sample further and sets it to the value 0 (pwl's default value for the last time point).
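A little sketch to check this in the Nyquist prompt (sref reads the value of a signal at a given time; the printout appears when you use the Debug button):

```lisp
;; Sample the pwl control at a few points; the values should rise
;; roughly linearly from 100 towards 1000.
(setf ctrl (abs-env (pwl 0.0 100.0 1.0 1000.0 1.0)))
(format t "at 0.25 s: ~a~%" (sref ctrl 0.25)) ; roughly 325
(format t "at 0.50 s: ~a~%" (sref ctrl 0.5))  ; roughly 550
(format t "at 0.75 s: ~a~%" (sref ctrl 0.75)) ; roughly 775
```

Note that this snippet returns no audio (the numbers are the point), so Audacity will report that Nyquist did not return audio.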
The temporal behaviour of Nyquist is a little complicated and there are some threads which already deal with this subject.
Only today I struggled with pwl and abs-env myself. I made an envelope like

(setf envelope (abs-env (pwl 0.0 1.0 3.0 100.0 3.0)))

Ok, this envelope had the desired length of 3 s.
I then wanted to limit the lower limit to a certain value. This can be achieved by:

(setf new-env (s-max 20 envelope))

The new envelope will be filled with the value 20 wherever the old envelope does not exceed this value. The strange thing was that the new sound was now only 1 s long. This is because the numeric value 20 is converted to a sound with the nominal duration of the environment. But because we reset the environment with "abs-env", the nominal duration is now only one second.
The new sound is built with the length of the shortest sound, hence the ugly result.
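A sketch of one way around this, assuming the 3-second envelope from above: convert the number to a constant signal of the desired length yourself with "const", so that s-max compares two sounds of equal length:

```lisp
;; Build the 3 s envelope, then clip its lower limit at 20 by comparing
;; against a constant signal that is also 3 s long.
(setf envelope (abs-env (pwl 0.0 1.0 3.0 100.0 3.0)))
(setf new-env (s-max (abs-env (const 20 3)) envelope))
```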

abs-env evaluates the argument in the default environment.

I’ll give an example to demonstrate:

If you select a section of a track and then apply the following code in the Nyquist Prompt effect:

(noise 2)

It will generate noise that is twice as long as your selection.
For “effects”, the “local time” starts at the beginning of the selection, ends at the end of the selection and is allocated a duration of 1 (one).

(noise 0.5)

will generate noise that is half the length of the selection.

Now what if we want to generate noise for exactly 2.5 seconds? We know that (noise 2.5) will not do what we want because that will generate noise that is 2.5 times longer than the current selection. What we need to do is to specify that we want to use “real” (global) time and not “local” time. We can do that using abs-env.

(abs-env (noise 2.5))

I thank the moderators for their comments.
I would like to return to the script proposed by Robert J.H:

(print (s-save (reson (scale 0.025 (noise))
               (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0) ny:all "" :play t))

Robert J.H indicates that this script allows, if I understand well, obtaining the peak level. My first question is: what in this script gives the peak level? Is it the code [ny:all] ("all the samples")?
Then, I have a beginner's remark to make. It complicates the understanding of Nyquist coding if, in the end, we can achieve the same sound result in Audacity 2.0.2 with different scripts. For example:

(abs-env (reson (scale 0.025 (noise))
                (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0))


(print (s-save (abs-env (reson (scale 0.025 (noise))
               (pwl 0.0 100.0 1.0 1000.0 1.0) 20.0)) ny:all "" :play t))

Both scripts play a short glissando; it is just that Robert J.H's second script does not put the audio into an Audacity track.

Now I would like to ask a second question. It concerns an aspect that I cannot manage to find in the textbook "Nyquist Reference Manual version 2.37" by Roger B. Dannenberg.
It concerns the syntax of a script: how should a script be organized?
Is it necessary to follow the order presented in the chapter "Introduction and overview / Examples": begin with the waveform [(defun mkwave ()], then move on to a note or a succession of notes [(defun note ...)] or [(play (seq (note c4 ...) ...))] for example, then the envelope [(defun env-note ... (mult (note ...) (env ...)))], and finally end with a piece-wise linear function [(pwl ...)]?
I hope that the moderators will understand what I wrote awkwardly.
In fact, I am following the order of the textbook.
Thank you in advance.