Division in Nyquist (Lisp)

Hi, can I do this:

(setf y (/ y peak))

for each element of the array “y”,
where “peak” is the highest value in “y”?
Or should I use a “while” loop to normalize it?

The problem is that Audacity shows a clipped signal
when I transform “y” into a sound.
Thanks…

In contrast to Common Lisp, XLISP has no mapping functions for arrays, so normalizing samples in an array must be done with a hand-written loop and is very slow. Here is a much faster example of how to first get the maximum peak of the samples in the array, and then do the normalizing at the sound level:

(let* ((array-sound (snd-from-array 0.0 *sound-srate* y))
       (array-peak (peak array-sound (length y))))
  (scale (/ 1.0 array-peak) array-sound))

This version will only scale down “too loud” array sounds, but not amplify “too low” sounds:

(let* ((array-sound (snd-from-array 0.0 *sound-srate* y))
       (array-peak (peak array-sound (length y))))
  (if (> array-peak 1.0)
      (scale (/ 1.0 array-peak) array-sound)
      array-sound))

Both versions return the sound from the array normalized to +/-1.0 floating-point samples, so no overdrive clipping will appear in the Audacity track.
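For comparison, the slow hand-written array loop mentioned above could be sketched roughly like this (an illustration only, assuming “y” already holds the samples as FLONUMs and is not completely silent):

;; slow sketch: one loop to find the peak, one loop to divide by it
(setf array-peak 0.0)
(dotimes (i (length y))
  (setf array-peak (max array-peak (abs (aref y i)))))
(dotimes (i (length y))
  (setf (aref y i) (/ (aref y i) array-peak)))  ; fails if array-peak is 0.0

Because every (AREF …) call goes through the XLISP interpreter, this is much slower than the PEAK and SCALE version above.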

  • edgar

Thanks edgar, I tried that but it returns only positive values of sound.
I just did a “copy/paste” of your code because I don’t understand it; my plug-in
looks like this:

;nyquist plug-in
;version 1
;type generate
;name "Test pulses..."
;action "Generating pulse train..."
;info "Pulse Train for exciting..."

;control Fs "Sample Rate" int "Hz" 44100 44000 88000
;control f "First Harmonic" real "Hz" 8000.0 1.0 20000.0
;control duracion "Duration" real "seconds" 3.0 0.001 10.0

;assign correct length to "lengthY"
(setf lengthY (+ (truncate (* Fs duracion)) 1))

;create the "y" array with "lengthY" length
(setf y (make-array lengthY))

;calculate the number of harmonics "N"
(setf N (truncate (/ (/ Fs 2) f)))

;fill the array "y"
(setf (aref y 0) 0)
(setf k 1)
(setf pico 0)
(while (< k lengthY)
	(setf A (/ (* 3.141592 (* f k)) Fs))
	(setf (aref y k) (/ (* (sin (* A (+ N 1))) (cos (* A N))) (sin A)))
	(setf k (+ k 1)))

;return a sound from array "y"
(let ((array-peak (peak (snd-from-array 0.0 Fs y) (length y))))
  (scale (/ 1.0 array-peak) (snd-from-array 0.0 Fs y)))

Also, my code is slow because of the while loop, but I don’t know
another way to create the wanted sound.

What is the difference between let and let*?

Thanks for answer.

 ((array-sound (snd-from-array 0.0 *sound-srate* y))

Creates a sound called array-sound from the array y (see Nyquist Functions).

(array-peak (peak array-sound (length y))))

Finds the peak level of the sound array-sound (see Nyquist Functions).

(scale (/ 1.0 array-peak) array-sound))

Scales the sound by 1/peak amplitude (see Nyquist Functions).

See: XLISP let*
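In short, LET evaluates all of its initialisation forms before any of the new variables exist, while LET* binds them one after another, so a later binding can use an earlier one. That is why the code above needs LET*: array-peak refers to array-sound. A small sketch with made-up variable names:

;; LET* binds in order, so a later binding may use an earlier one:
(let* ((a 2)
       (b (* a 3)))  ; b can see a, so b becomes 6
  (list a b))        ; => (2 6)

;; With plain LET the same code would signal "unbound variable A"
;; (unless an A already exists outside the LET):
;(let ((a 2)
;      (b (* a 3)))
;  (list a b))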

(Good to see someone else interested in Nyquist. I hope we’ll see some of your plug-ins in the future :-) )

Here is an example of how to create a pulse train based on the Audacity Vocoder ‘pulse’ generator (see “vocoder.ny” in your Audacity “plug-ins” folder):

;nyquist plug-in
;version 3
;type generate
;name "Pulse Train Generator..."
;action "Pulse Train Generator..."

;control pulse-frequency "First Harmonic" real "Hz" 8000.0 1.0 20000.0
;control sound-duration "Duration" real "seconds" 3.0 0.001 10.0

;; replacement for the "frequency" slider in the original plugin

(setq sample-rate 44100)

;; the 'pulse' has a pulse-width of exactly two samples length
;; one sample is at the positive limit, the other on the negative limit
;; the array size (= waveform period) depends on the sample rate

(defun pulse-waveform (frequency)
  (let* ((array-size (round (/ sample-rate frequency)))
         (waveform-array (make-array array-size)))
    (dotimes (array-index array-size)    ; fill the array with silence
      (setf (aref waveform-array array-index) 0.0))
    (setf (aref waveform-array 1) 1.0)   ; one positive peak sample
    (setf (aref waveform-array 2) -1.0)  ; one negative peak sample
    (snd-from-array 0.0 sample-rate waveform-array)))

;; convert the 'pulse-frequency' into a MIDI step number,
;; then build 'pulse-table', a wavetable made from 'pulse-waveform' at 'pulse-step'

(setf pulse-step (hz-to-step pulse-frequency))
(setf pulse-table (list (pulse-waveform pulse-frequency) pulse-step t))

;; finally create the pulse train by using
;; 'pulse-table' as a wavetable in a Nyquist 'osc'

(osc pulse-step sound-duration pulse-table)

Audacity Nyquist “generate” plugins always create Audacity tracks of 44100 Hertz sample frequency (Steve may correct this if it is wrong), and the pulse width must be an exact integer multiple of a single sample, otherwise the pulse signal will alias (producing unwanted additional mirror frequencies).
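As a rough illustration of that integer-sample constraint (these numbers are mine, not from the plugin): with a 44100 Hz track and an 8000 Hz first harmonic, the wavetable period is rounded to a whole number of samples, so the period no longer corresponds exactly to 8000 Hz:

(setq array-size (round (/ 44100.0 8000.0)))  ; => 6 samples per period
(setq period-freq (/ 44100.0 array-size))     ; => 7350.0 Hz, not 8000.0 Hz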

I don’t know how to change the sample frequency of the Audacity track to 44000 or 88000 Hertz with Nyquist, that’s why the “frequency” slider of the original plugin was replaced by a static “44100” value.

If you want a positive or negative peak only (as often shown wrongly in lots of teaching books), then leave out the respective line in the “pulse-waveform” function, but be aware that this will then produce a DC offset, which must be handled by additional “DC compensation” code…
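A minimal sketch of such DC compensation, assuming the one-sided pulse train is stored in a variable called single-sided (my name, not from the plugin), is to high-pass filter it with the Nyquist HP function; the 20 Hz cutoff is an assumption, chosen to be well below the first harmonic:

;; remove the DC offset of a one-sided pulse train with a gentle high-pass
(hp single-sided 20)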

  • edgar

For Nyquist in Audacity, “Generate” type plug-ins will create tracks at the same sample rate as the current Project Rate (as indicated in the lower left corner of the main Audacity window).

Nyquist cannot change the sample rate of an Audacity track.

Example:
If you generate a 3 second tone with a sample rate of 88200 Hz in a Nyquist “Generate” plug-in, but the Project Rate is 44100 Hz, then when the sound is sent to the Audacity track, the tone will be one octave lower and twice as long (6 seconds). The explanation for this is that “a 3 second tone with a sample rate of 88200 Hz” is 3 x 88200 = 264600 samples. As said, Nyquist cannot control the sample rate of an Audacity track and when new tracks are created by Audacity they are created with the same sample rate as the Project Rate. The 264600 samples will thus occupy 6 seconds (6 x 44100 = 264600).
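The arithmetic from that example as a small sketch (just the numbers from the paragraph above):

(setq samples (* 3 88200))          ; 3 seconds at 88200 Hz => 264600 samples
(setq seconds (/ samples 44100.0))  ; sent to a 44100 Hz track => 6.0 seconds

The usual way to avoid this is to generate at *sound-srate* (which Audacity sets to the track’s sample rate), as the wavetable code below does.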

I think I’ve got a way of producing a bandwidth-limited pulse train that overcomes that limitation (and it’s quite quick because it only needs to calculate enough for a wavetable).

;control freq "Frequency" real "Hz" 100 10 10000
;control dur "Length" real "seconds" 3 1 100

;; bandwidth limited pulse *pulse-table*


(setq *pulse-table* (s-rest 1))               ; initialise *pulse-table*
(setq len (* dur *sound-srate*))              ; make progress bar
(setq wlength (/ *sound-srate* freq))         ; wavelength in samples
(setq nyqf (/ *sound-srate* 2.1))             ; below Nyquist frequency

(defun maketable (freq)
  (let ((wl (* 2 (/ freq))))
    (do ((i 1 (1+ i)))
        ((or (>= (* i freq) nyqf)(> i 2048)) *pulse-table*)
      (setq *pulse-table*
        (sim *pulse-table* 
          (osc (hz-to-step (* freq i)) wl *table* 90))))
    ;; trim start from sound
    (setq *pulse-table* 
      (extract-abs (/ freq) wl *pulse-table*)))
  (setq *pulse-table* 
    (mult *pulse-table* 
      (/ (peak *pulse-table* (truncate (* 2 wlength))))))
  (list *pulse-table* (hz-to-step freq) T))

(osc (hz-to-step freq) dur (maketable freq))

To continue…

  1. Can I work with a sound like an array?
    I mean, instead of creating an array and then transforming it into a sound, in order to do it faster. I’d need a way to access a sample of the sound to set its value.

  2. steve said Nyquist uses 20 decimals, so the smallest number we can create is (power 10 -20), right? (And if not, please show it with an example.)
    Suppose I need to divide by the smallest number, resulting in a big number; can Nyquist represent it?
    For example:
    (/ 1 smallest)
    (/ 2 smallest); !
    (/ biggest smallest); WTF!

What are the smallest and biggest numbers?

Thank you.

Processing audio sample by sample is slow. Whenever possible it will be much faster to operate on the sound as a whole rather than sample by sample.
For example, you could clip audio at +/- 0.5 with the following code:
WARNING - this code is VERY slow and may crash Audacity
(If you want to test this code, use a very small selection of a mono track, say 0.1 seconds.)

(setf newsound (s-rest 0))

(dotimes (num (truncate len))
  (let ((next (snd-fetch s))        ; get next sample
        (sdur (/ *sound-srate*)))   ; duration of one sample
    (if (> next 0.5)
      (setf newsound 
        (sim 
          newsound 
          (snd-const 0.5 (* num sdur) *sound-srate* sdur)))
      (if (< next -0.5)
        (setf newsound 
          (sim 
            newsound 
            (snd-const -0.5 (* num sdur) *sound-srate* sdur)))
        (setf newsound 
          (sim
            newsound
            (snd-const next (* num sdur) *sound-srate* sdur)))))))
newsound

Rather than the above, it would be better to read the audio into an array and then step through the array, finally converting the array back into a sound:
WARNING - this code may freeze or crash Audacity if used on more than a few seconds of audio.

(let* ((num  (truncate len))              ; number of samples in selection
       (sndarray (snd-samples s num)))    ; read samples into an array
  (dotimes (sample num)
    (if (> (aref sndarray sample) 0.5)
      (setf (aref sndarray sample) 0.5)
      (if (< (aref sndarray sample) -0.5)
        (setf (aref sndarray sample) -0.5))))
  (snd-from-array 0 *sound-srate* sndarray)) ; make sound from array

This is much better than the first example, but it is still very slow when processing more than a few seconds of audio.

Now compare with producing the same effect by operating directly on the sound:

(clip s 0.5)

In this example, not only is the code very much shorter to write, but the function clip uses the Nyquist function snd-clip which is written in C++ and is very much faster than looping through the samples in Nyquist (and it does not crash Audacity).
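As a side note, and assuming the Nyquist S-MIN and S-MAX functions behave as described in the manual (they accept a sound and a number), the same whole-sound idea could also be sketched without CLIP:

;; clip at +/- 0.5 by combining two whole-sound operations
(s-max (s-min s 0.5) -0.5)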


It’s a bit more complicated than that.
Nyquist uses 32-bit float format (single precision), so the values that can be represented range from 0 up to +/- infinity.
For small values the intervals between representable numbers are small; as the numbers get larger, the intervals between them get larger too.
The smallest positive value is 2^-149.
How it works is described here: http://www.psc.edu/general/software/packages/ieee/ieee.php
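A quick way to look at that number from within Nyquist, using the *float-format* trick shown a few lines further down (the printed digits are approximate and may differ slightly between systems):

(setq *float-format* "%g")
(print (power 2.0 -149.0))  ; prints roughly 1.4013e-45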

When I previously said “Nyquist works to something like 20 decimal places” I was just giving a ballpark figure, indicating that it was a lot more than 5 or 6 decimal places. The following code will print out the value of pi, as used by Nyquist, to 60 decimal places:

(setq *float-format* "%1.60f")
(format nil "~a" pi)

You will notice two things:

  1. After 51 decimal places the numbers all become 0.
  2. It is only accurate to 14 decimal places:
    Nyquist’s value of pi compared with the actual value of pi to 20 decimal places:
    3.14159265358979000737 (Nyquist)
    3.14159265358979323846 (actual value)

Yes and no. Nyquist was designed to work with sounds as streams, not as arrays (like Matlab or GNU Octave do). Nyquist provides functions to read sounds into arrays and to convert arrays back into sounds, but the manipulation of samples (floating-point numbers) in arrays is very slow, because neither Nyquist nor XLISP (the programming language Nyquist is based on) provides functions to manipulate arrays other than (AREF …) and (SETF (AREF …)). Everything else must be written in interpreted XLISP code by hand.

But Nyquist provides functions to play “parts” of sounds. The Nyquist EXTRACT function (or EXTRACT-ABS with absolute times), for example, can play the part of a sound between a START and a STOP time. This works faster than arrays.
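A minimal sketch, using EXTRACT-ABS (the same function used in the wavetable code earlier in the thread) and assuming “s” is the selected sound; the 1.0 and 2.0 second times are just example values:

;; take only the piece of the sound between 1.0 s and 2.0 s
(extract-abs 1.0 2.0 s)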

An overview of the internal working of Nyquist can be found here:

CMU > R.B.Dannenberg > Bibliography > Functional Languages for Real-Time Control > Nyquist

Nyquist computes samples as floating-point numbers of type IEEE “double precision”. The smallest and biggest floating-point numbers are defined by the standard C library of the operating system that Nyquist was compiled on.

An academic question - I’ve seen some references in the Nyquist manual to processing using double-precision floats, but aren’t the sample values themselves represented as single precision (4 bytes)?

Inside Nyquist all XLISP FLONUMs and all audio samples are computed as IEEE doubles. It may happen that Audacity internally uses single-floats and the Audacity Nyquist interface then converts singles to doubles and vice versa, but inside Nyquist all audio samples are computed as double-floats, independent of the underlying hardware architecture, and also with integer sound file formats.