I don’t think the code I put is correct.
Is the problem that normalize passes the audio twice?
Ideally, we want normalize for this, but I guess amplify would work, as well.
How would the code go about replacing “normalize” with “amplify”?
To find the Nyquist Prompt script resources I preferred, I had to go to the Internet Archive, as that whole project/initiative was shut down last year, I think.
I can’t find anything about amplify.
There is a list of useful documentation on my blog: Documentation | AudioNyq
You never responded as to whether or not “Amplify” is a good replacement for “Normalize” in this case…
All feedback is much appreciated
“Amplifying” in Nyquist is achieved by “multiplying” the audio by a specified amount (mult), or by using the equivalent function scale. Both mult and scale are safe for general use.
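For example, here is a minimal sketch of amplifying the selected audio in a ;version 4 ;type process script (the -6 dB amount is just an illustrative value, not a recommendation):
;version 4
;type process
;; Amplify the selection by -6 dB.
;; 'scale' multiplies every sample by a linear factor,
;; 'scale-db' does the same but takes the amount in dB,
;; and 'db-to-linear' converts dB to a linear factor.
(scale-db -6 *track*)
;; Equivalent: (scale (db-to-linear -6) *track*)
;; Equivalent: (mult (db-to-linear -6) *track*)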
How do I write it so that each track is set to be Amplified by -50, then the volume reduction occurs right after?
;version 4
;type process
; Nyquist script to duplicate a track 640 times,
; progressively lower the pitch and volume for each duplicate,
; normalize levels of each duplicate,
; and mix the results into a single final track.
(setq num-duplicates 640)
(setq final-result 0)
(setq gain-db -0.1)
(setq shift-step 0.001)
(setq mult-db -50)
(defun process (sig)
(do ((i 0 (1+ i)))
((= i num-duplicates) final-result)
(let* ((shift-ratio (- 1 (* shift-step i)))
(processed (pitshift sig shift-ratio 1))
(processed (mult-db processed) (* i))
(processed (scale-db (* gain-db i) processed)))
(setf final-result (sum final-result processed)))))
;; Apply 'process' function to all selected channels.
(multichan-expand #'process *track*)
Is this code correct?
In the plug-in Delay (Pitch Change), normalizing levels for each echo exists. Why does that work?
Here’s the script:
;nyquist plug-in
;version 1
;type process
;name "Delay (Pitch change)..."
;action "Applying Delay with Pitch change..."
;info "by David R. sky\nReleased under terms of GNU Public License"
;control decay "Decay amount" real "dB" 0 0 24
;control delay "Delay time" real "seconds" 0.5 0.0 5.0
;control count "Number of echoes" int "times" 10 1 30
;control shift "Pitch change factor" real "shift" 1.1 1.001 3
;control md "Pitch: increase or decrease" int "0=increase 1=decrease" 0 0 1
;control norm-level "Normalization level" real "" 0.95 0.0 1.0
; delay with Pitch Change by David R. Sky
; updated January 4, 2006 to also work in stereo,
; also includes normalization
; note that pitch change is accompanied with change in duration
; setting stretch factor
(setf shift (cond
              ((= md 0) (/ 1.0 shift))
              ((= md 1) shift)))

; function to stretch audio
(defun change (sound shift)
  (if (arrayp sound)
      (vector
        (force-srate 44100 (stretch-abs shift (sound (aref sound 0))))
        (force-srate 44100 (stretch-abs shift (sound (aref sound 1)))))
      (force-srate 44100 (stretch-abs shift (sound sound)))))

; Roger Dannenberg's delay function, slightly altered
(defun delays (s decay delay count shift)
  (if (= count 0) (cue s)
      (sim (cue s)
           (loud decay (at delay (delays (change s shift) decay delay (- count 1) shift))))))

; normalize function
(defun normalize (signal)
  (setf x (if (arrayp signal)
              (max (peak (aref signal 0) ny:all) (peak (aref signal 1) ny:all))
              (peak signal ny:all)))
  (scale (/ norm-level x) signal))

; applying the effect
(normalize (stretch-abs 1 (delays s (- 0 decay) delay count shift)))
I appreciate any feedback.
It does not normalize for each echo.
The block of code that begins with (defun normalize (signal) is the function definition. The function takes one argument, signal (the audio to be normalized).
The normalize function is “called” (applied) with syntax in the form:
(normalize sound-to-be-normalized)
Looking through the code, the normalize function is called once only (in the final line).
What do you mean?
It has the code that you said, in another post, “takes up a lot of space”:
; normalize function
(defun normalize (signal)
  (setf x (if (arrayp signal)
              (max (peak (aref signal 0) ny:all) (peak (aref signal 1) ny:all))
              (peak signal ny:all)))
  (scale (/ norm-level x) signal))
So it must be functioning differently in the echo context.
In the code I showed you, it worked exactly like you said: the run would fail on the selected track and not even get to anything else because of how much data it would have involved.
ChatGPT wrote the first example posted by GenuineW. Ironically, not so genuine.
Yes, Steve knows this. I specifically said that the script was generalized and inaccurate.
Did you come here to contribute… or be a tool?
Yes, but the Delay plug-in applies Normalize once, so it holds one copy of the audio in RAM, whereas your code applied Normalize within the loop, so it was holding 640 copies of the audio in RAM.
Why not just amplify (scale) the result of your looped process by a factor less than 1 to reduce the overall level?
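As a rough sketch of what I mean (untested, and assuming a working process function like the one in your script that returns the mixed layers for one channel; the -50 dB amount is only illustrative):
;; Run the looped processing once per selected channel,
;; then reduce the overall level of the result in a single step.
(setf mixed (multichan-expand #'process *track*))
(scale (db-to-linear -50) mixed)  ; db-to-linear turns -50 dB into a linear factor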
Unfortunately, I’ve done that already, and it does not prevent or solve the flang.
I don’t know what that means.
What are you trying to do? The code that I posted already reduces the level of each processed “layer”.
Correct, mainly by volume (pitch reduction also reduces volume by default in Audacity).
I’m trying to get clear audio with the intended values of the code, but it can only be done by significantly reducing the setq values.
The flang is being caused by phasing interference, meaning that the signals are close together in the mix and cancel each other out, resulting in modulation and information loss.
I’ve been trying to resolve this.
Maybe it can all be done like you said, with normalize applied to the final result, so that it is only applied once.
My question becomes: can this somehow be incorporated for the final result?
; normalize function
(defun normalize (signal)
  (setf x (if (arrayp signal)
              (max (peak (aref signal 0) ny:all) (peak (aref signal 1) ny:all))
              (peak signal ny:all)))
  (scale (/ norm-level x) signal))
It seems important for all the signal peaks in the data to be processed exactly like this…
If you indent your code correctly, it becomes much easier to read and easier to debug.
Here is a modified version of the normalize function that allows you to pass the desired normalized level to the function:
(defun normalize (signal level)
  (if (arrayp signal)
      (setf x (max (peak (aref signal 0) ny:all)
                   (peak (aref signal 1) ny:all)))
      (setf x (peak signal ny:all)))
  (scale (/ level x) signal))
In the code that I posted previously, (multichan-expand #'process *track*) tells Nyquist to apply process to each selected audio channel. As we want to do additional processing (normalizing), we can assign the result of the processing to a variable, like this:
(setf processed (multichan-expand #'process *track*))
Then we can pass processed (the result of the processing) to the normalize function like this:
(normalize processed NORMALIZE_LEVEL)
where NORMALIZE_LEVEL is the (linear scale) level that we want to normalize to.
NORMALIZE_LEVEL may be set via a slider control like this:
;control NORMALIZE_LEVEL "Normalization level" real "" 0.95 0.0 1.0
(Using UPPER_CASE for ;control constants is a recent convention in Audacity. It helps to identify global constants that are created by the ;control lines.)
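Putting those pieces together, a complete sketch could look something like this (untested, the plug-in name is only a placeholder, and with 640 duplicates it will be very slow; the loop keeps your pitch-shift step and per-layer gain reduction):
;nyquist plug-in
;version 4
;type process
;name "Layered Pitch Shift (sketch)"
;control NORMALIZE_LEVEL "Normalization level" real "" 0.95 0.0 1.0

(setq num-duplicates 640)  ; number of layered copies
(setq gain-db -0.1)        ; gain change per layer (dB)
(setq shift-step 0.001)    ; pitch-shift change per layer

;; Normalize 'signal' to 'level' (mono or stereo).
(defun normalize (signal level)
  (if (arrayp signal)
      (setf x (max (peak (aref signal 0) ny:all)
                   (peak (aref signal 1) ny:all)))
      (setf x (peak signal ny:all)))
  (scale (/ level x) signal))

;; Mix progressively pitch-shifted and attenuated copies of one channel.
(defun process (sig)
  (let ((result 0))
    (dotimes (i num-duplicates result)
      (let* ((shift-ratio (- 1 (* shift-step i)))
             (layer (scale-db (* gain-db i)
                              (pitshift sig shift-ratio 1))))
        (setf result (sum result layer))))))

;; Process each selected channel, then normalize the final mix exactly once.
(setf processed (multichan-expand #'process *track*))
(normalize processed NORMALIZE_LEVEL)
The important difference from your earlier version is that nothing is normalized inside the loop; each channel accumulates into a local result, and normalize runs once on the output of multichan-expand.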
Thank you very much for this explanation!
I tinkered with the code; it’s almost there.
But what’s wrong with this interpretation:
;version 4
;type process
; Nyquist script to duplicate a track 640 times,
; progressively lower the pitch and volume for each duplicate,
; normalize levels of each duplicate,
; and mix the results into a single final track.
(setq num-duplicates 640)
(setq final-result 0)
(setq gain-db -0.1)
(setq shift-step 0.001)
;control norm-level "Normalization level" real "" 0.95 0.0 1.0
(defun normalize (signal level)
(if (arrayp signal)
(setf x (max (peak (aref signal 0) ny:all)
(peak (aref signal 1) ny:all)))
(setf x (peak signal ny:all)))
(scale (/ level x) signal))
(defun process (sig)
(do ((i 0 (1+ i)))
((= i num-duplicates) final-result)
(let* ((shift-ratio (- 1 (* shift-step i)))
(processed (pitshift sig shift-ratio 1))
(normalize processed NORMALIZE_LEVEL (* i))
(processed (scale-db (* gain-db i) processed)))
(setf final-result (sum final-result processed)))))
;; Apply 'process' function to all selected channels.
(multichan-expand #'process *track*)
(setf processed (multichan-expand #'process *track*))
When I was testing it, it was almost as if it were silent, but with the flang still present.
Maybe it has something to do with the final-result signal?
I’m sure if you tested it, you’d be able to tell what’s off about it.
“flang” is not a real word in English. What do you mean by “flang”?
Flanging, the audio effect.
The flanging-like effect is the result of overlaying multiple copies of the same audio with different stretch amounts.
Imagine you start with a sine wave,
then you make a copy of the sine wave,
then you stretch the copy so that it runs at a slightly different speed from the original,
then you play the two sine waves together.
Both sine waves began at the same point and are “in phase”. They both go up and down together.
As they play, because one is going at a different speed to the other, they will gradually move out of phase. Eventually they will move back in-phase, then out of phase, and so on, creating a slow phasing (or “flanger-like”) effect.
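If you want to hear that in isolation, here is a rough sketch you could run in the Nyquist Prompt over a few seconds of selected audio (the 440 / 440.5 Hz values are just illustrative):
;version 4
;type process
;; Replace the selection with two sine waves half a hertz apart,
;; mixed at half amplitude. They start in phase, drift out of phase,
;; and drift back in, producing a slow swell roughly every two seconds.
(scale 0.5 (sim (hzosc 440.0)
                (hzosc 440.5)))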
With your effect, the same thing happens, but it is more complex because you have 640 copies all playing at the same time at slightly different speeds.
In other words, if it does not sound how you expected, it is because your expectations were inaccurate.