This is not really a practical plug-in, which is why I’ve not posted it to the plug-ins page, but it includes some features that may be of interest to plug-in authors.
I wrote this plug-in to help me work out why the Audacity Normalize effect doesn’t work properly in all cases (a bug that is currently being fixed). As such it is closely based on the Normalize effect in Audacity 2.0. On Linux it is quite a lot slower than the standard Audacity effect but on Windows it is almost as fast.
The first thing required is a function to calculate maximum and minimum peaks.
The obvious method is to use the “peak” function:
(peak (s-max 0 s) ny:all) ; positive peak level
(peak (s-min 0 s) ny:all) ; negative peak level
The problem here is also one of the bugs (now fixed) in the Audacity Normalize effect. If the audio signal (s) lies entirely above or entirely below the track centre line (that is, the DC offset is greater than the peak amplitude), this does not work, because it looks only at positive values and negative values rather than at the “highest peak” and “lowest peak”.
A simple way to get this to work properly is to add a large offset to the sound, for example:
;;; get maximum and minimum peak values
(defun MaxPeak (sig)
  (- (peak (sum 10 sig) ny:all) 10))            ; highest (most positive) sample value
(defun MinPeak (sig)
  (- 10 (peak (sum 10 (mult -1 sig)) ny:all)))  ; lowest (most negative) sample value
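To see the difference, here is a quick test that could be run in the Nyquist Prompt (the tone settings are just an arbitrary illustration) using a signal that never crosses zero:
;; illustration only: a 440 Hz tone of amplitude 0.1,
;; shifted up by a DC offset of 0.5, so it is always positive
(setf test-sig (sum 0.5 (mult 0.1 (hzosc 440))))
(print (peak (s-min 0 test-sig) ny:all)) ; naive “negative peak” gives 0
(print (MaxPeak test-sig))               ; approximately 0.6
(print (MinPeak test-sig))               ; approximately 0.4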
The next thing required is a method to calculate DC offset.
The way that the Audacity effect does it is to loop through the sound sample by sample, much like this:
; note: snd-fetch reads the sound destructively, so use snd-copy if "s" is needed again
(setq offset 0)                      ; initialise offset
(dotimes (i (truncate len))          ; step through the samples
  (setq offset
        (sum offset (snd-fetch s)))) ; sum all sample values
(setq offset (- (/ offset len)))     ; negate the mean to get the correction to add
This works fine but looping through samples in Nyquist is really slow.
A much faster way is to use the function “integrate”, which steps through the samples (in highly optimised C) and sums them.
We can then look at the value of the final sample using “sref” (note that we can’t use “peak” because we need to know the signed value).
(sref
  (sound (mult (integrate s) (/ (get-duration 1))))
  (/ (1- len) *sound-srate*))
“sref” interprets its time argument with reference to local time, so we need to apply “sound” so that “s” is evaluated correctly.
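Putting that together as a helper function (the function and parameter names here are just my own for illustration, not taken from the attached plug-in):
;; sketch: DC offset (the mean sample value) of a mono sound,
;; using "integrate" rather than a sample-by-sample loop
(defun dc-offset (sig siglen)
  (sref
    (sound (mult (integrate sig) (/ (get-duration 1))))
    (/ (1- siglen) *sound-srate*)))
; example: (print (dc-offset s len)) for a mono track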
Finally we get down to the business of calculating the gain and offset and applying them. This is where it gets complicated, because we need to consider every combination of “with/without DC correction”, “with/without normalizing” and “linked/separate channels”.
I’ll not go through all of the detail, other than to say that if DC offset correction is to be used, it needs to be calculated before the max/min amplitude of the audio, so that the calculated max/min take the offset for that channel into account.
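As a rough sketch of that order of operations for a single channel (this is not the exact code from the attached plug-in; it reuses the dc-offset, MaxPeak and MinPeak helpers above, and the names and target level are only illustrative):
;; sketch only: remove the DC offset first, then normalize to 'target'
(defun normalize-channel (sig siglen target)
  (let* ((correction (- (dc-offset sig siglen)))  ; amount to add to remove the offset
         (corrected (sum correction sig))         ; DC-corrected sound
         (biggest (max (MaxPeak corrected)        ; largest excursion either side of zero
                       (abs (MinPeak corrected))))
         (gain (if (> biggest 0)                  ; guard against silence
                   (/ target biggest)
                   1.0)))
    (mult gain corrected)))
; example: (normalize-channel s len 0.95) for a mono track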
Normalize.ny (3.21 KB)