Using INTEGRATE for computing the DC component is a great speed improvement, and it is usually even sufficient to compute the DC component from just a few seconds of the sound, because the DC component never changes; if it changed, it wouldn't be DC. This leaves room for further speed improvements.
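To make the idea concrete, here is an illustrative sketch in Python (not the plugin's Nyquist code; the function name is hypothetical): because true DC is constant, the mean of a short excerpt is as good as the mean of the whole signal.

```python
import numpy as np

def dc_component(samples, sample_rate, seconds=2.0):
    """Estimate the DC offset from only the first few seconds of the signal.
    If the offset is true DC (constant), the mean of a short excerpt
    equals the mean of the whole signal."""
    excerpt = samples[: int(seconds * sample_rate)]
    return float(np.mean(excerpt))
```

Analyzing two seconds instead of, say, an hour-long recording is where the speed improvement comes from.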
But again reading variable names like MinPeak and MaxPeak makes me go nuts:
A signal is not a graph. A signal consists of an AC component (Alternating Current, in audio processing this component contains the audio information) and a DC component (Direct Current, generally unwanted in audio processing, but sometimes useful in envelopes).
- The Minimum value of a signal is always zero; there is no “Minimum Peak”.
- The Peak value of a signal is the greater absolute value of the most-positive sample and the most-negative sample, computed from the AC component, with the DC component already removed.
This means that the PEAK value of the AC component is the same as the PEAK value of the inverted AC component, so there is no need for two peak variables. It is important, however, to remove the DC component from the signal first, before computing the PEAK value.
The Nyquist PEAK function already takes care of computing the absolute value of the most-positive and most-negative sample, provided the sound given to PEAK as its argument is the AC component without the DC component.
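The same logic, sketched in Python for illustration (the function name is hypothetical): remove DC first, then one absolute maximum covers both polarities.

```python
import numpy as np

def signal_peak(samples):
    """Peak in the DSP sense: remove the DC component first, then take the
    larger absolute value of the most-positive and most-negative sample."""
    ac = samples - np.mean(samples)      # DC removal
    return float(np.max(np.abs(ac)))     # abs() covers both polarities
```

Since `signal_peak(sig)` equals `signal_peak(-sig)`, one peak variable is indeed all that is needed.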
Never confuse “most-negative” with “min” in signal processing; this leads to bugs that are very hard to find. Nyquist itself is a bad example in this regard, because S-MIN and S-MAX both treat signals like graphs and therefore return wrong values in the DSP sense.
Simplified version (I hope it’s easy to understand, but not really good coding style):
(defun normalize-mono (input-sound)
  (let* ((DC (sref (sound (mult (integrate input-sound)
                                (/ (get-duration 1))))
                   (/ (1- len) *sound-srate*)))
         (AC (sum input-sound (- DC)))
         (MaxPeak (peak AC ny:all)))
    (if (zerop MaxPeak) ; avoids division-by-zero error
        (if (zerop DC) input-sound AC)
        (scale (/ 1.0 MaxPeak) AC))))
In case of “normalizing without DC removal” the last SCALE line becomes:
(sum DC (scale (/ (- 1.0 (abs DC)) MaxPeak) AC))
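A direct Python transcription of that SCALE line, for illustration only (the function name is hypothetical, and later posts in this thread revise this formula): the AC part is normalized into the headroom left over after reserving abs(DC), and the unchanged DC offset is added back.

```python
import numpy as np

def normalize_retain_dc(samples, target=1.0):
    """Normalize the AC part only, leaving headroom of abs(DC), then
    add the unchanged DC offset back on top."""
    dc = float(np.mean(samples))
    ac = samples - dc
    max_peak = float(np.max(np.abs(ac)))
    if max_peak == 0.0:
        return samples.copy()            # silence (possibly offset): nothing to scale
    return dc + ac * (target - abs(dc)) / max_peak
```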
Question: What is a practical use case for “normalizing without DC removal”? Have I overlooked something?
Sorry for the rant, the code includes all-in-all some very good improvement ideas.
That’s nice. Yes, there are some excellent improvement ideas. I’ll need to contemplate for a while.
DC is not “sound”, so a “sound” card has no right to pass DC, so the DC component of a sound recording should always be zero, so removing a DC component should be unnecessary.
Are you suggesting that the Normalize effect should “always” remove any DC component and that it should not be a user selectable option?
Now (six o’clock in the morning) a new version, rewritten according to the “AC and DC components” theory. This is probably still not the final version, because, for example, I still have not found a good abstraction for how the code could be MULTICHAN-EXPANDed to an arbitrary number of channels.
…but only in theory, I’m often enough forced to work with bullsh**t equipment, where removing unwanted DC is the most important function of all.
The point is that I have never found a single use case where normalizing without removing unwanted DC made sense. But it’s also not a good attitude to bigheadedly say that nobody will ever need it. It often happens that one day I need some particular settings, and the next day, in a similar situation, I need exactly the opposite. It’s difficult to say what is “unnecessary” and what is not.
But currently there are some nonsensical settings in the plugin, like “neither remove DC nor normalize”, which in the end does nothing at all.
Normalize-2.ny (4.42 KB)
the points that I made when I wrote the proposal were:
- DC offset removal (if present) should ideally be the first thing done after capture and before all other editing
- Normalizing/amplitude adjustment is normally the final editing function, performed after all other editing has been done and prior to exporting useful audio files.
Hence the need/desire to separate the two effects - am I wrong in this?
I do agree that if a user has DC on the signal, and they have not removed it prior to other editing, then it is useful to use the opportunity of normalizing to correct the DC (but remember that until 1.3.13 and 2.0 we were advising users to make amplitude adjustments with Amplify rather than Normalize, as until then Normalize insisted on working on each channel of a stereo pair independently).
The other reason I don’t like “hiding” DC removal in the Normalize function is that it tends to blind the user to the fact that they have DC on the signal, which is normally the sign of a poor soundcard and should really be dealt with at the hardware level by replacement.
My other gripe is that if you run Normalize just to remove DC offset, with amplitude adjustment turned off, the progress dialog still says "Normalizing … " when it is clearly doing no such thing.
Another problem with bullsh**t equipment is that the offset may not be constant DC but rather a slowly drifting offset. I think this is a case for offering more than one type of offset correction (probably not as part of the Normalize effect but rather as a separate “Offset Correction” effect). A drifting offset is subsonic AC rather than DC, so a high-pass filter is required to remove it rather than (only) DC offset correction, but this is drifting off into a different topic.
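A common way to implement such an “Offset Correction” for a drifting (subsonic AC) offset is a first-order DC-blocking high-pass filter. A minimal sketch in Python, for illustration only (the function name and the choice of coefficient are hypothetical, not anything from Audacity):

```python
import numpy as np

def dc_blocker(samples, r=0.999):
    """First-order DC-blocking high-pass: y[n] = x[n] - x[n-1] + r * y[n-1].
    Removes constant DC and slowly drifting (subsonic) offsets alike;
    r close to 1.0 keeps the cutoff frequency well below the audio band."""
    y = np.empty(len(samples))
    prev_x = 0.0
    prev_y = 0.0
    for n, x in enumerate(samples):
        prev_y = x - prev_x + r * prev_y
        prev_x = x
        y[n] = prev_y
    return y
```

Unlike subtracting a single measured offset, this filter tracks the offset over time, which is exactly what a drifting offset requires.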
I’ve been thinking hard for an example of when Normalizing without removing DC makes sense. The only example that I can think of is if Audacity is being used for processing/editing non-audio signals (which some users do use it for). So even though it is very much a fringe case I think that DC correction should be retained as an option.
There is also the issue of speed.
In the Audacity effect, calculating the DC offset requires looping through the samples in an analysis stage and this definitely slows down the processing (by about 25% on my computer). There’s a comment in the Audacity Normalize code:
“// we don’t need an analysis pass if not doing dc removal”
So it looks like the intention is to minimise processing time.
In the Audacity Normalize implementation calculating minimum and maximum sample values is (so I’ve been told) a lot faster than calculating the offset.
In Nyquist, using the “integrate” function there appears to be little if any speed penalty in calculating the DC offset.
It does seem nonsensical to include an option to disable normalizing in the “Normalize” effect.
I think that this possibility exists because there is no separate “Offset Correction” effect. It would imho make a lot more sense for the “Normalize” effect to always normalize (with DC correction as an option) and if offset correction is required without normalizing then an “Offset Correction” effect should be used.
We see quite a few posts from users that have DC offset but have not found the correction tool. If offset correction was a separate tool then it would be a lot more visible as well as resolving the nonsense of the “do nothing” option in Normalize.
Thanks for the Normalize-2.ny code. It’s very instructive.
It also very clearly indicates the bizarreness of normalizing without DC offset correction.
(if normalize without DC correction → normalize the AC signal and add the DC offset)
The bad news is that the Nyquist speed improvement comes from the fact that Nyquist loads all samples from the Audacity selection into memory, and analyzing a sound in memory is of course much faster than reading the samples from disk.
But unfortunately this means that the Nyquist speed improvement is based on the same effect that causes the well-known Nyquist memory problems, or in other words, with a proper Audacity Nyquist implementation, Nyquist would not be faster than the Audacity “Normalize” code, it would more likely be slower.
This also means that the current plugin code will crash Audacity if the selection does not fit into the computer’s memory. But it’s an experimental plugin, and I think that we still work in the right direction.
More detailed answer will come later …
Unfortunately my code is wrong, because “normalize without DC correction” must be:
- normalize the AC signal + add the DC offset, amplified by the same amount as the AC signal
and this is indeed the same as:
- MaxPeak = difference of the most-positive sample to 1.0
- MinPeak = difference of the most-negative sample to -1.0
- amplify the entire signal AC+DC by: (min (abs MinPeak) (abs MaxPeak))
This is the reason for the “without DC removal we need no analysis pass” comment in the Audacity C/C++ code, and this is indeed faster than my AC/DC code. I still try to understand the code in “audacity/src/effect/Normalize.cpp” …
;; no dc-removal, but normalizing
(if (zerop pk)
    input-sound ; silence
    (mult (/ amp (+ pk (abs dc))) (sum dc ac)))))))
Another minor problem is that the DC offset calculation is not always exact.
(defun compute-dc (input-sound)
  (sref (sound (mult (integrate input-sound)
                     (/ (get-duration 1))))
        (/ (1- len) *sound-srate*)))
If the computed value is not exactly the same as the actual DC offset, then for a constant, non-zero signal
(peak (sum input-sound (- dc)) ny:all)
will be non-zero so the “silent” but ever so slightly offset sound will be normalized to the target value rather than simply having the offset removed.
A solution is to allow for a little inaccuracy by changing
(if (zerop pk)
to something like:
(if (< pk (db-to-linear -96))
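The same tolerance check, sketched in Python for illustration (function names are hypothetical):

```python
def db_to_linear(db):
    """Convert a dB amplitude value to a linear factor."""
    return 10.0 ** (db / 20.0)

def is_effectively_silent(peak_value, floor_db=-96.0):
    """Treat any residual peak below the floor as silence, so that a constant
    (pure DC) signal with a slightly inexact offset estimate is not scaled
    up to the target level by the normalizing gain."""
    return peak_value < db_to_linear(floor_db)
```

-96 dB corresponds to a linear amplitude of about 1.585e-5, i.e. below the quantization noise of 16-bit audio, so no real audio content is misclassified as silence.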
I think this version could be adapted for “multichan-expand”-ing if dc-list and peak-list were arrays rather than lists, but I’ve just used a “do” loop instead.
I’ve removed the nonsensical “don’t normalize” option.
“Remove any DC offset: (default Yes)” has been reversed to “Retain DC offset: (default No)”.
“Normalize Stereo Channels Independently: (default No)” has been reversed to “Link Stereo Channels: (default Yes)”.
This version supports multi-channel sound (although Audacity currently only supports a maximum of 2 channels).
Normalize-3.ny (2.21 KB)
Re-thinking all that stuff over and over again, and taking into account that PEAK already includes an ABS function, it turns out that “normalize without DC correction” is nothing else but:
(mult s (/ amp (peak s ny:all)))
and in this case there is indeed no need for a DC analysis.
Is this really true? … I think yes.
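An illustrative check in Python (the function name is hypothetical): scaling the raw signal by amp over its absolute peak amplifies AC and DC by the same factor, which is exactly what "normalize without DC correction" means, and no separate DC analysis is needed.

```python
import numpy as np

def normalize_whole_signal(samples, amp=1.0):
    """Normalize without DC correction: scale AC and DC together so that
    the absolute peak of the raw signal reaches amp. No DC analysis pass
    is needed, because abs() already covers both polarities."""
    pk = float(np.max(np.abs(samples)))
    return samples * (amp / pk)
```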
This is not nonsensical (is not nonsensical = sensical?), I need “DC-removal without normalizing” really often. What I meant was:
- Normalize with DC-removal - makes sense
- Normalize without DC-removal - makes sense?
- Do not normalize with DC-removal - makes sense
- Do not normalize without DC-removal - is complete nonsense
Also the trick using peak-lists with MAPC for multichannel sounds is a great idea.
Quotes from waxcylinder in post 178468 on the first page of this thread:
DC-removal (if DC is present) must be the very first step after importing. This must be hammered into every user’s brain.
Normalizing is the first step after importing, while the last step before exporting is usually called “mastering” and goes far beyond normalizing; it usually includes dynamics compression and intentional peak clipping, a.k.a. limiting. Unfortunately Audacity has no real-time audio effects, so it is nearly impossible to do dynamics compression or mastering in Audacity in a meaningful way. There is then no other way than to use normalizing instead of mastering, but this is nothing but a limitation imposed by Audacity. In real life, normalizing and mastering are two quite different things.
I fully agree, there should appear a “Your soundcard is sh*t” window to make the user aware that s/he has paid money for a piece of crap.
Seriously though Edgar, does your response mean that you would support the separation of the DC-Removal and Normalize functions? My proposal on the Wiki could do with a bit more support.
The practical situation looks like this:
- Normalize is a two-pass process: (1) find the maximum peak, (2) amplify to the given level
- DC-Removal is a two-pass process: (1) find the DC component, (2) subtract the DC component
- Normalize (1) and DC-Removal (1) can both be done in one and the same pass
- Normalize (2) and DC-Removal (2) can also both be done in one and the same pass
If you want to perform “Normalize with DC removal” with two separate Normalize and DC-Removal functions then you need 4 passes instead of 2, and with long audio recordings it makes a difference whether you need to wait ten minutes or twenty minutes until the work is done. You can save a lot of time if both functions are unified in the same effect. This is the only reason.
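The pass sharing described above, sketched in Python for illustration (function names are hypothetical, not Audacity's code): the DC sum and the extrema come out of one loop, and DC subtraction and gain are applied in one expression.

```python
import numpy as np

def analyze(samples):
    """Pass 1, shared: collect the running sum (for the DC offset) and the
    extrema (for the peak) in one and the same loop over the samples."""
    total = 0.0
    most_pos = most_neg = 0.0
    for x in samples:
        total += x
        if x > most_pos:
            most_pos = x
        if x < most_neg:
            most_neg = x
    dc = total / len(samples)
    return dc, max(most_pos - dc, dc - most_neg)   # peak of the AC component

def normalize_with_dc_removal(samples, amp=1.0):
    """Pass 2, shared: subtract the DC offset and apply the gain together."""
    dc, pk = analyze(samples)
    return (samples - dc) * (amp / pk)
```

Two separate effects would repeat each of these loops, hence four passes instead of two.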
From the “logical” workflow viewpoint it seems to make sense to separate the functions, but in practice it is not a good idea.
Ok so I can see the need for retaining DC removal in the Normalize effect for:
- those who feel the need for speed
- those who overlooked it after capture
But I would still like to have an additional effect which just does DC-removal.
- It makes the concept of DC on the signal more obvious to the user and more discoverable in the documentation
- If James ever gets around to enhancing the effects hot-key bindings (in 2.0.1 alpha now) to become really HOT keys with immediate action and no dialog, then I for one would definitely want separate DC Removal and Normalize effects (but I'm happy to have DC removal remain in Normalize - I just wouldn’t use it there).
Why “Don’t Normalize” isn’t sensical
(lovely word Edgar )
Let’s say that you don’t want to compress the dynamics.
Which is more sensical?
A) Apply the Compression effect and select “Do not compress”.
B) Don’t apply the Compression effect.
Let’s say that you don’t want to normalize.
Which is more sensical?
A) Apply the Normalize effect and select “Do not normalize”.
B) Don’t apply the Normalize effect.
“I need “DC-removal without normalizing” really often”
Then apply the “Offset Removal” effect.
DC removal could be bundled into the Equalization effect (frequency = 0) and/or into the Amplify effect (does the user really want to amplify the DC components?).
As an example, let’s say that a user wants to amplify by +6 dB but they don’t want the DC component (which will be doubled when they amplify):
1) Apply the Normalize effect (“normalize” disabled, “DC removal” enabled) → 2) Apply the Amplify effect.
How very confusing - this operation has nothing to do with Normalizing, in fact we specifically do not want to normalize so why on Earth should we need to use the Normalize effect?
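The arithmetic behind “the DC component will be doubled”, with hypothetical example numbers in Python: +6 dB is a linear gain of about 2, so a 0.1 offset becomes roughly 0.2 unless it is removed first.

```python
import numpy as np

gain = 10.0 ** (6.0 / 20.0)                  # +6 dB as a linear factor, ~1.995

sig = np.array([0.5, -0.3, -0.2]) + 0.1      # zero-mean AC plus a 0.1 DC offset

amplified_naively = sig * gain               # the offset is roughly doubled to ~0.2
dc_removed_first = (sig - np.mean(sig)) * gain   # the offset stays at zero
```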
I totally agree. This is a powerful argument for why DC-offset removal should be included (default) in the Normalize effect but I don’t see that it precludes having a separate “Offset” tool.
Although “Normalize without DC offset removal” is rarely desirable, I think we agreed that there may be situations where it is needed (e.g. when processing non-audio signals where the DC component is required and the signal capture device is DC coupled). Similarly (and equally rarely) there may be situations where the user wants to add a DC offset (I would have found that useful this week). There are also situations where a user may want to remove a non-DC offset (an offset that is not constant, i.e. very low frequency AC).
We can discuss later what features an “Offset” tool may have and what exactly the tool is called, but it should not be called “Normalize”.
Normalize and Amplify should be unified (only technical people understand that DC = 0 Hertz), but I’m afraid that in this case Peter is right: the DC-removal side effect of notifying the user about the poor quality of his or her soundcard would then be too well hidden behind all the amplification options.
I didn’t write that amplifying the DC component makes any sense for audio editing; this is simply the physical behaviour of an electronic amplifier. An amplifier amplifies both AC and DC. I can’t think of a single case where amplifying the DC component would be useful in an audio editor.
I still hesitate to remove “Normalize without DC offset removal” from the options, but I can’t explain why.
Absolutely. I think Audacity is an audio editor (please correct me if I have this wrong), so the built-in effects should be audio-related only, while intentional DC offsets for non-audio signals can be covered by external plugins, where I have no problem with them.
Yes Audacity is described as an “audio” application (Audacity® is free, open source, cross-platform software for recording and editing sounds) but it is also widely used (particularly in educational environments) for non-audio signal processing.
Some of the stranger examples from the forum: