Normalize and DC offset correction

Why “Don’t Normalize” isn’t sensical

(lovely word Edgar :smiley:)

Let’s say that you don’t want to compress the dynamics.
Which is more sensical?
A) Apply the Compression effect and select “Do not compress”.
B) Don’t apply the Compression effect.


Let’s say that you don’t want to normalize.
Which is more sensical?
A) Apply the Normalize effect and select “Do not normalize”.
B) Don’t apply the Normalize effect.


“I need ‘DC removal without normalizing’ really often”
Then apply the “Offset Removal” effect.

DC removal could be bundled into the Equalization effect (frequency = 0) and/or into the Amplify effect (does the user really want to amplify the DC components?).
As an example, let’s say that a user wants to amplify by +6 dB but they don’t want the DC component (which will be doubled when they amplify):

  1. Apply the Normalize effect (“normalize” disabled, “DC removal” enabled).
  2. Apply the Amplify effect.

    How very confusing - this operation has nothing to do with normalizing; in fact we specifically do not want to normalize, so why on Earth should we need to use the Normalize effect?
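The doubling described above is simple to check numerically. A minimal plain-Python sketch (the sample values are made up for illustration):

```python
# A signal with a +0.1 DC offset riding on a small square wave
# (hypothetical sample values).
sig = [0.1 + s for s in (0.2, -0.2, 0.2, -0.2)]
dc_before = sum(sig) / len(sig)            # the DC component, ~0.1

gain = 2.0                                 # +6 dB is a gain factor of ~2
amplified = [gain * s for s in sig]
dc_after = sum(amplified) / len(amplified)

# The DC component has been doubled along with everything else.
```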

I totally agree. This is a powerful argument for why DC-offset removal should be included (enabled by default) in the Normalize effect, but I don’t see that it precludes having a separate “Offset” tool.

Although “Normalize without DC offset removal” is rarely desirable, I think we agreed that there may be situations where it is needed (e.g. when processing non-audio signals where the DC component is required and the signal capture device is DC coupled). Similarly (and equally rarely) there may be situations where the user wants to add a DC offset (I would have found that useful this week). There are also situations where a user may want to remove a non-DC offset (an offset that is not constant, i.e. very low frequency AC).

We can discuss later what features an “Offset” tool may have and what exactly the tool is called, but it should not be called “Normalize”.

Normalize and Amplify should be unified (only technical people understand that DC = 0 Hz), but I’m afraid that in this case Peter is right: the DC-removal side effect, which notifies the user about the poor quality of his/her soundcard, would then be too well hidden behind all the amplification options.

I didn’t write that amplifying the DC component makes any sense for audio editing; it is simply the physical behaviour of an electronic amplifier. An amplifier amplifies both AC and DC. I can’t think of a single case where amplifying the DC component would be useful in an audio editor.

I still hesitate to remove “Normalize without DC offset removal” from the options, but I can’t explain why.

Absolutely. I think Audacity is an audio editor (please correct me if I have this wrong), so the built-in effects should be audio-related only, while intentional DC offsets for non-audio signals can be covered by external plugins, where I have no problem with them.

Yes Audacity is described as an “audio” application (Audacity® is free, open source, cross-platform software for recording and editing sounds) but it is also widely used (particularly in educational environments) for non-audio signal processing.

Some of the stranger examples from the forum:

I love tricks like the one with the integrate function. But aren’t the values provided by this function rather surprising? I mean, one has to multiply the result by the sample rate to get the running sum, e.g. to get 1-2-3 from 1-1-1. Apropos 3: reading the integral at the last sample doesn’t give the sum of all the samples, which is why the DC calculation is (very) slightly off. If the DC is calculated with an appended silence, we get the whole sum:

(defun calc-dc (sig)
  (sref (sound (mult (integrate sig)
                     (/ (get-duration 1))))
        (/ (1- len) *sound-srate*)))

(defun calc-dc2 (sig)
  (* (snd-sref (integrate (seq sig (s-rest)))
               (/ len *sound-srate*))
     (/ (get-duration 1))))

(format t "DC-offset ~a~%and with silence for correction: ~a~%"
        (calc-dc s)
        (calc-dc2 s))


(abs-env (sim 0.17  (s-rest  1.02)))

You must run the code twice: first to generate the constant offset, then to calculate the DC component afterwards. Well, I admit this correction is rather academic. Even if you produce a tone with an osc function you’ll get a DC offset much larger than this correction (try for example osc-tri: the DC offset will probably be about -0.25, but that’s because the tri-table is currently not usable, a problem I will post about in another thread).
I wonder how professional plug-ins handle the DC correction, or in other words, who decides what is AC and what is DC?
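The off-by-one being corrected here can be seen outside Nyquist too. A minimal plain-Python sketch (the 0.17 offset mirrors the test snippet above; that integrate's value at the last sample excludes that sample's contribution is my reading of the post):

```python
sig = [0.17] * 100                  # a constant 0.17 DC offset

# Reading the integral at the last sample misses that sample's
# contribution, like calc-dc: 99 terms divided by 100.
dc_short = sum(sig[:-1]) / len(sig)   # ~0.1683, very slightly low

# With one sample of appended silence the integral covers the whole
# signal, like calc-dc2: the full sum divided by 100.
dc_full = sum(sig) / len(sig)         # ~0.17
```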

Footnote: I note that DC removal is sometimes being done in hardware now. My new Tosh W7 laptop with a Realtek card has in its sound services a DC removal option (which is set “on” by default).

Afterthought: maybe it’s being done in the device driver software - but it’s certainly being done before it reaches Audacity.

Peter

The other reason is that the calculation is done in single precision, so when there are a lot of samples there are significant rounding errors.

The same way that Audacity does it - by taking an average of all the sample values.
Some applications (e.g. Cool Edit Pro/Audition) offer two types of DC offset removal - “absolute” (average sample value) and “high-pass filtering” (suitable for real-time correction).
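Both approaches fit in a few lines. A plain-Python sketch: the averaging method is the one described in this thread, while the one-pole “DC blocker” recursion is a standard real-time high-pass trick (the coefficient 0.995 is just a typical choice, not anything Cool Edit specifies):

```python
def remove_dc_absolute(x):
    """Subtract the average of all sample values ("absolute" removal)."""
    mean = sum(x) / len(x)
    return [s - mean for s in x]

def remove_dc_highpass(x, r=0.995):
    """One-pole DC blocker: y[n] = x[n] - x[n-1] + r * y[n-1].
    Works sample by sample, so it is usable for real-time correction."""
    y = []
    prev_x = prev_y = 0.0
    for s in x:
        prev_y = s - prev_x + r * prev_y
        prev_x = s
        y.append(prev_y)
    return y
```

The absolute method needs the whole signal up front (to compute the mean); the filter only needs the previous sample, which is why it suits live processing.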

After a little contemplation, the values of (integrate sig) make sense: they are simply the product of the amplitude and the length of one sample. Thus with an infinitely high sample rate one would obtain the precise area under the curve (T0 to Tn). The signal is therefore treated as a continuous one rather than as a discrete one and its running sum. I was only surprised when I saw those little numbers (e.g. 2.26e-005) and expected greater ones. The documentation doesn’t mention the involvement of the time domain.
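The sample-rate scaling described here is easy to reproduce. A plain-Python sketch (the rate of 4 samples per second is made up to keep the numbers exact):

```python
sr = 4.0                   # hypothetical sample rate: 4 samples per second
x = [1.0, 1.0, 1.0]

# Discrete running sum: 1, 2, 3.
running = []
total = 0.0
for s in x:
    total += s
    running.append(total)

# Continuous-style integral: each sample is a rectangle of width 1/sr,
# so the area so far is the running sum scaled by 1/sr.
integral = [v / sr for v in running]     # 0.25, 0.5, 0.75

# Multiplying back by the sample rate recovers 1-2-3.
recovered = [v * sr for v in integral]
```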
I’ve just made some experiments with the convolve function, and depending on the impulse response you can also get the slope and integrate functions as output. I thought to myself that with the right filter kernel the 0 Hz component could be eliminated as well (in other words, a suitable high-pass filter). I know that convolution is terribly slow, but in order to understand the DSP principles one has to look at all sides of the matter.
RJ

Yes, terribly :stuck_out_tongue:

Topic split to here: https://forum.audacityteam.org/t/convolution-dsp/25518/1