effects - normalize vs amplify

Is Normalize also the Amplify effect but only to -3.0 dB? If so, why is it set to -3.0 dB?

What is DC offset?

Normalize can be set to any value you like up to 0dB (Audacity 1.3.x)

If you record “silence”, Audacity should give a flat line exactly in the middle of the track (marked zero on the vertical scale, which is the same as “minus infinity decibels”). Due to imperfections in sound cards, the signal is not always exactly at zero; it may be at +0.001 or -0.014 or some other value. This is the DC offset, and it can be corrected with the Normalize effect.

Other than that, Normalize and Amplify are more or less the same thing.

When you have just recorded something, it can be useful to Normalize to -3 dB as this will remove any DC offset and bring the volume to a good level, but maintain 3 dB of overhead so that you are less likely to run into clipping as soon as you start applying any effects. (If you normalize to 0 dB then run a high or low pass filter, you are quite likely to get some clipping due to a slight boost that occurs near the cut-off frequency.) Allowing a few dB of headroom makes it much easier to avoid clipping without sacrificing too much dynamic range.
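For the curious, the arithmetic behind "normalize to -3 dB" looks roughly like this. This is only a sketch of peak normalization (ignoring the DC offset correction), assuming floating-point samples in the -1.0 to 1.0 range; it is not Audacity's actual code:

```python
def normalize(samples, target_db=-3.0):
    """Scale the whole signal so its highest peak lands on target_db
    (0 dB = full scale, i.e. an amplitude of 1.0)."""
    peak = max(abs(s) for s in samples)
    gain = 10 ** (target_db / 20) / peak
    return [s * gain for s in samples]

sig = [0.1, -0.25, 0.5]       # peaks at 0.5, roughly -6 dB
out = normalize(sig, -3.0)    # peak is now 10^(-3/20), about 0.708
```

The 3 dB of headroom is simply the gap between that 0.708 peak and full scale at 1.0.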

They’re not quite the same thing, but the difference is easy to miss. Steve is on the money except for one thing:
They behave differently when you apply the effect to multiple tracks.

If you apply Amplify to more than one track, each track will be amplified by whatever value you entered into the dialog box. If you entered +5.4dB, both tracks will be amplified +5.4dB.

If you apply Normalize to more than one track, each track will be amplified separately in order to get that track to the specified peak value. If track 1 peaks at -6dB, track 2 peaks at -12dB, and you apply Normalize at -3dB, track one will be amplified by 3dB and track 2 will be amplified by 9dB.
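The difference can be written out as simple arithmetic on the peak levels. A sketch using the numbers from the example above (the gains in dB just add and subtract):

```python
# Peaks of two tracks, in dB below full scale:
track_peaks_db = [-6.0, -12.0]

# Amplify: the one value typed into the dialog is applied to every track.
amplify_by = 5.4
after_amplify = [p + amplify_by for p in track_peaks_db]

# Normalize: each track gets its own gain so that ITS peak hits the target.
target_db = -3.0
normalize_gains = [target_db - p for p in track_peaks_db]
# track 1 is amplified by 3 dB, track 2 by 9 dB; both now peak at -3 dB
```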

Good point alatham, I forgot to mention that. It is also very relevant if you are using “chains” (batch processing).

I do almost everything with an idea of my audio peaks hovering around -3. It’s a good compromise and is the only value available in Audacity 1.2. Audacity 1.3 gives you the ability to choose any value you want. I still pick -3.

Normalize is an automatic Amplify. Apply 6 dB of Amplify to a sound track and everything gets louder by 6 dB whether or not that causes damage. Apply a Normalize value of -6 and Normalize will change the volume of the whole show as much as it needs so that the highest audio peak just touches -6.

Normalize is absolute. Amplify is relative.

Everybody wants Normalize to fix the volume variations between badly recorded performers. Sorry. It won’t do that. Actor B will remain twice as loud as Actor A no matter which tool you apply to the whole show.

It can help to blow up the timeline and change it from percent to dB so the meters match the timeline.
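If you're wondering how the percent scale on the timeline relates to the dB scale on the meters, the conversion is just a logarithm (assuming dB here means dB relative to full scale):

```python
import math

def percent_to_db(pct):
    """Convert a waveform level shown in percent of full scale to dB."""
    return 20 * math.log10(pct / 100)

percent_to_db(100)   # 0 dB (full scale)
percent_to_db(50)    # about -6 dB
percent_to_db(10)    # -20 dB
```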

http://kozco.com/tech/audacity/Audacity1_full.jpg

It’s under the black down arrow on the left.

DC Offset.

This one’s rough. Bad sound cards and cheap audio electronics sometimes add a small battery voltage to the audio. You can’t hear it, but it causes no end of grief if you want to do audio effects. If you find popping and clicking or thumping that you can’t identify as you’re editing–say at the beginning and the end of every effect–you may need to remove the DC offset with the Normalize tool, or wherever it is in 1.3.
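The simplest possible sketch of what DC offset removal does is just "subtract the average sample value" (Audacity's actual implementation may be more sophisticated, but the idea is the same):

```python
def remove_dc_offset(samples):
    """Centre the waveform by subtracting the mean (the DC component)."""
    offset = sum(samples) / len(samples)
    return [s - offset for s in samples]

# A waveform riding on a constant +0.014 "battery voltage":
biased = [0.014 + x for x in (0.0, 0.5, 0.0, -0.5)]
centered = remove_dc_offset(biased)
# the samples now average to zero, back on the centre line of the track
```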

Is that enough? It’s a pretty serious topic and I can put you right to sleep if you want.

Koz

Thank you for your time and for sharing your knowledge. I am acquiring a lot of very useful audio information.

Okay, I know this is the type of question that will drive the experts crazy, because it shows that I have absolutely no grasp of the theory and concepts behind audio recording. It’s sort of like someone using a word processor for the first time and asking if he/she should play with the kerning on characters, and if so, by what amount. The answer is, of course, “It depends.”

I’m a noob, and I’m trying to learn all this, but it’s going to be slow. In the meantime, I’ve just recorded a podcast. I recorded it as mono (I don’t know why, but I did) and now I have 6 separate tracks. (I did that because it’s easier for me to find and correct goofs in smaller segments.)

I’m now getting ready to export the whole thing as an MP3. I thought I read somewhere that I should Amplify before I do that. Reading this thread now, I’m thinking that I should select all and then Normalize.

Is that true? My goal is to make all 6 tracks a similar loudness so that when I glom them together, the listener won’t notice that this is a patchwork of six files.

Morven

<<<I recorded it as mono (I don’t know why>>>

Audacity wakes up from first birthday in Mono, not Stereo. Change in the Audacity Preferences.

<<<now I have 6 separate tracks. (I did that because it’s easier for me to find and correct goofs in smaller segments.)>>>

It’s easier for me to work like that, too.

<<>>

If most of your project peaks on the green playback meter go up to about -6 or -3 or so, then on average, the show levels are pretty much normal. This is where you close your eyes and play the piece beginning to end. Does it sound normal? Are there any places where you lose it because it got too low in volume? Since you’re producing the show in snippets, it’s a snap to go back and adjust individual tracks.

Save Project and Export WAV before you create your MP3 show. The Project will allow you to bring back your original layers (assuming you don’t throw any of the original sound files away) although it will not bring back the UNDOs. I’m not sure why that is.

The WAV will give you a very high quality mono or stereo backup from which you can create another MP3, Audio CD, MP4 for the web, etc.

Koz

Koz, I think it is designed this way so that Audacity can release the .au files that are “unused” after the edits to free up space on the drive.

It would be useful, though, to have an addition to the “Close Project” function that retained the Undo history and the .au files, selectable as an option.

WC

I agree. It seems to me that retaining UNDO is a very important reason for using Projects.

Koz

but then we would get loads of posts saying “help, I’ve used 300 GB of disk space and I’ve only recorded 3 albums”.

The warnings about excessive disk space usage would have to be in massively big letters.

I have just recently transferred a feature request from the forum to the Wiki requesting retention of the Undo trail - switchable on/off - please go to the Wiki and vote if you feel strongly.

WC

<<<“help, I’ve used 300 GB of disk space and I’ve only recorded 3 albums”.>>>

As opposed to the people who save their multi-thousand dollar project and go home only to come back the next day and discover they have no show because the UNDO vanished. I take option A. Audacity is way too talented at destroying your show.

Remember WordStar? I don’t either. It was a very early word processor that even in the face of the early ratty machines would very rarely flush your work. It had a design philosophy that under no circumstances was any original work to be damaged or deleted. Ever.

Koz

<<>>

…by clicking on this link…

Koz

ok, ok :slight_smile: - the link for the Wiki FR page is: http://audacityteam.org/wiki/index.php?title=Feature_Requests

And you will find it in: Improved Resource Control > Undo Buffer Management > Save Undo History into the project file

WC

Good point indeed.

Now, if the track contains multiple clips, they are normalized together (multiplied by the same amount).
Now I zeroed part of a track (Ctrl+L, 1.3.5) and normalized the whole of it, only to see the zero changed to something almost, but not quite, zero. What's that?

Oof, I just wanted to ask whether the two channels of a stereo track are normalized together or separately,
and thinking turned out to be dangerous.
Good night.

The Normalize effect is applied to the track. Audacity will calculate how much to amplify the track by to achieve the target peak level. (it would be a problem in many situations if it did not work like this).

If you are not working with a 32 bit track format, the Normalize effect will apply dither to the output. Didn’t you recently have a conversation on the forum about how Audacity applies dither to silence?

I think it would be better if Audacity did not apply dither to long periods of silence (even though it is inaudible). It makes sense for Normalize to apply dither, since there may be slight (very slight) clicking as an audio signal fades out due to quantising errors, but I think after a few samples of silence, dither should not be used.

Take this example:
Generate silence (16 bit)
Normalize
Normalize (again)
Logically you would expect silence, but what you actually get is full scale noise. Of course this is an artificial test (why would you want to repeatedly normalise a silent track?), but I think it illustrates a flaw in applying dither without considering the context.
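That three-step example can be simulated. This is a rough sketch only: it assumes simple triangular (TPDF) dither of about one quantisation step, whereas Audacity's default is shaped dither, but the mechanism is the same: Normalize on pure silence can only add dither, and the next Normalize amplifies those dither spikes to full scale:

```python
import random

random.seed(0)
LSB = 1 / 32768   # one quantisation step of a 16-bit track

def quantize_16bit_with_dither(samples):
    """Round to 16-bit steps after adding triangular (TPDF) dither."""
    out = []
    for s in samples:
        dither = (random.random() - random.random()) * LSB
        out.append(round((s + dither) / LSB) * LSB)
    return out

def peak_normalize(samples, target=1.0):
    peak = max(abs(s) for s in samples)
    return [s * target / peak for s in samples] if peak else list(samples)

silence = [0.0] * 1000                                      # 1. generate silence
once = quantize_16bit_with_dither(peak_normalize(silence))  # 2. Normalize
twice = quantize_16bit_with_dither(peak_normalize(once))    # 3. Normalize again
# `twice` now peaks at (or within a step of) full scale, not silence
```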

As you know, Audacity treats stereo tracks as 2 mono tracks (side by side). Each (mono) channel is normalised independently, just like any other mono track. This can be quite useful if you have a recording that has the pan out of balance, providing a simple way to correct it.

  1. The point is that it did not look like the usual dither.
    1a. There were spikes of the same height. In the dither noise on export I usually see about three different heights. (Well, might it be the ‘fast’ dither now?)
    1b. The spikes were only of positive polarity.
    OK, I took another look today. (Indeed I had a 16-bit track.) To be continued a few lines below…

  2. I have a bad memory. What I would say I remember is this: perhaps I had a conversation on the forum about dither,
    and then notified Gale on -devel about my observation of non-dithered silence.

To restate it: my observation is that it (1.3.5) does not dither silence (perhaps not even short periods).

… so we can continue.
A picture (normalized, normalized silence inside a longer low-level track): please look only at the upper track.
The left part suffered one more normalization. You see Large dither and Small dither combined there.
The small dither is one unit in height, sometimes up and sometimes down (nevertheless always starting from zero, which I decided to ignore right now).
The large dither is one Unit in height too (however the Unit is larger than the previous unit, because it was amplified).
However it does not consist of spikes of both polarities. Instead it is like —.–.----.–.—…-----
two levels, where the upper is much more prevalent.


This might be caused by interference between “dither only nonzero” and DC-offset removal
(a bit of silenced-out audio in a track probably gets DC-biased when the whole track is DC-removed.)
So I feel there is something a bit unhealthy here. I do not say whether it is important or not, but once dither is considered
important, it should work as expected, not just however it happens to happen.

This does not need dither: copy & cut (in the same format), add/mixing (with multiplier 2), special amplifications (integer multiplier), silencing.
This needs dither: Generate sine and others, general amplifications (including 1/2), normalize, almost any effect.
This needs dither but deserves special treatment (at least at 8 or 10 bits): fade in/out.

I did not think about it deeply, but I think this should be the order: 16-bit source -> float -> manipulation (in float) -> dither (in float) -> 16-bit convert. That is, dither in float before conversion.
[If anybody votes for dither after conversion, I say that might be fast but not clever: it could not shift quantisation noise to other frequencies, it could only mask it. In that case dither would best come later, at export time.]
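A sketch of that proposed processing order, with a hypothetical half-gain step standing in for the "manipulation" (TPDF dither is assumed here; this is an illustration of the ordering, not Audacity's pipeline):

```python
import random

random.seed(1)
SCALE = 32768   # 16-bit full scale

def process_and_export_16bit(int16_source, gain):
    """Proposed order: 16-bit source -> float -> manipulation (in float)
    -> dither (in float) -> 16-bit conversion as the very last step."""
    floats = [code / SCALE for code in int16_source]        # 16-bit -> float
    floats = [s * gain for s in floats]                     # manipulate in float
    dithered = [s + (random.random() - random.random()) / SCALE
                for s in floats]                            # dither in float
    return [max(-SCALE, min(SCALE - 1, round(s * SCALE)))   # convert last
            for s in dithered]

codes = process_and_export_16bit([8192, -4096, 3], gain=0.5)
```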


How many samples does 1.3.5 use to decide? If it uses too few, it gets it wrong.

I confirm it does what you say. (I would split the tracks to get this. And I would cry for RMS normalization.)
But it can destroy a carefully balanced recording. An orchestra, for example - doesn’t it have the drums to one side?
And I would guess a live stereo piano recording would seem to be about half a dB unbalanced, at least.
OK, it is easier to write <> than the almost four times longer <>.

Those pictures are indeed strange and quite different from what I get. Without knowing your exact method I am unable to reproduce what you get.

This is what I get. The first track is using shaped dither on silence and normalised twice. The second track is a low amplitude sine wave that is first amplified to a very low level and then normalised (also shaped dither). The third track is a small amplitude sine wave, amplified to a very low level and then normalised, but this time using rectangle dither.
Screenshot.png
The introduced noise looks to me very much like dither noise.
It is also interesting to note that using rectangle dither does not introduce noise to silence.

I’ve no idea :smiley: I’ll leave that to the experts to work out. I’m just indicating that the current method of shaped dither produces unexpected and undesirable results in certain cases.

I think it would be quite easy to write a Nyquist plug-in to do this, although using actual RMS may not be exactly what you mean. RMS normalisation gives a better approximation to “Equal loudness” normalisation than peak level normalisation (and I am betting that this is what you really want), but it is not perfect. Equal loudness normalization should follow the response curve of the human ear, which is not easy to do because it varies from one individual to another - however RMS normalisation is probably closer than peak level normalisation.
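This is not the Nyquist plug-in Steve mentions, just the underlying arithmetic sketched in Python: scale by target-RMS over measured-RMS instead of target-peak over measured-peak. A real equal-loudness measure would weight frequencies by an ear-response curve, which this deliberately doesn't attempt:

```python
import math

def rms_normalize(samples, target_rms=0.1):
    """Scale a signal so its RMS (average power) level hits target_rms,
    rather than its peak; a rough proxy for equal loudness."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return [s * target_rms / rms for s in samples]

quiet = [0.01, -0.02, 0.015, -0.005]
out = rms_normalize(quiet, target_rms=0.1)
```

Note that unlike peak normalization, this can push individual peaks past full scale on very dynamic material, so some limiting or headroom check would be needed in practice.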

In this case you would use “Amplify” rather than “Normalize”. Since you are talking here about a stereo mix rather than individual instruments on individual tracks, “Amplify” would achieve the desired result whereas “Normalize” would not (If you need to correct DC offset, then this can be done with the “Normalize” effect without changing the levels).

The important point here is to know the difference between what “Normalize” and “Amplify” do in Audacity. Fortunately, having both of these effects at our disposal, just about every case is covered.

Back to normalizing.

I use the Anwida DX Reverb Lite plugin.

When I apply the reverb, I lose 6-18 dB on average.

So, I end up having to amplify like crazy to get back to 0 dB.

I read about normalizing in this thread and decided to try that as well.

I had to normalize at -12 dB to account for all the loss and to end up with
a normal size waveform/sample.

Here’s the problem, all that is very good except for one major issue.

The line that is supposed to be flat in the middle of the track at 0 is huge.
I think the noise floor is getting amplified with everything else and even the
noise removal tool cannot get rid of the loud hiss without ruining the recording.

Question: is there a way to do this without the above mentioned problem?
Or is it just the equipment generating the noise and it is more pronounced
after normalization/amplification?

In other words I want to know how to keep the flat line, even when normalizing/amplifying.