Normalize and Amplify are sisters. They turn the volume of the whole performance up or down to meet the criteria. Full stop.
If you Normalize to -1dB, for example, you should be able to find at least one place in the performance where a peak in the blue sound waves reaches -1dB. That can be a little rough to see since the waves are displayed in percent, but the bouncing sound meters should show it.
You can also change the sound waves to dB, but that can be even harder to see.
If you actually want processing and mastering, that’s a whole different set of tools.
-1dB works out to 89.1%, which you can convert (0.891) and round off to 0.9 in Audacity-speak.
You can grab the bottom of the timeline and pull down to make it taller, and it will actually show you that. Then all you need to do is find the one peak (up or down) that lives at -1dB.
I have this conversion table. I ran one of the numbers backwards and it seems to work. Click the graphic to see the whole thing.
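The math behind that table is just the standard amplitude formula, percent = 10^(dB/20) × 100. A quick sketch in Python (not part of Audacity, just to check the numbers):

```python
import math

def db_to_percent(db):
    """Convert dB (relative to full scale) to percent of full scale."""
    return 10 ** (db / 20) * 100

def percent_to_db(percent):
    """Convert percent of full scale back to dB."""
    return 20 * math.log10(percent / 100)

print(round(db_to_percent(-1), 1))    # -1 dB comes out to about 89.1%
print(round(percent_to_db(89.1), 1))  # and 89.1% back to about -1 dB
```

Running one value backwards through `percent_to_db` is exactly the sanity check described above.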
This is a perfect illustration of why you don't ordinarily do sound in percent. The sound channel on an Audio CD, for example, goes down to -96dB. Look at where the percent would be if you did that: about 0.0016%.
Koz
In this case I was really trying to find out whether Audacity uses FFmpeg behind the scenes to do the normalization, and what FFmpeg options would be needed to get the same results as Audacity. I've been trying ffmpeg-normalize. I know this is a question about both FFmpeg and Audacity, but there isn't a common forum as far as I know.
FFmpeg is purely optional for Audacity; it is used only to extend the range of audio formats that can be imported and exported.
The Normalize effect is one of the standard built-in effects. It makes use of Audacity's "summary data" (a unique feature of Audacity projects) to determine the peak amplitude of the selected audio, which is very fast and efficient, and then multiplies each selected sample by an appropriate amount to scale the waveform to the desired level.
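In outline, that is just "find the peak, compute one gain factor, multiply every sample". A minimal sketch of the idea, assuming floating-point samples in the range -1.0 to 1.0 (this is not Audacity's actual code, and the sample values are made up):

```python
def normalize(samples, target_db=-1.0):
    """Scale samples so the loudest peak lands at target_db below full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    target_linear = 10 ** (target_db / 20)  # -1 dB -> about 0.891
    gain = target_linear / peak
    return [s * gain for s in samples]

quiet = [0.0, 0.25, -0.5, 0.1]
loud = normalize(quiet)  # peak moves from 0.5 up to about 0.891
```

Because a single gain is applied to everything, the relative dynamics of the performance are untouched, which is the point made at the top of the thread: Normalize and Amplify change the level, not the sound.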