hallo. so i recently installed your newest version of audacity (2.2.1), and noticed that the amplify tool is different. while the old one used to display the number in a 0.0 format, the new one displays it in a 0.000 format. the old one also used to round to the nearest tenth, whereas the new one doesn't appear to round up or down at all (the old one would display a 1.7 amplification option without clipping, while the new one offers a more precise 1.673 on a track of mine, for example). after amplification, the tracks are indeed different: i can't see or hear a difference myself, but eac's 'wav compare' tells me they don't match.
i'm not trying to expose you guys here or anything lol, i just want to know, for my own obsessive mind, whether what i've amplified in old versions in the past did in fact clip by, say, 0.027 dB (going by the 1.7 vs 1.673 example above and the pictures below), or what is going on here. maybe you guys can shed some light on the new amplify tool, how it differs from the old one, and whether or not the old one did cause clipping. the following images are of the same track, in the two different versions.
I don’t know what Audacity is/was doing behind the scenes, but I’m pretty sure it isn’t/wasn’t clipping. And changing a few peak values here and there won’t cause [u]clipping[/u]. It takes at least two samples in a row before you start to get a squared-off waveform, and your DAC might not actually “square off” the analog wave unless you have 3 maxed-out samples in a row.
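The "samples in a row" idea above can be sketched as a little check: count the longest run of consecutive full-scale samples. This is an illustrative sketch, not anything Audacity or EAC actually runs; the threshold and signals are made-up examples.

```python
import numpy as np

def longest_full_scale_run(samples, full_scale=1.0, tol=1e-6):
    """Length of the longest run of consecutive samples at (or above) full scale.

    One maxed-out sample is usually just a peak that touches the ceiling;
    a run of two or more flat-topped samples is what starts to look like
    real clipping, as described above.
    """
    at_max = np.abs(samples) >= full_scale - tol
    longest = current = 0
    for hit in at_max:
        current = current + 1 if hit else 0
        longest = max(longest, current)
    return longest

t = np.linspace(0, 1, 1000, endpoint=False)
# A sine just under full scale: touches the peak but never flattens.
clean = np.sin(2 * np.pi * 5 * t) * 0.999
# The same sine amplified past full scale and hard-limited: flat tops appear.
clipped = np.clip(clean * 1.2, -1.0, 1.0)

print(longest_full_scale_run(clean))    # → 0
print(longest_full_scale_run(clipped))  # a run of many flattened samples
```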
…Analog audio isn’t perfect, and digital audio isn’t perfect either. Digital audio is quantized in time and amplitude, so as soon as it’s digitized you lose information. But the digital resolution is far, far better than your ears (assuming “CD quality” or better), and better than the analog parts of the recording/playback system.
You must have been using a very old version of Audacity.
2.1.0 and earlier displayed one decimal place of precision in Amplify
2.1.1 changed to two decimal places of precision
2.1.2 and later changed to three decimal places of precision
If you really want single decimal place precision displayed, you can always use the Normalize effect - it does the same job as Amplify but in a slightly different way, in that it amplifies to a level rather than by an amount.
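The "to a level" vs "by an amount" distinction can be sketched as two tiny gain calculations (hypothetical helper names and example values, not Audacity's actual code):

```python
import numpy as np

def amplify_gain_db(amount_db):
    """Amplify: apply a gain *by* a fixed amount, regardless of content."""
    return amount_db

def normalize_gain_db(samples, target_db=-1.0):
    """Normalize: compute whatever gain brings the peak *to* a target level."""
    peak = np.max(np.abs(samples))
    peak_db = 20 * np.log10(peak)
    return target_db - peak_db

x = np.array([0.1, -0.5, 0.25])             # example track, peak at about -6.02 dBFS
print(amplify_gain_db(3.0))                 # → 3.0  (always the amount you asked for)
print(round(normalize_gain_db(x, 0.0), 2))  # → 6.02 (whatever lifts the peak to 0 dBFS)
```

Same gain stage underneath; the only difference is whether you specify the change or the destination.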