Even though this thread starts with extensive explanations of why nothing works as wanted, the goal is to find a way to make it work. I only want to point out everything I have already tried and know will lead to a dead end.
The following question is from [u]Date time formatting possible? (For label plugin)[/u]:
I have already looked at the source code of the Audacity Nyquist interface, and there are no functions or variables that could be used for an “easy” computation of the absolute minimum or maximum sample values of the Audacity selection. However, I know that Audacity internally stores the maximum sample value for every AU file in the project folder, so it should be possible to predict at least the maximum sample value in an Audacity selection. For this we would need the help of the Audacity developers to find a way to compute this value.
Before I start to pester the people on the Audacity developers list, I would like to discuss what is really needed, so we will not have to change the Audacity source code back and forth a thousand times to find out what we need by trial and error.
Minimum Sample Value
Predicting the absolute minimum is easy: it’s zero. Everything else is unreliable, because it depends on the A/D precision of the soundcard, which at low bit ranges is swamped by the thermal noise of the analog preamplifiers, and is also affected by the digital quantisation noise, which again is nothing but the thermal noise of the semiconductors involved in the A/D conversion.
It sounds strange, but in electronics everything is temperature dependent. Only at a temperature of zero kelvin is there no noise, but an audio recording at zero kelvin would contain no audio signal, because everything would be frozen, even the air. Every audio recording above zero kelvin will contain thermal noise, no matter how expensive the recording equipment is. More expensive equipment will produce lower noise levels, but there is no recording equipment that produces no noise at all.
Do not confuse the “absolute minimum sample value” with the “background audio noise level” (I assume that Steve knows this, but there may be other readers, and this is a very common confusion).
Question: What would be a practical use-case for knowing the absolute minimum sample value?
Maybe there is something I have overlooked or not fully understood.
Maximum Sample Value
Knowing the maximum sample value is important insofar as a soundcard clips at volume levels of more than 1.0, but again there are questions.
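To make “clipping” concrete, here is a minimal Python sketch (an illustration only, not Audacity code; the function name is mine): on output, any sample outside the valid [-1.0, 1.0] range is hard-limited to that range, so everything above the maximum is lost.

```python
def hard_clip(sample):
    """Limit a sample to the valid [-1.0, 1.0] output range."""
    return max(-1.0, min(1.0, sample))

# A sound amplified beyond full scale loses its waveform shape:
samples = [0.5, 1.3, -1.7, 0.9]
print([hard_clip(x) for x in samples])  # [0.5, 1.0, -1.0, 0.9]
```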
1. Amplifying the selection to the maximum with no further audio processing is already covered by “Effect > Amplify” with “Do not allow clipping”.
2. Amplifying to the maximum before processing is rather seldom needed, because Nyquist internally computes all audio samples with double-float precision, independent of the Audacity sample format. The only situation I can imagine is to simplify “threshold” level settings, but maybe there is something that I have overlooked.
3. Amplifying to the maximum after processing is often desired, but for that the sound must first be computed in memory, then analysed for the maximum sample, and then amplified up or down so that the maximum sample becomes 1.0 again. This leads to the well-known Audacity Nyquist memory problems.
Number three is the most often needed situation, but it cannot be solved by a “maximum sample” variable in the Audacity Nyquist interface, because such a variable could only describe the selection before processing.
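For clarity, number three (analyse the processed sound, then rescale) can be sketched in Python; the function name is mine, and in real Audacity Nyquist this analysis step is exactly what forces the entire sound into memory:

```python
def normalize_in_memory(samples, target=1.0):
    """Find the peak of the already-processed sound held in memory, then
    amplify up or down so the maximum absolute sample becomes the target."""
    peak = max((abs(x) for x in samples), default=0.0)
    if peak == 0.0:
        return list(samples)  # pure silence: nothing to rescale
    return [x * (target / peak) for x in samples]

print(normalize_in_memory([0.25, -0.5, 0.1]))  # [0.5, -1.0, 0.2]
```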
In CMU Nyquist, for example, with “autonorm on”, a sound must be computed twice. First Nyquist computes the sound at a wrong volume level, and an internal variable holds the maximum sample value after the sound has been computed. Then the sound is computed a second time, with the audio output scaled by 1/max-sample to get an audio signal with a maximum value of exactly 1.0.
With Nyquist in Audacity this would end up with the entire sound in memory, no matter whether the sound is analysed in memory (as in number three above) or computed twice as in CMU Nyquist.
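The CMU Nyquist “autonorm” idea can be sketched in Python like this (names and structure are mine, a sketch only): the sound is represented by a function that can be evaluated again on demand, so the first pass only records the peak and the second pass recomputes the sound at the corrected level, without storing the whole first pass.

```python
def autonorm(compute_sound, target=1.0):
    """Two-pass normalization: pass 1 computes the sound at the 'wrong'
    level just to find its peak; pass 2 recomputes it scaled by target/peak."""
    peak = max((abs(x) for x in compute_sound()), default=0.0)  # pass 1
    if peak == 0.0:
        return list(compute_sound())  # silence: return unchanged
    return [x * (target / peak) for x in compute_sound()]       # pass 2

# Example: a "sound" that is recomputed on each call, at the wrong level
sound = lambda: [0.1, 0.2, -0.4]
print(autonorm(sound))
```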
Question: What would be practical use-cases for knowing the maximum sample value before audio processing?
Maybe there are things I haven’t really understood? Audio processing is sometimes not as easy as one might think.