Videogamer555 wrote:
> Currently, when you select a band (say 100 Hz to 10000 Hz in this example), it uses the logarithmic center, REGARDLESS of which mode you are using for spectrogram. The center frequency is thus 1000 Hz. This should be the selection method ONLY when using the logarithmic spectrogram. When in the linear spectrum mode, it should use the LINEAR center instead, which would be 5050 Hz in this example. You see (100+10000)/2=5050. Sadly, this doesn't work in Audacity. My use of the program is for scientific/technical use, which PRIMARILY uses linear spectrograms for filtering. In this use, the center frequency refers to the midpoint between high_freq and low_freq. That is in general center_freq = (low_freq + high_freq) / 2. Bandpass means keep all frequencies in the range low_freq to high_freq (and reject all other frequencies) and try to keep the pass band as flat as possible, but if there must be one point which is higher than the rest it should be at the center_freq (aka mid_point) which I defined in the previous sentence.

Well, I simply, respectfully disagree with those "shoulds". I think you are confusing two questions: how to visualize a sound, and how to define a filtering effect that modifies a sound.
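To make the two candidate "center" definitions concrete, here is a small Python sketch using the band edges from the example above, comparing the arithmetic midpoint with the geometric mean:

```python
import math

low, high = 100.0, 10000.0  # band edges in Hz, from the example above

# Arithmetic center: the midpoint on a linear frequency axis.
arithmetic_center = (low + high) / 2       # 5050.0 Hz

# Geometric center: the midpoint on a logarithmic frequency axis.
geometric_center = math.sqrt(low * high)   # 1000.0 Hz

print(arithmetic_center, geometric_center)
```

Note that the geometric mean (1000 Hz) is exactly the value Audacity currently shows, whichever view is active.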
Spectral selection is just a visual aid for choosing a band to which you then apply a filtering effect, and what the filter does in no way depends on how you happen to display the sound. So I don't understand what "using a spectrogram for filtering" means. A spectrogram is only a view.
If you want to define FFT-based filtering effects using the spectrum (I do not say "spectrogram"), that again is really independent of how you happen to view things: the window size and windowing function in the effect might very reasonably be quite independent of your display settings, which serve a different purpose. Noise Removal in fact does this sort of thing, but its window size is a constant, unaffected by the spectrogram view preferences.
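For illustration only (this is a hand-rolled sketch, not Audacity's actual implementation), an FFT-based brick-wall band-pass can be defined purely by its bottom and top frequencies, with no reference whatever to any display setting:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform (fine for a short demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning complex samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def brickwall_bandpass(x, low, high, fs):
    """Zero every DFT bin whose frequency lies outside [low, high] Hz.

    The filter is defined ONLY by its band edges and the sample rate;
    no view or spectrogram preference enters into it.
    """
    N = len(x)
    X = dft(x)
    for k in range(N):
        f = k * fs / N
        if f > fs / 2:        # bins above Nyquist represent negative frequencies
            f = fs - f
        if not (low <= f <= high):
            X[k] = 0
    return [c.real for c in idft(X)]

# Demo: a 5 Hz + 30 Hz mixture; a 20-40 Hz band-pass should keep only the 30 Hz part.
fs, N = 100.0, 100
mix = [math.sin(2 * math.pi * 5 * n / N) + math.sin(2 * math.pi * 30 * n / N)
       for n in range(N)]
filtered = brickwall_bandpass(mix, 20.0, 40.0, fs)
```

A real effect would process the audio in overlapping windows with a window function, but even then the window size belongs to the effect, not to the view.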
Therefore I think the "center frequency" shown by the black line should be a function of the top and bottom frequencies, independent of the vertical scale, and the more often useful such function is the geometric mean. If you switch view type, the black line will remain at the same frequency, however that frequency maps to the vertical scale, and to my mind that is less surprising behavior than the alternative.
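A quick sketch of that claim (a hypothetical 20 Hz to 20 kHz display range is assumed here): the geometric-mean center of a 100-10000 Hz selection is 1000 Hz in either view; only its vertical position on the track changes between the linear and logarithmic scales.

```python
import math

f_min, f_max = 20.0, 20000.0   # assumed display range of the vertical axis
low, high = 100.0, 10000.0     # selected band

center = math.sqrt(low * high)  # geometric mean: 1000 Hz, view-independent

# Fraction of the way up the axis at which the black line would be drawn:
pos_linear = (center - f_min) / (f_max - f_min)
pos_log = ((math.log10(center) - math.log10(f_min))
           / (math.log10(f_max) - math.log10(f_min)))

print(center, pos_linear, pos_log)
```

The line sits near the bottom of a linear scale (about 5% of the way up) but near the middle of a logarithmic one (about 57%), yet it marks the same 1000 Hz in both.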
If you think the arithmetic mean is sometimes more useful, you must be talking about a different kind of effect, NOT a different kind of view.
If you can express the sort of filtering you want as a function of the top and bottom frequencies in Nyquist Lisp code, then do so, with the understanding that the center-frequency line, which is displayed as a convenience for the more usual filters, will not necessarily be useful in that case.