I think noise removal should have this reasonable property:
If you apply the effect to the exact selection that was the profile, with zero sensitivity, attack, and smoothing, then the result should just be deamplification. 2.0.5 appears to behave so.
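To illustrate the property, here is a rough numpy sketch (my own toy, not the 2.0.5 code; the 2048/1024 window and hop sizes, the per-bin maximum statistic, and the 24 dB reduction are just assumptions): take per-bin thresholds from the profile, then run the gate over the very selection that supplied the profile, with zero sensitivity and no smoothing. Every bin is at or below its own threshold, so every gain equals the reduction factor, which is pure deamplification.

```python
import numpy as np

def window_mags(x, size=2048, hop=1024):
    """Magnitudes of rectangular-window FFTs, hopped by `hop` samples."""
    frames = [x[i:i + size] for i in range(0, len(x) - size + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(0)
profile = rng.standard_normal(2 * 44100)      # stand-in for the noise profile

threshold = window_mags(profile).max(axis=0)  # one threshold per frequency bin
reduction = 10 ** (-24 / 20)                  # e.g. 24 dB of noise reduction

# Run the gate over the exact selection that supplied the profile.
mags = window_mags(profile)
gains = np.where(mags <= threshold, reduction, 1.0)
print(np.all(gains == reduction))             # True: every window and bin gets
                                              # the same factor, so the output
                                              # is plain deamplification
```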
I have broken this property in my build. Pink noise becomes a whole lot of random tinkles.
I think I understand why. Whatever way we extract the statistic from the sound – maximum, minimum, or something else, over some number of windows hopped by so many samples – we should do exactly the same to the noise profile when determining a threshold, sliding that set of windows over many positions in the profile.
I now think I want the median of at least three windows when reducing the noise, to avoid being fooled by large outliers in the background and small ones in the foreground.
So the analysis of the profile should find a maximum of medians, not, as now, a maximum of minima.
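As a sketch of what I mean (illustrative numpy, not real code; the window size, hop, and group of three are assumptions): the profile threshold becomes the greatest median of three consecutive windows, sliding that group over every position in the profile, and the reduction pass classifies bins using exactly the same median-of-three statistic.

```python
import numpy as np

def window_mags(x, size=2048, hop=1024):
    frames = [x[i:i + size] for i in range(0, len(x) - size + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))   # shape (num_windows, bins)

def profile_threshold(profile, group=3):
    """Per-bin threshold: the greatest median of `group` consecutive windows,
    sliding that group over every position in the profile."""
    mags = window_mags(profile)
    medians = [np.median(mags[i:i + group], axis=0)
               for i in range(len(mags) - group + 1)]
    return np.max(medians, axis=0)               # maximum of medians, per bin

def noise_flags(signal, threshold, group=3):
    """The reduction pass uses the very same statistic: the median of `group`
    consecutive windows, compared against the profile threshold."""
    mags = window_mags(signal)
    medians = np.array([np.median(mags[i:i + group], axis=0)
                        for i in range(len(mags) - group + 1)])
    return medians <= threshold                  # True where the gate may act
```

With a median of three, a single outlying window can no longer drag the statistic up or down by itself.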
Even in 2.0.5, if you select a part of your noise profile whose starting boundary is not a multiple of 1024 samples from the start, then you MAY get tinkles; in fact I get very many in easy examples. Getting the exact alignment of the windows the same matters.
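The alignment sensitivity is easy to reproduce in the same kind of toy (again only an illustration, with an arbitrary 137-sample offset): measure per-bin thresholds on hop-aligned windows, then re-window the very same noise from a start that is not a multiple of 1024, and some cells poke above the threshold.

```python
import numpy as np

def window_mags(x, size=2048, hop=1024):
    frames = [x[i:i + size] for i in range(0, len(x) - size + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(1)
noise = rng.standard_normal(2 * 44100)

# Per-bin thresholds from windows aligned to multiples of the 1024-sample hop.
threshold = window_mags(noise).max(axis=0)

# The same noise, re-windowed from a start that is NOT a multiple of 1024.
shifted = window_mags(noise[137:])
frac = (shifted > threshold).mean()
print(f"{frac:.2%} of (window, bin) cells exceed the profile threshold")
# A nonzero fraction, even with per-bin maxima as thresholds: those cells
# are where the spurious tinkles come from.
```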
SHORT ANSWER: Narrow-spectrum sound, wide-spectrum noise, and frequency smoothing cause the sound to be attenuated, but only as it crosses certain frequencies!
The first example shows the results of that bug that blanks out the lowest frequency bin.
The explanation of the second picture is more subtle. The bin frequency for the FFT is 44100/2048 = 21.533 Hz. The chirp starts just a little below that and continues to a little more than nine times that, so you sweep across eight bins. The behavior of the noise removal is different when the chirp is near the middle of a bin and when it is at the boundary. Look at the 2048 rectangular window spectrogram and you see the display varying between something neat where you cross a multiple of 21.533 Hz and something sloppy in between. That is related, because noise removal works on the same kind of spectral data.
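The leakage difference is easy to check numerically (a toy spectrum, not the effect itself; the tone frequencies are arbitrary): a tone at an exact multiple of 44100/2048 Hz lands essentially in one bin of a rectangular-window FFT, while a tone halfway between multiples smears across a great many bins.

```python
import numpy as np

sr, size = 44100, 2048
bin_hz = sr / size                        # 44100 / 2048 ≈ 21.533 Hz
t = np.arange(size) / sr

for k in (4.0, 4.5):                      # on a bin multiple vs. halfway between
    tone = np.sin(2 * np.pi * k * bin_hz * t)
    mags = np.abs(np.fft.rfft(tone))      # rectangular window (no taper)
    spread = np.sum(mags > 0.01 * mags.max())
    print(f"{k * bin_hz:7.2f} Hz: bins above 1% of the peak = {spread}")
```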
Here’s another observation: even in 2.0.5, without any of my changes, the envelope of the chirp gets less wobbly as frequency smoothing is reduced, and does not wobble at all with 0 smoothing. But that makes more chattery artifacts audible.
Here’s a curious thing: where the spectrogram of the original gets “neat,” the envelope dips more. I think the explanation is this: the algorithm detects (correctly) that all frequencies not very near the chirp are noise and can be reduced. But then when you frequency-smooth, the chirp gets pulled down, because lots of frequencies within 150 Hz of the chirp were detected as low and made even lower, and that smooths onto the chirp. Where the spectrogram is messy, noise reduction happens only at frequencies far from the chirp, so the smoothing has less to pull the chirp down with.
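To put numbers on that, here is a toy gain profile (my illustration; the simple moving average, the 24 dB reduction, and the single-bin chirp are assumptions, and the real smoothing may differ in detail): every bin gets the reduction factor except the one holding the chirp, and averaging the gains over ±150 Hz drags the chirp's gain most of the way down with its neighbors.

```python
import numpy as np

sr, size = 44100, 2048
bin_hz = sr / size
n_bins = size // 2 + 1

reduction = 10 ** (-24 / 20)             # assume 24 dB of reduction
gains = np.full(n_bins, reduction)       # every bin classified as noise...
chirp_bin = 9
gains[chirp_bin] = 1.0                   # ...except the one holding the chirp

half_width = int(round(150 / bin_hz))    # ±150 Hz of frequency smoothing
kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
smoothed = np.convolve(gains, kernel, mode="same")

print(f"chirp gain before smoothing: {gains[chirp_bin]:.2f}")    # 1.00
print(f"chirp gain after smoothing:  {smoothed[chirp_bin]:.2f}")  # about 0.13
```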
I have experimented with some success using the Hann window before the analysis, and with other changes, I remove most artifacts even without frequency smoothing. But then frequency smoothing does MORE to change the envelope.
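What I mean by “the Hann window before the analysis” is, roughly, tapering each analysis frame before its FFT; a minimal sketch (the function name and defaults are mine, and the real change has other details):

```python
import numpy as np

def analysis_mags(x, size=2048, hop=1024, use_hann=True):
    """Per-window magnitudes, optionally tapering each frame with a Hann
    window before the FFT so that off-bin tones leak far less."""
    taper = np.hanning(size) if use_hann else np.ones(size)
    frames = [x[i:i + size] * taper
              for i in range(0, len(x) - size + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))
```

The thresholds, medians, and gains would then be computed from these magnitudes as before.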
I think frequency smoothing looks like a workaround for artifacts, but as you can see, it can damage a narrow-spectrum sound. I am thinking now that with better avoidance of artifacts by other means, I wouldn’t use frequency smoothing.
Remember the example of noise mixed with a chirp, then trying to subtract the noise with noise removal? I am trying it with the Hann window before analysis, and it works like a charm!
As you can see, the Hann window spectrogram has less mess than the rectangular one, and noise reduction is more even throughout. But frequency smoothing reduces the sound much more, and even silences it at extreme settings.
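A quick way to see why the Hann picture is less messy (a toy comparison, not the effect itself): for a tone halfway between bin frequencies, count how many bins hold appreciable energy with and without the taper.

```python
import numpy as np

sr, size = 44100, 2048
t = np.arange(size) / sr
tone = np.sin(2 * np.pi * 4.5 * (sr / size) * t)   # halfway between bin frequencies

for name, taper in (("rectangular", np.ones(size)), ("Hann", np.hanning(size))):
    mags = np.abs(np.fft.rfft(tone * taper))
    spread = np.sum(mags > 0.01 * mags.max())
    print(f"{name:11s}: bins above 1% of the peak = {spread}")
# Far fewer bins are contaminated with the Hann taper, so the gains right next
# to the chirp are decided without its leakage; but those neighboring gains are
# low, which is why heavy frequency smoothing now pulls the chirp down further.
```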