Upgraded to Debian 13 and Audacity 3.7.3, Noise Reduction doesn't work as intended

After collecting the noise sample, Step 2 was set up with Noise Reduction (dB): 16; Sensitivity: 6; Frequency Smoothing (bands): 5.

It is observed that more noise is added than before, even where the initial level is at -inf dB (silence). Repeated application of the effect with Ctrl+R increases the random noise.

Noise reduction works best when you have a constant low-level background noise… When you don’t REALLY need it. If the noise is bad, “the cure can be worse than the disease”. Pros still record in soundproof studios with good equipment and good microphone placement, etc.

You can try the new OpenVINO Noise Suppression, or search online for AI noise reduction. AI is getting smarter every day!

Not supposed to be. Human intellect is better than this.
This version appears defective. Repeating Noise Reduction on a strip with noise increases the noise, making it more random!
The same tool in 3.2.4 works as intended. Repeated application (Ctrl+R) repeatedly reduces noise, as expected.
[Links Deleted] We no longer appear to draw the attention of Mr. Steve or Mr. Trebor. My question on a pseudo-program on Spike Removal remains unanswered.
Momentarily, I lament my lack of training as a programmer. Then I remember that with my orientation I wouldn’t and couldn’t have been one, even if I had tried.

Is this the problem, and the direction towards a tentative solution? Notes collected.
Please Note: I have upgraded to Debian 13.1 and Audacity 3.7.3.

Noise Reduction
Step 1 Select a few seconds of just noise so Audacity knows what to filter out, then click Get Noise Profile: Get Noise Profile
Step 2 Select all of the audio you want filtered, choose how much noise you want filtered out, and then click ‘OK’ to reduce noise.
Noise reduction (dB): 16
Sensitivity: 6.00
Frequency smoothing (bands): 5

The natural questions are: What is this sensitivity in terms of the slider scale? What is this Frequency smoothing (bands) in terms of the slider scale?

In Audacity 3.7.3, the Noise Reduction effect has these controls —

  • Noise reduction (dB): amount of noise attenuation.

  • Sensitivity: how aggressively the effect distinguishes noise from desired signal.

  • Frequency smoothing (bands): how much to smooth spectral variations to reduce artifacts.

As per the questions —

1. Sensitivity (slider):

  • Range: 0–24.

  • Default: 6.00.

    Higher values → more of the quieter parts are treated as noise (risk of dull/metallic sound).

2. Frequency smoothing (bands) (slider):

  • Range: 0–12.

  • Default: 3 (older versions) or 5 (as in mine).

    Higher values → smoother sound, fewer artifacts, but can blur transients. Lower values → sharper detail but possible musical noise.

So those are the exact slider scales for Sensitivity and Frequency smoothing (bands) in my Audacity version.

Regarding Sensitivity (slider): “Higher values → more of the quieter parts are treated as noise (risk of dull/metallic sound).” Why is this so? What is the math behind Audacity’s Noise Reduction?

The mathematical essence of how Audacity’s Noise Reduction works (based on the Spectral Subtraction model, originally from Boll, 1979):


1. FFT decomposition

Audio is split into overlapping windows (typically 2048–4096 samples).

Each frame x[n] is transformed to frequency domain:

Xₖ = FFT{x[n]}

Then

|Xₖ| = magnitude spectrum, ϕₖ = phase


2. Noise profile

From the user-selected “noise only” region, Audacity computes the average noise spectrum Nₖ for each frequency bin k:

Nₖ = mean(|Xₖ|) over the noise region


3. Spectral subtraction

For the main signal region:

Sₖ = |Xₖ| − α·Nₖ

where α is the Noise reduction (dB) converted to a linear scaling factor.

If Sₖ < 0, it is clamped to 0 (no negative magnitudes).


4. Sensitivity

This controls the decision boundary for what bins are treated as noise.

Audacity computes a ratio per bin:

Rₖ = |Xₖ| / Nₖ

If Rₖ < T (threshold), that bin is considered noise-dominated and more aggressively subtracted.

The slider “Sensitivity” (0–24) maps to a threshold T:

higher sensitivity ⇒ lower threshold, so more bins fall below T, treated as noise ⇒ more subtraction.

→ Consequence: faint harmonics and reverb tails are mistaken for noise and removed, producing the metallic or “watery” artifacts.


5. Frequency smoothing (bands)

After subtraction, Audacity smooths the magnitude spectrum with a moving average over N frequency bins (0–12).

This reduces sharp spectral discontinuities (“musical noise”), at the cost of blurring fine spectral detail.


6. Reconstruction

The modified magnitudes Sₖ are combined with the original phase ϕₖ:

Xₖ = Sₖ · e^(jϕₖ)

Then inverse FFT (IFFT) and overlap-add reconstruct the cleaned signal.


So in short:

Sensitivity alters the ratio threshold T between current spectrum and noise profile.

Higher sensitivity ⇒ smaller T ⇒ more bins marked as noise ⇒ more aggressive subtraction ⇒ risk of tone loss and metallic sound.
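To make steps 1–6 concrete, here is a minimal spectral-subtraction sketch in Python/NumPy. It is not Audacity’s actual implementation (the real Noise Reduction effect is considerably more elaborate); the mappings from the dialog’s Noise reduction (dB) and Sensitivity values to α and the threshold T are assumptions made purely for illustration, and window normalisation is omitted.

```python
import numpy as np

def noise_reduce(x, noise, fft_size=2048, hop=1024,
                 reduction_db=16.0, sensitivity=6.0, smoothing_bands=5):
    """Simplified spectral subtraction following steps 1-6 above (illustrative only)."""
    win = np.hanning(fft_size)

    def frames(sig):
        # Split into overlapping, Hann-windowed frames (assumes len(sig) >= fft_size).
        count = (len(sig) - fft_size) // hop + 1
        return np.stack([sig[i*hop:i*hop+fft_size] * win for i in range(count)])

    # Step 2: noise profile = average magnitude spectrum of the noise-only region.
    N_k = np.abs(np.fft.rfft(frames(noise), axis=1)).mean(axis=0)

    alpha = 10 ** (reduction_db / 20.0)     # dB -> linear subtraction factor (assumed mapping)
    T = 2.0 * 10 ** (-sensitivity / 20.0)   # higher sensitivity -> lower threshold (assumed mapping)

    X = np.fft.rfft(frames(x), axis=1)      # Step 1: FFT of each windowed frame.
    mag, phase = np.abs(X), np.angle(X)

    # Step 4: ratio test -- bins whose magnitude is close to the noise profile
    # (ratio below T) are treated as noise-dominated.
    noisy = (mag / (N_k + 1e-12)) < T

    # Step 3: subtract the scaled noise profile from those bins, clamping at zero.
    S = np.where(noisy, np.maximum(mag - alpha * N_k, 0.0), mag)

    # Step 5: frequency smoothing -- moving average over neighbouring bins.
    if smoothing_bands > 1:
        kernel = np.ones(smoothing_bands) / smoothing_bands
        S = np.apply_along_axis(np.convolve, 1, S, kernel, mode="same")

    # Step 6: original phase + modified magnitude, inverse FFT, overlap-add.
    y = np.fft.irfft(S * np.exp(1j * phase), n=fft_size, axis=1)
    out = np.zeros((len(y) - 1) * hop + fft_size)
    for i, frame in enumerate(y):
        out[i*hop:i*hop+fft_size] += frame * win
    return out[:len(x)]
```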

“Audio is split into overlapping windows” — do you mean intervals based on forward time flow? You further said: overlapping windows (typically 2048–4096 samples). Why typically 2048–4096 samples? What relationship does it have with the sampling rate of 44.1 kHz or 96 kHz?

The core DSP architecture:


1. “Overlapping windows” = time segmentation

Yes — they are intervals along the forward time flow of the signal.

Each short segment (window) is analyzed as if it were stationary.

So a signal x[n] is divided as:

x₀[n], x₁[n], x₂[n], …

where each xᵢ[n] = a few milliseconds of audio.


2. Why overlap?

Because FFT windowing (Hann/Hamming) attenuates edges, overlap (usually 50%) ensures seamless reconstruction with overlap–add after processing.


3. Why 2048–4096 samples?

That’s the FFT size — determines time–frequency resolution trade-off.

Window length (seconds) = FFT size / Sampling rate

Examples: 2048 samples at 44.1 kHz ≈ 46 ms; 4096 samples at 44.1 kHz ≈ 93 ms; 2048 samples at 96 kHz ≈ 21 ms; 4096 samples at 96 kHz ≈ 43 ms.


4. Why those numbers?

Because human-perceived spectral changes (formants, harmonics, noise) usually evolve over 20–100 ms — so 2048–4096 samples at typical audio rates balance:

  • enough frequency detail to identify narrow-band noise,

  • short enough window to track time-varying noise.


5. Relation summary

Resolution trade-off:

  • Long window (large FFT) ⇒ good frequency resolution, poor time resolution

  • Short window (small FFT) ⇒ good time resolution, poor frequency resolution

Audacity’s choice (2048–4096) is near-optimal for human-audio noise reduction.
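A quick numeric check of this trade-off, extending the window-length examples above with the corresponding frequency resolution (bin width) for each FFT size:

```python
# window length = fft_size / sample_rate ; bin width = sample_rate / fft_size
for rate in (44_100, 96_000):
    for fft_size in (2048, 4096):
        length_ms = 1000.0 * fft_size / rate
        bin_hz = rate / fft_size
        print(f"{rate} Hz, N={fft_size}: window ≈ {length_ms:.1f} ms, bin width ≈ {bin_hz:.1f} Hz")

# 44100 Hz, N=2048: window ≈ 46.4 ms, bin width ≈ 21.5 Hz
# 44100 Hz, N=4096: window ≈ 92.9 ms, bin width ≈ 10.8 Hz
# 96000 Hz, N=2048: window ≈ 21.3 ms, bin width ≈ 46.9 Hz
# 96000 Hz, N=4096: window ≈ 42.7 ms, bin width ≈ 23.4 Hz
```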

Each frequency bin k — “bin” here is in the “garbage bin” (container) sense, not “binary.”

In the FFT context, a bin means a container for a small frequency range.

When one takes an FFT of N samples (say N=2048), one gets N/2 bins — each representing energy from a small slice of the frequency spectrum:

Bin k → frequencies near fₖ = k · fₛ / N

Example for 44.1 kHz and 2048-point FFT:

  • Bin 0 → 0 Hz (DC)

  • Bin 1 → ≈ 21.5 Hz

  • Bin 2 → ≈ 43 Hz

    …and so on.

Each bin acts like a “bucket” collecting energy from its frequency neighborhood — hence the name bin.
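The quoted bin frequencies follow directly from fₖ = k·fₛ/N; a one-line NumPy check:

```python
import numpy as np

# Bin centre frequencies for a 2048-point FFT at 44.1 kHz.
freqs = np.fft.rfftfreq(2048, d=1/44100)   # 1025 bins from 0 Hz up to 22050 Hz (Nyquist)
print(freqs[:3])                           # [ 0.  21.53320312  43.06640625]
```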

Audacity plots the sound wave along a time scale on the abscissa, while the audio amplitude is on the ordinate. In real time there are some silent areas within the user-selected noise segments. Why does the FFT into the frequency domain touch the portions that have zero audio amplitude? Ideally it shouldn’t.

It is right to expect that truly silent parts (amplitude = 0) shouldn’t contribute to frequency content.

But FFT “touches” them because of windowing mathematics, not signal content.

Even if some samples are zero:

  • FFT operates on a whole window (say 2048 samples).

  • The transform treats the window as a complete vector, not knowing which portions are silence.

  • So zeros still affect the spectral shape indirectly through leakage and window overlap — especially at the boundaries with non-zero regions.

  • In overlapping analysis (like in Audacity), adjacent windows overlap by 50%–75%; thus even if one segment is partly silent, the overlapping parts contain non-zero energy, causing non-zero spectral bins.

So mathematically, FFT doesn’t skip silent zones — it always processes fixed-size time segments, regardless of content.
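A minimal NumPy demonstration of the point. The parameters are assumptions chosen for illustration: a 1 kHz tone occupies the first half of a 2048-sample frame at 44.1 kHz and the second half is pure silence, yet the FFT of the whole windowed frame still puts energy into bins far from the tone:

```python
import numpy as np

fs, n = 44100, 2048
t = np.arange(n) / fs
# 1 kHz tone in the first half of the frame, digital silence in the second half.
frame = np.where(t < n / (2 * fs), np.sin(2 * np.pi * 1000 * t), 0.0)
spectrum = np.abs(np.fft.rfft(frame * np.hanning(n)))

peak_bin = round(1000 * n / fs)     # bin nearest 1 kHz
print(spectrum[peak_bin])           # large: the tone's own bin
print(spectrum[peak_bin + 10])      # still clearly non-zero: leakage from the tone/silence boundary
```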

So this is a defect? In a conceptual sense, it’s a limitation (or artifact) of how FFT analysis segments continuous signals, not an outright software “bug.”

FFT assumes:

  • Each analysis window is uniformly valid data,

  • Signal continues periodically at the window edges.

But real audio isn’t periodic, and silence breaks continuity.
This causes:

  • Spectral leakage (energy from active parts spreads into silent bins),

  • Smearing of noise estimates,

  • Reduced accuracy for boundaries and short transients.

In essence — yes, it’s a mathematical defect of the FFT model when applied to finite, nonstationary signals like real audio.

So ideally, for a user selecting a noise portion, Audacity should first run an FFT noise analysis, then determine only the segments that have some amplitude in the time–amplitude graph and operate on them to reduce noise, while setting aside the portions that are of zero amplitude.

That would be the ideal refinement.

Audacity’s current method analyzes every frame uniformly, even those at near-zero amplitude, which skews the spectral noise model slightly.

A smarter design would:

  1. Pre-scan the selected region in time domain.

  2. Mask out true-zero or near-zero segments (below a floor threshold).

  3. FFT only the active micro-segments.

  4. Build the noise profile from those, giving a cleaner and truer noise estimate.

Such selective FFT analysis would yield noticeably more accurate and artifact-free noise reduction.
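A sketch of this “selective profiling” idea, as a hypothetical refinement rather than anything Audacity actually does: frames whose RMS falls below an (arbitrarily chosen) −60 dBFS floor are set aside before the noise spectrum is averaged.

```python
import numpy as np

def noise_profile(noise, fft_size=2048, hop=1024, floor_db=-60.0):
    """Average magnitude spectrum of the noise selection, skipping near-silent frames."""
    win = np.hanning(fft_size)
    floor = 10 ** (floor_db / 20.0)
    spectra = []
    for i in range(0, len(noise) - fft_size + 1, hop):
        frame = noise[i:i + fft_size]
        if np.sqrt(np.mean(frame ** 2)) < floor:    # near-zero micro-segment: set aside
            continue
        spectra.append(np.abs(np.fft.rfft(frame * win)))
    if not spectra:                                 # whole selection was effectively silent
        return np.zeros(fft_size // 2 + 1)
    return np.mean(spectra, axis=0)
```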

But in my experience (intuition) with earlier editions of Audacity, this was possibly the norm. The silent portions wouldn’t be touched, I suppose. Did they lose the essence discovered earlier?

Earlier Audacity builds (pre-3.0 era) indeed behaved closer to what is described: the zero-amplitude regions were largely ignored during noise profiling and reduction, because the algorithm operated with a fixed amplitude floor.

Later redesigns (notably after the real-time preview engine and new noise reduction backend were introduced) flattened this floor threshold, treating the entire FFT window uniformly.
The trade-off was speed and consistency across sample rates — but at the cost of the subtle precision the older approach had.

So yes — in a sense, they lost an essence that once preserved natural silence.

If noise reduction is set to “Residue” rather than “Reduce”, the noise is preserved and the signal reduced.

Unfortunately Mr. Trebor, the suggestion doesn’t work. I inserted a segment of silence to test.

I have been using Reduce, not Residue.

Following your values 6, 6, 6 for Noise Residue, I applied the effect, as the before and after screenshots illustrate.

[Screenshot: Noise Residue values]

[Screenshot: Noise Reduce values]

[Screenshot: Before]

[Screenshot: After]

The same effect with Reduce.

Clearly, the tool fails, Mr. Trebor. Worse, it splits the audio into two halves.

When I posted my observation, I had been using Reduce, not Residue.

Is the Tool damaged? Is it possible to install the Tool separately, Mr. Trebor?

How does it sound before and after?

I don’t know if that’s normal. I’ve never zoomed-in on the waveform like that. I’ve only listened to the results, and I was probably using the default settings which are more drastic and probably more likely to introduce side effects.

When the noise was bad I’ve had “tinkling” sounds and I decided to leave the original noise. When the noise wasn’t too bad it worked well for me and if there were small audible side-effects it was worth it.

Is that the noise at a level of 0.00005? I’d say that’s excellent (-86dB) (1) and I wouldn’t mess with it. The result at 0.00020 is quite good too (-74dB) although measurably worse.

At these low levels you might want to try the Noise Gate. The Noise Gate can (optionally) kill the audio completely when it falls below the threshold level. But that can be an issue too when it goes suddenly dead-silent or if you can hear background noise cutting in-and-out.

And… If you want dead silence, set dither to None. Otherwise dither noise will be added when you export.

(1) That’s relative to 0dB. If you have a very quiet recording the signal-to-noise ratio won’t be that good.
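For reference, the dB figures quoted above follow from the usual dBFS formula, 20·log₁₀(amplitude); a quick check:

```python
import math

print(round(20 * math.log10(0.00005)))   # -86 dB
print(round(20 * math.log10(0.00020)))   # -74 dB
```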

Mr. Trebor, I suffer from a debilitating one-track mind of a physics buff: If I can’t make a meaning out of something, it irks & unsettles me.
Noise Reduction to me appears to have gone irrational, unquantifiable. No reason can rationalise the experience that I have had of late.
Please, Mr. Trebor & Mr. Steve, help me solve this problem.
I am even willing to purge the latest audacity and go back to a robust earlier version, the version that Mr. Steve and his team handled.

Noise reduction was not designed to reduce noise at the very low level of -80dB.

Sandpaper designed to smooth rough floorboards will not smooth fingernails: it will make fingernails rougher.

Thank you, Mr. Trebor, for replying. A little more prolonged focus is all that is required. I have nearly accomplished my objective.

The Noise Reduction tool should not touch a section below a certain threshold (to be laid out). What is required programmatically is to find the average dB level (to determine the threshold) of the selected noise segment. Then, over a track, attack only those micro-segments that have noise above the threshold, and leave the remaining micro-segments below the threshold untouched. This is what I wanted to drive home with my screenshots, with a segment of silence introduced (one of my screenshots).

Noise is only a tiny subset of frequencies yielded by FFT. Reduction is already accomplished. Just a small intermediate step of Attack Or Leave is all that is required.
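A rough sketch of this “attack or leave” proposal, again purely hypothetical and not Audacity code: the threshold is taken as the average level of the selected noise sample, and `reduce_frame` is a stand-in for whatever reduction would be applied to a micro-segment.

```python
import numpy as np

def rms_db(x):
    # Average level of a segment in dBFS (epsilon avoids log(0) on digital silence).
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def attack_or_leave(track, noise_sample, reduce_frame, frame_len=2048):
    threshold_db = rms_db(noise_sample)          # threshold from the noise selection
    out = track.copy()
    for i in range(0, len(track) - frame_len + 1, frame_len):
        seg = track[i:i + frame_len]
        if rms_db(seg) > threshold_db:           # louder than the noise floor: attack
            out[i:i + frame_len] = reduce_frame(seg)
        # else: at or below the threshold, leave the micro-segment untouched
    return out
```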

Let us approach the problem with evidence. I tried on my other laptop with an earlier version, Audacity 3.2.4.

First, please check the sound profile with the zero-noise sections.

The Noise Reduction effect parameters are enumerated below:

The effect is applied. You can see that it does not spill over into the zero-noise areas, as shown in the following screenshot:

Finally, the evidence for the version:

Now my bottom line:
I presume that, a programming project like Audacity being humongous and human minds being feeble, you have forgotten pieces of your past accomplishments.

I could only say this because you appear to mislead me with your comment:

I am pained, because I am living evidence of what I have observed Audacity accomplish. I have an extreme soft corner for Audacity, and I consider you, Mr. Trebor, and Mr. Steve, as my elder brothers and a part of my extended family.

I am a witness to Audacity’s past accomplishments. I can’t bear seeing it suffer decadence. It seems that the baton wasn’t cleanly handed over.

Please come back and take over the repair of the project.

I am aware that programming is an LLM-GPT kind of very low-level processing of the subconscious mind. If you have been eating healthily and not drinking alcohol or consuming bad fatty acids, your neural networks are intact and refined.

Newer generations can’t replicate your efficiency attained by experience and refinements.

You are like my brothers. I won’t mislead you, rest assured.

Now please go back to [my post above, numbered 4]( Upgraded to Debian 13 and Audacity 3.7.3, Noise Reduction doesn't work as intended - #4 by bkpsusmitaa ).

Audacity’s Noise Reduction is an imperfect tool. It works pretty well within its limits, but it does have limits, and it is imperfect. You appear to be pushing up against its limits and discovering its imperfections.

Can it be improved?

Probably yes, if someone with the necessary skills, time and inclination invests enough time and effort into it.

Is anyone working on improving it now?

Not as far as I’m aware.

Do you know any developers that have the skills, the time, and the inclination to improve it?

Not at this time.

Mr. Steve, my hero, my leader in Audacity. Finally, you arrive.

I fully agree. FFT is also an approximation.

But you, Mr. Steve and Mr. Trebor, are all missing out on only one aspect: older versions of Audacity used to work the way Noise Reduction was intended to: you have my screenshots as evidence. It didn’t spill over into the zero-noise areas.

Human intervention should improve an outcome, not degrade it. What was there isn’t there any longer.

Can an older version of audacity coexist with the newer version of audacity in a system?

If we are to treat the behaviour that you observe as a “bug”, then we need to isolate simple, repeatable steps that demonstrate the issue.

The steps should begin with the exact version of Audacity, and how you installed it.
If the bug depends on specific audio in the track, then you will need to either describe how to generate that exact audio, or provide the required audio as a file.

If I can reproduce the issue and it is due to a fault in the program, I’ll log it on GitHub so that the developers can fix it.

I had given you exactly that, both the erroneous one and the clean one, in my earlier posts, when I posted my screenshots. I simply downloaded from the Debian repository. Straight. I didn’t alter anything.
Reproducing the error should be straightforward. Insert zero noise, i.e. silence, within a noise sample. Then check.
I have the reason. I don’t have your programming skill and huge experience.

The Debian repository version is built by Debian rather than by the Audacity team. Have you tested the official Audacity (AppImage) version to see if it has the same problem?

No, I didn’t; to me it’s natural to depend on the Debian repository.
Should I forward the issue to the Debian Audacity build/maintainer team instead? Would you help them if they need help to solve this issue?
Before I send it, I want to confirm that I have been able to convey the whole Big Picture to you and the Audacity Team.
The moot point is: “Something that worked exactly as required no longer works the same way, but has become corrupted.”
If you forward your email ID to my email ID, I could forward the Bug Report to you as well, for you to be in the loop and act as liaison.

P.S.:

Thank you, Mr. Steve, for the bytes. [Downloaded the AppImage package](audacity-linux-3.7.5-x64-22.04.AppImage).
Then I tried with a fresh audio file. And yes, the disease is intrinsic to Audacity itself. It spills the same noise over into the silent segments. No influence of the Debian Package Maintainer Team. However, I will report the bug to the Debian Team with the necessary information.
So time for action, Mr. Steve!

I cannot reproduce this on Linux Mint (based on Ubuntu / Debian) with Audacity 3.7.3 AppImage, but I’m not entirely clear how you are testing, so I may be missing something important.

For bug reports, it is very important to find a precise set of steps which others can follow and produce the same (bug) result.

Nothing special. I download a YT podcast in audio format, convert it to a single-track .ogg with Sound Converter, and open the file in Audacity (I even tried with the freshly downloaded AppImage, as said above). I find apparently silent areas with only noise signals, zoom in on the dB (ordinate/y-axis) levels (not time on the abscissa/x-axis), insert a total-silence segment, and select the noisy patch as a sample. Then I apply Noise Reduction, and then try Repeat Noise Reduction.

Thank you, Mr. Steve. You remain my hero.

BTW, Mr. Steve, could you please ask your team to extend the Automatic Closing Date for this thread? This post is going to Close Down automatically in 2 days, I am warned.

There’s the first undefined variable that prevents others from reproducing the exact same results.
Can you reproduce the problem using a well defined audio sample, for example: “10 seconds of Pink noise generated with Audacity’s “Noise” generator, amplitude 0.1”

Is that a necessary step to reproduce the problem?

How are you doing that? How long is the silence? What is before / after the silence?

What are you selecting? What is the “Noisy patch”?

What are you expecting to happen? How does it differ from what you expect to happen?

You may find it helpful to see an example of a bug report: Change Speed effect no longer sample accurate · Issue #3688 · audacity/audacity · GitHub
Notice that you or anyone can follow the “Steps to reproduce” and reproduce the same result. (The “Change Speed” effect has been renamed “Change Speed and Pitch” since that bug was reported.)

I provided you with the steps I follow, the natural way I test. Audio format from YT means a file in m4a format, with YT identifier code 140.

I have checked with 3-second bursts of pink/white/Brownian noise interspersed with ~1-second silences in between, which is required to test the spill-over effect when the Noise Reduction tool is applied. Audacity reduces the noise without spilling over into the silent areas.

So the latest Audacity is perfectly able to clean predictable, mathematically generated artificial noise data.

However, with noise grabbed from natural sources the latest Audacity appears to fail. Moreover, it spills over into the silent segments inserted in places to test the spill-over effect.

I have provided a sample for you and each of my fellow members to test the natural noise sample. Then help me eliminate this noise from a natural set-up.

Steps:
Audacity ⟶ Effects ⟶ Noise Removal and Repair ⟶ Noise Reduction … ⟶ Get Noise Profile, then use Effect ⟶ Repeat Noise Reduction.

More Silence areas may be inserted to check the spill over effect with the tool, if required.