FLAC silence not silent?

Hello,

I’m running Audacity 2.2.2, the latest version for openSUSE 15.2. I tried a simple test: recording silence into a FLAC file. When I import that file back into Audacity, the sample data is all non-zero! Small, but non-zero, somewhere in the -60 dB range. Why is it not zero, or even all that close? I would expect at least -96 dB, which is what I should get with 16-bit FLAC. Help?

What I did:
Start Audacity with a blank project.
Tracks->Add New->Stereo Track, at 44100 Hz Project Rate
Generate->Silence to generate 1 second of silence into this track.
Analyze->Sample Data Export to verify 44100 samples of 0. OK so far.
File->Export->Export Audio, saving as FLAC file, 16 bit depth, level 5 compression, called it sil.flac
Closed Audacity without saving the project.
Restarted Audacity with a blank project.
File->Import Audio… to import sil.flac
Analyze->Sample Data Export to verify the data. Almost all entries are NON-zero; most are +/- 0.00003, 0.00006, or 0.00009.
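For what it’s worth, those values look suspiciously like whole multiples of one 16-bit step. A quick Python sanity check of that arithmetic (nothing Audacity-specific here):

```python
import math

# One least-significant bit at 16 bits, on a -1.0..+1.0 float scale.
lsb = 1.0 / 32768
print(lsb, 2 * lsb, 3 * lsb)   # about 0.0000305, 0.0000610, 0.0000916 -- my observed values
print(20 * math.log10(lsb))    # about -90.3 dBFS for a 1-LSB peak
```

So the imported data is bouncing around by one to three LSBs of a 16-bit sample.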

Bummer! So much for lossless being lossless.
Also interesting: doing this three times to generate three files produces three different Frame Size Min/Max pairs (as shown by exiftool).
Is there some random seed in FLAC generation? Why wouldn’t 44100 zero samples compress identically 3 times in a row?

I don’t have another way to read the sound data in a FLAC file, so I can’t tell whether the non-zero data appears during the writing of the file or the importing of it (or both). I’ll attach my file here; if someone else can tell whether this FLAC file encodes 44100 “0” data entries or not, that would be helpful. I would think that 44100 zero entries would compress really, REALLY well, yet the file is 27 kbytes, not all that small. Hence, I suspect the file is being written with non-zero data.

I suspect rounding errors in Audacity’s internal 32-bit-float to 16-bit-integer conversion, but a 32-bit float 0 should convert exactly to integer 0, I would think. And regardless, a normalized IEEE single-precision float has 24 bits of precision (23 stored plus an implicit leading bit), plenty to exactly encode every 16-bit integer. What’s going on? Can anyone else reproduce this? Is the openSUSE version a bad build? Is this a general problem with Audacity and FLAC? Is there another mechanism to write truly lossless FLAC files from data captured in Audacity?
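To convince myself the conversion itself can’t be the culprit, here’s a quick round-trip check I’d try in numpy (my own sketch of the scaling, not Audacity’s actual code, which may use a slightly different scale factor):

```python
import numpy as np

# Every 16-bit integer is exactly representable in a 32-bit float (24-bit
# significand), so scaling to -1.0..1.0 and back should round-trip perfectly.
ints = np.arange(-32768, 32768, dtype=np.int64)
floats = (ints / 32768.0).astype(np.float32)
back = np.round(floats.astype(np.float64) * 32768.0).astype(np.int64)
print(np.array_equal(ints, back))   # True: the conversion is exact, and 0.0 -> 0
```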

–Dave

It’s [u]dither[/u], which you can turn off.

The theory is that you should dither whenever you reduce the bit depth, and it’s supposed to sound better than plain quantization noise. Audacity works internally in 32-bit floating point and doesn’t keep track of the original bit depth, so you have to keep track yourself and then dither or not.

But certain effects or operations can increase the effective resolution, so you might want to dither. For example, if you import a 16-bit file and fade it out, the floating-point fade-out will go below -96 dB, beneath the 16-bit quantization-noise floor. So it MIGHT sound better dithered.
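In code terms, dither just means adding a tiny amount of random noise right before quantizing. A minimal sketch of plain TPDF dither (Audacity’s default is shaped dither, which spreads a little wider, so this shows only the basic idea, not Audacity’s actual implementation):

```python
import numpy as np

def float_to_int16(samples, dither=True):
    """Quantize -1.0..1.0 floats to 16-bit ints, optionally with TPDF dither."""
    rng = np.random.default_rng()
    scaled = samples * 32767.0
    if dither:
        # Triangular (TPDF) dither: the sum of two uniform +/-0.5 LSB noises.
        scaled = scaled + rng.uniform(-0.5, 0.5, samples.size) \
                        + rng.uniform(-0.5, 0.5, samples.size)
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)

silence = np.zeros(44100, dtype=np.float32)
print(np.unique(float_to_int16(silence, dither=False)))  # [0]: silence stays silent
print(np.unique(float_to_int16(silence)))                # [-1  0  1]: dither noise
```

That also explains the three-different-files puzzle above: the added noise is random, so each export quantizes the same silence differently.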

But at 16 bits or better, under normal listening conditions, you can’t hear the quantization noise or the dither, so IMO it’s not a big deal one way or the other.

Or when you record from an analog source (especially with a microphone), the analog (and acoustic) noise will exceed the dither noise (at 16 bits or better), so the signal is self-dithered and there’s no need to add more dither noise.

At 8 bits you CAN hear quantization noise. Like regular analog noise, quantization noise is most noticeable when the signal is quiet. But unlike analog noise, it goes away completely with digital silence. Someone here was recently working with 8-bit files and preferred to turn dither off, even though dither is supposed to be better. I’ve never done that experiment.
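For a rough sense of scale, the usual back-of-the-envelope estimate puts the RMS of undithered quantization noise at one LSB divided by sqrt(12); nothing here is specific to Audacity:

```python
import math

# Approximate RMS floor of undithered quantization noise, relative to full
# scale (peaks at +/-1.0, so one LSB spans 2/2**bits of the range).
for bits in (8, 16):
    lsb = 2.0 / 2 ** bits
    print(bits, round(20 * math.log10(lsb / math.sqrt(12)), 1), "dBFS")
# 8 bits -> about -53 dBFS (audible on quiet material)
# 16 bits -> about -101 dBFS (not audible at normal levels)
```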

Thank you Doug. I wish I’d asked sooner!

Shows how naivete can get you into trouble. Gee, I thought, I’m starting with a 16-bit source, using a 32-bit intermediary, and saving at 16-bit. What could possibly go wrong? Dither. Harumph.

Live and learn. And cry a little.

–Dave

You can turn dither off in Audacity, if reduced FLAC size is important … Dither - Audacity Manual

Thank you, Trebor.
I’m not crying over a few (cheap) bits of memory.

The crying is because I’m just starting post-production after 8 months of recording, and instead of having hundreds of gigabytes of clean 16-bit sound, I now learn that every single sample I recorded was corrupted because I failed to get a setting correct. So much for my first experiment with digital recording.

The good news is I’m unlikely to make the same mistake again!

–Dave

“corrupted” is a strong word for dither noise; it’s not conspicuous.

If you must have absolute flat-line digital silence, a noise gate could be applied to your existing recordings …
https://manual.audacityteam.org/man/noise_gate.html
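For batch work outside Audacity, the core idea is simple enough to sketch. Here’s a crude hard gate of my own (a hypothetical helper, with none of the attack/release smoothing the real Noise Gate effect has):

```python
import numpy as np

def hard_gate(samples, threshold_db=-70.0, block=512):
    """Zero out any block whose peak is below the threshold (crude hard gate)."""
    out = samples.copy()
    limit = 10.0 ** (threshold_db / 20.0)
    for start in range(0, out.size, block):
        chunk = out[start:start + block]   # a view into `out`
        if np.abs(chunk).max() < limit:
            chunk[:] = 0.0                 # force true digital silence
    return out
```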

Gating to absolute silence isn’t necessarily a good thing: speech can become jarring/choppy.

It should be about -84 dB RMS, with the frequency content of the noise heavily biased towards the frequencies that are least audible. At normal listening levels the dither noise should be inaudible, so although not ideal, it shouldn’t be spoiling the listening experience.

Yeah, the manual page for dither also states that it has a de minimis effect. But in my experiments (generating silence, writing it to a file once, reading that file back in, and scanning it with Silence Finder), I have to “raise” the silence threshold to -67 dB before Silence Finder identifies it as silence. At -68 dB there are at least brief glitches of sound that break up the (originally completely silent) block. That’s where the -60 dB number came from: peaks in the mid-to-upper -60s even though the RMS is in the mid -80s. Single-bit noise at 16 bits should be in the mid -90s, though, shouldn’t it? Do you really need to go up 30 dB to mask rounding-error effects? I guess smarter and more knowledgeable people than me say yes.
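For anyone who wants to reproduce the measurement outside Audacity, this is roughly what I did, sketched with the Python soundfile module (sil.flac is the file from my first post):

```python
import numpy as np
import soundfile as sf   # pip install soundfile

data, rate = sf.read("sil.flac")               # floats in -1.0..1.0
mono = data if data.ndim == 1 else data[:, 0]  # just look at one channel
peak_db = 20 * np.log10(np.abs(mono).max())
rms_db = 20 * np.log10(np.sqrt(np.mean(mono ** 2)))
print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS")  # mid -60s vs mid -80s for me
```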

The painful part is that it was an unnecessary mistake on my part. A too-cursory reading about dithering led me to believe that dithering happened only when there was actual rounding during the 32-to-16-bit conversion. I knew I wouldn’t have any rounding, so I got sloppy. I could have read the manual more closely (it’s all documented), and three different workarounds exist: dither off, rectangular dither, or working in 16-bit internally. I got distracted by the promise of lossless file processing and conversion, and by my typical experience with free software packages, whose often hotly debated default settings tend to be ideal for simple uses of the program. In this case the default is set for typical use, and my simple use (with insufficient care by me) exposed a side effect. No one to blame but myself.

But the violation of expectations (the whole advantage of digital processing was supposed to be bit-perfection) really stings. Hence my use of the word “corrupted” (which I stand behind). Audacity had an exactly correct number and deliberately changed it (at my misguided request!) to an incorrect number, not just once or twice, or under unusual circumstances, but on every single sample. Irreversibly. I just didn’t see that coming.

This is a subject that has been under recent discussion and is currently being addressed by the developers. See: