(Audacity 2.3.1 on macOS 10.14.5)
I was testing whether Audacity's FLAC encoder/decoder is truly lossless, using a 32-bit float WAV of random noise generated within Audacity itself, and I got some tiny round-off errors when comparing the original WAV with the decoded FLAC signal.
I read about it here: https://xiph.org/flac/faq.html#general__samples_fp
It seems that FLAC doesn’t (and probably never will) support floating-point samples, so these errors must come from the float-to-int conversion before encoding and the int-to-float conversion after decoding.
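For anyone curious about the size of the error, here is a rough sketch (not Audacity's actual code) of that float → integer → float round trip, assuming a 24-bit PCM intermediate; the real export path may use a different bit depth:

```python
import numpy as np

# Minimal sketch: simulate the float -> integer -> float round trip that
# happens when 32-bit float audio is exported to (integer-only) FLAC.
# Assumes a 24-bit PCM intermediate; Audacity's actual export settings may differ.

rng = np.random.default_rng(0)
original = rng.uniform(-1.0, 1.0, size=1_000_000).astype(np.float32)

scale = 2 ** 23  # full scale for 24-bit signed PCM
quantized = np.clip(np.round(original * scale), -scale, scale - 1).astype(np.int32)
decoded = (quantized / scale).astype(np.float32)

max_error = np.max(np.abs(decoded - original))
print(f"max round-off error: {max_error:.2e}")  # on the order of 2**-24, roughly 6e-8
```

In this toy example the error stays below half a quantization step, which matches the tiny differences I'm seeing.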
So, shouldn’t Audacity warn the user about this when exporting floating-point audio to a lossless format?
I mean, some people are probably doing this without realising that they are actually incurring some (albeit tiny) losses when exporting to a format they might perceive as completely lossless.
Cheers,
Vasco.