I’m curious about how Audacity 3.1.3 (on Windows 10) downsamples to a WAV file when the Default Sample Rate for recording is greater than the Project Rate for export. For example, suppose one records at 88.2 kHz and exports at 44.1 kHz. Since this is exactly half the recorded rate, one might imagine that Audacity would just throw out every other sample, but that can’t be the “best” way. There would presumably need to be an anti-aliasing filter involved, correct? On the other hand, if the recorded sample rate is 96 kHz and the target is still 44.1 kHz (not an even divisor), something more complex clearly must happen. Can anyone tell me what?
Since there would be computation going on (in the default 32-bit float) in the above case, even with dithering set to “None,” one assumes that the less significant bits would all become populated, so that “Lossless Audio Checker,” for example, would no longer find the exported file “Clean” (not upsampled), correct? In that case, might it help to set the Default Sample Format to an integer format, “Signed 16-bit PCM” for example?
I’ve been puzzled about how to verify the actual bit depth coming in from an external ADC – there doesn’t seem to be a way to get Audacity to tell me directly. Even when exporting at the same sample rate and bit depth, “Lossless Audio Checker” sometimes appears to give the wrong answer. Might an integer Default Sample Format solve this problem, and even allow manual inspection of individual samples in the exported WAV file (with what tool, I don’t know)? – Clark2
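(On the manual-inspection question: one can read the raw integer samples — e.g. with Python’s standard wave module, or NumPy once they are decoded — and check how many low-order bits are actually in use. This is only one heuristic, and I don’t know what Lossless Audio Checker does internally; the sketch below fabricates a 16-bit signal padded into a 24-bit container just to show the idea.)

```python
import numpy as np

def unused_low_bits(samples):
    """Count trailing zero bits shared by every nonzero sample."""
    s = np.abs(samples[samples != 0]).astype(np.int64)
    if s.size == 0:
        return 0
    combined = np.bitwise_or.reduce(s)  # a bit is "used" if set in any sample
    count = 0
    while combined & 1 == 0:
        combined >>= 1
        count += 1
    return count

rng = np.random.default_rng(1)
src16 = rng.integers(-32768, 32768, 1000)   # genuine 16-bit material
as24 = src16 * 256                          # padded into a 24-bit container
print(unused_low_bits(as24))                # 8 -> effectively 16-bit audio in a 24-bit file
```

If an ADC advertised as 24-bit consistently shows several unused low bits across real recordings, that would suggest a lower effective bit depth (though float processing, as noted above, would mask this).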
The short version:
Audacity uses the SoX Resampler library for resampling.
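I don’t know the SoX internals offhand, but one standard way to handle a non-integer ratio like 96000:44100 is to reduce it to 147:320 and do a polyphase rational resample — conceptually upsample by 147, low-pass filter, then downsample by 320. SciPy’s resample_poly (not SoX, but the same family of technique) shows the arithmetic:

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

fs_in, fs_out = 96000, 44100
g = gcd(fs_out, fs_in)                  # 300
up, down = fs_out // g, fs_in // g      # 147 and 320

t = np.arange(fs_in) / fs_in            # one second at 96 kHz
x = np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone

y = resample_poly(x, up, down)          # polyphase rational resample
print(len(y))                           # 44100: one second at the new rate

peak = np.argmax(np.abs(np.fft.rfft(y))) * fs_out / len(y)
print(peak)                             # ~1000 Hz: the tone survives intact
```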
The long version: