BUT, the same documentation says that in most cases Audacity cannot actually record at anything higher than 16-bit (and almost no analog-to-digital converters sample at higher than 16-bit anyway), so it seems really confusing that we should always work in 32-bit float if the actual audio is only 16-bit.
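To show where my confusion comes from, here is a toy sketch (my own illustration, not Audacity's actual code) of the one benefit I *think* 32-bit float is supposed to give: if an intermediate processing step is forced back into 16-bit, a loud sample can clip and the damage is permanent, whereas a float intermediate survives the round trip.

```python
def clamp16(x):
    """Force a value into the signed 16-bit sample range."""
    return max(-32768, min(32767, int(x)))

sample = 30000  # a loud 16-bit sample

# Boost by 4x, then cut by 4x, storing the intermediate as 16-bit:
fixed = clamp16(clamp16(sample * 4) / 4)      # intermediate clips at 32767

# The same chain with a float intermediate:
floating = clamp16((sample * 4.0) / 4.0)

print(fixed, floating)  # 8191 30000
```

So I can see the point of float *during* processing; what I don't follow is why it matters for my workflow below, where the audio starts and ends at 16-bit.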
I am not asking for a detailed explanation of the internal workings, but in a real-world situation, how does this apply? For example, I routinely record audio on two different computers, and then do editing and sound cleanup on a third computer:
- On computer A, I record streaming audio, which is generally modest quality but rarely requires any editing other than splitting files and normalizing sound levels.
- On computer B, I record LP and tape conversions. These files should be much higher quality, and also often require a lot of post processing.
- I move all the recorded .wav files to computer C for any editing and final library management.
Finally, if I save a final recording as a 16-bit/44100 Hz FLAC file, and then at some later time decide I want to do some additional editing, why should I let Audacity re-convert that same audio back into 32-bit float? My uninformed logic says that would be a pointless conversion.
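Part of why it feels pointless to me: as far as I can tell, the 16-bit-to-float conversion itself loses nothing, since every 16-bit sample value fits exactly in a 32-bit float. This quick check (my own sketch, using a hypothetical /32768 scaling convention) round-trips every possible 16-bit sample through an actual 32-bit float representation:

```python
import struct

def int16_to_float32(s):
    """Scale a signed 16-bit sample into [-1.0, 1.0) and store it
    in a genuine 32-bit float via struct pack/unpack."""
    f = s / 32768.0
    return struct.unpack('<f', struct.pack('<f', f))[0]

def float32_to_int16(f):
    """Scale back to the 16-bit integer range."""
    return int(round(f * 32768.0))

# Round-trip every possible 16-bit sample value.
lossless = all(float32_to_int16(int16_to_float32(s)) == s
               for s in range(-32768, 32768))
print(lossless)  # True
```

So the conversion is at least harmless; my question is whether it actually buys me anything when I'm only doing light edits before re-exporting to 16-bit.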