I have tried searching, but I either cannot find the right info or I am just too dense to understand this. The documentation, and many posts on this forum, clearly state that Audacity uses 32-bit float for internal processing, and it is strongly suggested that we not change that in the Quality settings. I got that.
BUT, the same documentation says that in most cases Audacity cannot actually record at anything higher than 16-bit (and most consumer analog-to-digital converters don't capture higher than 16-bit anyway), so it seems really confusing that we always need to use 32-bit float internally if the actual audio is only 16-bit.
I am not asking for a detailed explanation of the internal workings, but in a real world situation, how does this apply? For example, I routinely record audio on two different computers, and then do editing and sound cleanup on a third computer:
- On computer A, I record streaming audio which is generally modest quality, but rarely requires any editing other than splitting files and normalizing sound levels.
- On computer B, I record LP and tape conversions. These files should be much higher quality, and they also often require a lot of post-processing.
- I move all the recorded .wav files to computer C for any editing and final library management.
So should I be saving ALL recordings as 32-bit float .wav files until I have completed final editing, or are there some lower-quality recordings where it just will not make any difference at all?
Finally, if I save a final recording as a 16-bit/44100 Hz FLAC file, and then at some later time decide I want to do some additional editing, why should I then let Audacity re-convert that same audio back into 32-bit float? My uninformed logic says that would be a pointless conversion.
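For what it's worth, here is a little experiment I tried to convince myself one way or the other. It is just a NumPy sketch (the variable names and the -30 dB/+30 dB gain values are mine, not anything from Audacity): apply a gain reduction and then the matching gain boost to the same 16-bit samples, once rounding back to 16-bit integers after every step, and once staying in float and only converting back at the very end.

```python
import numpy as np

# Fake "recording": one second of random 16-bit samples.
rng = np.random.default_rng(0)
original = rng.integers(-32768, 32767, size=48000, dtype=np.int16)

# A -30 dB cut followed by a +30 dB boost; mathematically a no-op.
gain_down = 10 ** (-30 / 20)
gain_up = 10 ** (30 / 20)

# 16-bit path: every intermediate result is forced back to int16,
# so the low-order bits are thrown away by the cut and can never
# be recovered by the boost.
int16_path = (original * gain_down).astype(np.int16)
int16_path = (int16_path * gain_up).astype(np.int16)

# Float path: convert once, do all the processing in float,
# round back to int16 only at the very end.
float_path = original.astype(np.float32)
float_path = (float_path * gain_down) * gain_up
float_path = np.round(float_path).astype(np.int16)

print("16-bit path max error:", np.abs(int16_path - original).max())
print("float path max error: ", np.abs(float_path - original).max())
```

The integer path comes back with errors of dozens of counts per sample, while the float path round-trips essentially perfectly. If that is roughly what is going on inside Audacity, it would at least explain why the intermediate format matters even when the source and destination are both 16-bit.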