I’m mixing samples from multiple sounds into a single stream. For each sound, I currently track the current sample with a whole number. What I’m considering instead is tracking the elapsed time with a float and calculating the current frame from it, something like “time * sound sample rate”. The benefit of this approach is that each sound can use any sample rate I like, and I can even change the playback speed. Would this be a good idea, or is the loss of precision critical for clear audio? I should also note that I increment the elapsed time with a float as well, i.e. “elapsed += 1.0 / stream sample rate”.
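To make the idea concrete, here is a rough sketch of what I mean. The names (Sound, mixFrame, startTime, playbackSpeed) are just placeholders, and I’m using double here since that seems to be the precision my test script below uses:

```cpp
#include <cstddef>
#include <vector>

// Placeholder sound: raw samples plus the rate they were recorded at.
struct Sound {
    std::vector<float> samples;   // mono for simplicity
    double sampleRate;            // e.g. 22050, 44100, 48000
    double startTime;             // stream time at which playback began
};

// Mix one output frame for the stream; 'elapsed' is the float time kept per stream.
float mixFrame(const std::vector<Sound>& sounds, double& elapsed,
               double streamRate, double playbackSpeed = 1.0)
{
    float out = 0.0f;
    for (const Sound& s : sounds) {
        // Time since this sound started, scaled by playback speed.
        double t = (elapsed - s.startTime) * playbackSpeed;
        if (t < 0.0)
            continue;  // sound has not started yet
        // Current frame index: elapsed seconds times the sound's own sample rate.
        std::size_t frame = static_cast<std::size_t>(t * s.sampleRate);
        if (frame < s.samples.size())
            out += s.samples[frame];
    }
    // Advance the stream clock by one output frame.
    elapsed += 1.0 / streamRate;
    return out;
}
```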
I wrote a script to see how much precision is lost: accumulating 44100 per-sample increments at 44100 Hz returned 1.0000000000004803, but I can’t tell whether that error is significant. The final result is close, but now I worry that float precision is not uniform, that it ‘saws’ or steps as the value grows, which could cause uneven playback.
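For reference, the test was essentially equivalent to this (recreated here in C++; the original was a quick script):

```cpp
#include <cstdio>

int main() {
    const double streamRate = 44100.0;
    double elapsed = 0.0;

    // Accumulate one second's worth of per-frame increments.
    for (int i = 0; i < 44100; ++i)
        elapsed += 1.0 / streamRate;

    // Prints something very close to, but not exactly, 1.0
    // (my script gave 1.0000000000004803).
    std::printf("%.16f\n", elapsed);
    return 0;
}
```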