Is there a noticeable difference in sound quality between 48 kHz and 96 kHz sampling frequency?
Would there be any point in recording at 96 kHz but playing back at a lower rate (if that's even possible)?
96 vs 48 kHz would effectively double the data volume to process and write to the hard disk (at the same bit depth), right?
Would that translate to half the number of tracks at 96 kHz before capacity limits are hit and playback starts stuttering?
I think I saw someone advise against using more than 48 kHz sampling frequency in some other thread.
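To put rough numbers on the data-volume question above, here is a small back-of-envelope sketch (the 24-bit depth and mono tracks are assumptions for illustration; adjust to your own project settings):

```python
# Uncompressed PCM data rate per track, under assumed settings:
# 24-bit samples, mono tracks.

def bytes_per_second(sample_rate_hz, bit_depth, channels=1):
    """Data rate for one uncompressed PCM track."""
    return sample_rate_hz * (bit_depth // 8) * channels

rate_48k = bytes_per_second(48_000, 24)   # 144,000 bytes/s per track
rate_96k = bytes_per_second(96_000, 24)   # 288,000 bytes/s per track

# Same bit depth, double the sample rate: exactly double the data,
# so roughly half as many simultaneous tracks for the same disk
# throughput before playback starts to struggle.
print(rate_96k / rate_48k)  # → 2.0
```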
Can you hear the difference? …
I can only tell them apart by frequency analysis graphs.
No, I can't hear any difference between them…
That’s not why you use extreme data settings. 32-bit floating point, seemingly a waste of bits, is valuable during effects and processing, when you need to dig a very quiet performance up out of the mud and there are still significant bits in the sound to do it. Some sound processing causes extreme variation in volume, and low bit depths may not be up to it. In some cases it also lets you go over “zero” and rescue clipped performances.
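A tiny pure-Python sketch of that float-headroom point (the sample values are hypothetical): a 32-bit float sample can sit above the nominal ±1.0 full-scale ceiling and be pulled back down later, while an integer recording path flat-tops it at capture time and no later gain change brings the peak back.

```python
def to_int16_and_back(x):
    """Simulate an integer recording path: hard-clip at full scale."""
    clipped = max(-1.0, min(1.0, x))
    return round(clipped * 32767) / 32767

over = 1.5  # a peak that went "over zero" (above 0 dBFS)

# Float path: the information above 1.0 is still there, so -6 dB of
# gain recovers an intact, unclipped peak.
rescued = over * 0.5                    # 0.75

# Integer path: the peak was already flattened to 1.0 when recorded;
# gain afterwards just yields a quieter clipped waveform.
lost = to_int16_and_back(over) * 0.5    # 0.5, still flat-topped

print(rescued, lost)
```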
At 44100, tones above 17 kHz are a guess. Literally. Beyond 17 kHz, unconditional Nyquist accuracy goes away, but that’s OK, because nobody can hear up that far. It’s slightly higher than FM radio goes. 48000, the television standard, is better: it starts guessing at tones above 18.5 kHz.
96000 keeps its unconditional frequency integrity out to 37 kHz, more than enough slip room if you need to do frequency shifting or other tricks that need ultrasonic “sound.” Need to pitch everything down one octave for some reason? 96000 is the only one of the three that will survive that. 44100 will produce garbage at tones above about 8 kHz.
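The octave-down point can be checked with simple arithmetic, using the informal "trustworthy ceiling" figures from the post above (these are rough working numbers from the post, not a spec):

```python
# Rough "trustworthy ceiling" per sample rate, taken from the post.
ceiling_hz = {44_100: 17_000, 48_000: 18_500, 96_000: 37_000}

# Pitching everything down one octave halves every frequency, so
# content that lands at f in the result originally lived at 2*f.
# The highest clean output frequency is therefore ceiling / 2.
for rate, top in ceiling_hz.items():
    print(rate, top / 2)
# 44100 keeps only ~8.5 kHz of clean output (hence "garbage above
# about 8 kHz"); 96000 keeps ~18.5 kHz, so the full audible band
# survives the shift.
```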
Koz