Rate and depth matching

I am a bit confused and would like opinions, please. For now, this is really about maximizing quality when the final mix-down is CD quality. Also, this is not about feeding an analog signal to the PC/Audacity…

Audacity’s recommendations/defaults are 44.1/32-float. Using an iface (whether it is a USB mic or something else), the incoming signal to the PC has already been converted to digital (DAC in an iface, or in the USB mic itself). Shouldn’t Audacity’s recording settings be set to match the incoming USB signal?

Example: my iface has a hard-set depth of 24 and prefers 48.2. Shouldn’t the track/project/default be set to match that? Put another way, what good is a higher sample rate or depth if it is higher than the incoming signal, as it will never be “better”?

Thanks… Ernie

Example: my iface has a hard-set depth of 24 and prefers 48.2. Shouldn’t the track/project/default be set to match that? Put another way, what good is a higher sample rate or depth if it is higher than the incoming signal, as it will never be “better”?

I’m not sure what you meant by “prefers”…

As far as I know, you can’t really change the bit depth of the hardware. Audacity uses floating point internally for “technical reasons”, and you can convert losslessly from 16- or 24-bit to 32-bit float and back.

The sample rate may also sometimes be fixed in hardware, and the drivers do a pretty good job of hiding any conversions (just as bit-depth conversions are done automatically without telling you). Sample rate conversions are NOT mathematically lossless or perfectly reversible, and IMO it’s “good practice” to avoid unnecessary conversions where possible.
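
That irreversibility is easy to see with a toy resampler. The sketch below is a deliberately crude linear-interpolation converter (pure Python, names mine; Audacity and audio drivers use much better windowed-sinc filters, but those are not perfectly reversible either). It takes a 48 kHz sine down to 44.1 kHz and back, then measures the round-trip error:

```python
import math

def resample_linear(samples, src_rate, dst_rate):
    """Crude linear-interpolation resampler, for illustration only.
    Real converters use windowed-sinc filtering."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate   # fractional position in the source
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)  # interpolate between neighbours
    return out

# 10 ms of a 1 kHz sine at 48 kHz
sr = 48000
sine = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(480)]

down = resample_linear(sine, 48000, 44100)
back = resample_linear(down, 44100, 48000)

# The round trip does NOT reproduce the original samples exactly
err = max(abs(a - b) for a, b in zip(sine, back))
print(f"max round-trip error: {err:.6f}")
```

The error is small (and with a good converter it would be far smaller, below audibility), but it is not zero — unlike the bit-depth conversions discussed below, which can be exact.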

But conversions between 48 and 44.1 kHz (either way) usually ARE audibly perfect. Pro studios use 24/96, so virtually everything on CD has been downsampled, and that probably includes older recordings originally made on analog tape. Technically/theoretically, Audacity’s “high quality” conversion setting is better than letting the drivers do it, so if your interface works internally at 48 kHz you might want to record at 48 kHz and change the project rate to 44.1 kHz before exporting, but it’s not worth losing sleep over…

As long as you’re at “CD quality” or better, conversions are almost never the weak link or anything to worry about… Most sound-quality issues are on the analog side.

the incoming signal to the pc has already been converted to Digital (DAC)

Actually the ADC. :wink: The DAC is for playback.

Your interface may be 24-bit 48000 Hz (48 kHz). It’s very unlikely to be 48.2 kHz as that’s not a standard sample rate and so is likely to cause problems for most software.

Converting from 24-bit to 32-bit float is totally lossless, so no problem there.
32-bit float is better for working with the audio: it gives greater precision when processing, is usually a bit more efficient on modern PCs, does not clip at 0 dB, and has much better low-level precision.
Audacity works internally in 32-bit float, so by converting (losslessly) to 32-bit from the start, there is only one sample format conversion (to 16-bit) on the final mix-down / export. If you record at 24-bit, then there is a conversion from 24-bit to 32-bit float and back to 24-bit every time you apply any effect or process to the audio.
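
The reason the 24-bit ↔ 32-bit-float round trip is lossless: IEEE 754 single precision has a 24-bit significand, so every signed 24-bit integer sample fits exactly. A quick sketch (pure Python; the function names and the divide-by-2^23 normalization convention are mine, though that scaling is a common one):

```python
import struct

FULL_SCALE = 8388608  # 2**23, full scale for signed 24-bit samples

def to_float32(sample_24bit):
    """Map a signed 24-bit integer sample into the -1.0..1.0 float range,
    then force storage as an actual 32-bit float via struct."""
    f = sample_24bit / FULL_SCALE
    return struct.unpack('<f', struct.pack('<f', f))[0]

def back_to_24bit(f):
    """Invert the normalization; multiplying by a power of two is exact."""
    return int(round(f * FULL_SCALE))

# Edge cases and arbitrary values all survive the round trip exactly,
# because any 24-bit integer fits in float32's 24-bit significand.
samples = [-8388608, -7654321, -1, 0, 1, 1234567, 8388607]
assert all(back_to_24bit(to_float32(s)) == s for s in samples)
print("24-bit -> float32 -> 24-bit round trip is exact")
```

The same argument is why 16-bit converts losslessly too; it is sample-*rate* conversion (above) that can never be exactly undone.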

Yes, you could record at 48000 Hz when recording from a 48 kHz device and convert to 44.1 kHz (required by audio CDs) at the end, or you could record at 44100 Hz from the start and not need to convert again. Either way there is one conversion from 48000 to 44100. The marginal benefit of converting to 44100 at the start is that it’s a little easier on the computer: a bit less RAM and a bit less disk space. It’s unlikely to be a significant difference, though.
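
For a rough sense of the disk-space side of that trade-off, here is a small back-of-the-envelope sketch (my numbers, not from the thread; it assumes uncompressed mono 32-bit float, so stereo would double the figures):

```python
# Uncompressed storage cost per minute of mono 32-bit-float audio
bytes_per_sample = 4   # 32-bit float
seconds = 60

for rate in (44100, 48000):
    mb = rate * bytes_per_sample * seconds / 1e6
    print(f"{rate} Hz: {mb:.1f} MB per minute")

# 44100 Hz: 10.6 MB per minute
# 48000 Hz: 11.5 MB per minute  (about 8% more)
```

So recording at 48 kHz costs roughly 8% more space and RAM than 44.1 kHz — real, but rarely significant on a modern machine.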

These days it’s common for studios to use 32-bit float / 96 kHz, though you’re correct that 24-bit was popular for a very long time, primarily because Pro Tools stuck with 24-bit (it now supports 32-bit float).

I don’t know any professionals these days that use 24-bit when mixing, because of the risk of clipping.
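
That clipping risk is the key difference between the formats: a fixed-point sample has nowhere to go above full scale, while a float sample simply stores the over-full value, so the gain can be pulled back later with nothing lost. A minimal illustration (pure Python; the function name and int16 example are mine — the same logic applies to 24-bit):

```python
def int16_clip(x):
    """Convert a normalized float sample to int16; values beyond
    full scale hard-clip, permanently flattening the waveform."""
    return max(-32768, min(32767, int(round(x * 32767))))

boosted = 1.5  # a float sample pushed about 3.5 dB over full scale

# Fixed point: the overshoot is destroyed at the moment of conversion
print(int16_clip(boosted))   # pinned at 32767, distortion baked in

# 32-bit float: 1.5 is stored as-is, so lowering the gain afterwards
# (e.g. multiplying by 0.5) recovers an undistorted signal
print(boosted * 0.5)         # 0.75, no information lost
```

This is why mixing in float is so forgiving: an over-hot intermediate mix can be fixed by turning it down before export, whereas an over-hot integer mix cannot.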

These days it’s common for studios to use 32-bit float / 96 kHz,

Right! But I believe the ADCs/interfaces are 24-bit, and that the raw original files are saved as 24-bit, so maybe I should have said record at 24 bits.



Off topic, but I’ve wondered if the “tradition” of pros recording at around -18dB is related to Pro Tools originally working/mixing in 24-bit integer…

Yes. Virtually all pro quality audio interfaces are 24-bit.

Possibly. Most studios have their “house rules” for exactly how, when, where and in what format backups are done.
One thing’s for sure: they all make backups :wink: