Can't upsample to 24 bit depth.

Hello,

I just picked up Audacity today and have been through a few tutorials. I am able to upsample to a 96 kHz sample rate, but I am unable to upsample to 24-bit depth. I can do 16- and 32-bit depth, but no luck with 24. Did I miss something somewhere? I read the default is 32-bit float. I am, however, only upsampling to listen to 24/96 rates on my favorite CDs. I will not be doing recording or anything like that. If anyone could shed light on why 24-bit depth is out of my reach, that would be great.

Thanks a lot,
Robert

I’m not sure exactly what you are trying to do, or, more importantly, “why”. I get the impression that you want to change the format of your CD tracks to 24-bit / 96 kHz WAV files in the hope that it will improve the sound quality. It won’t. If that is what you are trying to do, then don’t - you will just be wasting your time, because at best the sound quality will be unchanged, and at worst it will be worse than the original.

If I have misunderstood what you are trying to do, please describe it in more detail and I’m sure we will be able to help you accomplish the task (I’m just not sure what “the task” is).

If I’ve got it right, and you want to know why it will not improve the sound quality (or you want to prove to yourself that it does not), then we can help with that.

Hello, and thanks for the quick response.

I’m VERY new to this and still feel a bit out of sorts. But from what I’ve put together (granted, without a lot of background information), bit depth and sample rate can act as a kind of resolution when upsampling. I come from the visual world, where upsampling is used all the time, as it is in the TV world. So, if I’m hearing you right, then a higher bit depth and sample rate are only useful for the recording process and do nothing when upsampling?

Sorry if my questions aren’t clear; I’m still not quite sure EXACTLY what I’m asking, considering how new this is to me.

Thanks,
robert

Increasing the number of samples per second (increasing the “sample rate”) can be useful for a few specific types of processing, for example if you need to align the phase of two separate recordings: it allows finer changes to be made in the time domain. If you think of digital audio as dots placed at regular time intervals on the analog waveform, then increasing the sample rate puts more dots (shorter time intervals) on that waveform. When recording, having more dots (samples) per second means that you can record higher frequencies.

However, putting more dots onto a waveform that has already been digitally recorded makes no difference to the waveform - it just places more dots on the waveform that is already recorded. Doubling the sample rate adds an extra dot between each existing dot on the recorded waveform, but makes no other change to that waveform, so when it is converted back to analog (on playback), the reconstructed analog waveform is exactly the same as it was before you added the extra dots. Using a very high sample rate rarely has any benefit: even 44.1 kHz can accurately represent frequencies well over 18 kHz, which is better than most loudspeakers can reproduce, and beyond most adults’ hearing.
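
If you would like to see this for yourself, here is a minimal sketch in plain Python (assuming NumPy and SciPy are installed - this is just an illustration, not anything Audacity does internally). It doubles the sample rate of a recorded tone and shows that the original dots are untouched:

[code]
# Minimal sketch: upsampling adds dots but no new information.
import numpy as np
from scipy.signal import resample

fs = 44100                            # CD sample rate
t = np.arange(fs) / fs                # one second of sample times
tone = np.sin(2 * np.pi * 1000 * t)   # a 1 kHz tone, well below fs/2

# Double the sample rate: one extra dot between each pair of existing dots.
upsampled = resample(tone, 2 * len(tone))

# Every second dot of the upsampled signal is still an original dot,
# so the recorded waveform itself has not changed.
print(np.max(np.abs(upsampled[::2] - tone)))   # effectively zero
[/code]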

Increasing the “bit depth” (the sample format) is an increase in the number of bits per sample. This is very useful when processing audio, because it allows the processing to be done more accurately, which is why Audacity uses 32-bit float by default. Processing audio changes the sample values in some way; for example, “amplification” multiplies each sample value by a specified amount. When processing in 16-bit format, the processed sample values are rounded to the nearest 16-bit value, and the error introduced by this rounding is called “quantization error”. By increasing the number of bits per sample, quantization errors are reduced. For 32-bit float format, the quantization errors are so small that they are totally insignificant (totally inaudible), whereas 16-bit rounding errors may be just about audible (but very subtle, because even 16-bit is a high quality format). If you are not processing, then increasing the bits per sample does nothing to the sound quality - it simply pads the sample values with additional zeros.
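
To put rough numbers on that, here is a small sketch in plain Python (NumPy assumed; the quantize() helper is made up for illustration). It halves and then doubles a signal, once with 16-bit rounding and once in 32-bit float:

[code]
# Sketch of quantization (rounding) error at different bit depths.
import numpy as np

samples = np.sin(2 * np.pi * np.arange(1000) / 100)   # values in -1.0 .. 1.0

def quantize(x, bits):
    """Round x to the nearest level of a signed integer format with `bits` bits."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

# Halve the volume, round to 16 bit, then double again.
roundtrip_16 = quantize(samples * 0.5, 16) * 2.0
print(np.max(np.abs(roundtrip_16 - samples)))    # about 3e-5: small, but real

# The same chain in 32-bit float keeps far more precision.
roundtrip_f32 = (samples.astype(np.float32) * 0.5) * 2.0
print(np.max(np.abs(roundtrip_f32 - samples)))   # about 1e-7: utterly inaudible
[/code]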

By analogy, say that you have three sample values: 23, 47, -16
If we were processing, say, reducing the volume by half (dividing by two), then our original version would give rounded values, because we are working in whole numbers:
12, 24, -8
and if we then amplified to double the volume (multiplying by 2), we can see the errors:
24, 48, -16 (the first two values should still be 23 and 47).

Now let’s do the same again, but first we will increase the precision by writing the values as: 23.000, 47.000, -16.000
Note that the values are exactly the same as the original sample values, so we have added nothing to the “quality”.
Then when we divide by 2 we get:
11.500, 23.500, -8.000 (working to three decimal places)
Then when we multiply by 2:
23.000, 47.000, -16.000
We have retained the full fidelity, and this is the benefit of processing in a higher-precision sample format.
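
Here is the same example as a few lines of plain Python, if you want to experiment with it (just an illustration, nothing Audacity-specific):

[code]
# The worked example above: low-precision vs higher-precision processing.
original = [23, 47, -16]

# Whole-number (low precision) processing: round after each step.
halved = [round(v / 2) for v in original]   # [12, 24, -8]
doubled = [v * 2 for v in halved]           # [24, 48, -16] - errors in the first two

# Higher precision: keep the fractional part through the whole chain.
halved_hp = [v / 2 for v in original]       # [11.5, 23.5, -8.0]
doubled_hp = [v * 2 for v in halved_hp]     # [23.0, 47.0, -16.0] - exact
[/code]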

Does that help?

Steve,

Thank you for your insightful and detailed response. I’m sure it took some time, and for that I’m grateful. It was very clear and very much to the point, to the degree where I think I understand a greater part of the audio mystery.

It got me thinking (if I may use a visual analogy). In applications like Photoshop, and a host of third-party plug-ins, we interpolate or ‘inject’ the image with new pixels. The software takes half of one pixel, say green, and half of the color of the pixel next to it, and mixes them together to create a new pixel/color. It’s not adding more detail to the image; it is only interpolating. You can, however, enlarge an 11x14 inch image to a 4 ft image with this process, though if you stand too close you will see the upsampling. It doesn’t make the image ‘look’ better or add more detail… it just makes it bigger.
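
In code terms, the kind of mixing I mean is just averaging two neighboring pixels - something like this toy sketch (plain Python, made-up values, purely for illustration):

[code]
# Toy sketch of pixel interpolation: the new pixel is a 50/50 mix of its
# two neighbors. Made-up RGB values, purely for illustration.
left = (0, 200, 0)      # a green pixel
right = (200, 0, 0)     # the pixel next to it

new_pixel = tuple((a + b) // 2 for a, b in zip(left, right))
print(new_pixel)        # (100, 100, 0): an in-between color, but no new detail
[/code]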

As per your example, regarding putting dots on a waveform, I was clearly able to see how it relates to ‘interpolating’ visual data.

So, with that, thank you once again…you’re alright in my books (and you saved me a lot of time)!

One difference between audio & visual is the audio digital-to-analog converter (DAC). It has a smoothing filter, and you get an analog waveform with essentially an infinite number of interpolated points. (8 bits still sounds lousy, but the waveform out of your DAC is “smooth” analog. :wink: )
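
If you’re curious what that smoothing filter amounts to mathematically, here is a rough sketch in plain Python (NumPy assumed; real DACs use more practical filters, but the idealized version is sinc interpolation). The filter effectively evaluates the waveform between the stored dots, and the values land back on the original signal:

[code]
# Rough sketch of band-limited (sinc) interpolation, the idealized version
# of a DAC's reconstruction filter. Real DACs differ in the details.
import numpy as np

fs = 8000
n = np.arange(64)
samples = np.sin(2 * np.pi * 440 * n / fs)   # stored dots of a 440 Hz tone

def reconstruct(t):
    """Interpolate the stored samples at an arbitrary time t (in seconds)."""
    return np.sum(samples * np.sinc(fs * t - n))

# Evaluate halfway between two dots: the result sits on the original sine
# (small edge effects aside, since we only kept 64 samples).
t_mid = 31.5 / fs
print(reconstruct(t_mid), np.sin(2 * np.pi * 440 * t_mid))
[/code]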

Another difference is that “CD quality” (16-bit/44.1 kHz) is already better than human hearing. Imagine you are watching your TV from across a football field… At that distance Blu-ray won’t appear any sharper than VHS tape.

Some audiophiles (and probably even most pro audio engineers) will argue that “high resolution” sounds better. But the guys who do [u]scientific, blind, level-matched listening tests[/u] have demonstrated that there’s no audible difference.

The pro-studio standard is 24-bit/96 kHz, and there may be some benefit to recording at 24 bits, since pros generally record at around -18 dB. At -18 dB you are losing about 3 bits of resolution.
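
The arithmetic behind that “3 bits” figure is the rule of thumb that each bit of depth is worth about 6 dB of dynamic range:

[code]
# The "6 dB per bit" rule of thumb: each extra bit doubles the number of
# levels, and a doubling is worth 20*log10(2), about 6.02 dB.
import math

db_per_bit = 20 * math.log10(2)   # ~6.02
headroom_db = 18                  # peaks around 18 dB below full scale
print(headroom_db / db_per_bit)   # ~2.99 -> roughly 3 bits left unused
[/code]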

One difference between audio & visual is the audio digital-to-analog converter.

Just like the visual world: you inject pixels, getting a number of interpolated points, but the light coming from the image is “smooth” analog… in a manner of speaking, of course.