Can I set the _real_ sampling bit depth?

Hi, I’m using an old iMac G5 with OS X 10.4 (Tiger). Audacity 2.0.0 runs fine, and I want to congratulate the people who made this possible, and also thank you for your continued support for Tiger, which is quite uncommon nowadays. Thanks!!

I’ve a question, though:

When I digitize a sound, the new track is created with the default 32-bit float format. That’s fine, but… what’s the real bit depth being used for sampling? According to my iMac’s specs, its built-in sound card supports PCM16 and PCM24, but not 32-bit floating point, so I guess Audacity is sampling in either PCM16 or PCM24 and then storing the result as 32-bit float. Can I find out whether my samples are being digitized as PCM16 or PCM24? How?

You are correct. Your sound interface is sending either 16-bit or 24-bit PCM to Audacity, and Audacity is converting that to 32-bit float.
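If it helps to see what that conversion amounts to, it is just a scale by the integer full-scale value, so nothing is lost on the way to 32-bit float. A minimal sketch in Python/numpy (not Audacity’s actual code):

```python
import numpy as np

def pcm16_to_float32(samples):
    """Scale signed 16-bit PCM samples to 32-bit float in [-1.0, 1.0)."""
    return samples.astype(np.float32) / 32768.0

def pcm24_to_float32(samples):
    """24-bit samples carried in an int32: divide by 2**23."""
    return samples.astype(np.float32) / 8388608.0

# A full-scale negative 16-bit sample maps to exactly -1.0, silence to 0.0:
print(pcm16_to_float32(np.array([-32768, 0, 32767], dtype=np.int16)))
```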

Go to Audacity menu > Preferences > Quality and set the bit depth to whatever you want. I routinely use 16-bit PCM since my final destination is either CD or MP3, and I am not doing a lot of editing other than deletes and fades, so I don’t feel I need the extra headroom that I’d get with 32-bit.

– Bill

Thanks Bill. But… let’s suppose I set 24-bit PCM there in the Quality preferences. How do I know that I’m really recording at 24-bit, rather than recording at 16-bit and converting to 24-bit on the fly?

I feel like I’m missing a setting for the real recording sample format. The Quality preferences are for the stored track, not for the real recording format (otherwise I wouldn’t be able to select 32-bit float, since my iMac doesn’t support recording in 32-bit float, AFAIK).

Correct.

Audacity does not directly control the bit format of the captured audio. Audacity requests a bit format according to the Quality settings; it is then down to the computer’s sound system to send audio data in the requested format or, if it is unable to do so (for example, if 32-bit float is requested), to substitute a format that it does support, which Audacity then converts to the format specified in Preferences > Quality.
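Roughly, the request works like this (a sketch using the sounddevice Python wrapper around PortAudio; Audacity’s own code is C++ and differs in detail, and note that host APIs often report formats they can convert to as “supported”, which is exactly why the real capture depth is hard to see from this side):

```python
import sounddevice as sd   # Python wrapper around PortAudio

def pick_capture_format(device, preferred=("float32", "int24", "int16")):
    """Return the first format in our preference list that PortAudio
    claims the capture device (or its host API) will accept."""
    for fmt in preferred:
        try:
            sd.check_input_settings(device=device, channels=1, dtype=fmt)
            return fmt
        except sd.PortAudioError:
            continue
    raise RuntimeError("no usable capture format found")

print(pick_capture_format(sd.default.device[0]))
```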

If your system settings specify 24-bit, then the actual capture should be 24-bit. If Audacity’s Preferences > Quality is set to 24-bit or higher (32-bit float), then Audacity should receive the data as 24-bit (and, if Quality is set to 32-bit float, convert it from 24-bit to 32-bit float). Unfortunately this does not happen on Windows, due to a limitation in the version of PortAudio that Audacity uses. As far as I’m aware, this limitation only applies to Windows.

If you want to experiment, see this post: https://forum.audacityteam.org/t/count-different-levels-used-histogram/23908/1
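The rough idea behind that kind of test, if you’d rather roll your own (a Python/numpy sketch on exported samples, not the script from that thread): an undithered 16-bit capture, even when stored as 24-bit or 32-bit float, still has every sample sitting on the 16-bit grid.

```python
import numpy as np

def effective_bit_depth(samples, max_bits=24):
    """Guess how many bits are actually exercised in float samples scaled
    to [-1, 1): find the coarsest quantisation grid they all sit on."""
    for bits in range(8, max_bits + 1):
        scaled = samples * (2 ** (bits - 1))
        if np.all(np.abs(scaled - np.round(scaled)) < 1e-6):
            return bits
    return max_bits   # finer than any grid tested (or dithered/processed)

# A 16-bit capture stored as 32-bit float still sits on the 16-bit grid:
sig = np.round(np.random.uniform(-1, 1, 1000) * 32767) / 32768
print(effective_bit_depth(sig))   # -> 16
```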

More to the point - what is the practical difference between recording 24-bit vs 16-bit?

Thanks a lot steve, also for the scripts you posted. I’ll try them.

I believe there should be some way of knowing the real sample format being used while recording. Otherwise, a user might believe they have a 24-bit track when it’s actually a 16-bit track stored as 24-bit.

For example, it could be a small label next to the mic level meters, showing the real bit depth coming from the input source.

Is there some feature request for this? If not, I’d suggest it.

(btw, sure, maybe there’s little practical difference between 24-bit and 16-bit, but there are some cases, when you want to apply special processing to a particular source, where you really need to know what’s happening behind the scenes…)

Audacity may not know the originating bit depth.
When you record into a 32-bit track, the actual bit depth of the track audio data is 32-bit float and this is displayed in the Track Panel on the left end of the track.

If, for (a hypothetical) example, Audacity requests 32-bit data but the sound card (audio device) only has a 16-bit ADC (Analogue-to-Digital Converter), then the raw capture will be 16-bit. However, the sound card driver may respond to Audacity’s request by padding the 16-bit data to 24-bit. You would then have 16-bit data, padded to 24-bit, received by Audacity as 24-bit and converted to 32-bit float. If Audacity displayed 24-bit (the actual format that it receives), that would be misleading, but Audacity cannot know that the original capture was 16-bit, because by the time it reaches Audacity it is 24-bit.
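To make the padding point concrete (a toy sketch, not actual driver code): 16-bit samples shifted up by 8 bits are bit-for-bit identical to genuine 24-bit samples whose low byte happens to be zero, so the receiving application has no way to tell them apart.

```python
import numpy as np

# 16-bit capture, padded to 24-bit by the driver (shift left by 8 bits):
raw16 = np.array([-32768, -1, 0, 1, 32767], dtype=np.int32)
padded24 = raw16 << 8

# Genuine 24-bit samples whose low byte happens to be zero are identical,
# so the receiving application cannot tell which one it was sent:
true24 = np.array([-8388608, -256, 0, 256, 8388352], dtype=np.int32)
print(np.array_equal(padded24, true24))   # -> True
```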

There IS a practical difference between 24-bit and 16-bit data capture. My question was not rhetorical. The difference is dynamic range. Consequently, the practical issue is “how much dynamic range do you need?”
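As a rough rule of thumb (the standard quantisation-noise approximation, nothing Audacity-specific), each extra bit adds about 6 dB of dynamic range:

```python
def dynamic_range_db(bits):
    """Theoretical dynamic range of ideal N-bit PCM: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

print(dynamic_range_db(16))   # ~98 dB
print(dynamic_range_db(24))   # ~146 dB
```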

Well, yes, the audio card driver can “lie” by padding the input, but… somehow I believe it’s best to provide as much detail as possible in an audio app. For example, I’d prefer to have a combo box listing the sample formats supported by the audio card (yes, even if the driver pads them… a user who is interested in these details already knows what sample formats their card really supports, and seeing the list of formats found by Audacity would be a sign that either “everything looks fine in the system” or “something is misconfigured”).

Anyway, Audacity is great :smiley: I just miss a bit of low-level info regarding what the card driver supports.
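For the record, outside Audacity I can approximate that list with a small script (just a sketch with the sounddevice Python wrapper around PortAudio; formats the driver merely converts to will still show up as accepted):

```python
import sounddevice as sd   # Python wrapper around PortAudio

# For every capture device, list the sample formats PortAudio will accept.
for index, dev in enumerate(sd.query_devices()):
    if dev["max_input_channels"] < 1:
        continue
    formats = []
    for fmt in ("int16", "int24", "int32", "float32"):
        try:
            sd.check_input_settings(device=index, channels=1, dtype=fmt)
            formats.append(fmt)
        except sd.PortAudioError:
            pass
    print(f"{index}: {dev['name']} - {', '.join(formats) or 'none reported'}")
```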

Look in “Help menu > Audio Device Info”

This is what I get. No mention of sample formats, although maybe Core Audio virtualizes that; I’m not sure:

==============================
Default capture device number: 0
Default playback device number: 0

Device ID: 0
Device name: Built-in Audio
Host name: Core Audio
Input channels: 2
Output channels: 2
Low Input Latency: 0.010000
Low Output Latency: 0.001361
High Input Latency: 0.100000
High Output Latency: 0.013605
Supported Rates:
8000
9600
11025
12000
15000
16000
22050
24000
32000
44100
48000
88200
96000
192000

Selected capture device: 0 - Built-in Audio
Selected playback device: 0 - Built-in Audio
Supported Rates:
8000
9600
11025
12000
15000
16000
22050
24000
32000
44100
48000
88200
96000
192000

Available mixers:

Available capture sources:
0 - Internal microphone
1 - Line In

Available playback volumes:
0 - PCM

Capture volume is native
Playback volume is native