How to check if a file is “real” 24-bit?

Hello,

I have 2 files:

  • File 1 is presented as 16-bit, 44.1 kHz, 501 kb/s
  • File 2 is presented as 24-bit, 44.1 kHz, 1206 kb/s

My ears tell me that they are identical. And my eyes too, because in Audacity the waveform and the spectrogram are 100% identical.

Does that mean the file presented as 24-bit is just a 16-bit file that has been “upscaled”?
Is there a way to check that? How?

Should I delete the “useless” 24-bit file that takes twice the space on my hard drive? Or is it really better? I have been told that most of the so-called 24-bit files sold on the digital market are “fake” 24-bit, just upscaled 16-bit.

That’s not right for PCM audio.
16-bit at 44.1 kHz is 705.6 kb/s for mono (1411.2 kb/s for stereo).


That’s not right for PCM audio either.
24-bit at 44.1 kHz is 1058.4 kb/s for mono.
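For reference, uncompressed PCM bitrate is just bits per sample × sample rate × channels. A minimal sketch of the arithmetic:

```python
# Uncompressed PCM bitrate in kb/s: bits per sample * sample rate * channels
def pcm_bitrate_kbps(bits, rate_hz, channels=1):
    return bits * rate_hz * channels / 1000

print(pcm_bitrate_kbps(16, 44100))     # 705.6 (mono)
print(pcm_bitrate_kbps(16, 44100, 2))  # 1411.2 (stereo)
print(pcm_bitrate_kbps(24, 44100))     # 1058.4 (mono)
```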

What format are the files?

Hello. Thank you for your reply.
Well, right or not, those are the properties of the files. The files are FLAC.

With another software, file 1 is:

Duration: 235 seconds
Channels: 2
Bits per sample: 16
Sample rate: 44100 Hz
File size: 15210470 bytes
Average bitrate: 515 kbps
Compression ratio: 36%

File 2:

Duration: 235 seconds
Channels: 2
Bits per sample: 24
Sample rate: 44100 Hz
File size: 37357856 bytes
Average bitrate: 1267 kbps
Compression ratio: 59%

Ah, that explains it. FLAC files squash the data together to reduce the file size without losing any sound quality (“lossless compression”). The numbers seem about right for stereo files.

16 bit is sufficient to handle around 100 dB of dynamic range, so unless you’re listening to the loud parts at pain threshold, 16 bits per sample are sufficient. 24 bit gives you even more dynamic range so it can produce both sounds that are too quiet to hear and loud parts that are painfully loud.
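As a rough rule of thumb (an approximation for a full-scale sine against quantization noise, not an exact figure), dynamic range is about 6.02 dB per bit plus 1.76 dB:

```python
# Approximate dynamic range of N-bit linear PCM: ~6.02*N + 1.76 dB
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16), 1))  # about 98 dB
print(round(dynamic_range_db(24), 1))  # about 146 dB
```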

High bit depths are helpful during production (Audacity uses 32 bits per sample), but somewhat overkill for listening to the finished product.

Thank you for this explanation.
Is there a way to check if this 24-bit version of the same track is really 24-bit, or if it’s a 16-bit file that has been “upgraded” or re-exported as 24-bit?

My ears tell me that they are identical.

That’s not surprising since “CD quality” (16-bit/44.1kHz) is better than human hearing. :wink:

Often a good-quality MP3 (lossy compression) can sound identical to the original (in a proper blind listening test).

And my eyes too because in Audacity, the waveform and the spectrogram are 100% identical

The waveform doesn’t have enough visual resolution to show even 16-bits and I doubt the spectrogram does either. The spectrogram sometimes shows a difference at higher/lower sample rates (96kHz vs 44.1kHz) and often the spectrogram will show a difference with MP3, but that doesn’t mean you can hear the difference.


Does that mean the file presented as 24-bit is just a 16-bit file that has been “upscaled”?
Is there a way to check that? How?

It’s not always easy… I found a plug-in called [u]Bitter[/u] that’s supposed to do that. If you Google, you can probably find similar tools.

If it was “simply” upscaled, the 8 least-significant bits will be zero, and that would be easy to test. Those zeros take just as much space in an uncompressed file, so the bitrate is the same for a “fake” or real 24-bit WAV file. But a bunch of zeros is easy to compress, so the FLAC would be about the same size (and bitrate) as the 16-bit file.
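If you have a raw dump of the samples, that test is only a few lines. A sketch assuming signed 24-bit little-endian PCM (“s24le”), where the least-significant byte comes first in each 3-byte sample; the file name is hypothetical:

```python
# In s24le raw PCM each sample is 3 bytes, least-significant byte first.
# If every sample's low byte is zero, the file was very likely just a
# 16-bit file left-shifted by 8 bits.
def looks_upscaled(pcm_bytes):
    return all(b == 0 for b in pcm_bytes[0::3])  # every sample's low byte

# Hypothetical usage with a raw PCM dump:
# with open("myfile-ch0.pcm", "rb") as f:
#     print(looks_upscaled(f.read()))
```

Note that a single nonzero low byte (from dither, for example) defeats this check.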

But simply changing the volume by 0.1 dB, or dithering, or changing the sample rate, or almost any slight effect, will fill all of the bits with data. It wouldn’t be hard to “fake it”.

I tried an MP3 (ripped/converted from a CD) and Bitter shows it as 32-bits which is “mathematically true” but meaningless in terms of audio quality/resolution so it’s not hard to fake-out Bitter.

Should I delete the 24bits “useless” file that takes twice the space on my hard drive?

You already said you can’t hear a difference, so that’s probably an “emotional” decision. If you are doing any editing or processing, there might be an advantage to keeping it at the highest resolution.

I have been told that most of the so-called 24 bits sold on the digital market are “fake” 24 bits, and only upscaled 16 bits.

Probably not “most” if you’re buying from legitimate suppliers. The pro studio standard is 24-bits/96kHz so they certainly can distribute 24-bit files. And, some people share true-24-bit “vinyl rips” even though vinyl is not as good as CD.

Okay. Thank you.
My idea is indeed to edit it. So I suppose it will be better to work with the 24-bit version.
That pretty much confirms what I was thinking: it’s almost impossible to check for “real” 24-bit.

Often a good-quality MP3 (lossy compression) can sound identical to the original (in a proper blind listening test).

Yes, but. MP3 is designed to be an end product. Make an MP3 and full stop. If you make an MP3 from an MP3, the sound quality is going to degrade, and if you do it a third time, you might destroy the work.

Koz

Here is a simple test based on the idea that real audio data won’t compress well as generic data. A file just “converted” from 16 to 24 bits will contain a lot of redundant least-significant zero bits, since the samples will just have been left-shifted by 8 bits. First decompress a single channel to raw PCM using ffmpeg:

ffmpeg -i myfile.flac -map_channel 0.0.0 -f s24le myfile-ch0.pcm

Then compress it using zip:

zip myfile-ch0.zip myfile-ch0.pcm

If the reported compression is in the single digits, then it’s bona fide 24-bit. If, on the other hand, it compresses by 30% or more, it’s almost certainly upscaled.
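The same idea can be done in-process with Python’s zlib (zip uses the same DEFLATE algorithm). A sketch with synthetic data standing in for the two cases; the 30% figure is a rough rule of thumb, not a hard limit:

```python
import random
import zlib

# Fraction of space saved by DEFLATE: near zero for true high-resolution
# audio, much higher when every third byte (the low byte) is zero.
def space_saved(data):
    return 1 - len(zlib.compress(data, 9)) / len(data)

random.seed(0)
# Synthetic "upscaled" data: 24-bit samples whose low byte is always zero
fake24 = b"".join(bytes([0, random.randrange(256), random.randrange(256)])
                  for _ in range(10000))
# Synthetic "true" 24-bit data: all three bytes carry information
true24 = random.randbytes(30000)

print(f"fake: {space_saved(fake24):.0%}, true: {space_saved(true24):.0%}")
```

The “fake” data compresses noticeably, while the “true” data barely compresses at all.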

That is not going to work if the audio has 24-bit dither applied.