Importing ALAC and AAC: Shooting yourself in the foot?

OS: macOS 10.12.6 or 10.8.5 (happens on both systems)
Audacity: Version 2.2.1

Default Sample Format is set to “32-bit float” in the preferences. This is usually reflected in the track control panel, too. However, after importing a (16-bit) Apple Lossless or AAC file, Audacity will indicate “16-bit PCM” there. Both file types have the extension “.m4a” under macOS.

When I convert the ALAC or AAC file to any other 16-bit format such as WAV, FLAC or AIFF before feeding it to Audacity, Audacity will use 32-bit float, as expected.

I have compared the sizes of the resulting project and data files in both cases: when opening an ALAC file and saving it as a project, the resulting amount of data is only about 50% of what I get when reading the same (!) 16-bit audio data from an AIFF file and saving that as a project. So it is not just wrong information shown in the track control panel; the problem seems to be real.
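(For what it’s worth, the ~50% figure is consistent with that interpretation: a 16-bit sample occupies 2 bytes, a 32-bit float sample 4 bytes, so storing the audio as 16-bit PCM instead of 32-bit float should roughly halve the amount of data, ignoring project overhead.)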

(Still worse: When opening such a saved 16-bit PCM project again, Audacity will misleadingly show “32-bit float” in the track control panel.)

On macOS – in contrast to Linux and Windows – ALAC and AAC files are decoded by means of the operating system; the FFmpeg library is not required. At least, that’s how I understand it. Could it be that the bug is rooted here and is macOS-specific? Yet there is – of course – no reason to degrade the computational accuracy to 16-bit fixed point just because of this.

I agree this is confusing, but it is working “as designed”.
Audacity uses a number of different third party “importers” for importing audio. If Audacity is set to use 32-bit float by default and the importer supports 32-bit float, then you will get 32-bit float. If the importer sends the audio data in a lower format, then Audacity makes no attempt to convert it to 32-bit unless you explicitly (manually) change the track format to 32-bit float.
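Very roughly, and purely as an illustration (the names below are made up, this is not Audacity’s actual code), the decision amounts to something like:

```cpp
// Hypothetical sketch of the behaviour described above.
// Illustrative names only, not Audacity's real API.
enum class SampleFormat { Int16, Int24, Float32 };

SampleFormat ChooseTrackFormat(SampleFormat importerNative,
                               SampleFormat defaultPreference,
                               bool importerCanDeliverFloat)
{
    // If the importer can deliver samples in the preferred format,
    // the preference wins (WAV, FLAC and AIFF behave like this).
    if (defaultPreference == SampleFormat::Float32 && importerCanDeliverFloat)
        return SampleFormat::Float32;

    // Otherwise the track simply keeps whatever the importer delivers
    // (the ALAC path described above), until the user changes the
    // track format by hand.
    return importerNative;
}
```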

To manually change the track sample format (after importing), see: http://manual.audacityteam.org/man/audio_track_dropdown_menu.html#format

“… as designed”? This is utterly stupid design! :imp:

When I select a default format of “32-bit float”, I expect Audacity to respect this, regardless of how it is achieved. I don’t care whether the conversion from 16-bit fixed point to 32-bit float is done by the “importer” or by Audacity itself.

Or, in other words: what are “preferences” good for if they are not respected?

As I have said myself many times. However, the ‘best’ behavior (for sound quality) is not quite as simple as you are suggesting.

Audacity works internally in 32-bit float format, so if any processing is applied, the audio is processed as 32-bit float, and the resulting samples will (mostly) be values that cannot be expressed exactly in 16-bit. Thus, if the audio is subsequently exported as 16-bit, the sample values need to be rounded in some way to 16-bit values. Simple mathematical rounding does not work well with audio because it causes “quantization noise” (which can sound like an unpleasant “rasping” when the audio level is very low), so audio applications use a special kind of rounding called “dither”.
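To make “dither” a little more concrete, here is a minimal sketch of converting a 32-bit float sample to 16-bit with triangular (TPDF) dither. This is just the general technique, not Audacity’s actual implementation (Audacity’s own dither settings live in the Quality preferences):

```cpp
// Minimal illustration of float -> 16-bit conversion with TPDF dither.
// Not Audacity's code; just the general idea.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>

int16_t FloatToInt16WithDither(float sample, std::mt19937 &rng)
{
    // Triangular (TPDF) dither: the sum of two independent uniform values,
    // giving roughly +/- 1 LSB of noise at the 16-bit level.
    std::uniform_real_distribution<float> uni(-0.5f, 0.5f);
    float dither = uni(rng) + uni(rng);

    // Scale to the 16-bit range, add the dither, then round and clamp.
    float scaled = sample * 32767.0f + dither;
    scaled = std::max(-32768.0f, std::min(32767.0f, scaled));
    return static_cast<int16_t>(std::lrint(scaled));
}
```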

In an ideal world, if the audio file is in uncompressed 16-bit format, only simple cut/paste-type edits occur, and the export format is also uncompressed 16-bit, then “perfect” quality would be achieved by not converting the format at all. With no processing, the 16-bit sample values remain exact 16-bit values, so dither is not required (and should not be applied).
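The reason no dither is needed in that case: every 16-bit value is exactly representable in 32-bit float, and if the import/export scaling is by a power of two (as is common), an untouched sample survives the round trip bit-exactly. A trivial check, again purely as an illustration:

```cpp
// Illustration: 16-bit samples round-trip through 32-bit float exactly
// when only scaled by a power of two and left unprocessed.
#include <cassert>

int main()
{
    for (int v = -32768; v <= 32767; ++v) {
        float asFloat = static_cast<float>(v) / 32768.0f;  // "import"
        int back = static_cast<int>(asFloat * 32768.0f);   // "export"
        assert(back == v);  // no value is altered, so no dither is needed
    }
    return 0;
}
```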

Being able to correctly handle both 16-bit integer and 32-bit float data goes pretty deep into the Audacity code - it’s difficult and risky work, and very few users will ever even notice the difference. For those that do care, we just have to ensure that our tracks are always 32-bit float unless we have a specific need to do otherwise.

Of course, if you know any really skilled C++ developers that are willing to take on a difficult and risky job with no pay and little reward, then please do send them our way :wink:

For the vast majority of tasks performed with Audacity, 32-bit float is the best choice, and that’s why one can choose between integer and floating-point formats, with 32-bit float being the default setting. That’s perfectly OK! The section of the manual describing the Preferences says:

Default Sample Format: Offers a choice of three sample formats or bit-depths. This affects both imported and newly recorded material […]

There is absolutely no indication that certain file types (ALAC, AAC) under certain circumstances (macOS) do not fall under this rule when imported.

This is entirely correct, but I did not suggest something like “auto-conversion to floating point as soon as the first multiplication or division occurs” – even though that would be quite nice. I just suggest predictable behavior that follows the description in the manual.

That can be corrected.

There is a highlighted paragraph that describes the issue as it relates to Ogg files (which I believe is the same on all platforms), but I guess the person that wrote that did not have a Mac (and I’m mostly on Linux).

Could you post a short ALAC file that demonstrates the issue (just a 4- or 5-second “Chirp” will be fine) so that I can check what happens on each platform? Then at least we can make the documentation accurate.

No, that’s the wrong way. ALAC and AAC should be converted automatically to 32-bit float, if this is the default format. There is no reason to treat ALAC in a different way than, e.g., FLAC, or to treat AAC in a different way than MP3, just because the conversion needs to be performed in a different module of the software.

Will do so, in a few minutes!

Test file (the jingle of our radio station), in AIFF format:
http://www.cq131a.de/x/0-Jingle-0.aiff

Screenshot from Audacity after importing this AIFF file:
http://www.cq131a.de/x/AIFF.png

Same audio as Apple Lossless:
http://www.cq131a.de/x/0-Jingle-0.m4a

Screenshot from Audacity after importing this .m4a file:
http://www.cq131a.de/x/ALAC.png

PS:

Same audio as FLAC:
http://www.cq131a.de/x/0-Jingle-0.flac

Screenshot from Audacity after importing this .flac file:
http://www.cq131a.de/x/FLAC.png

Thanks, got it.

One more thing if you could:
In a new Audacity project, import an ALAC file, then look in “Help > Diagnostics > Show Log”.
Does it say:

Opening with libav

The last 4 lines of the log file, on a machine with macOS 10.8 and FFmpeg library installed:

23:42:46: File name is /Volumes/was/tmp/test/0-Jingle-0.m4a
23:42:46: Mime type is *
23:42:46: Opening with libav
23:42:46: Open(/Volumes/was/tmp/test/0-Jingle-0.m4a) succeeded

The last 5 lines of the log file, on a machine with macOS 10.12 and FFmpeg library NOT installed:

23:46:47: File name is /Users/was/tmp/test/0-Jingle-0.m4a
23:46:47: Mime type is *
23:46:47: Opening with libav
23:46:47: Opening with quicktime
23:46:47: Open(/Users/was/tmp/test/0-Jingle-0.m4a) succeeded

NB: Same result in both cases.

Thanks for the details.
I’ve raised this for discussion on the QA mailing list.

I am very disappointed that the bug persists with 2.3.0: opening an m4a file (ALAC as well as AAC) leaves Audacity in 16-bit integer format. (I had to use the FFmpeg library in order to open m4a files, since built-in support for m4a seems to be missing.)

There is absolutely no reason why Audacity switches to 16-bit integer mode after reading an m4a file, whereas 32-bit float mode is used in all other cases. Is this so hard to understand?

I’ve just noticed: after importing an AAC file, Audacity 2.3.0 does switch to 32-bit floating point, if this is the default format. I don’t know why I got this wrong… I’m sorry. Importing AAC into Audacity is something I don’t do frequently.

Apple Lossless (ALAC), however, is still affected by the bug. Note that the FFmpeg library is present, and that the extension is “.m4a” in both cases.