I am converting edited wav files to the 4 sizes of mp3 files so a web developer can test file sizes / load times for use on my website. I am checking the exported files using the Media Info website and the analysis there doesn’t match the format options I selected when exporting. I am using bit rate mode - preset, variable speed - fast, channel mode - joint stereo.
When I export quality 320 kbps, Media Info says bit rate mode - constant, 320 kbps (this one looks as expected)
When I export quality 220 - 260 kbps, Media Info says bit rate mode - variable (I selected preset)
When I export quality 170 - 210 kbps, Media Info says bit rate mode - variable (I selected preset) and 152 kbps (not in range selected)
When I export quality 145 - 185 kbps, Media Info says bit rate mode - variable (I selected preset) and 119 kbps (not in range selected)
LAME VBR (variable bitrate) is a “quality” setting, not an actual bitrate setting, and the range is approximate. The bitrate you get depends on the program content. If something is “simple” and “easy” to compress you’ll get a lower bitrate, and if the sound is “complex” and “hard” to compress you’ll get a higher bitrate. Apparently, your file is easy to compress.
If you want a particular bitrate you can use constant bitrate or average bitrate. Average (ABR) is also variable but it allocates the “bits” where they are needed most throughout the file, changing the bitrate moment-to-moment depending on the complexity of the sound, for an overall average that you’ve selected.
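As a rough illustration (not from the original posts), these three modes map onto LAME command-line options; the filenames here are placeholders, and this assumes the `lame` CLI is installed:

```shell
# Constant bitrate (CBR): every frame uses the same bitrate.
lame -b 320 input.wav output_cbr.mp3

# Average bitrate (ABR): bitrate varies moment-to-moment,
# but averages out to the target you specify.
lame --abr 192 input.wav output_abr.mp3

# Variable bitrate (VBR): you choose a quality level (0 = best,
# 9 = smallest); the resulting bitrate depends on the content.
lame -V 2 input.wav output_vbr.mp3   # -V 2 is equivalent to --preset standard
```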
I did some further reading and I’m beginning to understand that using the Audacity defaults (bit rate mode - preset, variable speed - fast, channel mode - joint stereo) with the LAME encoder is the best method available for converting wav files to mp3s, and that the reliability and file size vs. sound quality are equal to what a qualified audio engineer would achieve with any software. Am I on the right track with this thought?
Using the old standard of “CD quality” (I thought it was 256 kbps but cannot seem to find a definitive answer), and considering the newer Audacity technology, does anyone know what minimum value for Media Info bit rate with the above settings would be equal to “CD quality”?
As Media Info says bit rate mode - variable, then a bit rate value in kbps (today I see 267 kbps with extreme, 193 kbps with standard), I’m wondering if these represent a minimum bit rate across the entire recording, or if it is more complex than that. Perhaps a 193 kbps VBR mp3 created with today’s technology sounds as good as or better than a 256 kbps CD from years ago.
The audio on CDs is not compressed and none of the audio data has been “thrown away”.
MP3 is a form of “lossy compression”. It’s called “compression” (data compression) because it reduces the amount of data. It’s called “lossy” because MP3 encoding deliberately discards some of the data as part of the compression process. The MP3 algorithm attempts to throw away the least important (least audible) parts first. The lower the bit rate, the more data has been discarded, hence lower sound quality and smaller file size.
One can expect that 256 kbps MP3 will sound as good as CD, 99.9% of the time to 99.9% of people.
One can expect 320 kbps MP3 to sound as good as CD virtually 100% of the time. Very few people can tell the difference in a properly done double-blind test, and I don’t know anyone who can hear the difference 100% of the time.
My rough guide to the MP3 “Preset” quality settings:
Insane: Indistinguishable from CD
Extreme: Excellent quality. Probably indistinguishable from CD. Generally considered to be “transparent”.
Standard: Very good quality.
Medium: Good quality.
This assumes a “first generation” copy of audio that has been normalized to a reasonable level. Note that “lossy” compression is like photocopying. Just as making a photocopy of a photocopy of a photocopy … will eventually degrade to trash, so does making an MP3 copy of an MP3 copy of an MP3 copy… Each time the audio is encoded, a bit more data is discarded and can never be recovered.
LAME is the only major MP3 encoder that is still being developed. Trials in more recent years indicate that for higher bit rates, modern versions of LAME produce better quality for the same file size (or smaller file size for the same quality) as the Fraunhofer “reference” implementation of MP3 encoding.
If you select “Standard” preset, the sound quality is likely to be on par with 256 kbps from early MP3 encoders.
If you select “Extreme”, the sound quality is likely to be a bit better than 256 kbps from early encoders.
If the audio is “easy” for the encoder to compress, the “Extreme” preset could produce 100 kbps or even less. The presets aim to use the least “kbps” necessary to achieve the target “sound quality”. For audio that is highly complex throughout, the Extreme preset is likely to require around 256 kbps to achieve “transparent” encoding.
This prompted me to look at other music files on my computer and on some CDs, and to do some further reading. A lot of the music on my computer is m4a. I checked some of the files with Media Info with varying results, but generally high bit rates and small files; it seems to be about 2 MB / minute of audio. Some websites say m4a is a smaller file size and higher sound quality compared to mp3. Is that true? If so, I’m wondering if mp3 has any advantages that m4a does not have for use in the music player on my website.
but generally high bit rates and small files, seems to be about 2 MB / minute of audio.
The bitrate in kbps is kilobits-per-second. There are 8 bits in a byte, so divide by 8 to get the file size in bytes-per-second. (That doesn’t include embedded album artwork or other metadata that adds to file size without changing the bitrate.)
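That arithmetic can be sketched in a few lines (the function name is mine, and it ignores metadata overhead):

```python
def mb_per_minute(kbps: float) -> float:
    """Approximate audio data size from bitrate (excludes tags/artwork)."""
    bytes_per_second = kbps * 1000 / 8   # 8 bits per byte
    return bytes_per_second * 60 / 1_000_000

# 256 kbps works out to about 1.92 MB/minute -- roughly the
# "2 MB / minute" observed in Media Info.
print(round(mb_per_minute(256), 2))  # 1.92
```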
Some websites say m4a is a smaller file size and higher sound quality compared to mp3. Is that true?
M4A (AKA MP4 or AAC) was designed to be the successor to MP3, so it’s supposed to be better. But here’s the thing… If a 256 kbps MP3 is transparent (sounds identical to the uncompressed original), which it often is, a 256 kbps M4A can’t be “better” and a 320 kbps MP3 can’t be “better”. (Technically/mathematically they are all lossy and every sample gets changed.)
And if there are audible compression artifacts at something like 256 kbps, those artifacts often don’t go away at higher bitrates (but the artifacts might not be present in M4A).
M4A is also supposed to be more tolerant of multiple generations of lossy compression. If you think about it… opening an MP3, editing it, and re-exporting it at the original bitrate doesn’t require any additional audio information to be thrown away, so the additional damage is a side-effect of the re-encoding itself.
It may also be easier to get gapless playback with M4A.
There were some “quirks” recently with iTunes and M4As (made with Audacity/FFmpeg). See [u]this thread[/u].
And there are some “difficulties” setting the bitrate with FFmpeg. (I don’t remember the details at the moment, but I think there is a work-around.)
When exporting AAC (MP4) audio files the only control is a quality slider. However, this slider has no control over the actual bitrate: stereo files are exported as CBR 196, mono files as CBR 98.
Export your audio files as WAV or AIFF and use iTunes to convert them to AAC
As a rule of thumb, uncompressed CD quality stereo audio is around 10 MB/minute.
When MP3 became popular as the de facto standard for Internet audio, the recommended compression was 128 kbps, which is about the minimum for OK-quality stereo music and works out at about 1 MB/minute. That’s a compression ratio of about 10:1, and is close to the limit for most encoders / formats before the losses become really noticeable. 2 MB/minute is likely to be around 256 kbps, which should be very good quality for MP3, AAC, or OGG.
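Those rules of thumb can be checked with back-of-envelope arithmetic, assuming standard CD audio (44.1 kHz, 16-bit, stereo); these are data-rate estimates, not exact file sizes:

```python
SAMPLE_RATE = 44_100   # samples per second (CD standard)
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

# Uncompressed CD bitrate in kbps.
cd_kbps = SAMPLE_RATE * BIT_DEPTH * CHANNELS / 1000   # 1411.2

# Convert to megabytes per minute: bits/s -> bytes/s -> bytes/min -> MB/min.
cd_mb_per_min = cd_kbps * 1000 / 8 * 60 / 1_000_000

print(cd_kbps)                  # 1411.2 kbps
print(round(cd_mb_per_min, 1))  # 10.6 -> "around 10 MB/minute"
print(round(cd_kbps / 128, 1))  # 11.0 -> the "about 10:1" ratio at 128 kbps
```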
For archiving audio, FLAC is a good choice. It only gives a modest amount of compression (up to about 2:1), but it does not discard audio data. It is a “lossless” compression format. (It also has good, well defined support for metadata).
To make any reasonable comparison, you would have to say which encoders are being used. The old Xing encoder was extremely fast back in its day, though the sound quality was definitely a bit less than the reference model (Fraunhofer). Using modern encoders, there’s very little sound quality difference at high bit rates (around 256 kbps) between the best encoders for MP3, AAC or OGG.
At very low bit rates, there’s a relatively new format called Opus, which is unbelievably awesome for very low bit rates (below 64 kbps), but is not yet widely supported. Whether or not it ever becomes a mainstream format is yet to be determined. https://en.wikipedia.org/wiki/Opus_(audio_format)
From a technical point of view, that does not sound very likely.
There’s much hype about compression formats, and there’s been big money for Apple in getting people to buy into the iTunes ecosystem.
Meaningful comparisons require carefully controlled test conditions and the elimination of bias, which is difficult and time-consuming to do. Advertising is much less expensive for a company than carrying out a proper study.
To achieve exact gapless playback, the exact amount of encoder delay must be written into the file. As far as I’m aware, the core specifications for MP3 and AAC (M4A) do not do this.
For both AAC and MP3, there’s a “fix” available, which is implemented in some codecs, whereby the encoder delay is written into metadata.
As a matter of interest, when Xiph.org wrote the specification for Ogg Vorbis, they were aware of this limitation in other lossy compression formats and designed the core specification for Ogg Vorbis to include the encoder delay. Ogg Vorbis supports gapless playback by design.
CD quality is 1411 kbps, to keep it in the same unit of measurement. Also important: WAV stays lossless. As you edit and export new MP3s, the quality goes down; every new export also re-encodes the previous export’s compression errors.