Audacity's export with LAME is considerably different from other tools. Why?

I’m trying to convert a WAV file (decoded from FLAC) to MP3 CBR @ 320, and so far ffmpeg has been doing all I need. However, I can see a very consistent difference between the LAME CLI, ffmpeg, and Audacity with LAME when comparing the outputs with a few tools:

Tools

  1. LAME 3.100 CLI
  2. ffmpeg 4.2 (Lavf 58.29.100/LAME 3.100) CLI
  3. Audacity 2.3.2 (LAME 3.100)

Results

MediaInfo

  1. LAME: Constant/320/Encoder Settings: -m s -V 4 -q 3 -lowpass 20.5
  2. ffmpeg: Variable/320/
  3. Audacity: Constant/320/Encoder Settings: -m s -V 4 -q 3 -lowpass 20.5

Spectral Analysis

  1. LAME: Almost nothing between shelf and cutoff
  2. ffmpeg: Almost nothing between shelf and cutoff
  3. Audacity: Considerably more happening between shelf and cutoff

Fakin’ the Funk*

  1. LAME: 256kbps
  2. ffmpeg: 192kbps
  3. Audacity: 320kbps

*I know that this tool is not a really reliable indicator and is probably misreading info, but it seems to at least notice the visible difference the Audacity file has in its spectrogram

Commands/settings used for the encodes:

LAME: lame -b 320 -q 3 -m s .wav .mp3
ffmpeg: ffmpeg -i ".wav" -c:a libmp3lame -b:a 320k -compression_level 2 -cutoff 20500 -joint_stereo 0 ".mp3" -y
Audacity: Import .wav file > Export as MP3 > Insane preset, Stereo, Default Variable Speed
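
(A side note on the MediaInfo “Variable” result for ffmpeg: ffmpeg’s MP3 muxer writes an Xing header by default, and some tools report a file carrying one as VBR even when the stream itself is CBR. If that is what’s happening here, disabling the header should flip the report without touching the encoded audio; a hedged sketch, with placeholder filenames:

ffmpeg -i "in.wav" -c:a libmp3lame -b:a 320k -write_xing 0 "out.mp3" -y

Since -write_xing 0 only changes container metadata, it would not explain the spectral differences either way.)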



What I’d like to understand is why all of these results come out different if they are all, in theory, LAME, and why Audacity produces such a different spectrogram that I can easily pick it out of the three, while I can’t tell the other two apart. Whatever settings Audacity is using with LAME, I want to replicate them.

Thanks

One difference with Audacity is that Audacity is converting down from 32-bit float (Audacity works internally in 32-bit float). When Audacity first started using LAME, floating-point PCM input either didn’t work or was buggy (I don’t recall the exact details, though I do recall that the problem affected many audio apps). Audacity may still be converting to 16-bit prior to passing the PCM data to LAME.
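
If that conversion to 16-bit is the difference, it should be reproducible outside Audacity by doing the same conversion explicitly and then running the identical LAME command. A minimal sketch assuming SoX is installed; filenames are placeholders, and "dither -s" selects SoX’s shaped dither, which may not match Audacity’s dither exactly:

sox "in.wav" -b 16 "in16.wav" dither -s
lame -b 320 -q 3 -m s "in16.wav" "out.mp3"

If the result shows the same extra content between 16 kHz and 20 kHz as the Audacity export, the dithered bit-depth conversion is the cause rather than any LAME setting.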

Please describe in detail what you are observing. Could the difference be due to dither?

I think I may not be knowledgeable enough to do so, so I’ll post the spectrograms and try to point it out:

These are all the spectrograms I took: https://imgur.com/a/7KRCuxG

I’m referring to those elements in the 16 kHz to 20 kHz area. I think this is easier to visualize in these two images, but you can notice the pattern in all of them:

https://imgur.com/njWbyDB
https://imgur.com/QLD5ou8

Yes, that looks like “dither” noise. Don’t worry about it - the “damage” is insignificant compared to the unavoidable damage caused by MP3 encoding.
Strictly speaking, I don’t think it should happen, so I have logged it on the Audacity bug tracker.
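
A quick way to confirm this without involving MP3 at all is to quantize the same source to 16-bit with and without dither and compare the spectrograms. A sketch assuming SoX, with placeholder filenames (-D disables SoX’s automatic dither):

sox -D "in.wav" -b 16 "plain16.wav"
sox "in.wav" -b 16 "dithered16.wav" dither -s
sox "plain16.wav" -n spectrogram -o plain.png
sox "dithered16.wav" -n spectrogram -o dithered.png

If the raised noise floor between 16 kHz and 20 kHz shows up only in the dithered file, that matches shaped dither, which pushes the quantization noise toward the top of the band where the ear is least sensitive.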