If you’re still here…
You are confused, but I’m not sure whether you’re confused by the terminology or by the concepts…
Are you really trying to change the musical pitch? Did you try a more-drastic pitch change to test your batch process? Or, did you try it “manually” without the batch processing?
(I’m a Windows guy, and I never use batch processing/macros so I can’t help with that…)
[quote]Note. Most of my music files have a frequency of 441hz, not 440[/quote]
How do you know, or why do you think that?
A = 440Hz is the modern-Western tuning standard. [u]This chart[/u] shows the musical notes & frequencies. As you may know, there are several “A” notes on a piano: A4 is tuned to 440Hz, A5 is 880Hz, etc. You’ll notice that 432 is NOT on the chart. It’s in-between notes on the standard scale.
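If you want to check the chart’s numbers yourself, here’s a quick Python sketch of the equal-temperament formula, f(n) = 440 × 2^(n/12), where n is the number of semitones above or below A4 (the function name is just mine, not anything in Audacity):

```python
# Equal-temperament note frequencies relative to A4 = 440 Hz.
# f(n) = 440 * 2**(n/12), where n = semitones above (+) or below (-) A4.
def note_freq(semitones_from_a4, a4=440.0):
    return a4 * 2 ** (semitones_from_a4 / 12)

print(note_freq(0))              # A4 -> 440.0
print(note_freq(12))             # A5 -> 880.0
print(round(note_freq(-9), 2))   # C4 ("middle C") -> 261.63
```

No whole number of semitones gets you from 440 to 432, which is why 432 lands in-between the notes on the chart.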
441Hz is not standard either, although some pianos (or other instruments) may be tuned slightly-sharp (intentionally or unintentionally).
The clock (oscillator) in a soundcard could also be slightly-off causing a 440-to-441Hz pitch (and timing) shift. That wouldn’t “show-up” in Audacity or in any analysis of the file… It’s not an issue with the file, it’s just a playback issue.
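Just to put a number on how small that hypothetical clock error would be:

```python
import math

# Hypothetical illustration: a soundcard clock running at 441/440 of its
# nominal rate shifts pitch and tempo together by the same tiny ratio.
ratio = 441 / 440
pitch_shift_pct = (ratio - 1) * 100        # percent sharp
cents = 1200 * math.log2(ratio)            # shift in cents (100 cents = 1 semitone)
print(round(pitch_shift_pct, 3))           # 0.227 (% sharp)
print(round(cents, 2))                     # 3.93 (cents)
```

About 4 cents sharp, which is well below what most listeners would ever notice.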
You can re-tune a piano to A=432Hz, but of course you have to re-tune all of the strings accordingly and it takes a long time. It’s quick & easy to re-tune a guitar. Horns* & woodwinds can’t be re-tuned because the notes depend on the length of the “tube” and the spacing of the holes.
Of course it’s easy to re-tune a recording with Audacity or other digital processing, or you can simply speed-up or slow-down an analog recording to change the pitch.
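For reference, here’s the arithmetic for a 440-to-432Hz shift (a plain-Python sketch; the function is just mine):

```python
import math

def pitch_change(f_from, f_to):
    """Percent speed/pitch change and the shift in cents between two frequencies."""
    pct = (f_to / f_from - 1) * 100
    cents = 1200 * math.log2(f_to / f_from)
    return pct, cents

pct, cents = pitch_change(440, 432)
print(round(pct, 2))     # -1.82  (percent)
print(round(cents, 1))   # -31.8  (cents, i.e. about a third of a semitone flat)
```

Note the exact figure is about −1.82%, which is not the same as a −2.04% change.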
Any given song only uses a subset of the possible musical notes, so although virtually all music is tuned to A=440Hz, there are lots of songs that don’t contain any A-notes at all, or that only have A-notes in higher or lower octaves with none at 440Hz.
When you play an A-note on the piano or on a trumpet there are harmonics (multiples of 440) and it’s these harmonics that make a piano sound different from a trumpet.
Most “real music” contains multiple-simultaneous notes (and chords), instruments, and frequencies. If you open a song in Audacity and go to Analyze → Plot Spectrum, you’ll see the full range of frequencies and 440Hz won’t necessarily stand-out.
A “normal person” can’t listen to a song and hear/know the tuning standard. A musician will only notice the mis-tuning (or odd tuning) when they can’t play-along in-tune. Someone with [u]perfect pitch[/u] might hear the mis-tuning, or they may not… “Perfect pitch” isn’t really perfect, and they may just identify the note as the nearest standard-note.
You can also use Generate → Tone if you want to create 432, 440, and 441Hz tones. I don’t have perfect pitch and I can’t hear the difference between 432 and 440Hz (without a reference).
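If you’d rather generate the test tones outside of Audacity, here’s a sketch using only Python’s standard library (the helper name is mine, and the amplitude/duration values are just arbitrary choices):

```python
import math
import struct
import wave

def write_tone(path, freq, seconds=2.0, rate=44100, amp=0.5):
    """Write a mono 16-bit WAV sine tone (hypothetical helper, stdlib only)."""
    n = int(seconds * rate)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(amp * 32767 * math.sin(2 * math.pi * freq * i / rate)))
            for i in range(n)
        )
        w.writeframes(frames)

for f in (432, 440, 441):
    write_tone(f"tone_{f}hz.wav", f)
```

Play the three files back-to-back: without a reference tone right next to it, 432 vs 440 is surprisingly hard to pick out.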
The tuning standard is NOT part of the normal metadata. Nor is the musical key or the time signature, etc.
44.1kHz (44,100Hz) is a sample rate. Analog audio is [u]sampled[/u] (digitized) at a known sample-rate and it has to be played-back at the same sample rate or the speed & pitch will be wrong. Some common sample rates are 44.1kHz (CDs), 48kHz (most video), and 96kHz (professional studio recording). 432Hz or 43.2kHz are not standard sample rates.
It’s possible to change the speed & pitch by messing-up the sample rate and sometimes that happens accidentally, but it’s rare.
The sample rate has to be at least twice the highest audio frequency. The “traditional” range of hearing is from 20Hz to 20kHz, so the CD sample rate of 44.1kHz allows the CD to contain sounds beyond the frequency limits of human hearing. The standard sample rate for telephone is 8kHz which limits the audio to 4kHz. That’s good enough for voice but not good enough for high-quality music.
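That “at least twice” rule (the Nyquist limit) is easy to sanity-check:

```python
# Nyquist: the sample rate must be at least twice the highest frequency kept,
# so the highest representable frequency is half the sample rate.
def max_audio_freq(sample_rate_hz):
    return sample_rate_hz / 2

print(max_audio_freq(44100))  # 22050.0 -> just past the ~20 kHz limit of hearing
print(max_audio_freq(8000))   # 4000.0  -> the telephone voice band
```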
If you re-sample PROPERLY the sound doesn’t change (assuming you stay at “CD quality” or better). You can change the Sample Rate at the lower-left part of the Audacity window and that will be used when you export the file. If you open a 44.1kHz file and export-as 48kHz, the sound won’t change but the file will have more samples so it will be slightly larger (assuming an uncompressed file).
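The size difference is just the ratio of the sample rates:

```python
# Same audio resampled from 44.1 kHz to 48 kHz has more samples per second,
# so an uncompressed file grows by the ratio of the rates (the sound itself
# is unchanged by a proper resample).
growth = 48000 / 44100
print(round(growth, 3))  # 1.088 -> roughly 8.8% larger
```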
The sample rate is NOT part of the normal metadata. It’s included in the [u]file header[/u] so when you open or play a file it automatically gets played-back at the right speed and the listener doesn’t have to think about it.
192kbps is a bitrate in kilo[i]bits[/i] per second. There are 8 bits in a byte, so you can calculate file size if you know the bitrate and playing time. (Except metadata, especially embedded artwork, will make the file larger without affecting bitrate.)
With compressed files, the bitrate is a rough indication of quality. i.e. Lower bitrate = smaller files = more compression = more data thrown-away = lower quality.
We usually don’t talk about the bitrate for uncompressed files but it can be calculated. For example, CDs are 44.1kHz x 16 bits x 2 channels = 1411.2 kbps.
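Here’s both calculations spelled-out (the function name is mine; the 4-minute track is just an example):

```python
# CD "bitrate": sample rate x bit depth x channels.
cd_bps = 44100 * 16 * 2
print(cd_bps)            # 1411200 bits per second
print(cd_bps / 1000)     # 1411.2 kbps

# Rough audio-stream size from bitrate and playing time
# (metadata, especially embedded artwork, adds on top of this).
def size_mb(bitrate_kbps, seconds):
    return bitrate_kbps * 1000 / 8 * seconds / 1_000_000

print(round(size_mb(192, 240), 2))     # 4-minute 192 kbps MP3 -> ~5.76 MB
print(round(size_mb(1411.2, 240), 1))  # same 4 minutes uncompressed -> ~42.3 MB
```

That ~7:1 ratio is roughly the amount of data a 192kbps MP3 throws away.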
With lossless compression, bitrate is obviously not a measure of quality. It’s just an indication of the amount of compression.
I suspect you are also confused about [u]Normalization[/u]. Normalization is an amplitude adjustment (unrelated to pitch) and it’s an odd coincidence that you’d normalize to -2.04dB, which is exactly the same number as your -2.04% pitch shift. -2dB is a little odd to begin with, and -2.04dB is unusually precise.
*The sliding trombone is an exception because it can play any frequency/pitch within its range.