Thanks to everyone who has been giving me tips. Here is my first track; I am looking for feedback on how it sounds, and any other advice.
I applied EQ, a limiter, stereo emulation, and normalization.
one tiny bit of clipping …
It may not have been clipping before you converted to MP3.
When saving as MP3, leave a little headroom, say -1 dB, before converting, to avoid clipping on the MP3.
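You can do this with the Amplify effect in Audacity, but if you ever script the step, here is a rough Python sketch of the same idea (numpy + soundfile; the "mix.wav" name and the -1 dB target are just examples, not anything from Audacity itself): scale the whole file so its peak sits at -1 dBFS, then hand that file to your MP3 encoder.

```python
# Sketch: leave ~1 dB of headroom before MP3 conversion (file name is illustrative).
import numpy as np
import soundfile as sf

data, rate = sf.read("mix.wav")           # float samples in the range -1.0 .. 1.0
peak = np.max(np.abs(data))               # current peak level
target = 10 ** (-1.0 / 20.0)              # -1 dBFS as a linear gain (~0.891)
if peak > 0:
    data = data * (target / peak)         # scale so the new peak sits at -1 dB
sf.write("mix_headroom.wav", data, rate)  # export, then convert this file to MP3
```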
IMO the attack of the accordion is too steep: the onset of most chords on it is jarringly abrupt.
This jarring effect can be reduced with more dynamic-range-compression (see attached mp3).
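For what it's worth, here is a rough sketch of what downward compression does (this is not the SC4 plugin itself; the threshold, ratio, attack/release times and file names are just example settings): follow the signal level with an envelope, and turn the gain down whenever the envelope rises above a threshold.

```python
# Sketch of a simple downward compressor (settings and file names are illustrative).
import numpy as np
import soundfile as sf

x, rate = sf.read("1_partial.wav")        # assumed WAV copy of the posted sample
if x.ndim > 1:
    x = x.mean(axis=1)                    # work on a mono mix for simplicity

threshold_db, ratio = -18.0, 4.0          # squash peaks above -18 dBFS at 4:1
attack = np.exp(-1.0 / (0.005 * rate))    # ~5 ms attack
release = np.exp(-1.0 / (0.100 * rate))   # ~100 ms release

env = np.zeros_like(x)
level = 0.0
for i, s in enumerate(np.abs(x)):         # simple envelope follower
    coeff = attack if s > level else release
    level = coeff * level + (1.0 - coeff) * s
    env[i] = level

env_db = 20 * np.log10(np.maximum(env, 1e-9))
over_db = np.maximum(env_db - threshold_db, 0.0)
gain_db = -over_db * (1.0 - 1.0 / ratio)  # reduce only what sits above the threshold
sf.write("1_partial_compressed.wav", x * 10 ** (gain_db / 20.0), rate)
```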
Where the singer gets too loud between 32-33 sec on “1_partial.mp3”, that can
be reduced by using the equaliser to cut the loud frequencies (1.3 kHz-1.8 kHz), which
show up as a bright spot on the spectrogram view.
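If you would rather make that cut outside Audacity, here is a rough scipy sketch of the same idea (the -6 dB depth and the "1_partial.wav" name are only examples): isolate the 1.3-1.8 kHz band with a zero-phase bandpass, then subtract part of it back out, which dips that region without touching the rest.

```python
# Sketch: dip the 1.3-1.8 kHz band by roughly 6 dB (file names are illustrative).
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

x, rate = sf.read("1_partial.wav")        # assumed WAV copy of the posted sample
# Zero-phase bandpass around the harsh region, then subtract half of it:
# 1 - 0.5 leaves ~ -6 dB in the passband and ~0 dB change elsewhere.
sos = butter(4, [1300, 1800], btype="bandpass", fs=rate, output="sos")
band = sosfiltfilt(sos, x, axis=0)
sf.write("1_partial_eq.wav", x - 0.5 * band, rate)
```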
I just noticed the ‘show clipping’ option. When I do that I see the track as normal, but a few spots have a red line through them. That’s the clipping, I assume. How do I fix that? The Envelope Tool?
“Amplify” the track to -1 dB (rather than the default 0 dB) before you save a copy in MP3 format …
That -1 dB headroom is usually enough to avoid clipping during the conversion to MP3.
If you have the time, could you explain how you did the dynamic-range compression?
Audacity comes with a compressor, and also something called “SC4”, which is another compressor (I used SC4 on the remix above).
Other compressors can be installed in Audacity, like the popular “Chris’s Compressor”, which is more user-friendly than SC4.
If you’re on Windows there are many VST compressor plugins, e.g. BlockFish.
elance, I gave it a go as I did on your “cashSample” in your other thread on “Enhancing '70’s cassette recordings”. Seeing that this is the second one of yours that sounds soft and murky, I was wondering whether you’re using headphones or speakers to judge your edits, and whether you’ve gotten a feel for picking points along an EQ to locate where the murkiness is created. I’ve never had a use for plotting EQ editing points with Audacity’s Spectrum Analysis tool; I rely on listening through headphones.
We may be hearing things differently, and since you’re not giving any feedback on exactly what you’re hearing in our posted edits (too shrill? too soft?), any other approach is pretty much working blind. IMO, a compressor or any other dynamics/loudness-shaping tool, anything other than an EQ, is not going to fix the murkiness.
A 1000 Hz signal is often used by audio component manufacturers to measure dB performance, especially in speakers, and there’s a good reason for that: it’s a terribly loud, high-energy portion of the audio spectrum (loud horns), and you’ll notice I’ve pulled it back quite a bit in the EQ screengrabs I’ve posted of your MP3 samples.

Also note that a lot of the body of the sound, the ambient background detail like vocal echoes that gives a sense of space, hovers around 300-700 Hz. I’ve often mistaken this midrange detail to reside in the higher frequencies, around 3-6 kHz, but have since figured out that there needs to be a balance between the two. When 1000-2000 Hz is too loud, its high energy destroys that ambient detail, and it takes a bit of effort to locate it on an EQ as the cause of the lack of clarity, along with too much bass (especially nearing 250 Hz) and too much reverb, which can muddy things up and reduce the level of the 300-700 Hz region.
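If it helps to put numbers on that balance rather than hunting entirely by ear, here is a rough Python/scipy sketch (the band edges follow the ranges mentioned above; the file name is just an example) that prints the RMS level of the bass, 300-700 Hz, 1-2 kHz, and 3-6 kHz regions so you can see which one is dominating before deciding where to cut.

```python
# Sketch: compare RMS energy per band to spot excess bass or 1-2 kHz build-up.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

x, rate = sf.read("1_partial.wav")        # assumed WAV copy of the posted sample
if x.ndim > 1:
    x = x.mean(axis=1)                    # mono for simplicity

bands = {"<250 Hz": (20, 250), "300-700 Hz": (300, 700),
         "1-2 kHz": (1000, 2000), "3-6 kHz": (3000, 6000)}
for name, (lo, hi) in bands.items():
    sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
    band = sosfiltfilt(sos, x)
    rms_db = 20 * np.log10(np.sqrt(np.mean(band ** 2)) + 1e-12)
    print(f"{name:>12}: {rms_db:6.1f} dBFS RMS")
```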