Hi… I am not an audio engineer (my father was). I have some home experience with Audacity… mostly converting vinyl to FLAC, manually removing pops and clicks, and some live recording. I have a question regarding EQ, and please pardon me if this is a common question, or one that has been answered previously; I cannot come up with good search terms, which is usually not a problem for me, but I am stumped…
I have some old Grateful Dead flac concerts and one show, in particular, sounds very thin… way too high in the 1-2 kHz range (?)… I have experimented with different EQ curves, both in Audacity and iTunes and can make it sound, really, pretty darned good… but… I don’t have a way to really know if I am getting all that I can out of the sound, or whether my manually derived curves are wildly inaccurate, etc…
I have analyzed the audio spectrum with the analyzer, but frankly I don’t have the depth of experience to make any sense out of it… Anyway, here are my questions…
Using an analogous situation with .raw and .jpg files in Photoshop, there is the option of “auto enhance”… this can make many photos come alive and look so much better with no manual processing… Is there a similar process in Audacity to use with audio files? If so, where would I look? If not, how would I intelligently proceed?
Just a suggestion of where to start, search terms… or if anyone has experience with this, I would be grateful for any suggestions… Thank You!!
There isn’t really a “right” answer to this. To use your analogy, if you apply Photoshop’s auto enhance to a Dali or a Hopper it is likely to look dull, while the same enhancement on a Caravaggio or Rembrandt is likely to look pale and washed out. As a very general rule, the spectrum of the track as a whole would not normally have big humps or dips, but would show a fairly smooth spectrum like this (below), though there are always exceptions. Let your ears be the judge.
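For the curious, the kind of whole-track spectrum view mentioned above can be approximated outside Audacity. This is a rough Python/NumPy sketch (not Audacity code, and the function name is my own) that averages the magnitude spectrum over overlapping Hann windows, similar in spirit to what Plot Spectrum shows:

```python
import numpy as np

def average_spectrum_db(samples, rate, fft_size=4096):
    """Average the magnitude spectrum over overlapping windows,
    roughly like Audacity's Plot Spectrum display."""
    window = np.hanning(fft_size)
    hop = fft_size // 2
    frames = [samples[i:i + fft_size] * window
              for i in range(0, len(samples) - fft_size, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per frame
    avg = mags.mean(axis=0)                      # average across frames
    freqs = np.fft.rfftfreq(fft_size, d=1.0 / rate)
    return freqs, 20 * np.log10(avg + 1e-12)     # convert to dB
```

Plotting the returned dB curve against frequency makes the big humps or dips (like that 1–2 kHz bump) much easier to spot than eyeballing a single FFT frame.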
Just another quick question or two… once I apply the EQ curve, it throws a lot of the file into clipping. Do you folks just apply negative amplification first? But yes, you very specifically and completely solved my problem… Certainly applying digitally generated EQ curves to 40-year-old analog live recordings on, I think, 1/4-inch magnetic tape at 7 1/2 ips (maybe 15, IDK) is anything but an exact science… but it’s nice to have some kind of “compass”… and, well, it’s fun. And I get to listen to some awesome music.
Back to my original analogy… if I had a frequency analysis plot of basically what I was shooting for, wouldn’t it be reasonable to think that an algorithm could be applied to the original sound file, vis-à-vis an image being automatically “enhanced” in Photoshop? I guess that is why the interest in EQ curves (I gotta learn how to do that importing-curves thing, LOL)… but in a more reverse-engineered (?) manner…
If the audio track is in “32-bit float” format, you can apply the negative amplification before or after applying the EQ.
If the audio track is in 16-bit or 24-bit format, you must apply the negative amplification before applying the EQ.
This is a major advantage of 32-bit float: the format allows audio to go over 0 dB without damage, though you should still bring the level back down to a valid range below 0 dB (using negative amplification), as sound cards do not support levels over 0 dB.
You need to be a bit careful: even if you are using 32-bit float format, some effects will permanently clip at 0 dB, and it will not be possible to fix it with negative amplification after the event. The “Bass Boost” effect does this, but that effect will be replaced in Audacity 2.0.3 (due for release later this month).
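The difference between the two formats can be sketched numerically. In the toy example below (plain NumPy, where 1.0 represents 0 dBFS full scale), a boost pushes peaks over full scale; in float the overshoot survives and negative amplification recovers it, while integer-style storage clips first, so the same gain change cannot undo the damage:

```python
import numpy as np

# A short signal whose peaks an EQ-style boost pushes over full scale
# (1.0 == 0 dBFS in float audio).
signal = np.array([0.5, 0.9, -0.8], dtype=np.float32)
boosted = signal * 2.0                     # peaks now reach 1.8, above 0 dB

# 32-bit float: values above 1.0 are kept, so negative amplification
# (here, -6 dB as a simple factor of 0.5) restores the original exactly.
recovered = boosted * 0.5

# 16/24-bit style storage: the overshoot is clipped on write, and the
# same negative amplification afterwards cannot bring the waveform back.
clipped = np.clip(boosted, -1.0, 1.0)      # what integer storage does
damaged = clipped * 0.5                    # flat-topped, not the original
```

This is why, with integer formats, the negative amplification has to come before the EQ rather than after.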
You can check the bit format by looking in the track control panel on the left end of the audio track (below the track name and above the “solo”/“mute” buttons).
32 bit float is the default format, but some types of audio file may import as 16 bit. In these cases I would highly recommend that you change the format to 32 bit float before you start work on the track. To do this, click on the track name, and from the drop down menu select “Set Sample Format > 32 bit float”.
Yes, that is theoretically possible, but for Audacity that would be a feature request. Some other audio editors do have that feature, though the results are not always as good as might be expected. Just as a good photographer would never rely on automatic colour balance, a good sound engineer would never rely on automatic EQ (though it can be useful as a “quick fix”).
EQ Matching: SmartEQ function to sample the frequency spectrum of a given piece of audio with a pleasing equalisation, then bring a second piece of audio to the same spectral content. Useful for most cassette recordings and other medium- and low-fidelity sources. Also usable for SFX. Note: where frequency content falls to the noise floor, this may heavily increase the noise level in an attempt to compensate. This is easily addressed afterwards with Noise Removal, or can be addressed during frequency correction with more coding. Possibly could be incorporated as a third radio button in the current 1.3.x equaliser, with a “Get Frequency Profile” button and the ability to save useful profiles as custom presets.
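To illustrate the idea behind that request, here is a deliberately crude Python/NumPy sketch (my own, not Audacity’s code or the proposed SmartEQ): it averages the spectra of a “reference” and a “target” track, derives a per-frequency gain curve, caps the gain so the noise floor is not boosted wildly (the note above), and applies the curve as one big frequency-domain filter:

```python
import numpy as np

def match_spectrum(target, reference, fft_size=4096, max_gain_db=12.0):
    """Crude EQ-matching sketch: nudge `target`'s average spectrum
    toward `reference`'s.  Illustration only."""
    def avg_mag(x):
        win = np.hanning(fft_size)
        hop = fft_size // 2
        frames = [x[i:i + fft_size] * win
                  for i in range(0, len(x) - fft_size, hop)]
        return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

    gain = avg_mag(reference) / (avg_mag(target) + 1e-12)
    # Cap the correction so near-silent bands (the noise floor)
    # are not amplified without limit.
    limit = 10 ** (max_gain_db / 20)
    gain = np.clip(gain, 1 / limit, limit)

    # Apply as a single whole-track frequency-domain filter; a real
    # implementation would use overlap-add block filtering instead.
    spectrum = np.fft.rfft(target)
    curve = np.interp(np.linspace(0, 1, len(spectrum)),
                      np.linspace(0, 1, len(gain)), gain)
    return np.fft.irfft(spectrum * curve, n=len(target))
```

A production version would need smoothing of the gain curve and block-wise processing, but the core loop — measure both spectra, divide, limit, apply — is the whole trick.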
Would you like me to add your vote for this feature request?
Sure, not a bad idea to see if that would be a feature worth considering.
Certainly, part of the draw that Audacity has for me is the ability and the opportunity to really roll up my sleeves and go as far as the rabbit hole takes me… I have, I don’t know what the correct terminology is, but I have zoomed down on the waveform to individual data points to resolve a click or pop in a highly valued vinyl recording that has not been re-released digitally and is otherwise unavailable, or where those that are available are not of sufficient quality… OTOH, I have just accepted the results of the automated de-noise and click removal and been happy enough not to go to the additional work of doing it manually… Just as when I am in Photoshop… I may have a few dozen JPEGs I just want to liven up a bit with auto enhance, or there may be that one photograph that I want just… exactly… right and will spend the time I need to manually enhance it.
But, hey, I am absolutely blown away by this technology that is available to us… simply for the asking!! And if Audacity isn’t interested in developing that feature, I’ll just learn how to do it manually… though… in Photoshop, while the auto enhance is not always an improvement… it DOES often point me in the right direction and can save me some additional processing time.
Thanx a bunch, it’s been a pleasure… Now back to that Dead recording, LOL… there are a lot of songs that need tuning up!