Inverse EQ based on frequency analysis

I have a situation where I want to calibrate my special microphone setup; the frequency response is quite wacky, as seen in the screenshot:

[Image: frequency analysis of white noise recorded with my setup]

There’s an export button to export this data as .txt, and with a bit of awk magic I’m able to produce eq.xml which I can import into Audacity (I’m using a daily build).

The problem is that I can’t work out how to scale the exported values so that they form the inverse of this curve. What I want is to flatten the original recording with this EQ curve.

So, is there an easier way to accomplish this kind of inverse EQ based on a frequency analysis of my white noise recording?

Thanks for reading :slight_smile:

I’m thinkin’ that’s a feature request. Count your blessings. Those tools didn’t use to have grid graphic overlays.

I, too, wanted to link the two tools together. It’s still manual, but it’s a lot easier with the grids.

Just a note, you know that display can get a lot more involved. You picked a graceful “size.” If you crank that number up to get very much better accuracy, you get something like this…

That’s one piano note. See the “size” value.

I’ll check the request page.


An interesting idea. It has been discussed on the forum previously, but I don’t know if anyone has actually done it yet.

Off the top of my head, the first issue is that when converting from the spectrum data you will need to decide on a lower threshold value.
Let’s take an extreme example - you start with the spectrum of a sine wave.
There is one frequency value for the sine wave frequency, and all other values should be -infinity (in practice they’re not, because the analysis is not exact).
“Compensating” to create white noise would then require infinite amplification of all frequencies except for the sine wave frequency, which is clearly nonsense.
It would therefore seem reasonable to accept that below some dB value there is no meaningful audio data. You may decide on, say, 48 dB below the maximum as the cut-off threshold below which you do not attempt to “correct”. The max peak in your example is about -24 dB, so my suggestion is that you do not try to correct anything below -72 dB.
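A quick sketch of that cut-off (assuming the exported data is plain "frequency level-in-dB" pairs, and using the hypothetical -72 dB floor): awk can simply drop everything below the threshold before any correction is attempted.

```shell
# Stand-in for an exported spectrum: frequency in Hz, level in dB.
cat > spectrum.txt <<'EOF'
100 -30
1000 -50
10000 -90
EOF

# Keep only the points at or above the -72 dB floor; anything quieter
# is treated as "no meaningful audio data" and left uncorrected.
awk '$2 >= -72' spectrum.txt
```

Here the 10 kHz point is discarded rather than being given a huge (meaningless) boost.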

Next, to calculate the required gain, let’s say that you are aiming for an output level of -30 dB across all frequencies.
If the spectrum level is -30 dB for a specific frequency range then you do not need to correct.
If the spectrum level is less than -30 dB for a specific range, then you need to boost.
If the spectrum level is greater than -30 dB, then you need to cut.

So for a spectrum level of “X”, the EQ will need to be ((X + 30) * -1)
For examples:

If spectrum level = -50 dB
EQ level needs to be ((-50 + 30) * -1) = +20 dB

If spectrum level = -10 dB
EQ level needs to be ((-10 + 30) * -1) = -20 dB
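That arithmetic is easy to script. A minimal sketch (the eq_gain name is mine, and the -30 dB target is hard-coded):

```shell
# Gain needed to bring a spectrum level X up or down to a -30 dB target:
# EQ = (X + 30) * -1
eq_gain() {
    awk -v x="$1" 'BEGIN { print (x + 30) * -1 }'
}

eq_gain -50   # prints 20  (quiet band, boost)
eq_gain -10   # prints -20 (loud band, cut)
```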

In order to obtain reasonable low frequency figures you will need to use a large FFT size in the Spectrum analysis, but then you will get a huge number of control points in the higher frequencies, and the figures will be jumping up and down all over the place. To obtain a reasonable number of control points, you will need to “thin” the high frequency data, and take average values.
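One simple way to do that thinning (just a sketch; real binning would probably be logarithmic to match the octave-based display) is to average fixed-size groups of consecutive points:

```shell
# Stand-in high-frequency data: frequency in Hz, level in dB.
cat > spectrum.txt <<'EOF'
1000 -40
1100 -42
1200 -38
1300 -40
1400 -50
1500 -54
1600 -46
1700 -50
EOF

# Collapse each group of 4 consecutive points into one averaged point
# (average both the frequencies and the dB levels).
awk '{
    fsum += $1; dsum += $2; n++
    if (n == 4) { print fsum / n, dsum / n; fsum = dsum = n = 0 }
}
END { if (n) print fsum / n, dsum / n }' spectrum.txt
```

The eight input points are reduced to two averaged control points.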

The Mac version of that is to do the analysis with graceful curves like you have, Copy > Analysis, Open the Equalizer and Paste.

Effect > Equalization will then assume a curve that most nearly “corrects” the analysis. The outer edges of the control and effect values will be truncated to those most used, valuable, or achievable. Clearly you can’t go to infinity on any of the values, but if you have a lumpy room with 20dB or 30dB variations across the spectrum, it should be possible to flatten it out with very little effort.

And the buttons should be pretty while it’s doing it.

You can use assumptions like very few rooms have surgically sharp frequency responses – they’re mostly sloppy and broad typical of walls and ceilings. You can make the accuracy go much higher on a slider and use the greatly increased processing time to automatically limit the task.

[Now processing. Come back in two weeks.]

The PhD version of this tool has a Create Equalizer button within Spectrum Analysis.


Yes, this was discussed on the forum a while ago. The topic then - as I recall - was equalizing one clip to match the EQ of another clip. I have a spreadsheet that does that - it creates the XML you need to paste into a plain text document, then wrap in the <curve name=...> and </curve> tags, then save as “filename.xml”. You can then import that into the Equalization effect (requires 1.3.13). If there is interest I will upload a zip of the spreadsheet and brief instructions.
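For reference, the pasted file ends up looking something like this (the wrapper tags and curve name here are placeholders from memory, so check them against a curve exported from your own copy of Audacity; f is the frequency in Hz and d the gain in dB):

```xml
<curves>
  <curve name="match-eq">
    <point f="100.000000000000" d="12.500000000000"/>
    <point f="1000.000000000000" d="-3.200000000000"/>
    <point f="10000.000000000000" d="6.800000000000"/>
  </curve>
</curves>
```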

– Bill

I decided to post my microphone project as it’s almost finished; only some finishing touches are missing, like the wig etc. :slight_smile:

Now that the project is almost finished, I have already begun thinking about calibrating the mics’ left/right channels for real. I’m going to use my KRK RP6 G2 studio monitor to play back white noise and record it with the properly aligned head. I’m also thinking of using blankets etc. to try to minimize the room’s effect on the sound… Too bad I don’t have access to a properly silenced room and equipment to play back and record white noise, but I’m confident the results will be good enough…

Here is the shell script I mentioned earlier for generating an Audacity filter .xml from a file (test.txt) which has frequency and amplitude pairs on each line. You can generate the source file from Audacity’s Frequency Analysis tool using the export button; just remember to delete the first line from the file.

# Wrap the exported frequency/level pairs in EQ curve XML (the curve name is arbitrary).
echo "<curves>"
echo " <curve name=\"mic-calibration\">"
awk '{print "  <point f=\"" $1 "\" d=\"" $2 "\"/>"}' test.txt
echo " </curve>"
echo "</curves>"
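One thing worth noting: the awk line above copies the measured levels straight into the curve, so it reproduces the response rather than inverting it. A small tweak (a sketch, using the (X + 30) * -1 rule from earlier in the thread with its -30 dB target) emits the inverted gains instead:

```shell
# Stand-in for the exported spectrum (header line already removed).
cat > test.txt <<'EOF'
100 -50
1000 -30
10000 -10
EOF

# Emit points with the inverted gain: a band measured at -50 dB
# gets a +20 dB boost, a band at -10 dB gets a -20 dB cut.
awk '{ g = ($2 + 30) * -1; print "  <point f=\"" $1 "\" d=\"" g "\"/>" }' test.txt
```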

That works pretty well. As expected the very low frequency end is not very good, but a quick manual adjustment brings it under control.