I am using the 1.3.3 beta version with Linux (Ubuntu Studio). There is a plug-in called Impulse convolver (under effects plug-ins, in the 151-165 range on my installation) that I would like to try. I can mark a part of an audio track, select the plug-in, and then I get a menu that allows me to select one out of 21 impulse IDs. My question is: how can I know what each of these IDs is, and is it possible for me to use my own impulse responses?
The reason I ask is that I am looking into room correction by convolving the sound signal from a CD player with the room's impulse response, and if it is possible to use a recorded .wav file as an impulse response, Audacity would be a nice tool to test the effect.
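For reference, the test I have in mind boils down to a single convolution of the music with the recorded impulse, roughly like this Python/SciPy sketch (the file names are just placeholders, and mono files are assumed):

```python
# Rough sketch of the test (placeholder file names, mono 16-bit .wav assumed).
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, music = wavfile.read("cd_track.wav")        # signal from the CD player
_, impulse = wavfile.read("room_impulse.wav")     # recorded room impulse response

wet = fftconvolve(music.astype(float), impulse.astype(float))  # apply the room
wet /= abs(wet).max()                             # normalise to avoid clipping
wavfile.write("convolved.wav", rate, (wet * 32767).astype("int16"))
```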
“room correction”? I hope you don’t mean you’re trying to cancel out the effect of the room you’re playing the music in to get a “flat” response from your speakers.
To answer the question: no, you can't use an impulse you've created unless it's compiled into the plug-in. You could email the plug-in's author and ask for advice; he's probably a reasonable man.
Hello and thanks a lot, the link answered all my questions.
And yes, the idea is to try to correct for the room response. I am not very serious about it, but thought it would be fun to try. I thought Audacity could provide a simple tool to test it, but I am looking at some convolver programs running under Linux for the final use, if I ever get that far.
However, it sounds like you discourage the idea altogether?
Well, I will never discourage experimentation. But physics says what you are trying to do is all but impossible.
The problem is that the room's impulse response is different at every point in the room, and at every frequency.
In order to get true cancellation, all of the following would need to be true:
The impulse would have to be recorded with a perfect microphone.
The click for the impulse would need to be generated by your speakers.
You would have to invert the impulse (this is the easy part).
Your ear would need to be in exactly the same position as the microphone was when the impulse was recorded. You’d have to plug up the other ear, too (it’s going to be hearing a slightly different room response).
Now, I think, you should be able to apply that inverted reverb to the sound and the room tone would cancel out completely (as long as you get the reverb volume exactly right).
So, you’re limited to putting your head in a sling and listening with only one ear.
If any of the above are not true, then the inverted room tone reverb will not reach your ear at exactly the right time and you will still hear some of the room tone. So some of the room tone frequencies will cancel out, but others will be doubled (comb filter, anyone?).
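To put a number on that comb-filter point, here is a tiny sketch of my own (nothing to do with Audacity itself) showing what a roughly 1 ms timing error does to a few test tones when you add the "cancelling" inverted copy:

```python
# Illustration only: an inverted copy arriving ~1 ms late cancels some
# frequencies and doubles others (a comb filter), instead of silencing everything.
import numpy as np

fs = 44100
delay = int(0.001 * fs)                  # ~1 ms timing error (44 samples)
t = np.arange(fs) / fs                   # one second of time
for freq in (250, 500, 1000):            # test tones in Hz (each tone alone has RMS ~0.707)
    tone = np.sin(2 * np.pi * freq * t)
    inverted_late = -np.roll(tone, delay)       # the would-be cancelling copy, arriving late
    residual = tone + inverted_late
    print(freq, "Hz residual RMS:", round(float(np.sqrt(np.mean(residual ** 2))), 3))
# 1000 Hz (delay ~ one full period) nearly cancels; 500 Hz (delay ~ half a
# period) comes out doubled; 250 Hz ends up about 3 dB louder.
```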
It’s an interesting idea, but it only works in such specific circumstances that it’s effectively impossible.
It’s much easier and more practical to build an anechoic chamber.
I wouldn’t be quite that harsh. Trying to mess around with room acoustics in the time domain is pretty much out, but convolution isn’t a bad way to try to compensate for speaker deficiencies - if you know enough about what is wrong with the speaker’s response, you can start to correct for it. Measuring is still hard, but it can be done with techniques like pseudo-random sequences and more convolution.
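As a toy illustration of the pseudo-random-sequence idea (my own sketch, not any particular product): play a maximum-length sequence through the system and circularly cross-correlate the recording with the sequence; because an MLS has an almost ideal autocorrelation, what falls out is an estimate of the impulse response.

```python
# Sketch only: recover a (fake) system's impulse response from its
# response to a maximum-length sequence via circular cross-correlation.
import numpy as np
from scipy.signal import lfilter, max_len_seq

mls = max_len_seq(14)[0] * 2.0 - 1.0          # +/-1 sequence, length 2**14 - 1

# Stand-in for "amplifier + loudspeaker + room + microphone": an arbitrary FIR filter.
fake_system = np.array([1.0, 0.0, 0.5, 0.0, 0.0, -0.25])
recorded = lfilter(fake_system, [1.0], np.tile(mls, 2))[len(mls):]   # steady-state period

# Circular cross-correlation (done with FFTs) approximates the impulse response.
estimate = np.fft.ifft(np.fft.fft(recorded) * np.conj(np.fft.fft(mls))).real / len(mls)
print(np.round(estimate[:8], 3))              # ~ [1, 0, 0.5, 0, 0, -0.25, 0, 0]
```

In a real measurement, the "fake_system" line is replaced by playing the sequence through the loudspeakers and recording it with the microphone.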
If I go back some years to when I worked behind the scenes in the early days of digital audio, I remember an active room EQ system and a demonstration at our head office.
The effect was remarkable.
(1) Get the room right.
(2) If that is not practical, active EQ can make a major difference, but (I hope obviously) it can only be taken so far. As usual, a subtle touch is best.
This kit comprised a control box with a pile of 56000 DSPs and a laptop to compute the settings. A microphone listened to test signals sent to the loudspeakers and, if I recall correctly, it took about half an hour to crunch the numbers.
I think this was tried in one of the rooms at Abbey Road but did not make much difference; the point, and the reason I mention it, is to go back to (1).
The effect of A/B-ing it on a choral recording was amazing: the singers seemed to get up and line up in front of you. And that was a good setup anyway; the room shape (a lived-in room) wasn’t.
So the answer is yes, some good things can be done, but the practicality is another matter. I suspect it needs a dedicated processing box in the reproduction chain.
Last thing: don’t assume EQ means only frequency; it is at least as much about time delays. Just twiddling a graphic equaliser does not give a good result.
I find GVerb etc. rather artificial. With modern multi-threaded processors it should be possible to do a time-domain or frequency-domain convolution in reasonable time. A hand clap can generate a reasonable impulse response if the hall is otherwise silent, and the spectrum could be corrected with a similar hand clap recorded in anechoic conditions. It would be possible to do the convolution in Matlab or Octave, but it would be better built into Audacity.
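For what it’s worth, the correction step I have in mind would look roughly like this (my own sketch in Python/NumPy rather than Octave; the regularised "Wiener-style" division is just one way to keep the spectral division well behaved):

```python
# Sketch only: estimate the hall's impulse response from a hand clap recorded
# in the hall, corrected by the same clap recorded in (near-)anechoic conditions.
import numpy as np

def corrected_impulse(hall_clap, dry_clap, reg=1e-3):
    n = max(len(hall_clap), len(dry_clap))
    H = np.fft.rfft(hall_clap, n)             # clap as heard in the hall
    D = np.fft.rfft(dry_clap, n)              # the clap on its own
    eps = reg * np.abs(D).max()
    # Regularised division H/D: removes the clap's own spectrum without
    # blowing up where the dry clap has little energy.
    return np.fft.irfft(H * np.conj(D) / (np.abs(D) ** 2 + eps ** 2), n)

# The result can then be applied with an FFT-based (overlap-add) convolution,
# e.g. scipy.signal.fftconvolve(dry_signal, corrected_impulse(hall, dry)),
# which is what makes long impulse responses affordable on a modern CPU.
```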
There are plenty of other reverb effects that work with Audacity.
For example, a popular free reverb for Windows is Anwida DX Light. A couple of popular ones for Linux include Calf Reverb and Freeverb.
Thanks for that, but they presumably use modelling rather than the responses of actual concert halls. I seem to remember trying to install Freeverb, but it was not easy on my Linux box.
The main focus of my post was on convolution and why we don’t have it (possibly inciting the response “then write it!”).
Even with modern processors, good, high quality convolution reverb is still pretty heavy (and Audacity does not yet support multiple processors), but yes, if you think that you can write a sufficiently efficient convolution reverb…
Probably the easiest way to get a working install of Freeverb on Linux is to install a package that includes it (for example “cmt”).
If you use Jack, Freeverb can be used as a real-time effect in Jack Rack (it will be listed as an “Uncategorised” effect if you have it installed).