Hi folks, new user here.
My question is about frequency compression. I need to write a plugin to shift lower frequencies up and higher ones down, around a defined center frequency, with an optional factor to increase the amount of shift based on distance from center (e.g. logarithmically).
Obviously this would be terrible for music production because it would destroy the necessary frequency relationships that help to make it music. But it’s not for music - it is for processing natural and ambient recordings, to bring the frequency response that a mic can capture into a comfortably human-audible range.
Searching Google and this forum site in particular, I have found absolutely nothing, nada, not even close. (As far as code ideas - much less an actual plugin!)
The idea behind this would be similar in concept to dynamic range compression, except in the frequency domain rather than amplitude. And simpler. I have to believe this would be simple to write. …But although I’m a programmer comfortable in multiple languages, I’m not familiar with programming for audio at all. (I’m also not trying to be an audio programmer - just the minimal to write this simple plugin.)
After I get this working as a basic plugin, I might extend the idea to quasi-real-time processing. (To augment my War Machine armor’s light frequency compression HUD, which enables me to “see” radio through visible to gamma radiation. JK. Actually just for research purposes.)
Thanks in advance for ideas or pointers!
Yes, well. Whereas frequency compression in the visual domain turns radio frequencies into something you can see, doing it in the audio domain turns sound into garbage.
This should be interesting to watch. I think this is a cousin to the early research people did with ring modulators to make more efficient use of transmission bandwidth. Transmit the intelligence without necessarily the actual sound.
Not necessarily any more than dynamic range compression must turn music into noise. It’s all about proper application and source material. (Also to your point about radio, applying this to EMF would probably take some serious brain adaptation, and for everyday outdoor experience in a city, would probably be a visual mess. Unless the center visual frequency was compressed less, and the higher outlying frequencies compressed gradually more and also attenuated in amplitude to avoid making a big visual blur.)
Likewise, I plan on making the frequencies near the center shifted less, with logarithmically more shifting relative to how far from center a frequency band is. (And also probably attenuated more relative to shift amount, to avoid a sonic mess, depending on the source. I once manually processed a quiet outdoor soundscape in a very crude approximation of this several years ago, and it was quite revealing. Lots of dynamic range compression also helped in that case, as did a pristine and high-sample rate source.)
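To make that mapping concrete, here is a minimal sketch of the warp curve described above: frequencies near the center move little, and the shift grows with log-distance from center. All names and parameters (`compress_freq`, `center`, `strength`) are illustrative, not any established API.

```python
import numpy as np

def compress_freq(f, center=1000.0, strength=0.5):
    """Map frequency f (Hz) toward `center`, shifting more the farther
    f is from center, measured in octaves (logarithmic distance).
    strength in [0, 1): 0 = no change, approaching 1 = full collapse.
    Illustrative sketch only, not a standard function."""
    f = np.asarray(f, dtype=float)
    # signed distance from center in octaves
    d = np.log2(np.maximum(f, 1e-6) / center)
    # shrink the log-distance, then map back to Hz
    return center * 2.0 ** (d * (1.0 - strength))

# With strength=0.5, a band one octave from center lands half an octave away:
out = compress_freq([250.0, 1000.0, 4000.0])  # -> [500., 1000., 2000.]
```

The attenuation-with-shift idea would just be a second curve over the same `d`, e.g. a gain that decays with `abs(d)`.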
Thanks for the feedback!
What are the frequency ranges you want to convert from – i.e. what is the original sample rate?
There’s actually a lot of source material available on the web.
Most algorithms are FFT-based.
For example, if you calculate a mel-frequency cepstrum, you have to re-map the linear FFT bins to the mel scale. This scale is roughly linear up to 1000 Hz and logarithmic above that.
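For reference, one common closed-form version of the Hz-to-mel mapping (there are several variants in the literature; this is the widely used 2595·log10 form) looks like this:

```python
import math

def hz_to_mel(f):
    # Common formula: approximately linear below ~1 kHz, logarithmic above
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Exact inverse of the above
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# By construction, 1000 Hz comes out near 1000 mel:
m1k = hz_to_mel(1000.0)
```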
Julius O. Smith uses a first-order allpass filter to map from linear to Bark scale (100 Mel = 1 Bark). You’ll probably need a higher-order allpass because the lower frequencies are also compressed.
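The frequency mapping induced by a first-order allpass (the bilinear frequency warp) has a simple closed form; a sketch is below. I leave the warping coefficient `rho` as a free parameter rather than quoting a specific Bark-fit value.

```python
import math
import numpy as np

def allpass_warp(w, rho):
    """Frequency mapping of a first-order allpass section (bilinear warp).
    w: normalized radian frequency in [0, pi]; rho in (-1, 1).
    Positive rho stretches low frequencies upward and compresses highs;
    the endpoints 0 and pi always map to themselves."""
    w = np.asarray(w, dtype=float)
    return w + 2.0 * np.arctan2(rho * np.sin(w), 1.0 - rho * np.cos(w))
```

For a Bark-like warp you would pick `rho` as a function of sample rate; see Smith’s published fits for the exact coefficient.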
The results of those calculations (if an FFT is involved) form the basis for analytic interpretations. In your case, you would simply apply an IFFT to bring the frequency-domain values back into the time domain.
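The FFT-remap-IFFT round trip can be sketched in a few lines. This is a deliberately naive single-frame version: a real plugin would process overlapped windows (STFT with overlap-add) and handle phase coherence, which is exactly where the difficulty lies. The helper name `warp_frame` and the `map_bin` callback are made up for illustration.

```python
import numpy as np

def warp_frame(x, map_bin):
    """Naive single-frame spectral remap: FFT -> fill each output bin
    from a warped source bin -> IFFT. Ignores windowing, overlap-add,
    and phase coherence; it only shows the round trip.
    map_bin: callable mapping an output bin index to a source bin index."""
    X = np.fft.rfft(x)
    n = len(X)
    src = np.clip([map_bin(k) for k in range(n)], 0, n - 1)
    Y = X[np.round(src).astype(int)]  # nearest-neighbor bin lookup
    return np.fft.irfft(Y, len(x))

# Sanity check: the identity mapping reproduces the input frame
x = np.random.randn(512)
y = warp_frame(x, lambda k: k)
```

A compressing warp would be a `map_bin` that reads from bins farther out than it writes, i.e. the inverse of the desired frequency mapping.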
That would all be fairly easy in Nyquist (Audacity’s internal plug-in language). But I assume that you want to write a stand-alone plug-in.
Hence, you have to include a library with filtering and fast Fourier (or discrete cosine) transform capabilities. Or use MATLAB/Octave.
Theoretically you could adapt a “phase vocoder” algorithm (http://www.dspdimension.com/admin/time-pitch-overview/)
This is not an easy programming task.
It might be easier in this case because the sound doesn’t have to be shifted in a musical sense.
The task seems similar to the rendering of whale song to an audible frequency range.
Neither result will represent reality; the outcome is only subjectively adapted to our hearing experience.
We can hardly judge without further input.