I’d like to share my plugin. First of all, it’s not a Nyquist plugin but a LADSPA one, written in C and compiled with MinGW. I’m not quite sure how compatible it is with various systems; it’s developed on an Acer mini laptop with Win 8.1. But I’m willing to share the code - or the idea, if someone would like to convert it into a Nyquist plugin. I tried to get into Nyquist, but it seemed too tricky to get down to sample level.
Anyway, what my plugin does is rotate the stereo image. Imagine each individual instrument or voice at its own position in the stereo image. Then all the instruments start turning to the right, but the rightmost one turns around at the endpoint and starts moving back, against all the others. After a while most of the instruments are moving to the left while the last one is still moving to the right.
I like to think of the rotation as an orchestra standing on a rotating plate, on the front rim of it, along a 180-degree arc. Then the plate starts to rotate. After 90 degrees all instruments sound from the middle point to the far right. After 180 degrees the original stereo image is reversed. After 270 degrees all instruments sound from the middle point to the far left.
Well, this is all what happens in theory. Or just in my imagination. When one particular voice has reached the far right point and starts turning back, it sounds a bit weird. There is a kind of phase error, but you can imagine it as the instrument facing in the wrong direction. Anyway, it sounds as if all the individual voices are oscillating back and forth, right and left, each voice in its own phase.
So far my plugin rotates a marked section one full revolution (or several). The user interface has the following controls:
Rotate left instead of right (not yet implemented)
Smooth start and end (not yet implemented)
Number of rotations (there’s a small off-by-one error here: set it to 2 to make 1 rotation; if you set it to 1, nothing happens)
This is my first LADSPA project and I don’t really know much about the LADSPA API. Below is a link to the DLL file. There’s a lot to do before it’s ready, but if you dare to download it and try it out, I’d like to hear comments and suggestions.
I managed to create exactly the same waveform with your plugin. I don’t know exactly which effect it was; one of the rotating things. Though when I marked only a small section, some 10 wavelengths of a sawtooth wave, it didn’t turn the whole thing around. So far my plugin does whole turns (360, 720, 1080 degrees…) on a marked section, no matter how small or big the section is. Though I’m planning to implement other options too, like oscillating only, say, 45 degrees.
This was for me of pure academic interest. I have no idea of what this could be used for.
The rotation is in degrees per second and not cycles per selection.
The simple reason is that one can use it to determine the optimal value for vocal removal. That is, if the increasingly rotated sound (1 degree per second) has the lowest vocal level after 22 s, you would enter 22 for the voice removal.
So: variable rotation, vocal removal, 2 × undo, and then vocal removal with the found value.
Looking at the waveforms and phases and comparing them, it seems we are doing the same thing with the rotating stereo. Before I started to develop this, I searched the net for some info about the effect. I couldn’t find anything, probably because I didn’t know the right words. Is this a known method, and how usable is it? I wonder if the phase error would make it useless. I mean, when you twist a center-panned sound 180 degrees, left has become right and right has become inverted left, causing a non-distinctive direction of the sound - in theory. The same “phase error” is present everywhere between 90 and 270 degrees. Nevertheless, it kind of works. You kind of hear the back-and-forth movement. I have tested this only with computer-generated music (trackwise panning, MIDI, soundfonts and such). I’d like to try it on a good quality acoustic stereo recording. XY stereo miking or binaural.
The phase error does not always occur at the same place. It depends on the recording and the stereo width.
Small playback devices with built-in speakers invert (at least partially) one channel to simulate speakers that are farther apart. It is, in effect, a kind of XY-to-MS stereophony conversion.
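For reference, the plain XY (L/R) to MS conversion looks like this; inverting one channel before taking the sum and difference, as those small devices do, effectively swaps the roles of mid and side (this is my reading of what is going on, not a spec for any particular device):

```c
/* Encode a left/right pair as mid/side: mid is the average,
 * side is half the difference. Decoding is l = m + s, r = m - s. */
static void lr_to_ms(float l, float r, float *m, float *s)
{
    *m = 0.5f * (l + r);
    *s = 0.5f * (l - r);
}
```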
Stereo imaging is a very complex thing.
You have to know how many sources there are, the instantaneous width and the instantaneous position. Not to mention psychoacoustic perception, like the Haas effect and the like.
And if you succeed in the end, the result won’t be equally suited for speakers or headphones.
I don’t think that there is much information about this effect available. It is normally used passively, to interpret sample values as 2-D coordinates in e.g. Jellyfish to display overall “stereoness”.
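That passive 2-D interpretation is easy to reproduce: rotate each (L, R) pair 45 degrees before plotting, so mono material lands on the vertical axis and out-of-phase material spreads horizontally. This is the generic goniometer-style mapping, not necessarily what that particular display does internally:

```c
/* Map one stereo sample pair to goniometer-style display coordinates.
 * A pure mono signal (l == r) gives x == 0; anti-phase gives y == 0. */
static void gonio_point(float l, float r, float *x, float *y)
{
    const float k = 0.70710678f;  /* 1 / sqrt(2) */
    *x = k * (l - r);
    *y = k * (l + r);
}
```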
A diffused stereo image is one way to indicate “behind”. However, it is normally a question of frequency response, particularly boosting 1 kHz or cutting 315 Hz and 3150 Hz.