time-stretching by waveset repetition

Hey!

Before diving again into Lisp-style code for writing plug-ins to use in Audacity, I am sorting out some options for non-realtime, time-domain sound manipulation using wavesets (Trevor Wishart). For writing plug-ins for non-realtime sound processing, it was suggested that I look into the Nyquist method. Another option might be portsf. Is it possible to render a new wave file from a newly constructed audio file after applying a waveset transformation?

Some basics:

  • Analysis to get the sample index of the zero-crossing of every waveset.
  • Storage of the sample indices in a buffer.
  • Creation of a variable-length temporary buffer, or better, the new sound file.
  • According to the number of repetitions of a waveset, write it to the temporary buffer.
  • Repeat the last step for every waveset.
  • Replace or create a sound file with the result of this processing.
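The steps above can be sketched like this. This is only a Python sketch to illustrate the logic, not an actual implementation; the function names are my own, the sample data would in practice come from portsf or from the Audacity selection, and a waveset boundary is assumed to be the usual negative-to-positive zero-crossing:

```python
def find_wavesets(samples):
    """Return (start, end) sample indices of each waveset.
    A boundary is assumed at every negative-to-positive zero-crossing;
    the leading fragment before the first crossing is kept as-is."""
    boundaries = [0]
    for n in range(1, len(samples)):
        if samples[n] > 0 and samples[n - 1] <= 0:
            boundaries.append(n)
    boundaries.append(len(samples))
    # Pair consecutive boundaries into (start, end) wavesets.
    return list(zip(boundaries[:-1], boundaries[1:]))

def stretch_by_repetition(samples, repetitions):
    """Write each waveset `repetitions` times into a new buffer."""
    out = []
    for start, end in find_wavesets(samples):
        for _ in range(repetitions):
            out.extend(samples[start:end])
    return out

# Tiny demo: a crude signal containing two wavesets.
signal = [0.5, 0.5, -0.5, -0.5, 0.5, 0.5, -0.5, -0.5]
stretched = stretch_by_repetition(signal, 2)
# Each waveset appears twice, so the result is twice as long.
```

Here every waveset gets the same repetition count; a per-waveset count would work the same way.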

I am wondering whether this is possible with the use of Nyquist in Audacity.

With kind regards,

Marinus

The short answer is “yes it is possible”.

A slightly longer answer (with links to relevant documentation):

Nyquist is a programming language that has been shoehorned into Audacity to provide a simple way to write custom scripts and plug-ins. http://audacityteam.org/help/nyquist
Nyquist is also available as a standalone version without Audacity http://www-2.cs.cmu.edu/~music/music.software.html .

How plug-ins generally work is that Audacity takes the audio data from the track selection and passes it to Nyquist in the global variable “s”. If the audio track is mono then “s” will be the sound (“sound” is a data type in Nyquist); if the audio track is stereo then “s” will be an array with two elements, each element being a sound. After processing, the result returned by Nyquist is passed back to Audacity. If the returned result is a sound then Audacity will put it into the selected audio track. http://wiki.audacityteam.org/wiki/Nyquist_Plug-ins_Reference#Nyquist_Variables

Nyquist can only process one Audacity track at a time. If two sounds are required for a plug-in, it is often easiest to use a stereo track with one sound in each audio channel (see the Vocoder effect for an example).

It is also possible for Nyquist to read and write audio data directly from/to disk (though this is rarely used in Audacity plug-ins). http://www.cs.cmu.edu/~rbd/doc/nyquist/part6.html#42

Accessing and Creating Sound: http://www.cs.cmu.edu/~rbd/doc/nyquist/part8.html#81

Hi Marinus and Steve!

Just to let the rest of the forum know what we are talking about, here are two links to Trevor Wishart’s homepage and the books he has written.

I think what Marinus wants to do is a form of granular synthesis, not with arbitrary ‘grains’, but with clearly defined ‘wavesets’, starting and stopping at zero-crossings (if I have understood this right). Nyquist has no built-in support for granular synthesis, but the missing functions could be written by hand. Here is how this could work:

  • Analysis to get the sample index of the zero-crossing of every waveset - Not impossible, but it must be written by hand. The implementation of Nyquist in Audacity has the problem that, for sample-based analysis to find the zero-crossings, the entire selected sound from the Audacity track must be loaded into memory. This will crash Audacity with long sounds if Audacity runs out of memory. CMU Nyquist does not have this problem.


  • Storage of the sample indices in a buffer - I’m not quite sure what you mean here. With Nyquist you could either store all samples between the zero-crossings as a Nyquist wavetable to use in a wavetable oscillator, or you could store the start and stop times of every waveset, or the times of all zero-crossings, in seconds or as sample numbers in Lisp variables or a Lisp array. The ‘most natural way’ with Nyquist would be to store the wavesets as wavetables and use them in wavetable oscillators. This way you could pitch every waveset up or down by an arbitrary number of Hertz or MIDI steps, and also repeat every waveset an arbitrary number of times, i.e. you could play wavesets like notes, where Nyquist ‘MIDI steps’ and ‘notes’ are simply floating-point numbers, not limited to 128 MIDI key numbers. Everything needed for this is already built-in.


  • Creation of a variable-length temporary buffer, or better, the new sound file - Nyquist produces sounds on a sample-by-sample basis (strictly speaking, in blocks of 1024 samples each). CMU Nyquist tries to write the samples out to a sound file as fast as possible to free the memory. Nyquist in Audacity tries to return the samples to Audacity as fast as it can, where it’s left to Audacity to write the samples into the project’s audio files.


  • According to the number of repetitions of a waveset, write it to the temporary buffer - No problem, that’s what Nyquist does by default.


  • Repeat the last step for every waveset - Yes, of course, no problem.


  • Replace or create a sound file with the result of this processing - Nyquist in Audacity automatically returns the temporary sample buffer to the Audacity track. If the returned sound has a different length than the originally selected sound, then the length of the Audacity selection is automatically changed to the new length.
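To illustrate the wavetable idea from the points above in a language-neutral way, here is a Python sketch (the helper names are my own invention; in Nyquist the same thing would be done with the built-in wavetable oscillators): playing a stored waveset back at a different rate transposes it, and ‘MIDI steps’ can be plain floating-point numbers.

```python
def play_waveset(waveset, ratio):
    """Resample one stored waveset by linear interpolation.
    ratio > 1 transposes up (fewer output samples),
    ratio < 1 transposes down (more output samples)."""
    out_len = max(1, int(len(waveset) / ratio))
    out = []
    for i in range(out_len):
        pos = i * ratio
        n = int(pos)
        frac = pos - n
        a = waveset[min(n, len(waveset) - 1)]
        b = waveset[min(n + 1, len(waveset) - 1)]
        out.append(a + frac * (b - a))  # linear interpolation
    return out

def semitones_to_ratio(steps):
    """MIDI-style steps as plain floats, not limited to 128 keys."""
    return 2.0 ** (steps / 12.0)

# One octave up halves the waveset length (doubles its frequency).
ws = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]
up = play_waveset(ws, semitones_to_ratio(12.0))
```

Repeating such a transposed waveset an arbitrary number of times then gives the ‘play wavesets like notes’ behaviour described above.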

The only thing that would need to be written from scratch is the function to split a sound into wavesets and to find the zero-crossings. Nyquist has built-in functions to fade from one waveset to the next during composition.

If you want to perform the processing in two steps:

  • Decomposition - Split an arbitrary sound file into wavesets and store every waveset in its own sound file.


  • Composition - Combine an arbitrary number of wavesets from separate soundfiles into a new bigger audio sound file.

Then this task would probably be easier to solve with the standalone Nyquist version:

The Nyquist 3.0x zip-file from the download page contains a file “lib/gran.lsp” with some basic example code for granular synthesis.

  • edgar

Hallo,

Yes Edgar, it is a form of Granular Synthesis. Curtis Roads also gives a description of the distortion techniques which can be done with wavesets. As far as I know, wavesets are implemented in the Composers Desktop Project and in a quark written in the SuperCollider language (https://github.com/supercollider-quarks/Wavesets). The first is developed by Trevor Wishart himself.

The analysis of the zero-crossings is pretty easy, something like: if (x(n) > 0 && x(n-1) < 0) is true then it’s a zero-crossing.

With the storage buffer I meant storing the sample indices.

I will let you know when the implementation is done in Nyquist. There are still options open, since it’s also part of a study project. Anyway, I will post the code when it is done, using C or Nyquist.

Thnx!

In Nyquist you could do something like:
(if (and (> (aref samples n) 0)
         (<= (aref samples (1- n)) 0))
    (setq zero-crossings (cons n zero-crossings)))
where samples is an array of samples grabbed from the sound (here, just as an example, the index n is pushed onto a list of zero-crossings).

See here for how to grab an array of samples from a sound: Nyquist Functions
Depending on exactly what you want to do, processing the audio in chunks will probably be more efficient than processing the entire track in one go, though there still needs to be sufficient available RAM to hold the entire selection.
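The chunked approach can be sketched like this (plain Python with a made-up chunk source, just to show the logic; in Nyquist the chunks would come from repeated calls to a sample-fetching function): the only subtlety is carrying the last sample of each chunk over, so that a zero-crossing falling exactly on a chunk boundary is not missed.

```python
def zero_crossings_chunked(chunks):
    """Scan successive chunks of samples for negative-to-positive
    zero-crossings, returning absolute sample indices.
    `chunks` is any iterable of sample lists."""
    indices = []
    offset = 0   # absolute index of the first sample in the chunk
    prev = None  # last sample of the previous chunk, carried over
    for chunk in chunks:
        for i, x in enumerate(chunk):
            if prev is not None and x > 0 and prev <= 0:
                indices.append(offset + i)
            prev = x
        offset += len(chunk)
    return indices
```

Because only one chunk plus one carried-over sample is in memory at a time, this works the same whether the chunk size is 1024 samples or the whole selection.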

For buffers you could perhaps use lists.

If you want speed then it would probably be better to program in C++ rather than Nyquist (though I’d be very interested to see an Audacity/Nyquist implementation).

I only wanted to say that with Nyquist it’s also possible to store a sound e.g. in a variable, analyze it and store the zero-crossing indices in other variables, and then write functions like “play the part of the sound from zero-crossing number two to zero-crossing number seven” (or any other zero-crossing indices), and in this way construct a new sound out of parts of the original sound. It’s not absolutely necessary to use wavetable oscillators for this.
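A small Python sketch of that idea (here zc is assumed to be the list of zero-crossing sample indices found by the analysis; the function name is hypothetical):

```python
def sound_between(samples, zc, start, stop):
    """Return the part of the sound from zero-crossing number
    `start` to zero-crossing number `stop`."""
    return samples[zc[start]:zc[stop]]

# A new sound constructed out of parts of the original, e.g.
# zero-crossings two to seven, then the first two wavesets again:
# new_sound = sound_between(samples, zc, 2, 7) + sound_between(samples, zc, 0, 2)
```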

  • edgar