I’m wondering whether it is possible to ‘code’ an entire track (or multiple tracks), like “from timestamp 0 to timestamp 10, a chirp from 440 Hz to 880 Hz …”, with all the additional information needed for a chirp (or another type of tone).
I want to make an audio file in which the frequencies at specific moments come from a (stochastic) formula (made in Excel at the moment), with a chirp between these points. I tried it manually: very labour-intensive, and practically impossible to change, as the points are closely related to one another. And trying out various sets of data is the core of what I want to do.
An example of what I wanted to ‘smooth’ out (it’s now stepping from semitone to semitone): https://www.youtube.com/watch?v=m2x85LB0ZO8. Every white dot is a ‘moment with parameters’.
For example, this code may be run in the Nyquist Prompt effect (see: http://manual.audacityteam.org/man/nyquist_prompt.html)
The “data” list contains a sequence of frequency and duration pairs, where frequency is in Hz and duration in seconds.
So in this specific case, a sine wave is produced that starts with a frequency of 1000 Hz, which then rises over a period of 1 second to 2000 Hz, then over the next 1/2 second the frequency falls to 500 Hz and stays at 500 Hz for 1 second.
Interpolation between the data points is linear, as can be seen in the track spectrogram:
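For anyone who doesn’t run Nyquist, the same idea can be sketched in Python. This is only an illustration of the logic, not the Nyquist code itself: the data values mirror the example described above (1000 Hz rising to 2000 Hz over 1 s, falling to 500 Hz over 0.5 s, then held for 1 s), while the function names and the 44100 Hz sample rate are my own choices. Frequency is interpolated linearly between the breakpoints, and the phase is accumulated sample by sample so the sweep stays click-free:

```python
import math

# (frequency in Hz, duration in seconds) pairs; the final pair's duration
# is zero because it only supplies the end frequency of the last segment.
data = [(1000.0, 1.0), (2000.0, 0.5), (500.0, 1.0), (500.0, 0.0)]

def freq_at(t, data):
    """Piecewise-linear interpolation of the frequency at time t."""
    elapsed = 0.0
    for (f0, dur), (f1, _) in zip(data, data[1:]):
        if dur > 0 and t <= elapsed + dur:
            frac = (t - elapsed) / dur
            return f0 + frac * (f1 - f0)
        elapsed += dur
    return data[-1][0]  # past the last breakpoint: hold the final frequency

def synth(data, rate=44100):
    """Render the glissando by integrating phase from the frequency curve."""
    total = sum(d for _, d in data)
    phase, samples = 0.0, []
    for i in range(int(total * rate)):
        phase += 2 * math.pi * freq_at(i / rate, data) / rate
        samples.append(math.sin(phase))
    return samples
```

Changing the `data` list (e.g. pasting values generated in Excel) is then enough to re-render the whole track, which is exactly what makes this approach less labour-intensive than editing points by hand.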
If you have further questions about programming Nyquist scripts, we have a forum board here: http://forum.audacityteam.org/viewforum.php?f=39
Temporal blurring of any audio is possible with Audacity’s “Paulstretch” …
But I think you may be looking for a glissando: where the frequency glides between notes.
The easiest way to do that is to have your program interpolate between the points.
If I’ve interpreted your equations correctly, this gives the music for one Earth year, compressed into 36.5 seconds.
“basehz” is an arbitrary frequency offset.
I have to take a good look at the second set of code, because the result sounds very different from what I had in mind - maybe because it’s quite slow (and then the semitone restriction makes nice dissonances).
But the fact that I can program in Audacity opens a lot of possibilities - kind of a Pandora’s box, as there are only so many hours in a day.
Ultimately I would like to find a way to combine audio and video, where synchronicity is vital. Or maybe something with motion capture where the position of a hand leads to sound. All based on formulas/logic.
Xenakis would have loved this, I think (have you read “Formalized Music”? It contains Fortran code that I haven’t deciphered yet).
PhotoSounder was one of the applications I researched for my wish “to make pictures sound”. Audiopaint was the only one with a free version that you could actually do something with. On my YouTube channel I have a few videos that were done with it (‘Star Wars’ and ‘La Tour Eiffel’).
Now I want to move randomly through a picture, pick up information (colour, position) and transform it into musical parameters. And those I might feed into Audacity, once I get the hang of the Nyquist language.
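A rough sketch of that picture-to-parameters idea in Python. Everything here is a placeholder assumption rather than anything settled in this thread: the frequency range (100–2000 Hz), the duration range (0.05–0.5 s), the vertical-position-to-pitch mapping, and the use of random brightness values instead of real pixel reads. The resulting (frequency, duration) pairs are exactly the shape of the “data” list discussed above:

```python
import random

def pixel_to_event(y, brightness, height):
    """Map a pixel's vertical position and brightness to (frequency, duration).

    Hypothetical mapping: top of the image = high pitch (up to 2000 Hz),
    brighter pixel = longer note (up to 0.5 s).
    """
    freq = 100.0 + (1.0 - y / height) * 1900.0
    dur = 0.05 + (brightness / 255.0) * 0.45
    return freq, dur

def random_walk_events(width, height, n, seed=0):
    """Visit n random pixel positions and build a list of sound events."""
    rng = random.Random(seed)
    events = []
    for _ in range(n):
        y = rng.randrange(height)
        brightness = rng.randrange(256)  # stand-in for reading a real pixel
        events.append(pixel_to_event(y, brightness, height))
    return events
```

A real version would read brightness (or separate colour channels) from an actual image, but the point is that once the picture is reduced to frequency/duration pairs, the same chirp-rendering machinery takes over.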
And the actual picture is kind of not in the picture anymore… (although I want it back at some point)