From what I read, the API only offers a zombie mouse pointer and nothing else, so it’s almost useless for audio processing.
Regarding the plug-in programming options, the only one that hosts Python is Vamp, but I’m not sure what Vamp is able to do. I read through the Vampy man page, and it isn’t clear about its capabilities; the only clarification is that it is centered on spectral analysis and does a neat job of reading some floats returned from Python, which I’m not interested in.
Just give me a way to send audio to Python and then paste audio back from Python, and I will be utterly happy. I’m trying to deal with SAL/LISP in the meantime, but I’m not doing well with sample processing so far.
I’m getting the sample times wrong, and I’m reluctant to work through all the time logic right now. I wish there were a simpler way to convert from a sound to an array and back without having to account for lengths and times.
And don’t say I’m not trying: right now I have replaced the slice method with multiplication by a square window:
set w = pwlv(1, get-duration(0.1), 1, get-duration(0.15), 0, get-duration(0.25), 0)
return track * w
I don’t know if there’s something wrong with the file I’m working with, but I’m getting different lengths every time: at first, using duration 1 tripled the duration; now it seems the total duration is near 0.25. What’s worse, track * w returns just track, even though half of w is 0.
You have to take some account of lengths and times because if you try to make an array too big, Nyquist will run out of memory (Nyquist can only access a maximum of 2 GB).
For a short selection (say up to about a million samples), you can load samples from a mono track into an array, and then convert it back to a sound and return it to Audacity like this:
(setf num 1000000) ; number of samples to fetch
;; Number of samples may be more than the selection length
;; So limit the number of samples if the selection isn't big enough.
;; LEN is a system variable that = the number of selected samples.
;; The number must be an int, but LEN is a float, so use TRUNCATE
(setf num (min num (truncate len)))
(setf step num) ; step size if we grab another array
;; Get the samples
(setf my-sample-array (snd-fetch-array *track* num step))
;; Just to prove that we do have the sample values, let's
;; multiply each value by 0.5
(dotimes (i num)
  (setf (aref my-sample-array i) (* (aref my-sample-array i) 0.5)))
;; Convert back to a sound
(setf srate *sound-srate*) ; The sample rate
(setf starttime 0) ; Must be 0 (in most cases)
; Do the conversion back to a mono sound
(snd-from-array 0 srate my-sample-array)
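If you try that in the Nyquist Prompt with a mono selection, it should give you back the first num samples of the selection at half amplitude.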
That code you shared is like poetry for programmers. I still have to try it, though.
A different approach could be to multiply by an envelope window. I got onto this idea yesterday while talking with you, but I had no luck getting pwle to make that window, since its input parameters are totally warped. You start reading the description of pwle and it’s all candy until you get past the first sentence:
The breakpoint times are scaled linearly by the value of *sustain* (if *sustain* is a SOUND, it is evaluated once at the starting time of the envelope).
Each breakpoint time is then mapped according to *warp*.
The result is a linear interpolation (unwarped) between the breakpoints.
The sample rate is *control-srate*.
All of that sounds like swearing to me: so much complexity, so many embedded parameters, just to synthesize a simple function! And anyway, what does it mean by time? I still can’t get it to work. I know that sustain = 1, warp = some swear value, and control-srate = 2400! How do I make a window of aperture len/n, starting at time len/m, inside audio of duration len? By window I mean a single square pulse like this: _____****________________
so that I can multiply it by *track* to get only that windowed part?
And by the way, what’s the difference between setf, setq, and set?
You’re not transforming sustain or warp so you can ignore those bits for now.
“Linear interpolation” basically means “drawing a straight line between adjacent points”. So if you have a point with value=1 at time=1, and another point with value=0 at time=2, then the signal is “interpolated” along a straight line between those points. For example, at time=1.5, the value will be 0.5.
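In Nyquist terms, a minimal sketch of the same thing (my own numbers, with times in the envelope’s own time units):
;; Implicit start at (time 0, value 0), breakpoint (1, 1), implicit end (2, 0).
;; The control signal ramps up from 0 to 1 and back down to 0;
;; halfway between breakpoints, e.g. at time 1.5, the value is 0.5.
(pwl 1 1 2)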
The sample rate is *control-srate*.
For efficiency, Nyquist uses a low sample rate by default for control signals. This lower sample rate is called the “control rate”, and it can be accessed through the system variable *control-srate*. By default, the control rate is 1/20th of the audio sample rate. For example, if the track’s sample rate is 44100 Hz, then the default control rate will be 2205 Hz.
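Coming back to your square-pulse question, here is a rough sketch (untested, and using quarters of the selection rather than your len/n and len/m). It assumes a version-4 plug-in, where *track* is the selected audio and envelope times 0 to 1 span the selection. Very short ramps are used instead of vertical steps, which also avoids clicks at the edges:
;; 0 for the first quarter of the selection, 1 for the second quarter,
;; 0 afterwards.
(setf gate (pwl 0.25 0 0.26 1 0.49 1 0.5 0 1))
;; Apply the window to the selected audio.
(mult *track* gate)
Because the window is a control signal, its edges can only be as sharp as *control-srate* allows, but that is usually plenty.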
For now, just use setf.
The name comes from “Set Field”, and it can be used with list and array elements as well as variables. It can do everything SETQ can, and more, so SETF tends to be used whenever you need to set the value of something.
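A quick illustration (my own examples, not from the manual):
(setq x 1)                  ; SETQ sets a variable
(setf x 2)                  ; SETF does the same for a plain variable
(setf arr (make-array 3))   ; a three-element array
(setf (aref arr 0) 0.5)     ; SETF can also set an array element; SETQ can't
SET is like SETQ except that it evaluates its first argument, for example (set 'x 3), so you rarely need it.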
It might be a good idea to start a new topic in which you describe clearly what you are trying to do. It is often a lot easier to follow concrete examples rather than talking syntax and concepts.