Improvement to the vocoder effect

I read the code for the vocoder effect, and while I don’t fully understand all of it, I think I see an inefficiency caused by a repeated calculation. This version, with one fewer call to bandpass2, might improve it a bit. I verified identical output to the old version on certain inputs.
vocoder2.ny (4.69 KB)

Thanks Paul.
Yes, I’m seeing a speed improvement of about 5 to 10%. The biggest improvement I’m getting is with the number of vocoder bands set to the minimum (10).

While we’re at it, it may be a good idea to try to unscramble the code a bit. This isn’t any faster than your optimisation, but perhaps it’s a bit clearer what the function is doing?

(defun vocoder ()
  (do* ((i 0 (1+ i))
        mod-envelope  ; local variable for filtered envelope of left channel.
        band          ; local variable for band-passed audio.
        (result 0)    ; result must be initialized because you cannot sum to NIL.
        (q (/ (sqrt 2.0) (/ octaves bands)))    ; quick approximation of q for given bandwidth.
        (p (+ (hz-to-step 20) (/ interval 2.0)) ; midi step of 20 Hz + offset.
           (+ p interval))
        (f (step-to-hz p) (step-to-hz p)))
      ((= i bands) result)            ; DO for each band then return 'result'.
    (setf band (bandpass2 s f q))     ; make audio calculations within the DO loop for efficiency.
    (setf mod-envelope (lowpass8 (s-abs (aref band 0)) (/ f dst)))
    (setf result
      (sum result
          (bandpass2
            (mult mod-envelope (aref band 1))
            f q)))))

(Personally I much prefer brief, meaningful variable names and avoid single-character names, although “p” and “f” are probably not too bad here since they have quite limited scope.)

An observation about the above code: “declaring” the variables “band” and “mod-envelope” in the loop bindings gives slightly better performance on my (Linux) machine than binding them only within the loop body. I’m not sure why that happens - any idea? Is that also the case on Windows (or whatever OS you use)?

I believe that a LET or LET* inside a loop is a thing to avoid with this Lisp implementation; it is better to reassign a variable declared at the top of the loop or outside it. Is that correct? It seems much of the Lisp layer of the library was written that way by Roger. I actually read the sources for the interpreter and I think I see why: there is garbage to collect whenever a variable binding is discarded, and that is what happens on each pass of a loop that contains an inner LET. This is unfortunate for code elegance, but so what.

Here is another thing to fix that I didn’t: if you apply the effect to a mono track, it fails with no informative message about how to use it properly. I think the cause is this line:

; if track volume slider is less than 100 percent decrease track volume
(if (< track-vl 100) (setf s (vector (aref s 0) (scale track-vol (aref s 1)))))

and the two similar IFs after it, none of which check that “s” is a vector (a stereo track) before calling AREF on it. Those lines should be moved into “the program” at the bottom of the file.

I have not yet tested the performance of your changes, as you suggested.

I was not trying to change everything for clarity: I indented the function vocoder better, and I added the variable band to avoid the repeated bandpass calculation. I combined three setf-s into a single serial setf in the superstitious belief that it’s marginally faster, which may be true only negligibly, and I don’t mind seeing that change reversed again by you.

I don’t yet understand why the q factors are chosen as they are.

It is interesting to see so complex an effect in so little code, and interpreted code at that. This is why I’m not shelling out for non-free software: why pay for something I can’t extend! :slight_smile:

I suppose I don’t need to explain that I applied the two effects to identical inputs, inverted one, mixed them, and verified with Sound Finder and by looking at the spectrogram that the result is all silence.

Thus, no change in results.

A good way to check that two tracks (32-bit float) are identical is to invert one, Mix and Render (to a new track), then open the Amplify effect. If the selected track is absolute silence, the Amplify effect shows the current level to be “-infinity”. I think this method is foolproof, so a good one to recommend if anyone asks.
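If the two versions are panned into the left and right channels of a single stereo track, a similar check can be run numerically from the Nyquist Prompt. This is just a sketch, assuming a stereo selection; a result of 0 means the channels are sample-identical:

;; Peak absolute value of the difference between the two channels.
(snd-maxsamp (diff (aref s 0) (aref s 1)))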

If the value of the binding is constant, then definitely yes.

;; slow version
(dotimes (i 4000000)  ; big enough to time
  (let ((a 4))
    (if (= a 5) (print "oops"))))
"done"



;; Faster version
(let ((a 4))
  (dotimes (i 4000000)  ; big enough to time
    (if (= a 5) (print "oops"))))
"done"

If the value needs to be recalculated each time, then I think it is probably better to make the bindings within the DO loop itself rather than in a separate LET. So I prefer:

;; a little quicker
(do* ((i 0 (1+ i))
      (a 4)
      (b 0 (* i a)))
     ((= i 1000000) "done"))

to

(do* ((i 0 (1+ i)))
     ((= i 1000000) "done")
  (let ((a 4))
    (setf b (* i a))))

or

(let ((a 4))
  (do* ((i 0 (1+ i)))
       ((= i 1000000) "done")
    (setf b (* i a))))

but probably more important than a fractional difference in speed is how easy it is to see what the final value of “b” will be. Which of the following are equivalent?

(do* ((i 0 (1+ i))
      (a 4)
      (b 0 (* i a)))
     ((= i 100) (print b)))

(do* ((i 0 (1+ i)))
     ((= i 100) (print b))
  (let ((a 4))
    (setf b (* i a))))

(let ((a 4))
  (do* ((i 0 (1+ i)))
       ((= i 100) (print b))
    (setf b (* i a))))
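For what it’s worth, the three are not all equivalent (assuming the usual DO* semantics, in which the step forms are evaluated before the end test on each pass):

(do* ((i 0 (1+ i))
      (a 4)
      (b 0 (* i a)))
     ((= i 100) (print b)))  ; prints 400: "b" steps to (* 100 4)
                             ; just before the end test succeeds

In the second and third versions the body last runs at i = 99, leaving “b” at (* 99 4), so both print 396; those two are equivalent to each other but not to the first.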

I’m impressed. Definitely worth doing but not something I’ve got round to :blush:

That makes sense. I’ve not studied where automatic garbage collection occurs, but it is important if the code needs to be as efficient as possible (not usually a major concern).

On the subject of speed and efficiency, you may not be aware, but Nyquist (in Audacity) runs much slower than on Windows (though most built-in effects run a little quicker on Linux, even without specifying any build optimizations).

When Nyquist first appeared in Audacity, I don’t think it was anticipated that it would have a great deal of practical use; it was rather thought of as an interesting feature for geeks who want to experiment with simple DSP. While I agree that Nyquist in Audacity IS an interesting feature for geeks, I think it has been proved over and over again that it really is an important and extremely useful feature. (Your thoughts about this?)

For an “experimental” feature, error checking and usability are probably not the major concern, but as the emphasis shifts away from “experimental” toward “practical” plug-ins, the user experience becomes much more important. Gradually wandering around the garden toward the point … Yes, I think it is much better to include error checking - I hate seeing that unhelpful “Nyquist did not return audio” message :wink:

This whole section:

; if track volume slider is less than 100 percent decrease track volume
(if (< track-vl 100) (setf s (vector (aref s 0) (scale track-vol (aref s 1)))))

; if radar volume slider is more than 0 percent add some radar needles
(if (> radar-vl 0) (setf s (vector (aref s 0) (sim (aref s 1)
  (scale radar-vol (osc (hz-to-step radar-f) duration *radar-table*))))))

; if noise volume slider is more than 0 percent add some white noise
(if (> noise-vl 0) (setf s (vector (aref s 0) (sim (aref s 1)
  (scale noise-vol (noise duration))))))

should really be moved into the main body of the program, and “S” tested for stereo before it gets there.
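Something along these lines, perhaps (just a sketch; the exact message text is illustrative):

;;; The program
(if (arrayp s)
    ;; "s" is a stereo track, so the volume, radar and noise
    ;; adjustments can safely use (aref s 0) and (aref s 1).
    (vocoder)
    ;; Returning a string makes Audacity display it as a message.
    "Error.\nVocoder requires a stereo track.")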

I think that is true (though quite a small difference).
Also, PSETQ is fractionally quicker than multiple SETQ statements, but a single SETQ or SETF with multiple symbol/value pairs seems to be the quickest.
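For reference, the three forms being compared (a trivial illustration):

(setq a 1)        ; two separate assignments
(setq b 2)

(psetq a 1 b 2)   ; parallel assignment: both value forms are
                  ; evaluated before either symbol is set

(setq a 1 b 2)    ; one SETQ with multiple symbol/value pairs -
                  ; reportedly the quickest of the three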

I’m quite happy for them to be combined. Personally I find it slightly easier to read if each has its own “setf”, but your indentation makes the code clear.

If I recall correctly, the usual formula for converting between bandwidth and q for a bandpass filter is (Lisp notation):

(/ (sqrt (power 2 N)) (- (power 2 N) 1))

where “N” is the “octave bandwidth”.

For values of N less than 1, a reasonably close approximation is:

(/ (sqrt 2.0) N)

This vocoder effect has been written for between 10 and 240 bands across the range of 20 Hz to 20 kHz, so the approximation should be accurate enough.
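As a rough check (my numbers, not from the plug-in): with 10 bands across the roughly 10 octaves from 20 Hz to 20 kHz, N is about 1, where the two formulas agree exactly; with 240 bands, N is about 0.04, where they differ by only a couple of percent:

;; Comparing the exact and approximate q values.
(defun exact-q (n)
  (/ (sqrt (power 2 n)) (- (power 2 n) 1)))

(defun approx-q (n)
  (/ (sqrt 2.0) n))

(print (exact-q 1.0))    ; 1.414... (identical to the approximation at N = 1)
(print (approx-q 1.0))   ; 1.414...
(print (exact-q 0.042))  ; about 34.3
(print (approx-q 0.042)) ; about 33.7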

Here is another version that rewrites “program” at the bottom, and follows your suggestions for function “vocoder.” I changed one comment of yours, and also changed some single ; to ;; or ;;; according to conventions. The title of the effect is still not “vocoder.”
vocoder2.ny (5.17 KB)

Do you mean that Nyquist in Audacity runs slower on Windows than standalone Nyquist does on Windows?

Or do you mean that Nyquist in Audacity runs slower on Windows than it does on Linux?

I have had some ideas for hacking the interpreter to be faster. An “easy” idea is not to let interpreter garbage wait for collection, but to give it back to the allocator at once. A more ambitious idea would be to abandon the mark-and-sweep collection in favour of a generational collector. I’m not trying them yet; that would likely consume all my hobby time and put my narrations on hiatus…

The other way round: Nyquist in Audacity runs slower on Linux than Nyquist in Audacity does on Windows.
I’ve not compared standalone Nyquist on different platforms, but for Nyquist in Audacity the difference is very large, sometimes as much as 10 times slower on Linux. For example:

(dotimes (i 10000000 a)
  (setq a i))

On my Linux computer, the run time is about 19 seconds (debug build). It is probably somewhat faster for a release build, but I can’t test that right now; from previous tests I’d guess about 15 seconds.

On Windows XP in a virtual machine (same hardware) using the release version of Audacity 2.0.5, the run time is less than 3 seconds.

I was curious about the vocoder to begin with, because splitting the signal into many bands is something it has in common with my de-clicker. In my case I examine the bands to figure out which intervals of the original to filter, and how. But the vocoder reassembles a signal from the splits in a weird way. I am surprised that intelligible speech can still result.

The method I use to find bands may be ridiculously complicated and expensive. I may research other ideas. I have to learn the math for real.

That is crazy. Somebody should research why. xlisp is all written in platform-neutral C. Are you sure there isn’t just some silly mistake in the makefiles? This bit of C amid all the C++ might have been compiled specially, and wrongly.

But you should compare release to release to be sure.

“That is crazy”: Yes, sure is.
“Somebody should research why.”: I’ve brought it to the attention of the powers that be. Actually, I’ve just noticed that it was not logged on bugzilla, so I’ve now added it.
“xlisp is all written in platform neutral C”: Yes, sure is.
“Are you sure there isn’t just some silly mistake in the make files?”: No. That’s a bit beyond my skill set :wink:
“But you should compare release to release to be sure.”: It’s been that way for as long as I can recall, with little difference between one version and another.

One thing is certain, the issue is specific to Nyquist in Audacity. If I run:

(dotimes (i 10000000 a)  (setq a i))

in stand-alone Nyquist on the same Linux machine, it completes in about 2 seconds.

Trouble is that few of the Audacity developers know much about Nyquist, and few develop on Linux.
How good are your C skills?

I think I am very good at writing and debugging C, but not necessarily up on the differences between compilers and their options.

What you say about standalone Nyquist working fine makes me suspect some mistake in compiler options in the Audacity build.

There is Nyquist and there is xlisp: what you wrote is purely an exercise of xlisp.

Another project I might try is a little bit of multicore exploitation in Nyquist… but that isn’t properly an Audacity project.

Some calculations like convolution might be of the “embarrassingly parallel” sort that could be “easily” annotated with OpenMP pragmas.

Just musing. What can you tell me about concurrency elsewhere in Audacity?

Currently Audacity does not support multiple processors.

Over the years there has been some exploratory work on this. Here is one of the older discussions: http://audacity.238276.n2.nabble.com/multithreading-td2240954.html
Also some work using SSE (http://audacity.238276.n2.nabble.com/Requested-EQ-SSE-threaded-patch-tt7560515.html)

Although this is way beyond my rudimentary programming skills, processing the channels of stereo tracks looks (intuitively) like it would benefit a lot from parallel processing.

The good news is, I’ve located the problem. It’s due to a call to the undocumented and obsolete wxWidgets function “wxYieldIfNeeded()”.
I’ve added that information to the bug report, so hopefully it will be fixed soon.

I suspect that wxYieldIfNeeded() is specific to Windows, so in the meantime I’ve commented out that line in my own copy (on Linux) and all seems to be well.