You’re closer than you might think: (snd-biquad sound b0 b1 b2 a1 a2 z1init z2init) is nothing but a direct Lisp wrapper around:
snd_biquadfilt(sound_type s, double b0, double b1, double b2, double a1, double a2, double z1init, double z2init) in “biquadfilt.c”
This direct calling of C code is the reason why the SND-… functions crash on wrongly-typed Lisp argument values. A further complication is that the C code in “biquadfilt.h” and “biquadfilt.c” is not written by a human; instead it is auto-generated by the intgen macro-processor (see the “intgen” appendix in the Nyquist manual) from the definitions in “biquadfilt.alg”, so besides C you also need to understand intgen, which I only partially do.
Yes, “negative infinity overflow” was the first thing that came to my mind when I read “returns only -1 samples” in the original description of bug 152.
This is not an easy thing because IEEE Inf and NaN are treated inconsistently by different C compilers on different operating systems.
The problem is that the mathematically correct behaviour of floating-point numbers would be:
- number > most-positive-float => error: positive floating-point overflow
- number < most-negative-float => error: negative floating-point overflow
- 1.0 / 0.0 => error: division by zero
But IEEE floating-point numbers are designed to be used in physics simulations, so the values Inf (infinity) and NaN (not-a-number) were introduced to avoid floating-point errors, with the consequence that IEEE floats have a very weird behaviour with very big or very small numbers:
- number > most-positive-float => +Inf
- number < most-negative-float => -Inf
- 1.0 / 0.0 => +Inf
- 1.0 / Inf => 0
- Inf / Inf => NaN
- 1.0 / NaN => NaN
You remember the “1 / 0Hz = infinite time” discussion? Mathematical division is only one of many cases where Inf and NaN must be treated differently from “real” floating-point numbers, and many of these cases are not at all clearly defined in the IEEE standard.
See David Goldberg’s “What Every Computer Scientist Should Know About Floating-Point Arithmetic” for another myriad of details.
The problem is that in the real world the behaviour of IEEE floats is mathematically incorrect for values that do not fit into the specific floating-point format (32-bit, 64-bit, etc.), and code that catches these errors is many times slower than simply accepting them, because IEEE floats are implemented in hardware, not in software. The only other way is not to use IEEE floats at all (see e.g. http://gmplib.org/), but with the inevitable disadvantage that software floats would make Nyquist many times slower.
I will ask some software engineers here (who know this better than I do) what to do in such a case. Every C standard library provides IEEE floating-point “traps” to catch such errors, but this will probably end in platform-dependent #ifdef code.
This is never a bad idea, because Nyquist lacks a lot of argument checks for wrong values, with the consequence that you get huge error backtraces where you first have to travel back several miles until you find the real error.
IMO every Nyquist high-level function (the functions that are intended to be called directly from user code) should have tests at least for obviously wrong argument values. The problem here is that XLISP and Nyquist are designed as experimental languages, where doing experiments always includes doing wrong things (and hopefully learning something from them), so Roger deliberately omitted these tests. The other reason is that argument tests always introduce restrictions, so finding good argument tests without introducing unnecessary restrictions is often more difficult than designing the language itself.
Once again a megaton of details to consider for a problem that initially looked soooo simple …