This is an abstraction of what I am trying to do. Certain combinations of values for mm and nn are poisonous, others not. It is not obvious what the pattern is yet, but I started running into this only when I made mm much larger than before.
Here is a simpler way to demonstrate the crash. As written it does not crash. Change the definition of nn as commented and it does crash, and apparently for any larger value.
Is it significant that this least value of nn that causes a crash is one half of the step size?
If I fix nn at 110 and vary mm, I get a crash for 550 (another multiple of half the step size, and I presume for all larger mm) but not for 549 (and I presume for all smaller).
And notice this strangeness: if use-convolve is nil, there is no crash, yet if snd3 is defined by convolve, there is a crash, even though it has been completely evaluated first by snd-flatten.
Now take this version, and just vary multiplier. 5 is the least value causing a crash, but there is no crash with 6 or 8. I did try up to 11 and got crashes for 7, 9, 10, 11.
You refer to the third of my four messages above. I said any positive value for the HALF-step, which means an even value of the step. I just reconfirmed that there is a crash with half-step of 109.
Then snd4 is returned to the track.
Then if you run snd-avg:
(snd-avg s 440 220 OP-PEAK)
it completes correctly with no error.
There should really be no difference between splitting the process in two like this and your original code, yet the original code reliably crashes.
My current guess is that convolve is doing something bad and that snd-avg is then stepping in it.
I think that to sort this one out will require running the code in a proper debugging environment (one for Rodger).
Something about the intermediate result is foiling snd-avg, for sure. Strangely it remains so even though snd-flatten is first done to the result of convolve.
This bug makes me unhappy enough that I will try writing my own snd-avg using snd-fromobject.
Another detail worth mentioning. It appears, from the time it takes to crash, that something is wrong at the end of the sound returned from snd-avg. The crash happens later if the bad sound is longer. The progress indicator for my plug-in advances almost to the end before crashing.
What’s faster as a workaround is to call snd-avg with block size equal to step size, both on the original sound and on snd-xforms of the sound that drop a multiple of step-size samples from the start. Then use snd-fromobject to interleave the samples of the several snd-avg results.
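For what it's worth, the interleaving idea can be sanity-checked outside Nyquist. Here is a minimal sketch in Python/numpy (the function names and the peak operation are my own stand-ins, not Nyquist calls): it emulates an overlapping OP-PEAK pass by running non-overlapping passes over shifted copies of the signal and interleaving the results.

```python
import numpy as np

def overlapped_peak(x, block, step):
    """Peak over overlapping windows: what snd-avg with OP-PEAK computes."""
    return np.array([x[i:i + block].max()
                     for i in range(0, len(x) - block + 1, step)])

def interleaved_peak(x, block, step):
    """Same result built from non-overlapping passes over shifted copies."""
    assert block % step == 0
    k = block // step
    # pass j sees the sound with j*step samples removed from the start,
    # and uses block size equal to step size (i.e. no overlap)
    passes = [overlapped_peak(x[j * step:], block, block) for j in range(k)]
    out = []
    i = 0
    # window i of the overlapping pass is element i // k of pass i % k
    while i // k < len(passes[i % k]):
        out.append(passes[i % k][i // k])
        i += 1
    return np.array(out)
```

Window i of the overlapping pass starts at sample i*step, which is exactly element i // k of the non-overlapping pass over the copy shifted by (i mod k)*step samples, so the interleaved output matches sample for sample.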
Trying another workaround: compute a convolution with a big window as a sum of convolutions with pieces of the window, shifting the summands appropriately. This should give identical results while avoiding large windows, provided I use enough pieces.
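The piecewise trick is just linearity plus a shift: convolving with the whole window equals the sum of convolutions with consecutive pieces of the window, each summand delayed by its piece's offset. A quick numpy check of that identity (again a stand-in for illustration, not the Nyquist code):

```python
import numpy as np

def piecewise_convolve(x, win, piece_len):
    """Convolve x with a long window as a sum of shifted piece-convolutions."""
    out = np.zeros(len(x) + len(win) - 1)
    for start in range(0, len(win), piece_len):
        piece = win[start:start + piece_len]
        part = np.convolve(x, piece)           # full convolution with one piece
        out[start:start + len(part)] += part   # delay by the piece's offset
    return out
```

In Nyquist the delayed summands would be combined with sim/at rather than array addition, but the arithmetic is the same.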
It seems after all that large convolution windows are the source of the trouble, though only when snd-avg is applied, and then even if the convolution has first been passed through sum, snd-xform, or snd-flatten.