Hi all, I’m new to Audacity and I’m a bit surprised to see (please correct me if I am wrong) that there are no interpolation options when time-zooming a waveform:
Other programs have this as a basic feature. My favourite oscilloscope plugin, Signalizer, allows for up to four interpolation options including linear and Lanczos (which is the most interesting, being the closest option to an ideal sinc-based reconstruction filter):
Linear:
Lanczos:
I see this has already been discussed in the past with no result. This feature would be very convenient for deep-dive sample analysis. For instance, it can help detect possible clipping in the DAC stage of non-clipped samples (see the Lanczos example above: the expected analog output reaches higher levels than those indicated by the individual samples).
That’s a good thing, thanks Steve. Still, a more sophisticated interpolation, mimicking what a real DAC would generate at its output, would be greatly appreciated. If you look at my examples, the linear mode doesn’t accurately represent what the output would be. The Lanczos interpolation, by contrast (a simple and well-known formulation), robustly reconstructs what is expected from the D/A converters: no matter where the individual samples fall in time, the reconstructed waveform remains the same.
Anything other than the stem plot will only be an approximation of what the DAC does. Sinc interpolation would be a good approximation to the theoretical analog waveform (ignoring the time shifts, phase shifts, noise, distortion, etc. that a real-world DAC would introduce).
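To make that concrete, here is a minimal sketch of ideal sinc (Whittaker–Shannon) interpolation in Python/NumPy. The function name and parameters are my own choices, not anything from Audacity; the brute-force O(N·M) cost is only acceptable because at high zoom just a handful of samples are visible:

```python
import numpy as np

def sinc_interpolate(samples, upsample=8):
    """Reconstruct a densely sampled version of `samples` by summing
    shifted sinc kernels (Whittaker-Shannon formula). Brute-force O(N*M):
    fine for the ~100 samples visible at high zoom, too slow for long clips."""
    n = np.arange(len(samples))  # original sample indices
    # Dense time grid covering the same span as the input samples.
    t = np.linspace(0, len(samples) - 1, (len(samples) - 1) * upsample + 1)
    # Each output point is a sinc-weighted sum over every input sample.
    return np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])

# At the original sample instants the interpolant passes exactly through
# the samples, because sinc(0) = 1 and sinc(k) = 0 for integer k != 0.
x = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
y = sinc_interpolate(x, upsample=4)
```

Between the sample instants the reconstructed curve can exceed the sample values themselves, which is exactly the inter-sample overshoot discussed above.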
The code would also need to be very fast, so as to not reduce performance.
The major hold up, as with many features, is not having anyone with the time, interest, knowledge and skills, volunteer to write the code. We are a very small team, and there is always much to do.
Hi Steve, I understand. In which language is Audacity coded? I could try to write a routine, always keeping fast display in mind. Anyway, consider that interpolation only comes into play when the user has set such a high time-zoom rate that only a few samples are displayed at a time (maybe 100 samples max?), and only for those few samples does the interpolation need to be calculated. I don’t think there is a performance issue here at all; Signalizer calculates the Lanczos interpolation while playing the sound in real time, which means thousands of samples per second. For Audacity it would be far less CPU-intensive, since it’s just for static display purposes.
If you could give me some basics about how samples are stored in Audacity (a float array?), I can try to help with this. Maybe not in Audacity’s native language (I don’t have a C compiler at the moment), but an easy-to-port prototype in Python or similar. I really think it’s of general interest to anyone interested in DSP and DAC theory, not just because I need or want it.
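As a starting point for such a prototype, here is a rough sketch of a windowed Lanczos interpolator in Python/NumPy. All names and parameters here are my own assumptions, not anything taken from Audacity or Signalizer; the point is that truncating the sinc to a window of 2·a samples keeps the per-pixel cost small:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, else 0.
    np.sinc is the normalized sinc, sin(pi*x)/(pi*x)."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0  # truncate outside the window
    return out

def lanczos_interpolate(samples, upsample=8, a=3):
    """Upsample `samples` for display by summing Lanczos-weighted
    contributions of nearby input samples. Because the kernel vanishes
    beyond |x| >= a, only ~2*a input samples matter per output point."""
    n = np.arange(len(samples))
    # Dense display grid spanning the visible samples.
    t = np.linspace(0, len(samples) - 1, (len(samples) - 1) * upsample + 1)
    return np.array([np.sum(samples * lanczos_kernel(ti - n, a)) for ti in t])
```

Like the ideal sinc, the Lanczos kernel is 1 at zero and 0 at all other integers, so the curve still passes exactly through the original samples; a real implementation would also need to handle the edges of the visible window, which this sketch ignores.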