I have recently been experimenting with drawing waveforms, and digging into the Audacity source code for (invaluable) tips and tricks. I think I now have a good grasp of what’s involved in drawing waveforms efficiently, basically:
Compute and cache a reduction, by extracting maxima/minima from blocks of (say) 256 samples.
Render this data by drawing a vertical line between each max/min pair.
Clip the drawing to the damaged region wherever possible.
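The first step above can be sketched roughly like this: a minimal, self-contained reduction pass (the names `Summary` and `computeReduction` are my own, not Audacity's):

```cpp
#include <algorithm>
#include <vector>

// One max/min pair summarising one chunk of samples. Rendering then
// draws a vertical line from min to max for each pair that falls under
// a given pixel column.
struct Summary { float min; float max; };

// Reduce raw samples to one Summary per chunk of `chunkSize` samples
// (256 in the scheme described above).
std::vector<Summary> computeReduction(const std::vector<float>& samples,
                                      size_t chunkSize = 256) {
    std::vector<Summary> out;
    for (size_t i = 0; i < samples.size(); i += chunkSize) {
        size_t end = std::min(i + chunkSize, samples.size());
        Summary s{samples[i], samples[i]};
        for (size_t j = i + 1; j < end; ++j) {
            s.min = std::min(s.min, samples[j]);
            s.max = std::max(s.max, samples[j]);
        }
        out.push_back(s);
    }
    return out;
}
```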
One thing about which I am not so clear, however, is how to render the data when the number of samples per pixel is greater than the chunk size used to calculate the reduction. Presently I achieve this by performing a further ‘on-the-fly’ reduction of the cached reduction every time the waveform is resized such that (samplesPerPixel > chunkSize). Even though this is very fast, it occurs to me that this strategy might not scale so well, so it seems that either:
I need to cache multiple reductions (one per ‘zoom’ level?).
Some other ‘cleverer’ strategy is possible.
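The nice property that makes the first option cheap is that a max/min reduction can be re-reduced losslessly: taking the min of the mins and the max of the maxes over groups of pairs gives exactly the same result as reducing the raw samples with a larger chunk size. A sketch (my own naming, not from the Audacity sources):

```cpp
#include <algorithm>
#include <vector>

struct Summary { float min; float max; };

// Derive a coarser reduction from a finer one: each output pair covers
// `factor` input pairs. E.g. re-reducing a 256-sample reduction with
// factor 256 yields a 64K-sample reduction.
std::vector<Summary> reduceFurther(const std::vector<Summary>& fine,
                                   size_t factor) {
    std::vector<Summary> coarse;
    for (size_t i = 0; i < fine.size(); i += factor) {
        size_t end = std::min(i + factor, fine.size());
        Summary s = fine[i];
        for (size_t j = i + 1; j < end; ++j) {
            s.min = std::min(s.min, fine[j].min);
            s.max = std::max(s.max, fine[j].max);
        }
        coarse.push_back(s);
    }
    return coarse;
}
```

So a per-zoom-level cache could be built incrementally, each level derived from the previous one rather than from the raw samples.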
I know that Audacity caches summaries derived from chunks of 256 and 64K samples. So is it using the first mechanism described above? Having read the relevant portion of A Fast Data Structure for Disk-Based Audio Editing (Mazzoni & Dannenberg, Computer Music Journal Summer 2002) I am not so sure… unless the algorithm outlined there no longer applies? Or does Audacity actually do something cleverer?
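My reading of the paper is that the two cached summary levels are enough because the drawing code just picks whichever level is fine enough for the current zoom and reduces it on the fly. Something like the following selection logic, where the thresholds are illustrative guesses and not Audacity's exact values:

```cpp
// Pick which cached data to read for a given zoom: raw samples when
// zoomed far in, the 256-sample summary at intermediate zooms, and the
// 64K-sample summary when zoomed far out. Hypothetical sketch only.
enum class Level { Raw, Summary256, Summary64K };

Level chooseLevel(double samplesPerPixel) {
    if (samplesPerPixel < 256.0)   return Level::Raw;
    if (samplesPerPixel < 65536.0) return Level::Summary256;
    return Level::Summary64K;
}
```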
Many thanks in advance for any enlightenment on this topic!
Well, after having done a bit more digging into the code, it seems that the algorithm outlined in the article is implemented in Sequence::GetWaveDisplay, which is called by WaveTrack::GetWaveDisplay, and located in Sequence.cpp.
Thanks for listing the relevant details; this post turns out to be very useful. I’ve been trying to understand the workings of Audacity, as I am working on a project where I have to display audio, and the traditional methods take a huge amount of time and processing power for the rendering. To my surprise, Audacity achieves the same in a fraction of the time, which is incredible.
I read the full paper and the abridged version of it, and am trying to implement the same in Java. Although I have an idea of the algorithm behind it, when I looked at the code I couldn’t understand all parts of it. I am interested in using the algorithm just for the purpose of rendering the waveform (for now). I was just curious: in which language have you implemented it? C/C++?
Do you think you could help me out with the implementation?
PS: I don’t really have a complete picture of Audacity’s source structure (class dependencies/relations), but I am willing to learn it. Unfortunately, I don’t really know where to start.
Thanks for the PM. Unfortunately, this forum doesn’t allow me to PM you back, as apparently I am not privileged to compose messages (huh??! o_O). I also don’t visit this forum very often, which is why I was not able to reply to you right away.
As for the implementation, I was able to render the waveform roughly as Audacity does (though not fast enough!). There are still many things I was not able to do, because I had little time to work on it. It would be helpful if you could outline the way you managed to render the waveform. I am actually writing a blog post here (http://rand0mbitsandbytes.blogspot.de/2012/02/playing-audio-in-java.html).
Please send me a PM at my personal email address (tckb[dot]504[at]gmai[dot]com).
Sadly, when we did allow new forum users to send PMs, the feature was abused by a few individuals who thought it appropriate to send spam and viruses to other users via PMs, so we have reluctantly been forced to prevent new forum users from sending PMs.
This is a community forum and is intended to be helpful for all, so we like to encourage discussions to take place in the open forum. It’s a nice place to share your expertise and experience.