How does Audacity render waveforms?

I want to render waveforms in an application I’m writing. I’ve been googling around for some libraries to help me do that, but haven’t found anything.

What does Audacity use for waveform rendering? I was hoping I could use Audacity to do something like “waveformgenerate infile.wav outfile.png”.

Audacity uses wxWidgets for its GUI elements. The actual method used to render waveforms is rather complicated, as Audacity stores audio in block files rather than the original data stream, and the summary data used to draw the waveforms is kept in those .au block files. There is an overview of Audacity’s architectural design here:

I don’t think you will find anything as simple as that in Audacity, though you are welcome to browse through the source code. Audio files are imported into Audacity and stored as raw 32-bit data within the Audacity project, so there is no direct WAV-to-PNG path.

FMOD might be a better and simpler candidate to look at. I’m not sure if it still does, but the FMOD download used to come with a visualisation example. It also has the necessary support for handling audio files.

This may not be relevant to what you’re trying to do, but I’ve been experimenting with some different waveform visualizations, based on the code from:

If I ever get off my ass and finish what I started, I was going to propose some new ways to display waveforms in Audacity.

The basic way that most people do it is to divide the waveform into chunks, one chunk per pixel of output, find each chunk’s minimum and maximum sample values, and then draw a vertical line between those extremes. This has a number of flaws (it only shows peaks, ignoring loudness/RMS, and it can alias visually at high zoom-out), but it’s common.
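To make that concrete, here is a minimal sketch of the min/max approach in Python. This is not Audacity’s code; the function name `waveform_peaks` and the sample data are my own invention, and a real renderer would read samples from a WAV file and draw the lines into an image.

```python
import math

def waveform_peaks(samples, width):
    """Return one (min, max) pair per output pixel column.

    Splits `samples` into `width` roughly equal chunks and records
    each chunk's extreme values - the classic min/max rendering.
    """
    peaks = []
    n = len(samples)
    for px in range(width):
        start = px * n // width
        end = max(start + 1, (px + 1) * n // width)
        chunk = samples[start:end]
        peaks.append((min(chunk), max(chunk)))
    return peaks

# Example: one second of a 440 Hz sine at 44.1 kHz, reduced to 100 columns.
samples = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
peaks = waveform_peaks(samples, 100)
```

To actually draw the waveform, you would scale each (min, max) pair to the image height and paint a vertical line in that pixel column; every chunk here spans several full cycles of the sine, so each pair comes out near (−1, 1) and the rendered shape is a solid band rather than a visible sine, which is exactly the peaks-only flaw mentioned above.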

I’m interested, too, in how Audacity does it. Could you point me to the code that does the rendering?

Try here for a starting point: