Display true waveform instead of connecting the dots

Wrong wrong wrong! :wink: If you don’t believe me, perhaps you will believe one of the top digital audio engineers from Xiph: http://xiph.org/~xiphmont/demo/neil-young.html

That is not as simple as it might seem.

Firstly, the reason that Audacity uses “linear interpolation” (joining the dots) in the drawn display is speed. Linear interpolation tends to be much faster than other forms of interpolation, so you don’t have to wait long for the waveform to be drawn.

The second reason is that there is no “true” waveform. Strictly speaking, digital audio is just the dots - the lines are drawn for convenience but do not really exist in the digital data. Conversion from sample values to an analogue waveform is handled by the D/A converter in the sound card. Precisely how, and how well, it performs that conversion comes down to the design and build quality of the sound card. Theoretically it could use linear interpolation, in which case the analogue output would look exactly like what you see in Audacity. In practice, D/A converters usually use some form of reconstruction filter that limits the frequency bandwidth and so avoids aliasing distortion. There is no way that Audacity can know what the D/A converter will actually do, so really what you are asking is that Audacity should draw an idealised analogue representation of the digital signal - but I’d guess that would be quite slow to calculate, certainly a lot slower than linear interpolation.
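To illustrate the speed difference, here is a rough Python sketch (nothing to do with Audacity’s actual drawing code): linear interpolation is one cheap vectorised call, while an idealised band-limited reconstruction has to sum a scaled sinc pulse for every sample, which is far more work.

```python
import numpy as np

def linear_interp(samples, factor):
    """'Join the dots' between samples, as in the drawn display."""
    n = len(samples)
    x_new = np.linspace(0, n - 1, (n - 1) * factor + 1)
    return np.interp(x_new, np.arange(n), samples)

def sinc_reconstruct(samples, factor):
    """Idealised band-limited reconstruction: every sample contributes
    a scaled sinc pulse, roughly what a good reconstruction filter
    produces. Note the O(n^2) loop - much slower than np.interp."""
    n = len(samples)
    t = np.linspace(0, n - 1, (n - 1) * factor + 1)
    out = np.zeros_like(t)
    for k, s in enumerate(samples):
        out += s * np.sinc(t - k)
    return out

# A few samples of a high-frequency tone: the two curves between
# the dots look quite different, though both pass through every dot.
samples = np.array([0.0, 0.9, 0.1, -0.8, -0.2, 0.7])
lin = linear_interp(samples, 8)
sinc = sinc_reconstruct(samples, 8)
```

Both curves agree exactly at the sample points; only the in-between shape differs.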

I’ve read several articles that discuss this issue, but in reality it is rarely a problem for normal audio.
There are three main reasons why it is rarely a problem:

  1. In normal audio, the amount of energy in the very high frequency range is usually tiny, so even if inter-sample clipping occurs, it will almost certainly be inaudible.

  2. Clipping distortion of “normal” waveforms sounds bad because it creates harsh high frequency overtones. Where inter-sample clipping could be a problem is with very high frequency tones, but in those cases it is impossible for higher frequency harmonics to be created because they would be beyond the Nyquist frequency, so with a high quality D/A converter those overtones should be completely eliminated by the anti-aliasing filter. In effect, all that would occur is extremely mild peak limiting.

  3. Apart from very high-end D/A converters (costing $1000s), most D/A converters have poor performance very close to 0 dB. Even in high-end semi-professional equipment it is common for D/A converters to clip a little below 0 dB.
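Inter-sample peaks are easy to demonstrate. A hypothetical sketch: take a tone at a quarter of the sample rate, sampled 45 degrees away from its peaks, so every sample sits exactly at 0 dBFS - yet the band-limited waveform between the samples rises about 3 dB higher (the sinc-sum reconstruction here is a crude stand-in for a real reconstruction filter).

```python
import numpy as np

# Tone at fs/4, phase-shifted so samples land 45 degrees off the peaks:
# the sample values alternate +0.707, +0.707, -0.707, -0.707, ...
n = 64
k = np.arange(n)
samples = np.sin(2 * np.pi * 0.25 * k + np.pi / 4)
samples /= np.max(np.abs(samples))      # normalise sample peaks to 0 dBFS

# Crude band-limited reconstruction at 8x oversampling (truncated sinc sum).
t = np.arange(n * 8) / 8.0
recon = np.zeros_like(t)
for i, s in enumerate(samples):
    recon += s * np.sinc(t - i)

sample_peak_db = 20 * np.log10(np.max(np.abs(samples)))  # exactly 0 dBFS
true_peak_db = 20 * np.log10(np.max(np.abs(recon)))      # roughly +3 dB
```

So a file whose samples never exceed 0 dBFS can still drive the analogue side of the D/A converter about 3 dB into overload.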

The moral of this story: for best quality, always leave a little headroom in your final 16-bit mix-down. (For WAV format, 1 dB of headroom is likely to be enough to avoid the risk of conversion clipping; for compressed formats such as MP3 it is better to allow a little more.) This advice knowingly flies in the face of popular modern mastering techniques that frequently favour “as loud as possible”.
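In code, leaving headroom is just a peak-normalise to a target below 0 dBFS. A minimal sketch, where `apply_headroom` is an illustrative helper (not an Audacity or library function):

```python
import numpy as np

def apply_headroom(audio, headroom_db=1.0):
    """Scale a mix so its highest sample peak sits headroom_db below
    0 dBFS, leaving room for inter-sample peaks and codec overshoot.
    (Illustrative helper, not a real Audacity function.)"""
    target = 10 ** (-headroom_db / 20.0)   # e.g. -1 dB -> ~0.891 linear
    peak = np.max(np.abs(audio))
    if peak == 0:
        return audio                       # silence: nothing to scale
    return audio * (target / peak)

mix = np.array([0.2, -0.99, 0.5, 0.75])       # sample peak just under 0 dBFS
safe = apply_headroom(mix, headroom_db=1.0)   # peak now ~0.891 (-1 dBFS)
```

For MP3 and similar formats you would simply pass a larger `headroom_db`, since the encode/decode cycle can push peaks higher than the original samples.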