Super-sampling: render the waveform at 4× or 8× the time resolution, then down-sample the image. Oversampling in the time dimension would probably change the appearance the most, so it may make sense to oversample time and amplitude by different factors.
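The super-sampling idea could be sketched roughly as below. This is only an illustration with hypothetical helper names, not how Audacity's renderer (which works in C++ on min/max/RMS block files) actually does it: compute the min/max peak envelope at a finer time resolution, then box-filter groups of fine columns down to the display width.

```python
import numpy as np

def peak_columns(samples, n_columns):
    """Min/max peak envelope: one (min, max) pair per pixel column."""
    cols = np.array_split(samples, n_columns)
    return np.array([(c.min(), c.max()) for c in cols])

def supersampled_peaks(samples, n_columns, factor=4):
    """Compute peaks at `factor`x time resolution, then down-sample.

    Averaging each group of `factor` fine columns softens the
    column-to-column jumps in the envelope -- a crude anti-alias
    in the time dimension only.
    """
    fine = peak_columns(samples, n_columns * factor)
    return fine.reshape(n_columns, factor, 2).mean(axis=1)
```

A higher `factor` trades more per-column work for a smoother envelope, which is exactly the processing-cost concern raised below.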
However, given the way Audacity shades the waveform, the result may look off, or the shading may need an overhaul. (What would percentile-based shading of the samples look like?) The issue is that the edge of the waveform looks more vivid and contrasted than its vertical centre, which might counteract the effect of anti-aliasing (whose tendency is to produce fuzzy edges).
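One way to picture the percentile-shading question: per pixel column, compute an inner percentile band in addition to the full min/max extent, and shade the inner band more strongly. The extreme samples then no longer define the most visible boundary, which could soften the hard edge. A minimal sketch, with made-up names and an assumed 5th/95th-percentile band:

```python
import numpy as np

def percentile_bands(samples, n_columns, inner=(5, 95)):
    """Per pixel column: full min/max extent plus an inner band.

    Returns an (n_columns, 4) array of (min, p_lo, p_hi, max).
    A renderer could draw the min..max span faintly and the
    p_lo..p_hi span darkly, so most of the visual weight sits
    near the vertical centre rather than at the edges.
    """
    bands = []
    for col in np.array_split(samples, n_columns):
        lo, hi = np.percentile(col, inner)
        bands.append((col.min(), lo, hi, col.max()))
    return np.array(bands)
```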
Waveform anti-aliasing was ruled out a long time ago because it conveys less information, requires more processing power, and slows down rendering.
“Subpixel rendering” has been considered, but to date the developers have not had time to implement it. One of the lead developers did produce a proof of concept, and it looked terrific, but since then he’s been too busy with more pressing matters, such as improving stability, developing “scripting” capabilities, and managing new releases. I believe that he still wants to implement this, but there are only so many hours in a day (and he works for a living).