Source separation is an interesting topic. The problem, as Gale mentioned, is that it's not an easy task. ISSE isn't too bad for what it is (it excels at noise reduction / hum removal), but vocal isolation takes a fair bit of time to get satisfactory results.
Jean-Louis Durrieu wrote a Python script that automates vocal separation (it spits out an isolated-vocal estimate and an attenuated-vocal track estimate), and it does an impressive job of tracking vocal harmonics and consonants in mixed recordings. For the curious, it's called separateLeadStereo and can be found on GitHub.
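To give a feel for what "spits out two estimates" means, here's a crude sketch. This is NOT Durrieu's method (his is a far more sophisticated source/filter model); it's just the classic mid/side trick, which exploits the fact that lead vocals are usually panned dead center. All names here are my own:

```python
# Crude illustration of splitting a stereo mix into a "lead" estimate and an
# "accompaniment" estimate. Not Durrieu's algorithm -- just mid/side math:
# center-panned content (usually the vocal) adds up in the mid channel and
# cancels in the side channel.

def split_lead_accompaniment(left, right):
    """Return (lead_estimate, accompaniment_estimate) from stereo sample lists."""
    lead = [(l + r) / 2.0 for l, r in zip(left, right)]    # mid: center-panned content
    accomp = [(l - r) / 2.0 for l, r in zip(left, right)]  # side: hard-panned content
    return lead, accomp

# Toy example: a "vocal" identical in both channels plus a "guitar"
# panned hard left.
vocal = [0.5, -0.5, 0.5, -0.5]
guitar = [0.3, 0.3, 0.3, 0.3]
left = [v + g for v, g in zip(vocal, guitar)]
right = vocal[:]  # guitar appears only in the left channel

lead, accomp = split_lead_accompaniment(left, right)
```

Note that `lead` still contains half the guitar and `accomp` contains none of the vocal, which is exactly why real tools have to model the voice itself instead of relying on panning.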
On the commercial side of things, you have ADX Trax Pro by Audionamix and Hit'n'Mix by Neuratron Group. ADX is designed to extract single elements, such as a lead vocal or guitar solo, whereas Hit'n'Mix tries its hand at full mixes. Hit'n'Mix is extremely promising, but the program suffers from a few issues: primarily the audio quality when you solo an element, and the fact that it miscategorizes instruments considerably. It usually gets the bass notes right, though, probably because the bass sits lower in the mix than everything else in most songs. On the plus side, the interface is very pretty; it's essentially a glorified spectrogram with colorful shapes representing the notes where they'd appear on a normal spectrogram. When you think about it, there's really no easy way for Neuratron Group to fix the miscategorization, since the human voice hits all of the frequencies which instruments do. I'm assuming the algorithms categorize instruments by frequency range; honestly, that's the only way I can think of it being done.
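Here's a sketch of the frequency-range categorization I'm guessing goes on under the hood. The band edges below are my own rough assumptions, not anything from Hit'n'Mix; the point is just to show why bass comes out fine while everything else is ambiguous:

```python
# Hypothetical frequency-range instrument categorizer. Band edges are rough
# guesses for illustration only. A vocal note at 220 Hz is indistinguishable
# from a guitar note at 220 Hz by range alone -- hence the miscategorization.

INSTRUMENT_BANDS = [
    (30.0, 250.0, "bass"),     # bass mostly sits alone down here -> few errors
    (80.0, 1100.0, "vocals"),  # overlaps guitar almost entirely
    (80.0, 1300.0, "guitar"),
    (27.0, 4200.0, "piano"),   # overlaps everything
]

def guess_instrument(fundamental_hz):
    """Return every instrument whose range contains this fundamental frequency."""
    return [name for lo, hi, name in INSTRUMENT_BANDS if lo <= fundamental_hz <= hi]

print(guess_instrument(60.0))   # low note: only bass and piano qualify -> easy call
print(guess_instrument(220.0))  # mid note: all four qualify -> pure guesswork
```

A 60 Hz note narrows to two candidates, but a 220 Hz note matches every band at once, which is exactly the failure mode described above.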
All in all, I would absolutely LOVE to see something native to Audacity which allows users to do source separation, but at the same time I understand it'll take a lot of time, dedication and money to get it done. Best of luck to the developer who is interested in this fascinating (and difficult) subject.