Realtime plug-ins in playback

This is something I love about Ardour, and I find myself thinking that this one feature would make Audacity so much more useful when processing audio.
Is there a technical reason why it is not implemented?
keep up the great work!
ciao!
KIM

This is one of the most requested features.

There may be some underlying technical reasons, but more than that there is a basic difference in concept. Audacity is essentially an audio editor and Ardour is not. In Audacity, if you cut part of a track, then Audacity does exactly that, it removes the cut section from the track. This is not so with Digital Audio Workstations (DAWs) such as Ardour. When you “cut” part of a track in a DAW, the track is not really cut - the “cut” samples are simply skipped.
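To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical names - not Audacity's or Ardour's actual code) of the playlist/region model that DAWs use: a "cut" only rewrites a list of pointers into the source file, while the audio data itself is never modified.

```python
# Hypothetical sketch of a non-destructive "cut": the track is a list of
# regions, each pointing at a span of the (untouched) source file.

class Region:
    def __init__(self, file_offset, length):
        self.file_offset = file_offset  # start position in the source file
        self.length = length            # number of samples played from there

def cut(regions, start, end):
    """Remove timeline samples [start, end) by splitting regions.
    The underlying audio data is never touched - "cut" samples are
    simply no longer referenced, i.e. skipped on playback."""
    result, pos = [], 0
    for r in regions:
        r_start, r_end = pos, pos + r.length
        # keep the part of the region before the cut
        if r_start < start:
            result.append(Region(r.file_offset, min(start, r_end) - r_start))
        # keep the part of the region after the cut
        if r_end > end:
            keep_from = max(end, r_start)
            result.append(Region(r.file_offset + (keep_from - r_start),
                                 r_end - keep_from))
        pos = r_end
    return result

# One 1000-sample region; cutting timeline samples 200-300 leaves two
# regions that skip over the "cut" data in the source file.
track = cut([Region(0, 1000)], 200, 300)
```

A destructive editor like Audacity would instead rewrite the sample data itself, which is why its edits are cheap to play back but cannot be undone by just moving pointers.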

There are some advantages to destructive editing over real-time manipulation - one of the main ones is performance. Real-time processing requires more processing power from the computer, whereas Audacity can apply dozens of effects to tracks on computers with very modest specifications.

I do, however, agree that it would be great to have the option of running at least some effects in real time. In particular, I would love to see real-time EQ (tone control). Audacity does have a tiny bit of real-time ability, but it is limited to track “volume” and “pan”. I would love to see a real-time 3-band parametric EQ added.

Agreed, and thanks for the response – a few thoughts:

  • Audacity can be used as a multi-track environment - example: I use Audacity for doing sketch-pad experiments and quick editing (track splitting in Audacity is my best friend! :wink: ) of field recordings on an SDHC card, then layering these sounds together to see how they fit tonally – so it would be great to be able to apply realtime EQ or compression to hear how it sounds…since I am not dealing with ‘musical events’, being able to listen to minutes+ of something with EQ really makes a difference in how I later approach dealing with that soundfile

  • Mastering: I was working in Peak on OS X until I went Linux for all my studio and performance work (I do keep a Powerbook with Max/MSP loaded on it)…I’ve used Peak for years and got used to the ability to apply VSTs to my material in real time…I also tried using Amadeus Pro for mastering work, but the fact that it also doesn’t do real-time audition of effects was a show-stopper for me and I only used it for making sonographs – I would LOVE to be able to use Audacity as an editing/mastering environment on Linux, as there is no real tool for this on Linux

  • CPU Horsepower: most laptops these days are more than powerful enough to run a sound editing application with plugs – in fact I could run 3 plug-ins on my creaky old Powerbook 1.67GHz G4 without too many dropouts and glitches…my Dell Studio 15 has a Core2 Duo 2GHz CPU and I’m sure it could handle this task without a problem

but in any event – keep up the good work! I’ve come to really like Audacity and evangelize it :slight_smile:

It certainly can, and I frequently use it as such; however, that does not negate the premise that Audacity is in essence an audio editor. EQ is the one effect that I really have trouble with in the absence of real-time processing. Whilst the spectrum analysis tool can provide visual clues, it is still difficult to tune in equaliser settings without being able to hear what you are doing while you are doing it. The effect “preview” is helpful, and can be set to preview longer sections (Edit menu > Preferences > Audio I/O), but it is still more cumbersome than real-time processing.

Look into “Ardour” with “Jackd” and “JAMin” (also “Jack Rack” for additional effects). You don’t have to use Ardour, any audio playback/recording applications that support jackd will do, but Ardour offers the greatest power and flexibility.

That may be a creaky old laptop for you, but many Audacity users are still running machines that have CPU speeds in MHz.

Rock on :smiley:

Steve, are you a developer or a moderator for this forum? Just curious.

Moderator, (though I have one very small patch included in the source code). Mostly just an enthusiastic user.

I’ve worked as a sound editor in film and music at Skywalker, Saul Zaentz and Thomas Dolby’s Headspace/Beatnik, and recorded/composed over 40 albums of electronic music over a 30-year career, so I’m making these statements based on observation and not conjecture or speculation.

I find your distinction between Audacity and Ardour perplexing, since many of the pro sound editors I know use Pro Tools as a non-destructive sound editor…there seem to be more sound editors moving from old-school destructive ‘editors’ like Peak and Soundforge to non-destructive editors such as PT for stereo/mono soundfile editing.
And as a sound designer, having the ability to edit and apply effects in real time (in one environment) is crucial when deadlines are looming and you don’t have the time to guess whether a single setting on a plug will work when applied to an entire track.

and yes I do use Ardour, Jack and Jamin for some of my work - and they are amazing :slight_smile:

also, on what do you base your statement that many Audacity users still work on MHz machines? Is there a poll or something on the forum?

cheers!

The distinction that I’m making is mostly a historical one. As you will remember, back in the olden days we used tape machines to record with and a razor blade for editing. Then computers came along and revolutionised the world of audio production. The first DAW I ever used was Sound Designer II running on a Mac with an orange CRT display. At the time it seemed awesome. The transition from recording on tape to recording digitally was relatively painless, as the DAWs of the day were modelled on the traditional analogue machines, at least in terms of signal routing.

“Digital Audio Workstation” was a term generally reserved for a complete system, software and hardware. Although the DAW included audio editing software, audio editing software alone was not a DAW. While the DAW was replacing analogue tape recorders, the audio editor software was replacing the fiddling about with razorblades (an essentially destructive process, particularly if you got your fingers in the way).

As time has gone on, the distinction between Audio Editor and DAW has become increasingly blurred. Take for example Adobe Audition (formerly Cool Edit Pro). When using Audition as the software component of a DAW for recording/mixing, there is a multi-track view that provides real time processing, though the amount of actual editing that you can do is somewhat limited. However it also has an “Edit View” in which detailed destructive editing of single tracks can be performed. Why did they decide to separate these two types of audio manipulation? Was it for technical reasons, or to suit the expectations and familiarity of their users? I suspect it was a bit of both.

Syntrillium were by no means the only people to separate out recording/processing from (destructive) editing. This distinction can be seen in many other products. Cubase still includes a separate “sample editor”, as does Logic 8 (even though they advertise it as “single window editing”).

So really, the (admittedly fuzzy) distinction that I was making was between the “tape recorder metaphor” that DAWs have traditionally followed and destructive sample editors.

No, there is no poll on the forum, but there is an increasing number of people using low-powered Atom-based machines (based on some of the posts that we have had, plus things I have seen on other web sites), and I personally know several people who still use old PIIIs or PIIs with Audacity.

Most estimates place the number of computer users in the world somewhere below 10% of the world population, while much of the remaining 90% cannot afford any kind of computer. Audacity is a worldwide product and makes audio editing available to anyone who can afford the basic hardware.

separate out recording/processing from (destructive) editing

Ah, OK, this might be where the confusion is coming from.
By non-destructive I mean changing the representation of a soundfile in a DAW without affecting the actual file itself…not recording/processing, which all editors from Alchemy to WaveEditor have done.

in fact, speaking of WaveEditor, check out
http://www.audiofile-engineering.com/waveeditor/techspecs.php
where they list ‘Non-destructive & destructive editing’ on their feature list
AND it does real-time processing of sound files using AU and VST

yes – historically, sound editing software has used the linear tape-recorder-with-razorblade model – but this is quickly changing

I loved Cool Edit Pro and wanted it to be ported to the Mac so badly in the late ’90s. I never used the Audition version, as I left the game-sound grind in 2001 and went back to working on Mac laptops.

it took me a long time to warm up to Peak - the only reason I used it was that it was the editor of the house in some of the places I worked, and I had a copy for personal use

don’t know if you remember but there was a great little DAW by Digidesign called Session - which I used a lot for sound design. Very simple and well designed…reminded me a little of Deck.

%: even if you are counting netbooks, methinks MHz machines would still represent < 50% of the Audacity customer base
but it would be interesting to put a poll up and see what kind of horsepower users have

Cheers!
:slight_smile:

Vote added to WIKI Feature Requests - and transferred this post to Audio Processing.

WC

As lead developer of Ardour, I’d just like to note that Ardour has always run plugins in realtime and was developed for years on a 400MHz system (admittedly dual processor). Realtime plugin support doesn’t imply substantial processor usage - it depends entirely on what the plugin does. There are plugins that can tax a modern CPU, and others that would be scarcely measurable even on a 300MHz system. The issue of whether to support realtime plugins should not be decided by processor capabilities, since it is fundamentally an architectural question.
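Paul's point that the cost depends entirely on what the plugin does can be illustrated with a sketch (hypothetical names, not any real plugin API): a realtime plugin is essentially a callback that transforms one buffer of samples in place, and the per-sample work varies enormously between plugins.

```python
# Illustrative only - not Ardour's or LADSPA's actual API.

def gain_process(buf, gain=0.5):
    """Trivial plugin: one multiply per sample - scarcely measurable
    even on a slow machine."""
    for i in range(len(buf)):
        buf[i] *= gain

def convolve_process(buf, impulse):
    """Naive convolution: len(buf) * len(impulse) multiplies per block -
    the kind of plugin that can tax even a modern CPU."""
    out = [0.0] * len(buf)
    for i in range(len(buf)):
        for j in range(min(i + 1, len(impulse))):
            out[i] += buf[i - j] * impulse[j]
    buf[:] = out

# Both plugins share the same architecture (process one buffer per
# callback); only the work done inside the callback differs.
block = [1.0, 1.0, 1.0, 1.0]
gain_process(block)
```

The host's job is the same in both cases - hand each plugin a buffer on every audio cycle - which is why supporting realtime plugins is an architectural question rather than a question of CPU speed.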

Conversely, is that the reason why Ardour does not have a sample editor?

Absolutely correct. We are just not interested in trying to provide an architectural framework for altering the data in existing files, particularly when there are several other excellent free/libre tools that can do most of the required tasks. What we are interested in is providing a way to fork out to such tools (user’s choice, etc.) when such actions are required by the workflow.

Is there still no good way to do that? I’ve not used Ardour a great deal due to not having a suitable machine, but what I’ve seen of it so far I think is marvellous and certainly want to use it more in the future.

What I’ve been doing so far when I need to fiddle creatively on a sample level scale is to Export from Ardour, Import into Audacity, then Export from Audacity and Import back into Ardour. Not very convenient, and made worse by having to shut down Jackd before opening Audacity, but fortunately I’ve now got Audacity to work with Jackd, so that’s one stumbling block out of the way.

I bet you’re sick of hearing comparisons with CoolEdit Pro, but the ability to open Audacity from Ardour, edit a bit of sound then Export it back to where it came from would be the answer to a dream.

We have discussed this idea for a long time, and have a good idea of how to do it. The key problem comes whenever we want to fork out to an editor that doesn’t have any concept of “just edit part of this file” (i.e. all of them). If the source file is small, then having ardour write it all out, or just part of it out, and then invoking the editor on that new file, is fine. However, if the file is very large, and the region to be edited is also large but not equivalent to the full extent of the file, then writing out the data again is pretty wasteful of disk space. There seems to be no way of avoiding this.
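A rough sketch of the "write out just the region" step described above (assuming headerless raw PCM and a hypothetical frame size - real session files would need header handling too). It shows why the disk cost is proportional to the region being edited rather than the whole file:

```python
import os, tempfile

FRAME_BYTES = 4  # assumption: e.g. 32-bit float mono samples

def export_region(src_path, dst_path, start_frame, n_frames):
    """Copy only the byte range covering the region to a new file for
    the external editor. For raw PCM the region maps directly to a byte
    range, so disk usage scales with the region length - which is
    exactly the problem when the region itself is large."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        src.seek(start_frame * FRAME_BYTES)
        remaining = n_frames * FRAME_BYTES
        while remaining > 0:
            chunk = src.read(min(65536, remaining))
            if not chunk:
                break
            dst.write(chunk)
            remaining -= len(chunk)

# Demo: a 10-frame "source file"; export frames 2-4 (3 frames = 12 bytes).
src = tempfile.NamedTemporaryFile(delete=False)
src.write(bytes(range(40)))
src.close()
dst = tempfile.NamedTemporaryFile(delete=False)
dst.close()
export_region(src.name, dst.name, 2, 3)
```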

In addition, there is the problem that ardour’s native audio file format (regardless of the bit depth or file header format) is 1 channel per file. There are almost no editors that can handle editing “1 sound” that is made up of N different files, 1 per channel (snd can do this). Therefore, there is a compromise to be made somewhere when we “reimport” the edit result back into ardour - we can leave it multichannel (having “exported” it for the editor), or we can reconvert it back into 1 channel per file again. Once more, for small files, this is a non-issue, but there is nothing in ardour to stop this being done on multichannel files that are hours long, at which point it becomes a major performance penalty.
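The multichannel round trip described above can be sketched as follows (hypothetical helper names): interleaving N mono buffers into "1 sound" for the external editor, then splitting the edited result back into per-channel buffers for reimport. Each direction rewrites every sample, which is the performance penalty for hours-long material:

```python
# Sketch only - a real implementation would stream from disk rather
# than hold whole channels in memory.

def interleave(channels):
    """[[L...], [R...]] -> [L0, R0, L1, R1, ...] (one multichannel buffer)."""
    frames = zip(*channels)
    return [sample for frame in frames for sample in frame]

def deinterleave(samples, n_channels):
    """[L0, R0, L1, R1, ...] -> [[L...], [R...]] (1 buffer per channel)."""
    return [samples[c::n_channels] for c in range(n_channels)]

# Two mono "files" exported as one stereo buffer, then split back.
left, right = [1, 2, 3], [10, 20, 30]
stereo = interleave([left, right])
back = deinterleave(stereo, 2)
```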

If anyone in the audacity community has thoughts on how to handle this 2nd issue, I’d love to hear them.

I think it would be definitely worth speaking directly to the Audacity developers about this if you are not already doing so.
A couple of ways of contacting them:
https://lists.sourceforge.net/lists/listinfo/audacity-devel
feedback [AT] audacityteam [dot] org

As I mentioned, I’ve not yet used Ardour a great deal, so I’ve not spent much time studying how it does things. Could you say a bit more about how it handles data, or perhaps point me in the direction of some documentation about this?

I believe that Audacity can open multiple files on the command-line and will import all of them into the same project, so there may be some opportunity there.