Hello! Is it possible to write/compose a ‘control track’ of, for example, volume commands or effects (flanger, phaser, slow-changing things like that), and then apply it to a track of audio (or various different tracks of audio) afterwards? If so, how?
I can’t answer your question (I’m just another new Audacity user), but I was curious about what you want.
Are you asking for something like a “metacommand track” to be added, so that the audio tracks can be left in their original states and the metacommand track will be used to process the original tracks during playback? That sort of approach sounds intriguing. The original audio tracks are never changed or lost, and multiple versions of a song could be directly compared to one another by recording multiple metacommand tracks and activating them one at a time. The raw tracks could be heard by deactivating all metacommand tracks.
If that’s what you were getting at, it sounds good to me!
Well, I was wanting to use it as an experimental compositional tool, although being able to use it as a comparative tool would be really helpful too. It’s like I want things moving gradually from foreground to background and vice versa (for example, as volume commands), and then to be able to apply the same commands to a different track of audio, maybe a bit out of sync. Obviously there are ways to achieve that without a ‘metacommand’ track, but it would be damned handy for making slight variations or, as suggested, doing comparisons.
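Just to make the idea concrete (this is not an Audacity feature — purely a minimal sketch, where the envelope function, the track arrays, and all the names are invented for illustration): a ‘control track’ of volume commands is essentially a gain curve over time, and the same curve can be applied to any track, optionally shifted a bit out of sync.

```python
def gain_envelope(n_samples):
    """Hypothetical 'control track': a slow fade from background
    (gain 0.2) up to foreground (gain 1.0) across the clip."""
    return [0.2 + 0.8 * i / (n_samples - 1) for i in range(n_samples)]

def apply_envelope(audio, envelope, offset=0):
    """Apply the same gain curve to any track, optionally starting it
    `offset` samples late (a bit out of sync, as described above)."""
    out = list(audio)                  # source samples stay untouched
    for i, g in enumerate(envelope):
        j = i + offset
        if j >= len(out):
            break
        out[j] *= g
    return out

track_a = [1.0] * 1000   # stand-in for a block of audio samples
track_b = [1.0] * 1000   # a second, different track

env = gain_envelope(len(track_a))
a = apply_envelope(track_a, env)             # envelope in sync
b = apply_envelope(track_b, env, offset=10)  # same curve, slightly late
```

The point is that the envelope lives apart from the audio, so swapping in a different track or a different offset is a one-line change.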
?
Unfortunately not. It would be great to have real-time effects, but those take up lots of processing power (especially when you use many tracks, as I do) and would require all the plug-ins to be re-written to work in real time.
I’m not sure if this is on the table for the future. You can ask at the “Adding Features to Audacity” forum; it would be interesting to read that discussion.
If we were planning for maximum backward compatibility, I’d suggest not implementing the “metacommand track” in real time. A real-time feature does seem to be a waste of CPU and of the coding required to adapt the rest of Audacity to it. Instead, it might be better to have Audacity render the results of the audio tracks plus metacommand track in the form of a new Audacity project. Each metacommand track could display a Render button that would “activate” the metacommands in that track by applying them to the audio tracks and writing a new project with just new audio tracks.

The new tracks could be written one at a time, saving on CPU - if it takes a while to render, that’s probably OK, as long as the resulting tracks are synchronized the same as the originals. After rendering each metacommand track in the original (or “source”) project, you end up with multiple “target” projects that could be used to produce MP3s, which could be burned to CD for your informal focus group to compare over their favorite beverage.
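One way to picture that render workflow (again a sketch, not Audacity internals — the metacommand record, the effect names, and the `render` function are all made up for the example): each metacommand names an effect, a time interval, and parameters, and rendering applies them to a copy of the audio, so the source track is never changed.

```python
# Hypothetical offline "render" of a metacommand track: commands are
# applied over time intervals to a *copy* of the audio, producing a new
# "target" track while the source stays untouched.

def cmd_gain(samples, factor):
    """Toy volume effect."""
    return [s * factor for s in samples]

def cmd_invert(samples):
    """Toy polarity-flip effect."""
    return [-s for s in samples]

EFFECTS = {"gain": cmd_gain, "invert": cmd_invert}

def render(track, metacommands, rate=100):
    """Apply each (effect, start_sec, end_sec, params) metacommand to
    the matching slice of a copy of `track`; return the new track."""
    out = list(track)
    for name, start, end, params in metacommands:
        i, j = int(start * rate), int(end * rate)
        out[i:j] = EFFECTS[name](out[i:j], *params)
    return out

source = [1.0] * 300                       # 3 "seconds" at a toy rate
commands = [("gain", 0.0, 1.0, (0.5,)),    # halve the first second
            ("invert", 2.0, 3.0, ())]      # flip the last second
target = render(source, commands)          # source remains unchanged
```

Rendering one metacommand track at a time, as described above, is just running `render` once per track and writing each result out as its own target project.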
Not sure if the plug-ins would need to be rewritten with that kind of workflow. alatham, can Audacity itself and the plug-ins be commanded to do their job over a specific time interval of a track via some programming object like a Java bean? If they can, the plug-ins might not have to be rewritten and the metacommand functionality might layer nicely over the rest of Audacity.
Don’t take any of this too seriously - I’m just throwing out ideas. Does any of this sound do-able?
Your idea does sound do-able, but by the time it’s implemented, they might as well have made the effects work in real time. I like your idea for users who don’t have powerful machines, but there’d be no reason to disallow real-time effects if the user’s CPU can handle it. I’m not sure how much horsepower that would take, though.
As far as I know (and I’m not an expert or a developer, I just play one on TV), coding Audacity in order to use these enveloped effects would not be much different from coding Audacity to do them all in real-time. In other words, coding for real-time effects would allow the developers to implement both ideas.
Don’t quote me, but I’d bet some of them would have to be re-coded, and I don’t think there would be any guarantee of being able to use all the Nyquist plug-ins in real time. I’m sure some of them are inefficiently coded. I’m also sure some of the Nyquist plug-ins are simply not coded with this sort of functionality in mind, so they wouldn’t work at all.
I can’t comment on other types of plug-ins, I simply don’t know enough to say.
This is a question for a better programmer than I.