Realtime, Non-Realtime, Non-Destructive and Destructive Effects

I am returning to Audacity after a gap of a few months due to other digressions. So I apologize at the outset if I have not responded/replied to the valuable insights posted by various courteous expert posters before posting this one. I am hoping to catch up soon.

Below is a long note, yet I cannot express everything that I can and want to say. For brevity's sake I will let other thoughts and deliberations enter the mix through discussion.

Thanks in advance.

br Sri.

Realtime and Non-Realtime effects

1) I do understand the basic nature of Realtime non-destructive and Non-Realtime (holistic, destructive) effects.

For example (and I am taking this example deliberately)
consider a simple/trivial effect.

The volume at any point T in the track is

Vol(T) = f(CurrVolume(T), Delta)

That is, the new volume is based only on the current volume at T and does not depend on preceding or succeeding points. We can assume Delta is a constant, which is a valid and good case.

This computation can be done in real time.
Hence a Realtime Effect.
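To make the distinction concrete, here is a minimal sketch of such a memoryless effect (Python, purely illustrative; this is my own toy model, not Audacity's actual API). Because Vol(T) depends only on the sample at T, each playback buffer can be processed as it arrives, with no look-ahead or pre-scan:

```python
# Toy sketch of a memoryless (realtime-capable) effect.
# Each output sample depends only on the corresponding input sample
# plus a constant Delta, per the Vol(T) formula above.

def apply_delta(buffer, delta):
    """Apply a constant Delta to one playback buffer."""
    return [sample + delta for sample in buffer]

# A host would stream buffers through the effect during playback:
stream = [[0.25, -0.125], [0.5, 0.0]]            # two small buffers
processed = [apply_delta(buf, 0.5) for buf in stream]
# processed == [[0.75, 0.375], [1.0, 0.5]]
```

No buffer needs any other buffer, which is exactly what makes the effect realtime-friendly.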

2) Now let us consider a variation of this, where Delta is a multiplier.

Again, not a problem. Let us say Delta is picked by scanning the whole track ahead of time, and is picked so that no clipping occurs anywhere on the track. Sounds familiar? It is a simple case of Normalize.

(Or for the sake of argument, we could even consider that Delta is a vector of track min, mean, median and max volumes).

Clearly the Vol(T) function requires a pre-scan of the whole track to establish Delta. So, arguably, this is not a true realtime effect!
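The two-pass structure can be sketched as follows (Python, illustrative only; the names `scan_peak` and `normalize` are my own, not Audacity's). Pass 1 is the whole-track scan; pass 2 is then memoryless, just like case (1):

```python
# Illustrative two-pass Normalize, with Delta as a simple multiplier.

def scan_peak(track):
    """Pass 1 (pre-scan): find the largest absolute sample value."""
    return max(abs(s) for s in track)

def normalize(track, target=1.0):
    delta = target / scan_peak(track)   # chosen so nothing clips
    # Pass 2 is memoryless and could, in principle, run in realtime
    # once delta is known:
    return [s * delta for s in track]

result = normalize([0.1, -0.5, 0.25])   # peak -0.5 is scaled to -1.0
```

Only the pre-scan forces non-realtime behavior; if Delta were computed up front and cached, the application pass fits the realtime model.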

3) Now for Normalize we have to use Effects->Normalize,

which is also destructive, unnecessarily so.

4) My point is that

We could still implement Normalize as a non-destructive effect, if Delta is computed as a preprocessing step (and recomputed as needed).

5) I want to discuss the following:

a) Please do not ask why I need this. You can read all my prior posts and see that there are always valid requirements.

b) I do understand that others may not agree with my requirements, and I do not expect them to. But that is not relevant to this discussion.

c) Also, please do not suggest other ways to accomplish this, as this discussion is about how to make more effects realtime and non-destructive (by using preprocessing or other ideas, as needed).

d) Please also spare the "very few users would need this, so why spend time on it" argument; that is not the focus here.

e) This is intended to be a technical discussion about the possibilities, without advocating any priorities for implementation, if at all.

f) I think this may be "déjà vu", as the team might have actively discussed these issues while implementing realtime non-destructive effects. If there are posts related to these, I would appreciate your pointing me to them.

Here is one I found, somewhat related to this.

g) Ideally I would like to see all feasible effects become available as non-destructive realtime effects. Hopefully the team is already working on this and is populating the roadmap.

It is just that a non-destructive realtime Normalize is my first-priority need of this kind, and so I am using it for this write-up.

Some native Audacity effects are available in realtime …

Trevor

Thanks for your post.

Yes I am aware of all the Audacity effects that are available as real time non destructive effects. I use them, as well as many others, extensively.

My discussion was about those that are not yet available, or may be declared out of consideration. I am simply conjecturing here, without knowing the full deliberations of the Audacity engineering members.

In particular, I was highlighting the simplest example, the Normalize effect: how a realtime non-destructive version might be created from the non-realtime destructive version.

One thought that occurred to me is to look at the Nyquist scripts/code behind these [current] effects and craft [almost] realtime non-destructive versions. I will have to brush up on my Nyquist as a first step, as I have not used it for over a year and will be fully rusty by now.

br Sri.

Under the hood, we have the following types of effects:

  • Stateful effects: Those are the default (destructive) version of effects.
  • Stateless effects: Those are the realtime effects. Their statelessness means that multiple instances of the effect can be open at one time. Normalization is not possible with this effect as they only have access to the current playback buffer, so a couple dozen ms at best. Converting an effect from stateful to stateless is non-trivial; some notes about it are here: 4 steps to make an effect stateless | Audacity Dev
  • Clip/track/project properties: These are “effects” that live outside the effect framework; an example may be the non-destructive clip trimming, stretching and pitching, or the per-track pan and volume sliders.
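To illustrate the constraint on stateless effects described above (a sketch of my own, not Audacity code): the effect only ever sees the current playback buffer plus its parameters, so a normalization factor over the whole track is simply out of reach:

```python
# Sketch: a stateless realtime effect sees one small buffer at a time.
# It can apply an already-known gain, but it cannot compute a
# normalization factor, because the track's true peak may lie in a
# buffer it has not seen yet (and, being stateless, it must not
# remember past ones either).

def process_buffer(buffer, gain):
    # All the effect may legally use: this buffer and its parameters.
    return [s * gain for s in buffer]

buffers = [[0.25, -0.5], [0.9, 0.1]]   # the true peak (0.9) is in buffer 2
out = [process_buffer(b, 1.0) for b in buffers]
```

While processing the first buffer, nothing tells the effect that a louder sample is still coming, which is why normalization cannot live in this category.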

A normalize function would need to go into the latter category. Functionality-wise, I'd probably want it to modify the track's volume slider based on what it sees in the track. Note that to properly normalize the output mix, the function would essentially need to mix all tracks to figure out how loud things should be.

For inspiration, you may want to check out Hindenburg audio - that editor has an auto-level function which modifies the clip’s volume upon import.


LWinterBerg

Thanks for your detailed comment along with more pointers.
I will read and assimilate it first, a wonderful education here, before posting my thoughts. There is a nice framework here.

I am not able to see the text of your post while I am writing this. So I am just recalling…

You mentioned something about “Normalizing the output”.
I just wanted to know whether you are using this in the same way as I was using "Normalize", as they are different from my perspective.

Another point… about idempotency (in a reduced sense).
Does the application of realtime effects require idempotency to hold?
That is, can the realtime effects be applied in any order at any point in time and still be equivalent, i.e., order independent?
Clearly, applying one of them more than once will not produce the same result; that is another aspect of idempotency.

br Sri.

LWinterBerg

Regarding Normalize…

I was using it as applied to a specific track (per parameters), rather than normalizing a mix of tracks, which is your usage.

br Sri.

Depends on the effect. Amplifying first by +6dB and then by -6dB has the same effect as amplifying first by -6dB and then by +6dB. However, amplifying before a dynamic compression step (compressor, limiter) has a different result to doing it afterwards.
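To make that concrete, here is a small sketch (Python, my own illustration; a crude hard clip stands in for a real compressor/limiter):

```python
# Two gains commute; a gain and a limiter do not.

def gain_db(samples, db):
    factor = 10 ** (db / 20)            # convert dB to a linear factor
    return [s * factor for s in samples]

def limiter(samples, ceiling=1.0):
    # Crude hard limiter standing in for a real dynamics processor.
    return [max(-ceiling, min(ceiling, s)) for s in samples]

x = [0.8]
a = gain_db(gain_db(x, 6), -6)           # +6 dB then -6 dB
b = gain_db(gain_db(x, -6), 6)           # -6 dB then +6 dB: same result
c = gain_db(limiter(gain_db(x, 6)), -6)  # limiting in between clips the
                                         # signal, so c differs from a
```

The pair of pure gains returns (up to rounding) the original 0.8, in either order; inserting the limiter between them does not.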

LWinterBerg

I was asking a deeper question, but maybe I did not word it right, and you provided the correct but basic answer.

Consider track T with three realtime effects F1, F2, F3.

These effects may be applied in two different manners:

Both in functional notation:

A) output = T.F1.F2.F3 (chaining)
One after another: F2 is applied to T.F1, etc.

B) output = combine(T.F1, T.F2, T.F3): the functions are applied in parallel on the track, and combine does the "mixing".

Here the effects can be applied in any order for the same output.

C) I would assume Audacity does (A), as effect chains are normally applied that way in most DAWs.

In both cases T is preserved and only the output is used for rendering. Clearly, the outputs are likely to be different in A and B.
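The two topologies can be sketched like this (Python, with hypothetical effects f1 and f2 of my own invention; combine here is a simple average, one of many possible mixing rules):

```python
# Illustrative contrast between (A) serial chaining and (B) parallel combine.

def f1(t): return [s * 0.5 for s in t]     # hypothetical effect: attenuate
def f2(t): return [s + 0.1 for s in t]     # hypothetical effect: add offset

track = [0.2, 0.4]                         # T is never modified below

# (A) chaining: f2 operates on f1's output
serial = f2(f1(track))

# (B) parallel: each effect sees the original track; combine() mixes
def combine(*outputs):
    return [sum(samples) / len(outputs) for samples in zip(*outputs)]

parallel = combine(f1(track), f2(track))

# track is preserved in both cases, but serial != parallel in general.
```

In (B) the order of the arguments to combine does not matter, whereas in (A) swapping f1 and f2 generally changes the output; that is the order-independence distinction above.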

I tried to follow the code pointer you provided. I could not yet abstract the principles from the code. I will figure it out.

br Sri.

it definitely is an effect chain, so A


LWinterBerg

Thanks for the authoritative confirmation of what I was only assuming.

I would appreciate it if you could point me to a good, easy-to-follow (certainly judgmental; assume marginally better than a layman) example or two of Nyquist scripts for an effect that is available in both non-realtime and realtime versions, which could help me convert, say, Normalize into a realtime effect (making appropriate limitations and tradeoffs as necessary).

Nyquist scripts cannot be realtime at the moment. Making them realtime-capable requires major changes to both the implementation, and perhaps also the language itself.

Thanks a lot for that important information. So, unlike normal effects, realtime effects are written directly in native code? Interesting. I would not have guessed it. Thanks again. Sri.