# How can I get the attack time of an audio signal?

Hi there,

Hope somebody can show me how to get the attack time of an audio signal (a single note performed by only one instrument). I attach an example of the audio signal.

Easiest to see if you “Normalize” the track first.

Select the “attack” (rising) section, then read the length from the Selection Toolbar.

As you can see in this screenshot, the attack starts at around 7 ms from the start of the track and has a duration of around 36 ms.

Hi Steve,

Thanks so much for your advice. In this case, it is very clear. Just wondering whether there is an analysis method that accurately measures the envelope, or should I try MATLAB, R, etc.?

Thank you!

Hi Steve,

What I mean is that I can’t see clearly that the attack time starts at 7 ms. Should I perhaps try running a piece of code instead?

Thanks a lot,

If you were to use MATLAB or similar, you would need to devise an algorithm to determine the start and end of the attack phase. In the “0dn_cnt.wav” example, the attack phase is easy to see, but defining an algorithm is much more complex.

We can see in your example that the “natural” start of the attack is a little earlier than the start of the waveform. By eye we can extrapolate the slope of the attack to infer a start time of 7 ms. Similarly, the initial (curved) slope has a noticeable bend (at the end of the selection), which is the obvious end of the attack phase, even though this is not the maximum level of the sound.

1. Find the slope of the attack (following peaks)
2. Extrapolate the slope back to where it intersects the “silence” line.
3. Determine a point where the slope has reduced in steepness enough to be NOT the attack phase
4. Allow for curves in the attack (curved in either direction)

You then need to implement the algorithm as code, test and debug it.
Is it worth the effort?
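If you did want to try it, the steps above could be sketched roughly as follows in Python with NumPy. This is only an illustration of the idea, not a tested analysis tool: the frame size and the slope-ratio threshold are arbitrary values I have chosen for the example, and real recordings (noise floor, curved attacks) would need more care.

```python
import numpy as np

def attack_time(signal, sr, frame=64, end_slope_ratio=0.2):
    """Rough attack-time estimate following the steps in the post:
    peak-follow the envelope, fit the rising slope, extrapolate it
    back to silence, and end the attack where the slope flattens.
    `frame` and `end_slope_ratio` are illustrative, not standards."""
    # 1. Peak-following envelope: max |sample| per frame
    n = len(signal) // frame
    env = np.abs(signal[:n * frame]).reshape(n, frame).max(axis=1)
    t = (np.arange(n) + 0.5) * frame / sr          # frame centres (seconds)

    peak = env.argmax()
    # Fit a line to the steep part of the rise (10%..90% of peak level)
    rising = np.where((env[:peak + 1] > 0.1 * env[peak]) &
                      (env[:peak + 1] < 0.9 * env[peak]))[0]
    if len(rising) < 2:
        return None
    slope, intercept = np.polyfit(t[rising], env[rising], 1)

    # 2. Extrapolate the fitted slope back to the "silence" line (env = 0)
    start = -intercept / slope

    # 3. Attack ends where the envelope's slope drops below a fraction
    #    of the fitted rise slope (the "bend" before the true maximum)
    denv = np.gradient(env, t)
    after = rising[-1]
    flat = np.where(denv[after:peak + 1] < end_slope_ratio * slope)[0]
    end = t[after + flat[0]] if len(flat) else t[peak]
    return start, end, end - start

# Synthetic check: a 440 Hz tone with a 50 ms linear fade-in
sr = 44100
tt = np.arange(sr) / sr
sig = np.minimum(tt / 0.05, 1.0) * np.sin(2 * np.pi * 440 * tt)
start, end, duration = attack_time(sig, sr)
print(f"attack: {start * 1000:.1f} ms to {end * 1000:.1f} ms "
      f"({duration * 1000:.1f} ms)")
```

Step 4 (allowing for curved attacks) is deliberately left out; a straight-line fit is the simplest case, and curved attacks would need something like a piecewise or polynomial fit, which is exactly where the effort starts to mount up.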

There are free “transient shaper” plugins where you can vary the attack by ear …

We should consider the view, too. Waveform view only shows sound down to about -25 dB to -30 dB before it becomes indeterminate (invisible). You can easily hear into the -60s or lower. The dB (Waveform dB) view displays the whole range, but you stop being able to see wave details well enough to give meaningful numbers.
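To put numbers on that, here is the standard amplitude-to-dB conversion (full scale = 1.0). It shows why the linear waveform view runs out: levels you can still clearly hear occupy only a few thousandths of the display height.

```python
# Linear waveform height corresponding to a few dB levels,
# using the standard amplitude relation: amplitude = 10 ** (dB / 20)
for db in (-25, -30, -60):
    print(f"{db} dB -> {10 ** (db / 20):.4f} of full scale")
```

At -30 dB the waveform is about 3% of full height, and at -60 dB about 0.1%, which is effectively a flat line on screen.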

Why are we doing this? It’s not unusual for people to try to do surgical analysis on sound that resists surgical analysis.

Audacity is a production editor and does things because they sound good.

Koz

Unless you zoom in vertically.