Audacity and note determination

First of all I want to say “Thank you” to the people who develop this fantastic program. :slight_smile:

I am a student doing a university project related to digital signal processing. I have an idea which I want to implement with Audacity, but I’m not sure whether it is possible, or whether my programming skills are up to it. I want to ask you some questions about Audacity, share my idea, and maybe ask for some help later.

So, I want to write a program which analyzes a simple music signal and the frequencies in it, suggests the notes, and builds a MusicXML file. I am more or less familiar with programming (C, C++, C#, Java, Python), depending on the language, and I have already compiled the source code (in fact it’s the biggest solution I’ve ever seen). :smiley:

I guess there are already tools in Audacity that can analyze the frequencies in a signal. But is it possible to determine the note with them? If I record one note played on my guitar, will I be able to see its frequency? If not, is it possible (I think it is) to write some code that determines the “main” frequency of a note in a recording, like a tuner, but offline rather than in real time?
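Such an offline “tuner” can be sketched in a few lines, for instance in Python with numpy (just an illustration of the idea, not how Audacity does it internally): autocorrelate the signal and take the first strong peak after lag zero as the period.

```python
import numpy as np

def detect_fundamental(signal, sample_rate):
    """Estimate the fundamental frequency of a monophonic signal
    from the first autocorrelation peak after lag zero."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]        # keep non-negative lags only
    d = np.diff(corr)
    start = np.nonzero(d > 0)[0][0]     # skip the decaying lag-0 lobe
    peak = start + np.argmax(corr[start:])
    return sample_rate / peak

# Synthetic check: an A4 (440 Hz) sine, 0.1 s at 44.1 kHz
sr = 44100
t = np.arange(0, 0.1, 1 / sr)
f0 = detect_fundamental(np.sin(2 * np.pi * 440 * t), sr)
```

This only works for a single, reasonably steady note; chords and heavy noise will break it.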

The next step will be to produce information like this. It is the MusicXML (MXML) format, which is used to represent the “note sheet” graphically. I want to do the “reverse”: from music to MXML, and then to graphics.

    <score-part id="P1">
    <part id="P1">
      <measure number="1">
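A minimal MusicXML skeleton can be generated with Python’s standard library alone; the element names (score-partwise, part-list, score-part, part, measure, note, pitch) follow the MusicXML spec, but a real file needs more detail (divisions, note types, the DOCTYPE), and the part name below is just a placeholder:

```python
import xml.etree.ElementTree as ET

def make_musicxml(notes):
    """Build a bare-bones single-part MusicXML tree from
    (step, octave, duration) tuples, e.g. ("E", 4, 4)."""
    score = ET.Element("score-partwise", version="3.1")
    part_list = ET.SubElement(score, "part-list")
    sp = ET.SubElement(part_list, "score-part", id="P1")
    ET.SubElement(sp, "part-name").text = "Guitar"  # placeholder name
    part = ET.SubElement(score, "part", id="P1")
    measure = ET.SubElement(part, "measure", number="1")
    for step, octave, duration in notes:
        note = ET.SubElement(measure, "note")
        pitch = ET.SubElement(note, "pitch")
        ET.SubElement(pitch, "step").text = step
        ET.SubElement(pitch, "octave").text = str(octave)
        ET.SubElement(note, "duration").text = str(duration)
    return score

xml_text = ET.tostring(make_musicxml([("E", 4, 4), ("E", 5, 4)]),
                       encoding="unicode")
```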

The tempo and rhythm will be specified at the beginning of the recording or after it, because they matter only for building the MXML, not for the sound analysis. So the user only has to set the tempo he wants to play at, and maybe some other options.

I don’t know yet whether I will implement it as a plug-in or as a completely separate program without the unused parts of Audacity. And I don’t expect to do it perfectly; if it handles a couple of notes or a scale, that will be enough for me. It will already be progress for my skills.

That’s how I see it. Please tell me whether it is realizable and how hard it will be to code. If it is, I will ask you about the inner structure of the Audacity source code and wxWidgets (I have coded mainly in console mode and only used graphics in C# (XAML, Windows Forms), so this is new for me).

First, yes.

That’s the piano note “G1” plus all the overtones and harmonics that let you know it’s a piano. Amplitude is up along the left, and pitch increases left to right along the bottom. I use this illustration to show people why you can’t “filter out a musical instrument” by just filtering out the single note it’s playing.

Analyze > Plot Spectrum.
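The point about overtones can be reproduced numerically. The sketch below (Python/numpy, a crude synthetic “instrument”, not real piano data) builds a G1 tone (49 Hz in A440 tuning) with decaying harmonics, and shows that the spectrum peaks at every multiple of the fundamental:

```python
import numpy as np

sr = 44100
t = np.arange(0, 1.0, 1 / sr)        # one second of audio
f0 = 49.0                            # the piano note G1
# A crude "instrument": the fundamental plus decaying harmonics
tone = sum((0.5 ** k) * np.sin(2 * np.pi * k * f0 * t)
           for k in range(1, 6))

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / sr)

# The strongest bins sit at f0, 2*f0, 3*f0, ... - not only at f0 -
# which is why notch-filtering f0 alone leaves most of the sound.
top_bins = sorted(freqs[np.argsort(spectrum)[-5:]])
```

Filtering out the 49 Hz bin alone would leave the other four peaks, i.e. most of the instrument’s sound.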


Determining notes reliably is quite a complex matter, and it becomes many times more complicated if there is more than one note playing at a time (polyphony).
There is a “pitch detect” Nyquist plugin which is able to analyse the pitch of single notes reasonably accurately:
There is a bit of additional information about that plugin here:
It is based on the “Yin” function in Nyquist.

Nyquist is a scripting language built into Audacity - simple commands may be run directly from the Nyquist Prompt effect, or more complex things can be done by writing plugins.
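For reference, the core of the Yin method can be sketched like this (in Python rather than Nyquist, and heavily simplified - the published algorithm adds parabolic interpolation and a more careful search): compute the difference function, normalize it by its cumulative mean, and take the first dip below a threshold.

```python
import numpy as np

def yin_pitch(x, sr, fmin=50.0, fmax=1000.0, threshold=0.1):
    """Simplified YIN: difference function, cumulative-mean
    normalization, then the first dip below the threshold."""
    tau_min = int(sr / fmax)          # smallest lag to consider
    tau_max = int(sr / fmin)          # largest lag to consider
    taus = np.arange(1, tau_max + 1)
    # difference function d(tau) = sum_j (x[j] - x[j+tau])^2
    d = np.array([np.sum((x[:-tau] - x[tau:]) ** 2) for tau in taus])
    # cumulative mean normalized difference d'(tau)
    cmnd = d * taus / np.cumsum(d)
    idx = tau_min - 1
    while idx < tau_max - 1 and cmnd[idx] >= threshold:
        idx += 1
    if cmnd[idx] >= threshold:
        return None                   # no clear pitch found
    # follow the dip down to its local minimum for a better estimate
    while idx + 1 < tau_max and cmnd[idx + 1] < cmnd[idx]:
        idx += 1
    return sr / taus[idx]

# A 440 Hz test tone should come out close to 440
sr = 44100
t = np.arange(0, 0.1, 1 / sr)
f0 = yin_pitch(np.sin(2 * np.pi * 440 * t), sr)
```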

Thank you for now. I will check the information.

Hello again
It has been a long time since I started this topic, but now I have enough time and willingness to implement my idea (at least part of it) and learn something new.

So, I checked out this plug-in and it suits me well as a base, but I need some more functions, and I would like your help implementing them.

1) Analyzing the whole recording and detecting all the pitches present.
In this signal I have recorded two notes - E4 and E5 (see the attachment). Is it possible to detect them both (or however many there are) and report that two notes - E4 and E5 - were detected in the signal?
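For a clean synthetic case this can be done naively with spectral peaks alone: pick the strongest FFT bins and name them via the equal-temperament formula midi = 69 + 12·log2(f/440). A hedged Python sketch (real recordings are much harder, because harmonics of one note can outweigh the fundamental of another):

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq):
    """Map a frequency to its nearest equal-tempered note name."""
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

def detect_notes(signal, sr, count=2):
    """Naive polyphonic detection: name the strongest spectral
    peaks. Harmonics will fool this on real instruments."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    notes = []
    for i in np.argsort(spectrum)[::-1]:
        name = freq_to_note(freqs[i]) if freqs[i] > 0 else None
        if name and name not in notes:
            notes.append(name)
        if len(notes) == count:
            break
    return notes

# E4 (329.63 Hz) and E5 (659.26 Hz) played together for one second
sr = 44100
t = np.arange(0, 1.0, 1 / sr)
chord = (np.sin(2 * np.pi * 329.63 * t)
         + 0.8 * np.sin(2 * np.pi * 659.26 * t))
notes = detect_notes(chord, sr)
```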

2) And if so - the timing. (There is a time scale in Audacity, so I think we can analyze and process its values.) The next target will be matching each note to its length (for example, from its beginning to the next amplitude peak, if calculating the end of the note is difficult).
The result will be something like: A4 from 0 to 2 sec and B4 from 2 to 4 sec (whole notes at 120 bpm).
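Once the durations in seconds are known, converting them to note values at a given tempo is plain arithmetic: beats = seconds × bpm / 60, and in 4/4 a whole note is four beats. A small sketch (the set of candidate values is an assumption; dotted notes and triplets are ignored):

```python
def seconds_to_note_value(duration_sec, bpm):
    """Name the nearest common note value for a duration,
    assuming 4/4 time where a quarter note is one beat."""
    beats = duration_sec * bpm / 60.0
    values = {4: "whole", 2: "half", 1: "quarter",
              0.5: "eighth", 0.25: "sixteenth"}
    nearest = min(values, key=lambda b: abs(b - beats))
    return values[nearest]
```

At 120 bpm, a note held from 0 to 2 seconds spans 4 beats, i.e. a whole note, matching the example above.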

If these things are possible to implement, even if they work only partially or unreliably, I will try to move on to the next step.
I have a month to complete my university project (both theory and practice), and in fact I have to produce something concrete (the practical part). It can be algorithms or parts of a program, not fully working, but I have an idea and I want to try to implement it - and if someone finds it useful, they can improve it and create something within Audacity or beyond.

Some more open-source note detection algorithms can be found in Vamp Plugins.


Thank you.
And what about the plug-in?
Is it possible to add these features to it?

What features? The detection of note length etc. in order to produce an XML file in the end?



Well, you can do essentially anything that’s possible with e.g. C++.
The only restrictions are:

  • Execution time - probably not so important for non-realtime processing.
  • Poor GUI support (you have to enter the saving path directly into a text box; there’s no save dialog available).
  • You have to write the XML support from scratch.


On the other hand, the advantages are:

  • It’s easy to try out different things, since the code doesn’t have to be compiled.
  • Over 1000 pre-defined functions with a Lisp or DSP background.
  • There are also some score functions available; however, I’ve never tried them out - they are mainly for creating sounds, not the opposite.
  • You can return the notes as labels and compare them to the waveform (or better, the spectrogram).
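Labels produced this way can also round-trip through Audacity’s plain-text label format (File > Import > Labels), which is one “start&lt;TAB&gt;end&lt;TAB&gt;text” line per label. A sketch of writing detected notes to such a file:

```python
def write_label_track(path, notes):
    """Write notes as an Audacity label track file:
    one 'start<TAB>end<TAB>text' line per label."""
    with open(path, "w") as f:
        for start, end, name in notes:
            f.write(f"{start:.6f}\t{end:.6f}\t{name}\n")

# The two-note example from earlier in the thread
write_label_track("notes.txt", [(0.0, 2.0, "A4"), (2.0, 4.0, "B4")])
```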

Do you have any plan in mind?
I presume you don’t want to start with transcribing a whole symphony within the first week.