Monitoring time difference between two points on waveform

I am conducting a physics investigation in which I need to monitor how the time difference between two sounds changes in a waveform plot. Currently I determine this by highlighting the region between the two sounds and reading off the time length of the selection. Please see the attached screenshot for an example. My question is: is there a way to do this automatically (i.e. not manually)? For example, is there a way for Audacity to highlight the maximum sound levels so that I can more easily find the time difference between them? Alternatively, is there a way to export the raw data from Audacity (i.e. values for sound level and time) so that I can import it into another program for analysis? When I tried exporting data, it did not give values of sound level and time but rather some other values to do with frequency.

Thanks in advance for any ideas.
Screen Shot 2015-06-15 at 5.41.26 pm.png

“Sample Data Export” (Audacity Manual) exports “sample values”. If you are not sure what that means, read up about “PCM” digital audio. This provides the most accurate measurements because you are working with the raw data, but it is only suitable for short stretches of audio because of the massive amount of data (by default there are 44100 samples per second in each audio channel).
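
To make the idea concrete, here is a small Python sketch (illustrative only, not a feature of Audacity; it assumes the export is plain text with one sample value per line, which is one of Sample Data Export's layouts) showing how exported sample values map back to time:

```python
# Illustrative helper: parse a "Sample Data Export" text file,
# assuming one sample value per line.

def load_sample_data(path, sample_rate=44100):
    """Read exported sample values and report the duration they cover.
    Each sample's time offset is simply its index / sample_rate."""
    with open(path) as f:
        values = [float(line) for line in f if line.strip()]
    duration = len(values) / sample_rate  # seconds of audio represented
    return values, duration
```

At the default rate, a 60-second mono recording produces 60 × 44100 = 2,646,000 values, which is why this approach only suits short stretches.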

How much accuracy do you need?

Thanks for your reply. I have used the Sample Data Export tool and it has been useful for extracting raw data from Audacity. However, as you say, the sampling rate produces a huge amount of data - far more than I need. For a plot roughly one minute long, it is around 2.9 million samples. I do not need this level of accuracy, but I do need to export data for the entire 1-minute plot. I have tried lowering the sampling rate, but I noticed that the sound is no longer recorded very well. Is there any way to lower the sampling rate to generate less data while maintaining the quality of the recording? What would you recommend for this purpose?

Thanks for your help on this!
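
Lowering the recording's sample rate does degrade the audio, so an alternative (sketched below in Python; illustrative only, operating on data already exported from Audacity, not an Audacity feature) is to record at the full rate, export, and then thin the data by keeping only the largest-magnitude value in each block of samples - the peaks survive even though most of the data is discarded:

```python
# Sketch: thin exported sample data while preserving peaks.

def peak_decimate(samples, block=100):
    """Keep the largest-magnitude sample in each block of `block` samples.
    At 44100 Hz, block=100 reduces a minute of audio from ~2.6 million
    values to ~26000, while any peak taller than its neighbours survives."""
    out = []
    for i in range(0, len(samples), block):
        chunk = samples[i:i + block]
        out.append(max(chunk, key=abs))
    return out
```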

What level of accuracy do you need?

What is the minimum gap between successive peaks?
What is the minimum amplitude of a peak? (if the absolute amplitude is not important, it may be easier if you normalize to 0 dB before analyzing: http://manual.audacityteam.org/o/man/normalize.html)

The minimum time difference between successive peaks is approximately 0.005 seconds. As for the amplitude, I attached a screenshot of a 1.8 second sample of the data pasted into LoggerPro. I actually only need the maximum points on this graph because that is all that is necessary for me to monitor the time difference between the two sounds. Because the peaks are roughly the same volume, is there any way to set a “threshold” volume that allows me to discard all samples below the threshold?

Thanks again.
Screen Shot 2015-06-15 at 6.38.27 pm.png
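
The threshold idea can be sketched in a few lines of Python (illustrative only, working on data already exported from Audacity): discard every sample quieter than a chosen level and keep the (time, level) pairs that remain.

```python
# Sketch: keep only samples whose level clears a threshold.

def above_threshold(samples, threshold, sample_rate=44100):
    """Return (time, level) pairs for samples whose magnitude is at
    least `threshold`; everything quieter is discarded."""
    return [(i / sample_rate, v)
            for i, v in enumerate(samples)
            if abs(v) >= threshold]
```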

Try running this Nyquist script using the Nyquist Prompt effect (http://manual.audacityteam.org/o/man/nyquist_prompt.html)

The first two lines set the threshold for the peaks (must be a little below the peaks that you want to measure), and the minimum time between successive peaks (in seconds).
The output is a list of time values (in seconds) for the onset of each detected peak.

On Windows, it is not possible to copy text from the output window, but it is possible to copy from the debug window. On Linux it is possible to copy from either. I don’t know whether you can copy from the output window on Mac OS X (try it and let me know :wink:). If you can’t copy the output, run the code using the “Debug” button; a debug window will then appear after closing the output window.

Test the setting on a small section first.

There is no error checking in the code, so if you enter silly values it will attempt to run with those values and may crash or freeze.

(setq thresh 0.7)
(setq min-time 0.004)

(setf *float-format* "%.4f")
(setq mincount (truncate (* min-time *sound-srate*)))
(setq ln (truncate len))

;; convert data list to string
(defun list-to-string (data)
  (if (> (length data) 0)
      (format nil "~a " (string-trim "()" (format nil "~a" data)))
      ""))

;; print to debug window and screen
(defun dprint (str)
  (format t str)
  (format nil str))

(defun analyze (snd-arr step offset)
  (let ((data ()))  ;list of the output times
    ;; Loop through data array and find peak onsets
    (do* ((indx 0 (1+ indx))
          (val (aref snd-arr indx) (aref snd-arr indx))
          (pcount nil))
         ((= indx (1- step)) (list-to-string (reverse data)))
      (cond
        ((and (not pcount)(> val thresh))
          (setf pcount 0)
          (push (+ offset (/ indx *sound-srate*)) data))
        ((and pcount (< pcount mincount))
          (setf pcount (1+ pcount)))
        (pcount (setf pcount nil))))))

(setf start (get '*selection* 'start))
(setf output "")  ;output string

(let* ((sig (if (arrayp *track*)
                (s-max (snd-abs (aref *track* 0))(snd-abs (aref *track* 1)))
                (snd-abs *track*)))
       (step 100000)
       (sig (sim sig (s-rest (/ (* (1+ (/ ln step)) step) *sound-srate*)))))
  (do* ((i 0 (1+ i))
        (snd-arr (snd-fetch-array sig step step) (snd-fetch-array sig step step)))
       ((= i (1+ (/ ln step))) (dprint output))
    (setf offset (+ start (/ (* i step) *sound-srate*)))
    (setf new-data (analyze snd-arr step offset))
    (if (> (length new-data) 0)
        (setf output (format nil "~a~a" output new-data)))))
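
For readers more comfortable outside Nyquist, the detection logic above can be restated as a Python sketch (same thresh / min-time idea; the parameter names are mine, and abs() stands in for the script's snd-abs step):

```python
# Sketch of the peak-onset detection used by the Nyquist script:
# trigger when the magnitude first rises above the threshold, then
# ignore re-crossings for min_time seconds.

def peak_onsets(samples, thresh=0.7, min_time=0.004, sample_rate=44100):
    """Return the times (seconds) at which |sample| first rises above
    `thresh`, suppressing re-triggers within `min_time` of an onset."""
    mincount = int(min_time * sample_rate)
    onsets = []
    pcount = None  # None = armed; otherwise samples counted since onset
    for i, v in enumerate(samples):
        if pcount is None and abs(v) > thresh:
            onsets.append(i / sample_rate)  # onset detected
            pcount = 0
        elif pcount is not None and pcount < mincount:
            pcount += 1                     # still in the hold-off period
        elif pcount is not None:
            pcount = None                   # re-arm the trigger
    return onsets
```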

Wow! Thanks so much for taking the time to write this code. When I click “debug” I can copy the output, but the output is only a list of values for time without the values of sound volume. Is this what is supposed to appear or is there more that I’m missing?

That’s what it does. It gives you a list of times when the waveform rises above the threshold level. As far as I could tell from your posts, that seemed to be what you were asking for. It should be very accurate, but please test carefully and let me know.

Ideally I would be able to export those time values with their corresponding sound level values. This way, I could copy them to Excel or a graphing program like LoggerPro for further analysis. Is there any way to extract the values for time and also the corresponding values for sound level?

Thanks again.

The amplitude at that point in time is whatever you set as the threshold value in the first line.
You say this is “a physics investigation”, so please try to be precise about what you need.
Have you tried studying the code to see how it works?

Ah - apologies, I misunderstood. While I only need data values above a certain threshold, it would be useful to also have the sound level values that accompany the time values so that I can more precisely determine the maximum values of sound level. If you take a look at the LoggerPro graph I attached in my previous reply (with all the red data points), you’ll see increasing and decreasing data points. Essentially what I need is just the top half of this graph - I am interested only in how the time difference between these increasing and decreasing peaks changes. Therefore the sound level values that accompany the times would be useful for identifying the exact position of these peaks. Does this make sense?

What are these signals that you are measuring? What’s the bigger picture - what are you trying to do?

When two mechanical metronomes are placed on a horizontal board that is free to move from side to side (it is sitting on cylindrical rollers), the oscillations of the metronomes will synchronise regardless of their initial phase difference. So I can start the pendulums of each metronome randomly and eventually their oscillations will synchronise (become in phase).

See here: https://www.youtube.com/watch?v=yysnkY4WHyM

As you can observe in the video, the metronomes make distinct ticking noises (2 ticks per oscillation). In order to find the amount of time it takes for the metronomes to synchronise, I need to monitor how the time difference between the ticks changes as time passes. This is why I’m using Audacity - if I can analyse the positions of the peaks then I can monitor how the time difference between them varies. So having the values for sound level accompany values for time would allow me to more easily determine the exact moment in time when a peak occurs, and, from there, the time difference between them (using other graphical software).
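
Once a list of onset times is available, the quantity being investigated is just the gap between successive ticks. That final step is trivial in any analysis tool; a minimal Python sketch:

```python
# Sketch: gaps between successive tick onsets.

def tick_gaps(onset_times):
    """Differences between consecutive onset times (seconds). As the
    two metronomes synchronise, the alternating short/long gaps
    between their ticks should converge."""
    return [t2 - t1 for t1, t2 in zip(onset_times, onset_times[1:])]
```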

Have you considered using a two-channel oscilloscope?

It’s really simple to set up, as it has a built-in trigger function. I guess even a two-channel oscilloscope plugin would do if you don’t have a Rigol or HP available…

Also, an optical detector could be much easier to work with, as it picks up no ambient noise.

That is basically what the Nyquist script does. It triggers when the amplitude rises above the given threshold.

:smiley: Cool.
When’s the demo with a thousand metronomes? :smiley:
(I saw the “32 metronome” version)

It also avoids echoes, and could give separate traces in the left and right channels without spill-over between them - but I love the simplicity of “2 metronomes, 1 board, 2 coke cans, 1 microphone” - real “kitchen sink physics”, terrific :wink:

Seems like what you need is the “start” time of each click. That’s what the code gives you.

Here’s an alternative version which outputs labels:

(setq thresh 0.7)
(setq min-time 0.004)

(setf *float-format* "%.4f")
(setq mincount (truncate (* min-time *sound-srate*)))
(setq ln (truncate len))
(setf data ())  ;list of labels

;; convert a time value to a label
(defun list-to-label (time)
  (list (- time start)
        (format nil "~a" time)))


(defun analyze (snd-arr step offset data)
  ;; Loop through data array and find peak onsets
  (do* ((indx 0 (1+ indx))
        (val (aref snd-arr indx) (aref snd-arr indx))
        (pcount nil))
       ((= indx (1- step)) data)
    (cond
      ((and (not pcount)(> val thresh))
        (setf pcount 0)
        (push (list-to-label (+ offset (/ indx *sound-srate*))) data))
      ((and pcount (< pcount mincount))
        (setf pcount (1+ pcount)))
      (pcount (setf pcount nil)))))

(setf start (get '*selection* 'start))

(let* ((sig (if (arrayp *track*)
                (s-max (snd-abs (aref *track* 0))(snd-abs (aref *track* 1)))
                (snd-abs *track*)))
       (step 100000)
       (sig (sim sig (s-rest (/ (* (1+ (/ ln step)) step) *sound-srate*)))))
  (do* ((i 0 (1+ i))
        (snd-arr (snd-fetch-array sig step step) (snd-fetch-array sig step step)))
       ((= i (1+ (/ ln step))))
    (setf offset (+ start (/ (* i step) *sound-srate*)))
    (setf data (analyze snd-arr step offset data)))
  (if (> (length data) 0)
      data
      "No peaks detected"))

This is it applied to the audio from the YouTube video:
ticks.png
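
If the labels are ever needed outside Audacity, the same onset times can be written as a label-track text file (exported label tracks use tab-separated start, end, text; a point label has equal start and end times). A hedged Python sketch of that formatting:

```python
# Sketch: render onset times in Audacity's label-track text format,
# one point label per line: start<TAB>end<TAB>text.

def to_label_track(times):
    """Format each time as a point label (start == end), with the
    time itself reused as the label text."""
    return "\n".join("{0:.6f}\t{0:.6f}\t{0:.4f}".format(t) for t in times)
```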

This is fantastic - exactly what I need :slight_smile:

As I’m writing a scientific report, I need to be able to cite the author of the code. How would you like to be cited?

Please see your private messages (link near the top left of the forum).