# Sample Data Export

Hey there! We are two physics majors working on a project where we analyze drumbeats. We need to be able to see the amplitude of a specific sound wave as well as the exact time that the amplitude occurs. Our thought was to use “Sample Data Export”, but that doesn’t seem to be giving us what we’d like. Preferably, we’d be able to see the data in a CSV or text file, as you can with “Sample Data Export”. Is there any way to use “Sample Data Export” to see both time and amplitude? Any help is greatly appreciated! Thanks!

How does “Sample Data Export” differ from “what you’d like”?

That’s what “Sample Data Export” does. Each value is an amplitude value. The time between each sample is 1/(sample rate) so for example, with a sample rate of 44100 Hz, the time between each sample is 0.000022676 seconds.
Tip: for easy calculation of time periods, record with a convenient sample rate, such as 100000 Hz.
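To illustrate, here is a minimal Python sketch that pairs each exported amplitude with its timestamp. The `amplitudes` list is just a stand-in for values read from the exported text/CSV file:

```python
# Pair each exported amplitude value with its timestamp.
# Sample Data Export writes one amplitude value per sample;
# the time of sample i is simply i / sample_rate.
sample_rate = 44100  # Hz; use your project's actual rate

# Stand-in for values read from the exported file.
amplitudes = [0.0, 0.5, 0.9, 0.5]

for i, amp in enumerate(amplitudes):
    t = i / sample_rate  # time of sample i, in seconds
    print(f"{t:.9f}\t{amp}")
```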

I believe most people doing “serious” numerical analysis use MATLAB (or a MATLAB clone).

MATLAB can open WAV files and apparently it can handle the massive amount of data from an audio file.

MATLAB is also a lot more complicated to use than Nyquist.

In general, only short sounds can be analyzed with MATLAB or Octave. You rapidly run out of memory or indices.
Of course, there are always workarounds…

But what frequency does the amplitude represent?

Is there a way to export spectrum data over time? Basically, like exporting the data from a waterfall plot?

Amplitude is not frequency.

Amplitude of a sound is how “big” the sound vibrations are and is represented in Audacity as the vertical height of the waveform. Amplitude of a waveform is the measure of the amount of displacement away from the mean position (distance from the “zero” centre line).

Frequency of a sound is “how rapidly the vibrations oscillate”.

If you zoom in close on a waveform, you can see the waveform moving up and down, crossing the centre line many times. How far away from the centre line it moves is the “amplitude”, whereas “how frequently it cycles between going up and going down” is the frequency. An approximation of the fundamental frequency can be derived from the exported sample data by counting the number of times the data crosses zero per second.
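As a rough sketch of that zero-crossing approach (assuming a clean, near-sinusoidal signal, and assuming NumPy is available):

```python
import numpy as np

def estimate_fundamental(samples, sample_rate):
    """Rough fundamental-frequency estimate from zero crossings.

    A full cycle of a sine wave crosses zero twice, so
    frequency ~ (zero crossings per second) / 2.
    Only reliable for clean, near-sinusoidal signals.
    """
    samples = np.asarray(samples, dtype=float)
    signs = np.sign(samples)
    crossings = np.count_nonzero(np.diff(signs) != 0)
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

# Example: one second of a 100 Hz sine sampled at 44100 Hz
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)
print(estimate_fundamental(tone, fs))  # ~ 100 Hz
```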

You can take a screen-shot of the track spectrogram view.
See: http://manual.audacityteam.org/man/spectrogram_view.html

OK, but is it possible to export numerical spectral data over time? I.e. sound pressure level over, say, 20 - 200 Hz, over a recorded time of 5 seconds?

Or you can make small selections over five seconds and export each selection from Analyze > Plot Spectrum….

Gale

Audacity cannot give you “Sound pressure level” (SPL).
(see: https://en.wikipedia.org/wiki/Sound_pressure)

To calculate the sound pressure level via Audacity, your “system” (microphone / pre-amp / sound card / sound card settings / recording software) needs to be calibrated so that you know that a specific SPL produces a specific amplitude in the recorded signal. Alternatively, use an “SPL meter” (which is already calibrated).

What are you actually trying to do?
Is this a school project?

Yes, for a project. I’m trying to measure SPL decay over 20 - 200 Hz at a high resolution, and then export the numerical data to Excel. From there I want to calculate the reverberation time of an individual frequency using a ‘line of best fit’ equation (for example, the RT60 of 62 Hz and 76 Hz, rather than 1/3-octave-band RT60).

What are you using as your sound source?
If you used sine tones, then you would only need to measure amplitude over time (easier and more precise).

I’ll give a brief description of my project:

I’m using a single sine tone (e.g. 42 Hz) from a loudspeaker as a source in a room. I then turn the source off and measure the sound pressure level decay at a high resolution (e.g. 1 Hz). With the measured sound pressure level data over time, hopefully in numerical form, I can calculate the reverberation time of single frequencies (e.g. 63 Hz).

The reason behind this is that I’m trying to measure the room’s modal decay (e.g. say the room has a 63 Hz modal resonance) by using an anti-node (e.g. say the room has an anti-node at 42 Hz).

So, I need high-resolution spectral data over time, in numerical form, that I can import into Excel.

That makes no sense.
Sound pressure level is not measured in Hz; it’s usually measured in dB with reference to 20 µPa.
Hz is a unit of frequency.
If you’re using a 42 Hz sine wave as the sound source, then why do you need to analyze the frequency? You already know the frequency - it’s 42 Hz.

The test procedure to do what you describe is actually quite straightforward, and yes, you can use Sample Data Export for the analysis.
Start with a recording of a series of sine tones with gaps between:
“beeep … beeeep … beeeep …”
The gaps need to be long enough for the reverberation to decay to almost silence.
To cover the range from 20 Hz to 200 Hz, you could have tones at 20, 25, 30, 35, … 195, 200 Hz. Or perhaps something like 20, 21, 23, 26, 30, 35, 41, 47, 54…200 Hz
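If it helps, a test file like that could be generated outside Audacity too. Here is a minimal Python sketch; the tone list, level, and durations are illustrative assumptions, not part of the procedure above:

```python
# Generate a series of sine-tone bursts with silent gaps,
# saved as a 16-bit mono WAV for playback in the test room.
import wave
import numpy as np

SAMPLE_RATE = 44100
TONE_SECONDS = 3.0   # length of each "beeeep"
GAP_SECONDS = 5.0    # silence for the reverberation to decay
FREQUENCIES = range(20, 201, 5)  # 20, 25, ... 200 Hz

chunks = []
for freq in FREQUENCIES:
    t = np.arange(int(TONE_SECONDS * SAMPLE_RATE)) / SAMPLE_RATE
    tone = 0.8 * np.sin(2 * np.pi * freq * t)
    # A short fade-in/out avoids clicks at the tone boundaries.
    fade = int(0.02 * SAMPLE_RATE)
    tone[:fade] *= np.linspace(0, 1, fade)
    tone[-fade:] *= np.linspace(1, 0, fade)
    chunks.append(tone)
    chunks.append(np.zeros(int(GAP_SECONDS * SAMPLE_RATE)))

signal = np.concatenate(chunks)
pcm = (signal * 32767).astype(np.int16)

with wave.open("tone_series.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)          # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(pcm.tobytes())
```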

Play that recording in the room that you wish to test, and record it. Background noise must be kept to an absolute minimum during the test.

Then analyze your recordings. You don’t need to analyze the frequencies because you know what the frequencies are - you just need to analyze the amplitude over time.

I meant that the frequency resolution is 1 Hz, so I would get numerical sound pressure levels over time at, say, 63 Hz, 64 Hz, 65 Hz, etc.

I can’t use the same frequency as both the source and measurement because I need to measure the room’s modal decay (say the room has modes at 60 Hz and 83 Hz) with an anti-node frequency (say the room has an anti-node at 52 Hz).

Basically, all I need from Audacity is to measure sound pressure level over time between 20 - 200 Hz, at a frequency resolution high enough to distinguish between room modes (around 1 Hz would be good), and then to be able to export that data numerically.

So sound pressure level decay over 20 - 200 Hz in 1 Hz increments, and exported numerically. Is that possible?

Here is a link to the project procedure I am performing:

https://www.researchgate.net/publication/299512979_Reverberation_time_measurements_in_non-diffuse_acoustic_field_by_the_modal_reverberation_time

For future reference, it’s useful if you can tell us that from the outset, so that we know the context of your question.

That is not going to work.
The “traditional” way to measure RT60 is to use broadband noise, either as a pulse (popping a balloon) or by suddenly stopping generated noise. The room resonances can be seen by analyzing the spectrum of the reverberation. This “traditional” approach breaks down at low frequencies because of a number of technical difficulties, such as:

• Accurate measurement of amplitude over a short time period is substantially affected by the phase. This makes the positioning of the microphones important, and multiple microphones are necessary for accuracy.
• Spectral analysis is always a trade-off between frequency resolution and time resolution. For high frequency resolution, a large FFT window is required, but the larger the FFT window, the more smearing occurs in the time domain. Achieving the required frequency resolution at low frequencies requires a very large window size, which reduces the time resolution to the point where RT60 cannot be measured.
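To put numbers on that trade-off, a quick back-of-the-envelope calculation (assuming a 44100 Hz sample rate): FFT bin spacing is sample_rate / window_size, and the window spans window_size / sample_rate seconds.

```python
# Concrete numbers for the frequency/time resolution trade-off.
sample_rate = 44100  # Hz

for window_size in (1024, 4096, 44100):
    freq_resolution = sample_rate / window_size   # Hz per FFT bin
    window_duration = window_size / sample_rate   # seconds per window
    print(f"N={window_size}: {freq_resolution:.2f} Hz bins, "
          f"{window_duration * 1000:.1f} ms window")

# To resolve 1 Hz you need N = 44100 samples, i.e. a full
# one-second window - far too coarse in time to track a
# low-frequency decay accurately.
```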

For low frequency analysis, a different approach is required, which is the point of using sine tones. Reverberation does not change frequency (assuming no Doppler effects), so the reverberation of a 50 Hz sine wave will be at 50 Hz. No frequency analysis is necessary because you know the frequency.

Rather than doing one test with broadband noise and then splitting the result into lots of frequency bands, the method described for low frequencies is to perform multiple tests with discrete frequencies (sine tones) and analyze each test separately. This largely resolves the problem of the time/frequency resolution trade-off.
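For the analysis step, here is a sketch of the ‘line of best fit’ calculation in Python. It assumes `samples` holds the exported decay of one tone burst (starting from the moment the tone stops, e.g. read from a Sample Data Export file); the synthetic tone at the end is a stand-in for real data:

```python
# Estimate RT60 from a recorded decay: compute the RMS level in
# short frames, convert to dB, fit a straight line, and read off
# the time it takes the level to fall by 60 dB.
import numpy as np

def rt60_from_decay(samples, sample_rate, frame=0.05):
    samples = np.asarray(samples, dtype=float)
    n = int(frame * sample_rate)                   # samples per RMS frame
    nframes = len(samples) // n
    rms = np.sqrt(np.mean(
        samples[:nframes * n].reshape(nframes, n) ** 2, axis=1))
    level_db = 20 * np.log10(np.maximum(rms, 1e-12))
    t = (np.arange(nframes) + 0.5) * frame         # frame centre times
    slope, _ = np.polyfit(t, level_db, 1)          # dB per second
    return -60.0 / slope                           # seconds to fall 60 dB

# Synthetic example: a 63 Hz tone decaying at 30 dB/s (RT60 = 2 s)
fs = 44100
t = np.arange(3 * fs) / fs
decay = np.sin(2 * np.pi * 63 * t) * 10 ** (-30 * t / 20)
print(round(rt60_from_decay(decay, fs), 2))  # close to 2.0
```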