I'm abusing Audacity to measure the shutter speeds of analogue cameras with a phantom-powered photodiode connected to an audio jack. This works great with Audacity 2.x down to speeds of 0.002 seconds (1/500th). For faster shutter speeds I still have to use Version 1.x, because the "Selection timeline" shows me exact values down to microseconds (like shown here: [Link flagged by Google as dangerous]). So, could you please add a microseconds option to the dropdown menu of future Audacity versions? (And yes: I'm too lazy to calculate them from the values given in the upper Timeline.)
I’ve removed the link because your website is flagged by Google as malware.
Images may be uploaded directly to the forum using the “upload attachment” option below the message composing box.
I presume that you realise that Audacity 1.x can’t actually measure microseconds?
The smallest unit of time in digital audio is 1 sample period. At the default sample rate of 44100 Hz, that is about 23 microseconds.
If you set the sample rate to, say, 100,000 Hz, then 1 sample period = 10 microseconds. The time / duration in samples can be read from the Selection Toolbar (http://manual.audacityteam.org/o/man/selection_toolbar.html). Calculating the time in microseconds (to the nearest 10 microseconds) is then very simple arithmetic.
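If you want to script that conversion rather than do it in your head, a minimal sketch could look like this (the sample rates and sample counts below are only example values):

```python
# Convert a duration in samples to microseconds for a given sample rate.
# Example values only - substitute your own project rate and selection length.
def samples_to_microseconds(samples, sample_rate_hz):
    return samples * 1_000_000 / sample_rate_hz

print(samples_to_microseconds(1, 44_100))    # ~22.7 us: one sample at 44100 Hz
print(samples_to_microseconds(1, 100_000))   # exactly 10 us: one sample at 100000 Hz
print(samples_to_microseconds(96, 96_000))   # 1000 us = 1 ms
```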
“I presume that you realise that Audacity 1.x can’t actually measure microseconds?”
Wrong. It did convert the number of samples at the given sample rate to microseconds, and not just to rounded milliseconds like 2.x does (at least in the Selection Toolbar) (see attached image).
Quote: “Calculating the time in microseconds (to the nearest 10 microseconds) is then very simple arithmetic.”
And that is exactly what Audacity 1.0 did automatically, and exactly what I was asking for. Nothing more.
Doing the (in my case) “1:96000 * s” calculation every single time is not a matter of “simple arithmetic” but much more of an annoyance. Nevertheless, thank you again for your efforts.
Joe
(In addition / off topic: the linked homepage is not mine. I'm sorry if the image I linked to could contain malware. But trusting Google, who withholds essential security/control settings from their Android systems, when it comes to malware, and, even worse, letting them censor "your" internet? I didn't know that Google is the new Internet police now too.)
At 96000 Hz sample rate, 1 sample period is 10.41666666… microseconds.
Select 1 sample and that’s just over 10 microseconds. Select 2 samples and that’s about 21 microseconds. You can’t select 11, 12, 13 … 19, 20 microseconds of audio, so it’s not what I would call “microsecond accuracy”.
Any reason not to use a sample rate of 100,000 Hz and make life easier for yourself? Just set it as the Project Rate (lower left corner of the main Audacity window) before you start recording, or, if the audio is pre-recorded, use “Tracks > Resample” to convert it to 100,000 Hz.
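If the recording is already on disk and you would rather script the conversion, something along these lines gives a band-limited resample to 100,000 Hz outside Audacity (the file name, the 16-bit PCM assumption and the original rate are my own assumptions; “Tracks > Resample” inside Audacity is the simpler route):

```python
# Sketch: band-limited resample of a pre-recorded file to an "easy" 100,000 Hz rate.
import numpy as np
from math import gcd
from scipy.io import wavfile
from scipy.signal import resample_poly

old_rate, data = wavfile.read("shutter.wav")     # e.g. a 96000 Hz, 16-bit recording
new_rate = 100_000

g = gcd(new_rate, old_rate)                      # reduce the ratio to small integers
resampled = resample_poly(data.astype(np.float64), new_rate // g, old_rate // g)
resampled = np.clip(resampled, -32768, 32767).astype(np.int16)   # back to 16-bit PCM
wavfile.write("shutter_100k.wav", new_rate, resampled)

print(f"1 sample period is now {1e6 / new_rate:.0f} microseconds")
```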
As Joe Jack says, you can select exactly 11, 12, 13 … microseconds in 1.x. Set a 100000 Hz Default Sample Rate (just so you can navigate microseconds more easily), generate a tone, zoom in to maximum, press HOME, then drag with your mouse or nudge the cursor using RIGHT arrow and look at the Status Bar.
It’s no use for exporting less than a sample of audio, but you can measure less than a sample’s worth of microseconds and you can read the length of a selection in microseconds without calculating.
The status bar is changing, and so is the selection on the Time bar, but the audio selection is not moving until you jump to the next sample. The information is band-limited to half the sample rate and quantized to sample periods. With a sampling rate of 96000 Hz it cannot be determined if an event occurred at 3.000001 seconds or 3.000002 seconds, just that it occurred in that 10.4 microsecond period (assuming that the hardware is capable of that accuracy). I like the varied scientific and engineering applications for Audacity, but primarily Audacity is designed for audio.
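To put a number on that, here is a tiny sketch using the two event times mentioned above (everything else is illustrative):

```python
# At sample rate fs, an event time can only be pinned down to a window of one
# sample period (1/fs). Both example times below land in the same window.
fs = 96_000                       # Hz (illustrative)

for t in (3.000001, 3.000002):    # two event times, 1 microsecond apart
    sample_index = int(t * fs)    # both fall within sample 288000
    window_start = sample_index / fs
    print(f"{t:.6f} s -> sample {sample_index}, "
          f"window {window_start:.7f}..{window_start + 1 / fs:.7f} s")
```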
Can you write more words about “band-limited” in this context?
Are there not always going to be compromises if you show finer divisions than a sample?
If we were concerned not to show finer divisions than a sample, then a format including microseconds could have Selection Toolbar digits that only moved in increments of multiple microseconds as appropriate to the sample rate. But I think you would still get people complaining about a missing “feature”.
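A sketch of what that could look like, with 44100 Hz chosen purely as an example:

```python
# A microseconds read-out that only moves in whole sample periods.
fs = 44_100
period_us = 1e6 / fs                    # ~22.68 microseconds per sample

def displayed_us(raw_us):
    # Snap the displayed value to the nearest sample boundary.
    return round(round(raw_us / period_us) * period_us)

for us in (10, 23, 40, 100):
    print(us, "->", displayed_us(us))   # 0, 23, 45, 91
```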
I still think that misses the point of the requests.
There seems to have been no discussion about why microseconds was removed.
Suppose one does scientific experiments with “audio” at 1,000,000 Hz so that each sample is one microsecond?
Sure.
Here’s a signal that rises from silence to -2 dB in 1 microsecond:
This is the same signal that has been band-limited to 48000 Hz:
and this is the same signal that has been quantized to sample values at a sample rate of 96000 Hz:
Looking at the Timeline (not shown) I can see that the signal rises at “about” 0.085305 seconds (I’d judge it to be somewhere between 0.08530 and 0.08531 seconds). Even with a microsecond scale I’d still not be able to judge the rise time any more precisely than that.
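For anyone wanting to reproduce something like those screenshots, here is a rough sketch; all parameter values are my own assumptions, not the exact settings used for the images:

```python
# A step that rises to -2 dB within 1 microsecond, then the same signal
# band-limited by resampling to 96000 Hz.
import numpy as np
from scipy.signal import resample_poly

hi_rate = 1_000_000                         # 1 MHz: one sample per microsecond
t_rise = 0.085305                           # edge position, as in the post above
n = round(0.1 * hi_rate)                    # 100 ms of signal
signal = np.zeros(n)
signal[round(t_rise * hi_rate):] = 10 ** (-2 / 20)   # step up to -2 dBFS

# Band-limited resample: 96000 / 1000000 reduces to 12 / 125.
lo = resample_poly(signal, 12, 125)

edge = round(t_rise * 96_000)               # nearest sample index at 96 kHz
# The edge is now smeared and rings over several sample periods, so the rise
# cannot be located any more precisely than about one 96 kHz sample period.
print(np.round(lo[edge - 4:edge + 5], 3))
```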
As I was suggesting, converting from samples to time in microseconds becomes trivial mental arithmetic.
PCM digital audio is always band-limited to half the sample rate. That’s how Harry made his name.
The human hearing range is (generously) quoted as 20 to 20,000 Hz. Harry Nyquist, with contributions from Claude Shannon, proved that frequencies can be precisely defined up to half the sample rate. Beyond half the sample rate it is “anybody’s guess” (aka “undefined”). Scientifically (which I think is what we are talking about) it makes no sense to “measure” to an accuracy greater than a single sample period because quantizing defines the smallest meaningful unit.
In real life, measurements will always be somewhat less accurate than defined by theory.
That’s perfectly true, but why make life more difficult? If you want measurements in microseconds or tens of microseconds, you can make it simple by using an “easy” sample rate.
Why worry about common audio sample rates if we’re not working with audio?
I was unsure whether Audacity would work with a sample rate of 1 million samples per second. It does.
Does the hardware being used have a bandwidth up to 500,000 Hz?
For frequencies above 20 kHz we are not talking about “audio”.
If you only consider the quantizing part, then you might expect the signal in the first image to be quantized like this:
but that is not what happens (because of the band-limiting).
but it’s still pretty meaningless to give measurement units that are several orders of magnitude more “precise” than the equipment. As an example, there would be no point in an electronic tuner displaying pitch to hundredths of a cent, or in a wooden ruler having inches marked as 1.000000, 2.000000, 3.000000 …
I think there is a reasonable case for adding one more decimal place, but where do we draw the line? What if someone requests nanoseconds or picoseconds?
Why not user-defined formats, like those in Excel?
You’d enter a string and the time would be displayed accordingly. There would have to be some predefined placeholder characters, of course.
For example:
H:M:SS.CSCS (…s)
where H=hours, M=minutes, S=seconds, CS=centi-seconds and s=samples
A single placeholder character means that the value is only written when there’s a non-zero value at this place or in front of it.
00:05:33 → 5:33
01:00:33 → 1:00:33
Two placeholder characters would always be padded with leading zeros.
The three dots in the parentheses mean that the samples count only what comes after the seconds, i.e. they skip hours, minutes and seconds.
This could translate 30 min, 12 sec and 22050 samples as
30:12.50 (22050)
We could of course stick to a simpler definition where each placeholder just has a fixed-width field, e.g.
“H:M:S” gives 00:00:00
Only the leading value is different because it can exceed its normal range, i.e. hours would go past 24 when there’s no “D” in front.
However, I would like to have different formats at the same time, for instance PAL and NTSC frames (e.g. “NF ↔ PF”).
Text or separators could be quoted explicitly or have an indicator in front (such as @).
I’ve not thought much about the implementation, but user-defined formats could include bars, beats and sub-beat divisions, or 4, 5, 6, 7… decimal places of a selected unit. Anyone want to generate a tone with a 0.008333333 hour duration?
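Just to get a feel for what such a scheme might involve, here is a minimal sketch of a formatter covering a tiny subset of the proposal; the placeholder semantics are my own reading of it, not an agreed specification:

```python
# Minimal sketch: format seconds as [H:]M:SS.CS with an optional trailing
# "(samples)" field that counts only the part after the whole seconds.
def format_time(seconds, sample_rate, show_samples=False):
    total_samples = round(seconds * sample_rate)
    whole = int(seconds)
    h, rem = divmod(whole, 3600)
    m, s = divmod(rem, 60)
    cs = round((seconds - whole) * 100)                    # centiseconds
    frac_samples = total_samples - whole * sample_rate     # samples after the seconds

    # Single "H" placeholder: only shown when non-zero (as proposed above).
    text = f"{h}:{m:02d}:{s:02d}.{cs:02d}" if h else f"{m}:{s:02d}.{cs:02d}"
    if show_samples:
        text += f" ({frac_samples})"
    return text

# 30 min, 12 s and 22050 samples at 44100 Hz -> "30:12.50 (22050)"
print(format_time(30 * 60 + 12 + 22050 / 44100, 44100, show_samples=True))
print(format_time(5 * 60 + 33, 44100))    # "5:33.00"
print(format_time(3600 + 33, 44100))      # "1:00:33.00"
```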