Level triggered recording with correct timeline offset

Hello. I am using Audacity 2.3.2 on win 10/1809.
I did not find a solution to this problem on the forum, nor a technical name (or keyword) I can google.

Short description, to avoid the XY problem: I have to record short (< 100 ms) bursts of signal at a very high sample rate (192 kHz) that are very infrequent (one every 30-90 minutes, at random). I need to study the timing between those bursts, and I need to be able to record 5-10 days of them without filling up my drive. Recording happens on a rather slow, headless, remotely administered machine that's quite inaccessible.

Expectation: Sound Activated Recording records the bursts, spaces them correctly on the timeline (30-90 minutes apart), and does not record/write anything while the level is below the set threshold. It must not record 30-90 minutes of silence, only the ~100 ms bursts that trigger the recording, spaced 30-90 minutes apart with nothing in between them.

Behavior: Sound Activated Recording does not space the bursts apart on the timeline; they are always contiguous.

Workaround: continuous recording, which I have to monitor every few hours. This is impractical: I have to keep re-downloading large files while recording continues in the background, and the download sometimes makes the background recording glitch.

Sound Activated Recording mostly works, though it loses part of the header of each signal; that is not a major problem. The problem is that the recorded signals come immediately after one another. Audacity knows when the last level-triggered recording began, so a checkbox that offsets the current recording on the timeline by the time elapsed since the last one seems like the obvious feature… but I couldn't find it anywhere. Even creating a new track per trigger, named with the current timestamp, would work for me.

Is this a feature I can script somehow, or a plugin I can download? Is there a name for it? I've been searching for "time-corrected level-activated recording" and similar keywords, but nothing comes up.

“192k” is mostly ultrasound.
If your mic*/electronics are designed to record audible sound (which is usually the case),
then any ultrasonic signals you capture are just artefacts or interference, not recordings of sounds.

Recording at 192 kHz would be a stress test for many computers; an ultrasonic glitch every few minutes would not be unusual.

If this is the case, the (spurious) signals could occur even if the mic is not attached,
& the signals will probably be specific to the computer & software you're using.

[ * “Very fancy” microphones are required to record ultrasound … https://youtu.be/wVQVw478Rh4?t=16 ]

My question is about a feature of Audacity; I provided context only to eliminate an XY problem (to avoid questions like: "why don't you record at a lower sample rate and record continuously? what you are doing is stupid").

I can guarantee the signals I am recording are not artifacts, because the device that generates them, a device which I am trying to understand, correctly encodes data into the signal in a way I can partially decode. And I designed the electronics myself that convert the signal into something that can be captured over a sound card at a high sample rate, so I'm pretty sure I'm not capturing noise :slight_smile: . The glitches I mentioned sound more like write issues, which are pretty understandable given the I/O load during simultaneous writing and reading (recording and downloading) - as if the data is skipping ahead.

You can see for yourself: https://imgur.com/a/LXhDvwl. The top one is a zoomed message, the bottom one is what happens between messages.

But yeah, if they were actual audio signals, you'd need fancy mics, and 192 kHz would barely cut it for 96 kHz signals.

But if you still don't believe that I'm not capturing glitches, I'll rephrase the question: I am snapping my fingers at random intervals of 30-90 seconds; how can I capture the snaps, without capturing the silence in between, while correctly spacing them on the timeline?

Wow, that’s a tricky question.
Am I right in thinking that you want to end up with something like this:

If so, I don’t think that you can do that directly with Audacity. The only way that I can think of is to record the whole thing (with silent samples between the ultrasounds), and then use “Detach at Silences” (https://manual.audacityteam.org/man/edit_menu_clip_boundaries.html) to remove the silent samples after the recording is finished.
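The "Detach at Silences" approach can also be done offline outside Audacity. The sketch below is one way it might look in Python (all function names and thresholds here are illustrative, not anything Audacity provides): it scans a recorded mono signal for regions above a level threshold and reports each burst together with its offset from the start of the recording, which preserves the timing information.

```python
import numpy as np

def find_bursts(samples, rate, threshold=0.05, min_gap_s=1.0):
    """Return (start_time_s, segment) pairs for regions louder than
    `threshold`, merging regions separated by less than `min_gap_s`
    of silence. `samples` is a 1-D mono array, `rate` is in Hz."""
    loud = np.abs(samples) > threshold
    if not loud.any():
        return []
    # Pad with False so every burst produces exactly one rising
    # and one falling edge in the diff below.
    padded = np.concatenate(([False], loud, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(np.int8)))
    starts, ends = edges[::2], edges[1::2]   # end index is exclusive
    # Merge bursts separated by only a short stretch of silence.
    merged = [[starts[0], ends[0]]]
    for s, e in zip(starts[1:], ends[1:]):
        if (s - merged[-1][1]) / rate < min_gap_s:
            merged[-1][1] = e
        else:
            merged.append([s, e])
    return [(s / rate, samples[s:e]) for s, e in merged]
```

This still requires the full recording on disk first, so it only helps with the post-processing half of the problem, not the storage half.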

I’m guessing that is not an option for you because of the amount of data that needs to be temporarily written to disk. Is that the case?

Yep, that is exactly the case. I can, of course, do the post-processing manually, but that is often not practical - whether with Detach at Silences or manually with Ctrl+Alt+K.

I assume this style of recording is not something very common?

Thank you for taking the time to make a representation of what I want :smiley: So I'm at a dead end? I think an upgrade is the way to go, barring a Python script or something similar that can record the bursts and write them out as WAV files with a timestamp as the name.
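For what it's worth, the "Python script that spits out timestamped WAVs" idea is feasible as a standalone level-triggered recorder. Below is a hedged sketch: the trigger logic is a pure class so it can be tested on synthetic chunks, while the commented-out device loop assumes the third-party `sounddevice` and `soundfile` libraries (pip-installable; untested here). All names and thresholds are illustrative.

```python
import numpy as np
from datetime import datetime, timezone

class BurstTrigger:
    """Accumulates incoming audio chunks once the level crosses a
    threshold; returns the completed burst (with its wall-clock start
    time) after the level has stayed below threshold for `hold_s`."""
    def __init__(self, rate, threshold=0.05, hold_s=0.05):
        self.rate = rate
        self.threshold = threshold
        self.hold = int(hold_s * rate)  # samples of silence before we close a burst
        self.buf = []
        self.quiet = 0
        self.started_at = None

    def feed(self, chunk, now=None):
        """Feed one mono chunk; returns (start_time, samples) when a
        burst completes, else None."""
        loud = np.abs(chunk).max() > self.threshold
        if self.started_at is None:
            if loud:  # burst begins: remember the wall-clock time
                self.started_at = now or datetime.now(timezone.utc)
                self.buf = [chunk]
                self.quiet = 0
            return None
        self.buf.append(chunk)
        self.quiet = 0 if loud else self.quiet + len(chunk)
        if self.quiet >= self.hold:  # enough trailing silence: burst done
            burst = np.concatenate(self.buf)
            t, self.started_at, self.buf, self.quiet = self.started_at, None, [], 0
            return t, burst
        return None

# Rough shape of the capture loop (assumption, not tested on hardware):
# import sounddevice as sd, soundfile as sf
# trig = BurstTrigger(rate=192000)
# with sd.InputStream(samplerate=192000, channels=1) as stream:
#     while True:
#         chunk, _ = stream.read(4096)
#         done = trig.feed(chunk[:, 0])
#         if done:
#             t, burst = done
#             sf.write(t.strftime("burst_%Y%m%d_%H%M%S_%f.wav"), burst, 192000)
```

Since the timestamp is saved in the filename, the 30-90 minute spacing is preserved exactly, and only the bursts themselves ever touch the disk.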

[ * “Very fancy” microphones are required to record ultrasound … https://youtu.be/wVQVw478Rh4?t=16 ]

I had to watch the whole thing. Beyond the behavior they are studying on mice, which is cute, and beyond their very fancy gear, they had to make all those mice cages acoustically isolated. Whoa.

Not common for “normal” audio recording. It is common for surveillance (see “audio data logger”).

Either that, or save each ultrasound with a unique file name and add the name and time stamp to a log file (a text file).
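The "unique file name plus a log entry" scheme is tiny to implement. A minimal sketch, with illustrative helper names (this is not an Audacity feature):

```python
from datetime import datetime, timezone

def burst_name(t):
    """Unique, sortable file name derived from the burst's UTC start time
    (microsecond resolution makes collisions effectively impossible)."""
    return t.strftime("burst_%Y%m%d_%H%M%S_%f.wav")

def log_line(t, name):
    """One tab-separated entry for a plain-text log file."""
    return f"{t.isoformat()}\t{name}"

# Appending to the log as each burst is saved:
# with open("bursts.log", "a") as log:
#     print(log_line(t, burst_name(t)), file=log)
```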

If you were running it on a headless Linux server, you could make use of the built-in event logging system, which would make the logging part easy and very efficient. I’d guess there is something similar in Windows, but I don’t know how friendly it is.

They were using Audacity though …
[attached image: mice 50kHz-100kHz in Audacity (circa 2015).png]
but their microphones & pre-amps would have to be compatible with ultrasound (50-100 kHz), which is not usually the case.

Ultrasound would be easier to block than humanly audible sound, as the attenuation of sound through air increases with frequency (the reason foghorns are bassy).