I am a newbie to this forum, and I use Audacity to edit recorded spoken presentations.
Frequently a speaker will “umm” or “err” which distracts from the recording.
Currently I manually delete these unwanted sounds, but that can be extremely time-consuming.
So it would be very helpful if there was a function to analyse a selected sound and then search for repeats, maybe with the option of manual confirmation before deleting all of them.
Does anyone know if this is possible?
with the option of manual confirmation before deleting all of them.
It’s not a machine doing the umming, so every umm is going to be slightly different. That rules out simple pattern-matching software.
But I thought of another problem. How would you know an automated deletion was successful before telling the software to delete the rest? You would have to pause the program, play the result, and then resume. And even then, the software wouldn’t do more than automatically delete the next one, not all of them.
There is an odd application of Noise Reduction that would make an interesting experiment. Sample an Ummmm for the Noise Reduction Profile and then apply that over the whole lecture. Problems: it will only hit exact or nearly exact matches (so far so good), but it wouldn’t reduce them to zero and it wouldn’t close up the hole left in the spoken sentence.
It would turn

“Let me look here, Ummmm, Yes here it is.”

into

“Let me look here, Ummmm, Yes here it is.”

with the Ummmm merely quieter, not gone.
You probably don’t want that, either.
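Out of curiosity, the “reduce but never remove” behaviour is easy to demonstrate with a few lines of spectral subtraction. This is a rough sketch of the general technique, not Audacity’s actual Noise Reduction code — the function name, frame size, and the simple magnitude subtraction are all made up for illustration:

```python
import numpy as np

def spectral_subtract(signal, noise_profile, frame=512):
    """Crude spectral subtraction: average the magnitude spectrum of
    noise_profile (the sampled 'umm') and subtract it from every frame
    of signal. Residual magnitudes are floored at zero, so the sound is
    only attenuated, never cut out, and the timing is untouched."""
    # Average magnitude spectrum of the profile, frame by frame
    frames = [np.abs(np.fft.rfft(noise_profile[i:i + frame]))
              for i in range(0, len(noise_profile) - frame + 1, frame)]
    profile = np.mean(frames, axis=0)
    out = np.zeros_like(signal)
    for i in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[i:i + frame])
        mag = np.maximum(np.abs(spec) - profile, 0.0)  # floor at zero
        phase = np.angle(spec)
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * phase), frame)
    return out
```

Notice it quiets matching sound but leaves the gap in place — exactly the two problems above.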
You underestimate all the work your head is doing in these edits.
Thanks for your thoughts, kozikowsk.
How about manually selecting the first Umm, saving a frequency spectrum analysis, and then letting a “find” run along the audio to locate the next similar spectrum, selecting the width of that piece of data ready for a manual click on “delete”? Even better, do an automatic delete and tag the window at that point in the data to permit a manual check on completion?
then allowing a “find” to go along the audio to locate the next similar spectrum
Let’s see if you can do it. Flip the timeline to Spectrogram view using the drop-down on the left. Now find one of the Umms and look at its spectrum. Then whip down the show looking for the same colors in the same order: Shift-and-Drag or Shift-and-Scroll-Wheel.
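For anyone who wants to experiment outside Audacity, the “find the next similar spectrum” idea can be sketched with a sliding FFT window and cosine similarity. Everything here (the function names, the hop size, the 0.95 threshold) is a made-up illustration, and on real speech it runs into exactly the objection above: no two umms have quite the same spectrum, so the threshold becomes a guessing game.

```python
import numpy as np

def spectrum(x):
    # Magnitude spectrum of a window, normalized to unit length
    s = np.abs(np.fft.rfft(x))
    n = np.linalg.norm(s)
    return s / n if n > 0 else s

def find_similar(signal, template, hop, threshold=0.95):
    """Slide a window the size of `template` along `signal` and return
    the sample offsets whose magnitude spectrum has cosine similarity
    to the template's above `threshold` -- candidate Umms to review."""
    ref = spectrum(template)
    w = len(template)
    hits = []
    for start in range(0, len(signal) - w + 1, hop):
        s = spectrum(signal[start:start + w])
        if float(np.dot(ref, s)) >= threshold:
            hits.append(start)
    return hits
```

This only proposes candidates; a human would still have to audition each one before deleting, which is the manual-confirmation step asked for in the first post.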
tag the window at that point in the data
I don’t think you can get Macros to set a Label, and Labels are the only way Audacity has to point to a place or time in the show.
There’s not enough of a key there to hook software onto. If the Umms were louder, we’d have this licked by lunchtime. Finding a “word” in the middle of other words is a thing your ears are really good at and software is not. Umms are only one of the problems, right? You need to do multiple passes.
As a side illustration, audiobook testing goes through a robot which picks out too loud, too soft, too noisy etc, but then a human takes over to test for actual sound quality. Software is terrible at that.
We’ll see if anybody else posts. This is a Forum. Users helping each other.