UPDATE 5 Nov 2013: New version labels some clicks that escaped detection in the previous version. Also a smoother progress indicator.
This goes under the Analyze menu. It labels brief bright spots and vertical streaks in the spectrogram, which do not necessarily go up to high frequencies, but might also be lower frequency “bumps.”
My goal is an effect that automates cleanup of these clicks with localized frequency filtering, but first there is the detection problem, which I think I have solved well enough to share.
Try this tool on your sound. It will produce labels. Each label will show a frequency or range of frequencies which briefly get “loud” in that region and then quiet. There is also a dB value showing how much the peak amplitude of the noisiest frequency differs between the click region and its surroundings.
For stereo tracks the label also indicates which channel has the click, and if it occurs in both, there may be different dB values.
There are seven settings in the dialog.
Block length in milliseconds determines the maximum label length and the minimum separation of clicks at the same frequency. If the fundamental period of the sound exceeds the block size, there may be many false positives. (Clicks cannot be found in the first block of the sound.)
Steps per block determines the minimum label length and the precision of click boundaries. All label start and end times equal the start time of the selection plus a multiple of the step length.
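As a worked example (the numbers and function names here are my own illustration, not the plug-in’s defaults or code):

```lisp
;; Illustrative only: with a 20 ms block and 4 steps per block,
;; every label boundary falls on a 5 ms grid from the selection start.
(setq block-ms 20.0)
(setq steps-per-block 4)
(setq step-ms (/ block-ms steps-per-block))   ; 5 ms

;; Boundary time (in seconds) of step K, relative to track time:
(defun step-time (selection-start k)
  (+ selection-start (* k (/ step-ms 1000.0))))

(print (step-time 1.0 3))   ; 1.015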
Peak to Background threshold specifies how much the peak amplitude of a frequency component within a label must exceed the peaks in neighboring blocks. This is the “relative” threshold.
Peak amplitude threshold determines the minimum peak amplitude of clicks. It is the “absolute” threshold.
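Put together, the two tests amount to something like the following (the names and the dB conversion are my own sketch, not the plug-in’s code):

```lisp
;; Convert linear amplitude to dB (natural log divided by log 10).
(defun to-db (lin)
  (* 20.0 (/ (log lin) (log 10.0))))

;; A frequency component counts as a click only if it passes
;; both the absolute and the relative test.
(defun click-p (peak background abs-db rel-db)
  (and (> (to-db peak) abs-db)                          ; absolute threshold
       (> (- (to-db peak) (to-db background)) rel-db))) ; relative threshold
```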
The remaining settings, Min test frequency, Max test frequency, and Frequency increment, determine the frequencies that are tested.
Click Finder v2.ny (19.4 KB)
Click Finder v1.ny (17.6 KB)
When detecting short clicks, the detection varies depending on the start position of the selection.
I’ve not checked the code, but my guess is that this is due to a difference between detection in the middle of a block and detection at the block boundaries. Do the blocks overlap, or are they end-to-end?
There’s an intermittent bug that I’ve not pinned down yet, but sometimes it causes Audacity to crash. The bug might not be in your code. I’ll post details if I can find reproducible steps.
I’ve started looking at the code.
I like the use of macros to make the code more readable, though the update-max / update-min macros look to be more complicated than necessary. Wouldn’t this do the job more simply?
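Something along these lines (my reconstruction of the idea only; `update-max` and `update-min` are the names from your code, but the bodies here are just an illustration):

```lisp
;; Simple, but evaluates X twice (once as a value in MAX/MIN
;; and once as the SETF place) and always assigns:
(defmacro update-max (x val)
  `(setf ,x (max ,x ,val)))

(defmacro update-min (x val)
  `(setf ,x (min ,x ,val)))
```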
I am aware of the instability of detection of some clicks, as the starting time of the selection changes. I am not yet sure of the explanation. I suspect that some clicks are not symmetrical, having sharp attack and then decay, and that has something to do with it.
I have not seen crashes.
As for those update macros, I wrote them to avoid repeated evaluation of val, and to evaluate x only once as a value and once as the setf target (I think that’s “place form” in Lisp jargon). But maybe this is better than either version, avoiding both repeated evaluation and unnecessary assignment.
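Something like this (a sketch, not the shipped code): `val` is bound once, `x` is read once, and the assignment happens only when the maximum actually changes.

```lisp
;; Note: this captures the symbol V, which is fine for
;; simple arguments but not fully hygienic.
(defmacro update-max (x val)
  `(let ((v ,val))
     (if (> v ,x)
         (setf ,x v))))
```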
There are a lot of optimisations in the LISP code, so x and val do not need to be evaluated each time they occur. Once x has been evaluated for “if x”, the LISP code “knows” the value of x and only has to update the value for “setf x”. It does not need to re-evaluate x for (max x y) because it has already evaluated x.
I have found my silly mistake in the implementation of the absolute threshold test. I also found that my changes to make the progress indicator smoother were not quite right. Expect a version 2 file with minor updates. If you are impatient for the fix, replace the appropriate cond branch with this.
Now you will still find the “instability” that a click label 5 ms long, say, with default settings, might jump to 10 ms with the right one-sample change in the left selection boundary. That is to be expected. What should not happen, as in the case I was examining, is that a 10 ms label disappears entirely with a one-sample change of selection.
This version does not evaluate val more than once, which is good: there will be no surprises if it has side effects. But it does expand the expression twice. Does that have performance implications? I don’t understand how the Lisp runtime works, but it seems more interpreted than compiled. Stack traces show me source code text as “arguments” of do and let, and the code can even execute with certain syntax errors, provided the erroneous code is never reached. That’s not like the mental model from my older C and C++ days.
(defun unused (x)
  (setf x (/ 0))                      ; very bad
  (setf y (non-existant unbound)))    ; undefined function, unbound symbol

(print "No errors :-)")
It means that testing the code needs to be very thorough, especially if functions are only used when certain conditions are met.
On the other hand it avoids a lot of “compile time” errors, because it is not compiled…
;; Potentially bad function
(defun only-if (x)
  (/ (float x)))

;; User input
;(setq x nil)
;(setq x 0)
;(setq x "string")
(setq x 3)

;; Error checking
(if (and (boundp 'x) x (numberp x) (/= x 0))
    (print (only-if x))
    (print "Input error nicely handled."))
Low level functions are written in C and so have been compiled. Debugging low level functions can be a pain because the C code is mostly computer generated from algorithms by a “translator” program. If you’re interested, there’s more about this here: http://www.cs.cmu.edu/~rbd/doc/nyquist/part16.html#188
Not as far as I can tell.
I don’t fully understand it either, but yes the Lisp code is interpreted, though with lazy evaluation, which can make a huge difference to overall performance…
Thanks for the help, but please let’s keep this thread a discussion of the tool, not the Lisp language. I don’t mind if you delete some of the previous off topic posts once my fixes are successful.
Here’s a question: what do you call it when a frequency component makes a sharp drop, and then a rise, in amplitude within milliseconds? An “anti-click”? I have made some of them when my experimental click-fixing code overdoes it, and it sounds weird too, but does this sort of thing ever happen “in nature”? I doubt it, but I suppose I could just as easily detect anti-clicks.
You can get “drop-outs” occurring in nature. Typically a drop-out will suddenly and briefly drop down to silence or a low level. On old tape-based recorders, drop-outs can be caused by the loss of a small part of the magnetic surface of the tape. On digital recorders, drop-outs can be caused by buffer under-run or over-run. A digital drop-out will typically have a brief (“instantaneous”) burst at each end, due to the discontinuity in the waveform. There can be other causes for drop-outs, including a simple “bad contact” in the analogue recording equipment.
I don’t think there is any general way to automatically fix drop-outs as the amount of “missing data” may be different from the length of the “gap” and there is no way to calculate how much data is missing. However it could be very useful to be able to detect drop-outs.
So drop-outs are not natural but unwanted parts of the sound that was actually recorded; rather, they are failings of the media. Whereas the clicks I try to filter out are distracting things my misbehaving mouth really did.
I don’t look at the wiki enough. Where do I learn to edit the wiki? I saw a wiki page for the few click repair tools for Audacity, and it should be updated if this is successful.
I think this is working well, and my automatic click fixer based on eq-band which I have not yet shared also works very well.
All the major pieces are in place, but there are surely still opportunities for tuning it for speed and quality.
For instance I think my default settings for the number of test frequencies might be excessive for good results. Maybe frequencies should step logarithmically so I gather more useful data for filters built out of eq-band, and maybe there is a more intelligent way I can test each frequency.
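For instance, log-spaced test frequencies could be generated like this (a sketch; the function name and the endpoints are made up for illustration):

```lisp
;; N test frequencies spaced by a constant ratio between FMIN and FMAX.
(defun log-spaced (fmin fmax n)
  (let ((ratio (expt (/ fmax fmin) (/ 1.0 (- n 1))))
        (freqs nil))
    (dotimes (i n)
      (setq freqs (cons (* fmin (expt ratio i)) freqs)))
    (reverse freqs)))

(print (log-spaced 100.0 6400.0 7))   ; 100, 200, 400, ... 6400 Hz
```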
I think your terminology is wrong. That is not an example of “lazy evaluation”; it is “short-circuit” evaluation, and C does it too. “If” and “cond” and “and” and “or” are special forms or macros, not functions, and they are “not strict.”
I understand that Common Lisp and XLisp do “strict evaluation” for true function calls, meaning you can be sure function arguments are evaluated before a function is called, and I believe you can also depend on their evaluation left to right. This is important when there are side effects. This is NOT like a truly lazy and purely functional language such as Haskell, where the absence of side effects makes order of evaluation irrelevant to the meaning of the program, but laziness does mean certain expressions will terminate that would not with strict evaluation. You can use nonterminating, potentially infinite list values in Haskell with abandon.
All that appears “lazy” about XLisp is that such things as the correctness of function calls (for number of arguments), the definedness of variable names used in expressions, and even the correctness of the syntax of special forms like do are not checked until there is an attempt to evaluate the forms.
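A small example of the difference: and stops at the first NIL, so the division is never attempted, but that is short-circuiting by a special form, not laziness in function calls.

```lisp
(setq x 0)
;; (/ 10 x) would divide by zero, but AND never reaches it:
(print (and (numberp x) (/= x 0) (/ 10 x)))   ; prints NIL
```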
For my purposes, version 2 of the add-in is too aggressive, and so far I can’t get it to handle my case. (It’s possible I’m using it wrong and/or setting it wrong.)
I’m looking for small islands of sound, surrounded by relative calm. I’d like to figure out how to label just the little spikes (pops/clicks) that are surrounded by what I’m calling silence, which is my noise floor before I do any EQ or compression.
As I mentioned earlier, I’ll test anything you throw out there. Or retest with different settings and post results if that is helpful. I’m hoping my samples provide a better context.