Users are inventive!
[quote=“Paul L”]
If you append-record one track while overdub plays another track, then the latency correction is done AFTER recording stops. You can see what you just recorded jump left (if the setting is negative).
That’s a little weird
[/quote]
Latency correction is always done after recording stops.
I think what you notice is probably better than the alternative: retaining the entire length of the append-recorded audio and pushing it (and any pre-existing audio before it) backwards.
I do not understand that “better than.” You and I describe almost the same thing, do we not? The mic picks up audio during the latency period, which you see as recording proceeds; then you stop and that portion is cut out. Try it with a very negative setting to make it more pronounced. Of course nobody wants a time shift of what precedes the new portion; that does not happen now and I do not suggest it.
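To make that behaviour concrete, here is a minimal sketch (not the actual Audacity code; the function name and parameters are invented, and latencySec stands for the magnitude of the Preferences value) of a post-Stop correction that drops the leading portion of the newly appended samples, which is what makes the new audio appear to jump left:
[code]
#include <cstddef>
#include <vector>

// Hypothetical sketch, not the real implementation.
std::vector<float> CorrectLatencyOnStop(std::vector<float> newSamples,
                                        double rate, double latencySec)
{
    const size_t cut = static_cast<size_t>(latencySec * rate);
    if (cut >= newSamples.size())
        return {};  // the whole take fell inside the latency period
    // Drop the leading samples captured during the latency period; the
    // remainder now butts up against the pre-existing audio, so nothing
    // before it has to move.
    newSamples.erase(newSamples.begin(), newSamples.begin() + cut);
    return newSamples;
}
[/code]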
[quote=“Paul L”]
wouldn’t it be better to figure out the latency correction up front as the new samples are being laid down?
[/quote]
Do you mean that the correct latency adjustment is always applied automatically on Stop (as now), or that samples are moved to the correct position before being laid down?
I think the moving back on Stop is instructive in case the latency adjustment is not correct, but yes, automatically calculating the correction would be a big win if it were reliable, IMO.
Gale
As the code is now, the correction depends only on the Preferences setting. There seems to be something in the code to adapt the latency period based on information supplied by the PortAudio routines as recording proceeds, but it is commented out.
If that adaptiveness is re-enabled, then we can't see the future when recording starts and make the adjustment exactly; some fine-tuning would remain after Stop. At the risk of complicating the code, maybe we could make the fixed coarse adjustment at the start so that the later adjustment, being only a few milliseconds, is less visible.
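As a sketch of that split (the names are invented, not taken from the code): the fixed Preferences value becomes a coarse shift known before any samples are written, and only the difference from whatever PortAudio later reports would remain to be applied on Stop.
[code]
// Hypothetical sketch of splitting the correction into a coarse part applied
// at the start and a small residual applied on Stop.
struct LatencyPlan
{
    double coarseSec;   // fixed value from Preferences, known before recording
    double residualSec; // measured minus fixed, known only once PortAudio reports it
};

LatencyPlan PlanLatencyCorrection(double prefsLatencySec, double measuredLatencySec)
{
    return { prefsLatencySec, measuredLatencySec - prefsLatencySec };
}
[/code]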
For punch-and-roll recording, the intent is a few seconds of playback, then a transition to append-recording. Capturing those seconds and then cutting them out at Stop would work, but it would just look too strange, so I had to figure out how to discard those input samples as they arrive. If those can be discarded, then a fixed latency's worth of samples could as easily be discarded with them.
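A rough sketch of that discarding (the class and its interface are invented for illustration, not the actual punch-and-roll code): count down a fixed number of input samples, the roll period plus the fixed latency, before anything is appended to the track.
[code]
#include <algorithm>
#include <cstddef>

// Hypothetical sketch, not the actual punch-and-roll code.
class InputDiscarder
{
public:
    // rate: sample rate in Hz; rollSec: the seconds of playback before the
    // transition to recording; latencySec: the fixed latency from Preferences.
    InputDiscarder(double rate, double rollSec, double latencySec)
        : mToDiscard(static_cast<size_t>((rollSec + latencySec) * rate))
    {}

    // Called with each block of input samples as it arrives.  Returns a
    // pointer to the first sample worth keeping and shrinks count to match;
    // returns nullptr while we are still inside the discard window.
    const float *Filter(const float *block, size_t &count)
    {
        const size_t drop = std::min(mToDiscard, count);
        mToDiscard -= drop;
        count -= drop;
        return count ? block + drop : nullptr;
    }

private:
    size_t mToDiscard;
};
[/code]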