I’ll get straight to the point. Latency is, by definition, a nuisance, however much we want to sell it as a necessary lesser evil. And it is far less inevitable than we believe.
The approach to solving this issue, at both the amateur and the professional level, is the same: try to reduce it below the threshold of our perception, which sits at around 11 ms.
And of course latencies below that threshold are acceptable from a practical point of view.
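For a sense of scale, the one-way latency contributed by the interface buffer alone can be estimated from buffer size and sample rate. This is a back-of-envelope sketch; real interfaces add converter and driver overhead on top, and the figures below are illustrative, not measured:

```python
# Rough one-way latency from buffer size alone, a common back-of-envelope
# estimate. Real-world figures are higher once converter and driver
# overhead are added.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One buffer's worth of audio, expressed in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

for buf in (64, 128, 256, 512):
    print(f"{buf:4d} samples @ 48 kHz -> {buffer_latency_ms(buf, 48000):.2f} ms")
```

At 48 kHz, a 512-sample buffer already sits at about 10.7 ms, right at the edge of the perception threshold mentioned above, and that is before any other overhead is counted.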
But that is not the same as one audio stream being synchronous with another when it is recorded; it only means the offset is barely perceptible, or not perceptible at all. The usual way of dealing with the issue, then, is to reduce latency as much as possible in pursuit of an ideal 0, which is of course unattainable on that “one-way” road.
The information must travel fast enough for the whole process to go unnoticed. It is the wrong approach: it will never yield a 100% accurate result, however much it may partially bridge the gap.
Moreover, such an approach requires technical resources, and therefore economic ones.
We need fast hardware, and software that manages it quickly enough to keep the process below the aforementioned threshold.
In the case of layering, which is what concerns me, it is also extremely annoying, because latency, far from being a specific and known magnitude, is a value that fluctuates constantly at the mercy of a thousand variables affecting the processor’s workload.
You do not get the same latency under different conditions: recording one track over another is not the same as recording one track over five more. Sadly, in many cases you do not even get the same latency when performing the same operation one moment and the next, not to mention the processes outside the DAW that may be running in each situation.
The result of this approach to the problem, “do the whole process as quickly as possible”, is that it becomes impossible to obtain a precise value to correct for. It will be more or less accurate, significant or not, but it will never be exact. At least not from that approach.
Resolving the issue therefore requires, in the latency-correction options offered by the various programs, a dual-action mechanism. What exists today is a start, but it is not of much use if the magnitude we have to correct with a fixed value fluctuates constantly.
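For what it’s worth, a fixed latency is exactly the kind of value that could be measured once with a loopback test and then corrected. A minimal sketch of the measurement idea, with the delay simulated rather than coming from real hardware:

```python
# Sketch: recovering a fixed latency from a loopback test by cross-correlation.
# The "recorded" signal is simulated here; on real hardware you would play
# the click through an output wired back into an input and record it.

def cross_correlation_lag(reference, recorded):
    """Return the lag (in samples) at which `recorded` best matches `reference`."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recorded) - len(reference) + 1):
        score = sum(r * x for r, x in zip(reference, recorded[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

reference = [0.0, 1.0, 0.5, -0.3, 0.0]   # a short test click
true_delay = 857                          # pretend round-trip latency, in samples
recorded = [0.0] * true_delay + reference + [0.0] * 100

print(cross_correlation_lag(reference, recorded))  # -> 857
```

The point is that this only yields a usable correction value if the latency stays put between the measurement and the recording, which is exactly what the proposal below asks of the interface.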
The proposal, which at this point should already be obvious, and which would mean 0 latency both for the most sophisticated studio and for the most ruinous one (for the purposes of recording while monitoring only the input, of course), is to force the audio interface to work at a fixed latency, determined by the hardware’s capabilities.
That is, instead of taking the one-way road on which, by definition, we always arrive more or less late, use latency correction so that each recording ends up at exactly the point in time where it was played over the other.
That, plus the latency correction that various drivers and DAWs already incorporate, makes it possible to truly fix the latency the system works with: always fixed, at an amount somewhat larger than what the hardware can guarantee, so that a second, automated action can return the audio stream to its correct position.
The issue, then, is not to arrive “as soon as possible”, ASAP as some of us like to say. The point is to arrive on time. Whether it takes a little more or a little less (which would be the actual system latency) becomes irrelevant for the recording, since it will be corrected as soon as it is completed, leaving the way paved for a more comfortable and reliable mix.
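That second automated action could be as simple as this sketch, assuming the interface really does hold the agreed fixed latency. The figure of 1024 samples is made up for illustration:

```python
# Sketch of the proposed "dual action": the interface is forced to a fixed,
# known latency, so after recording the take is simply shifted back by that
# amount. The latency figure is illustrative, not taken from real hardware.

FIXED_LATENCY_SAMPLES = 1024   # deliberately larger than the hardware needs

def compensate(recorded_take, latency=FIXED_LATENCY_SAMPLES):
    """Shift a freshly recorded take back by the known fixed latency."""
    return recorded_take[latency:]

# A take that arrived exactly FIXED_LATENCY_SAMPLES late relative to playback:
playback = [0.1, 0.2, 0.3, 0.4]
late_take = [0.0] * FIXED_LATENCY_SAMPLES + playback

print(compensate(late_take))  # the take lands back in its correct position
```

The shift is exact precisely because the latency is fixed; with a fluctuating latency no single shift value would be correct.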
In the present state of things, the latency correction that is usually incorporated, although well-intentioned and partly on the right track, is fairly sterile if there is no fixed amount on which to act.
The truth is that I don’t know the technical details of implementing this, either at the driver level or in other software, but deliberately incurring a delay larger than what the system proposes does not seem an insurmountable difficulty, unlike trying to go faster, with the known consequences (clicks, pops, artifacts, or call them whatever you like).
The beauty of the idea is that it puts all studios, at least as far as recording latency is concerned, on the same level. 0 latency for everyone.
It would also be a breakthrough for professional studios. I have read articles analyzing the nuances of synchronizing waves in phase and out of phase, and one can hardly address the phase issue with several milliseconds of delay or advance in a recording in which, depending on the frequency, only a few wavelengths fit.
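To put rough numbers on that: how many cycles a given timing error spans depends directly on frequency, which is why phase work is hopeless with an uncorrected offset. Simple arithmetic, with the 5 ms figure chosen only for illustration:

```python
# How many wavelengths fit in a given timing error, at a few frequencies.
# Purely arithmetic: cycles = error_ms * frequency_hz / 1000.

def cycles_in_offset(offset_ms: float, freq_hz: float) -> float:
    """Number of full cycles of `freq_hz` spanned by a `offset_ms` error."""
    return offset_ms * freq_hz / 1000.0

for f in (100, 1000, 10000):
    print(f"{f:5d} Hz: a 5 ms offset spans {cycles_in_offset(5, f):g} cycles")
```

At 100 Hz a 5 ms error is half a cycle, i.e. a full phase inversion; at 10 kHz it is 50 whole cycles.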
And with that, I think we could forget all about latency in recording, at least while monitoring the input.
If someone knows another way to fix this issue of fluctuating latency, or if I’m wrong on some point, please let me know.
Sorry for this long explanation and thanks a lot for your work, buddies.