anahuj wrote: Let's put it this way: Digital cameras have a system to detect faulty pixels in the image sensor. The data about faulty pixels is used to clean up the photos at the earliest phase.
In audio, dropouts may occur in many places. The A/D converter in a USB sound card is the closest thing to your analogy of a camera's image sensor. Digital audio data is transferred from the A/D converter to the USB controller. This data transfer is invisible to the computer operating system. Any data that is lost here can only be handled in the USB device itself (as you said, at the "earliest phase"). Typically, lost data here is limited to the last few LSBs and appears at driver level as low-level noise (random values in the last few bits of 24-bit samples).
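To give a feel for how small that low-level noise is, here is an illustrative sketch (not from the post, and the function name is mine) that computes the worst-case level of noise confined to the lowest few bits of a signed 24-bit sample, relative to full scale:

```python
import math

BITS = 24
FULL_SCALE = 2 ** (BITS - 1)  # peak amplitude of a signed 24-bit sample

def lsb_noise_floor_db(noisy_bits: int) -> float:
    """Worst-case level of noise confined to the lowest `noisy_bits` bits,
    expressed in dB relative to full scale."""
    peak_noise = (2 ** noisy_bits) - 1  # largest value the noisy bits can hold
    return 20 * math.log10(peak_noise / FULL_SCALE)

for bits in (1, 2, 3):
    print(f"{bits} noisy LSB(s): about {lsb_noise_floor_db(bits):.1f} dBFS")
# 3 noisy LSBs come out around -122 dBFS, far below audibility
```

So even if the last three bits of every sample were pure garbage, the resulting noise floor sits around -122 dBFS, which is why this kind of loss shows up as faint noise rather than an audible dropout.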
The audio data is then transferred via USB to the host. There are at least two clocks involved here - one in the host and one in the client (in the case of ADAT and S/PDIF there may be a third clock, but let's ignore that for now). The host and client clocks are unlikely to run at exactly the same speed, so there are three ways to handle this: a) use the host clock, b) use the client clock, or c) use both clocks and calculate an average number of samples per frame (this transfer is frame based). The third option is usually employed, and the low-level hardware driver adjusts the number of samples per frame as required to keep the clocks synchronised. Again, this is invisible to the operating system (but this is where a highly accurate word clock might be employed for ADAT, S/PDIF and similar systems).
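The samples-per-frame adjustment can be sketched like this. Assumptions: 1 ms full-speed USB frames and a simple fractional accumulator; real drivers use feedback from the device and differ in detail:

```python
SAMPLE_RATE = 44100       # samples per second
FRAMES_PER_SECOND = 1000  # full-speed USB sends one frame per millisecond

def samples_per_frame(n_frames):
    """Yield an integer sample count per frame whose long-run average
    is SAMPLE_RATE / FRAMES_PER_SECOND (44.1 here)."""
    acc = 0  # fractional remainder, in units of 1/FRAMES_PER_SECOND sample
    for _ in range(n_frames):
        acc += SAMPLE_RATE
        n = acc // FRAMES_PER_SECOND  # whole samples sent this frame
        acc -= n * FRAMES_PER_SECOND  # carry the fraction to the next frame
        yield n

counts = list(samples_per_frame(1000))
print(sum(counts))   # 44100 samples over one second, as required
print(set(counts))   # frames carry either 44 or 45 samples
```

At 44.1 kHz nine frames of 44 samples are followed by one of 45, so no samples are gained or lost even though no single frame carries exactly 44.1.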
Ideally the low-level USB stack and the USB audio component should be tightly integrated without buffering. Unfortunately that is nearly impossible except in embedded systems, because the code execution time is unknown. This is still at the kernel sound component level, so invisible to applications. As you said, error handling has to be handled at the earliest possible phase.
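This is why a buffer sits between the two: when execution timing is unknown, a ring buffer decouples the USB transfers from the audio component, and a dropout appears as an overrun when the consumer falls behind. A toy illustration (not actual kernel code; the class and names are mine):

```python
from collections import deque

class RingBuffer:
    """Minimal ring buffer: the producer (USB side) writes, the consumer
    (audio component) reads; overruns count samples lost to a slow consumer."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)
        self.overruns = 0

    def write(self, samples):
        for s in samples:
            if len(self.buf) == self.buf.maxlen:
                self.overruns += 1   # consumer was too slow: a dropout
                self.buf.popleft()   # oldest sample is overwritten
            self.buf.append(s)

    def read(self, n):
        return [self.buf.popleft() for _ in range(min(n, len(self.buf)))]

rb = RingBuffer(4)
rb.write([1, 2, 3, 4, 5])  # one sample more than the buffer can hold
print(rb.overruns)         # 1: sample 1 was lost before it could be read
print(rb.read(2))          # [2, 3]
```

The larger the buffer, the more timing jitter it absorbs, at the cost of latency - which is the trade-off every layer in the chain below makes.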
There may then be one or more proxy layers between the kernel level components and the application. As an example, the chain could look something like this:
A/D -> USB controller (client) -> USB host -> USB audio transport -> ALSA API -> Pulse Audio -> Portaudio -> Audacity.
Lost data has to be handled as early as possible. Audacity, at one end of the chain, has no idea what is happening at the other end, or at many of the links in between. Audacity can monitor its own data handling, and can query Portaudio, but beyond that Audacity has no idea what occurs earlier in the chain.
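About the only thing an application at the end of that chain can do is watch its own buffers, for example flagging a problem when noticeably fewer samples arrive than the elapsed time implies. A hypothetical sketch - the function and its parameters are illustrative, not Audacity's or Portaudio's actual API:

```python
def detect_shortfall(expected_rate, elapsed_seconds, samples_received,
                     tolerance=0.01):
    """Return True if noticeably fewer samples arrived than real time
    implies (here, more than 1% short by default)."""
    expected = expected_rate * elapsed_seconds
    return samples_received < expected * (1 - tolerance)

print(detect_shortfall(44100, 1.0, 44100))  # False: the stream is keeping up
print(detect_shortfall(44100, 1.0, 43000))  # True: ~2.5% of samples missing
```

Note what this cannot tell you: only that data went missing somewhere upstream, never at which link in the chain it was lost - which is the whole point of the paragraph above.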