Nyquist theorem misuse - ramifications

The Nyquist theorem requires an infinite number of samples, and the mathematical recreation of the signal is done via sinc functions.

In real life we have a FINITE number of samples,
which leads to BIG problems,
and we do not use sinc functions, since this is not math but engineering;
we use D/A converters such as ZOH, delta-sigma, and so on.
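
To make the gap concrete, here is a minimal numpy sketch (my own illustration, with made-up numbers: a 100 cps sine sampled at 250 S/s, only 25 samples kept) comparing the theorem's sinc interpolation, truncated to the finite record, against a plain ZOH:

```python
import numpy as np

fs = 250.0                    # sample rate, S/s (hypothetical)
f = 100.0                     # signal frequency, cps
n = np.arange(25)             # a FINITE record: 25 samples, ~10 cycles
x = np.sin(2 * np.pi * f * n / fs)

# dense "analog" time axis spanning the record
t = np.linspace(0, (len(n) - 1) / fs, 2000)

# the theorem's reconstruction, x(t) = sum_k x[k] * sinc(fs*t - k),
# truncated to the samples we actually have
sinc_rec = sum(x[k] * np.sinc(fs * t - k) for k in n)

# ZOH reconstruction: hold each sample until the next one arrives
zoh_rec = x[np.minimum((t * fs).astype(int), len(n) - 1)]

ideal = np.sin(2 * np.pi * f * t)
print("max |error|, truncated sinc:", np.max(np.abs(sinc_rec - ideal)))
print("max |error|, ZOH:           ", np.max(np.abs(zoh_rec - ideal)))
```

Even the "ideal" sinc sum picks up error near the edges of the record once it is truncated; the ZOH staircase is worse everywhere.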

Additionally, we have input and output ERRORS due to quantization of voltage,
ERRORS due to jitter of the sampling times,
and ERRORS due to non-linearity and other practical issues,
all of which affect the recreation of the original signal,
more so when combined with the truncation problem.
So it is impossible to get anywhere near a true recreation of the original
without HUMONGOUS oversampling.

This means that at a near-minimum Nyquist sampling rate,
AND with TRUNCATION of the samples giving a FINITE data set,
it is impossible to get any signal above the maximum of the samples,
which may be much, much lower than the peak of the original signal.

Proof: consider a simple sine wave, say 100 cps,
and various sample rates starting from 200.000001 S/s.
For a 100 cps sine wave, a sixteenth note at 100 bpm lasts about 0.15 s, giving you about 15 cycles of signal.

Draw the picture yourself with 15 cycles
and try different starting points for the samples:
your chances of getting anything near the real signal back using
a standard D/A such as ZOH or delta-sigma are slim to none
if we are near the Nyquist minimum sample rate.
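
Or let the computer draw it for you. A quick numpy sketch of the same experiment (my own, with the starting point expressed as a fraction of a cycle):

```python
import numpy as np

f = 100.0                    # signal frequency, cps
fs = 200.000001              # barely above the Nyquist minimum
n_cycles = 15                # the finite record from the example
n_samples = int(n_cycles * fs / f)   # about 30 samples

for phase in (0.0, 0.1, 0.25):       # sampling start point, in cycles
    t = np.arange(n_samples) / fs
    x = np.sin(2 * np.pi * (f * t + phase))
    # with a ZOH, the output can never exceed the largest sample held
    print(f"start phase {phase:4.2f} cycles -> peak sample {np.max(np.abs(x)):.3f}")
```

Near Nyquist the samples land at almost the same two phase points every cycle, so an unlucky starting point caps the whole finite record far below the true amplitude of 1.0.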

So without massive oversampling we just cannot get the music back,
especially at the higher frequencies,
which get 1000x fewer samples per cycle at 20 kcps than at 20 cps.

Draw a sine wave and start taking samples:
how many samples per cycle would be close enough to get the original back
using a ZOH or delta-sigma?

Now, how many samples are necessary to have less than 1 bit of error at only 16-bit sample depth? My eyeball says a minimum of 40.
So at 20 kcps we would need 800 kS/s,
which makes 192 kS/s insufficient (only 9.6 samples/cycle),
and 44.1 would be pretty poor.
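
The arithmetic, spelled out (taking the eyeballed 40 samples/cycle as a given, not as an established requirement):

```python
per_cycle = 40          # eyeballed samples needed per cycle
f_max = 20_000          # highest audio frequency, cps

print(f"needed: {per_cycle * f_max / 1000:.0f} kS/s")     # 800 kS/s

for fs in (192_000, 44_100):
    print(f"{fs / 1000:g} kS/s -> {fs / f_max:.2f} samples/cycle")
```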

Without extreme oversampling you can kiss the quality of the higher frequencies goodbye, even if you have enough samples to rebuild the lows.

It seems that real-life ICs rated at 192K really do oversample internally for A/D, but how can delta-sigma do the equivalent for D/A?

If you see holes in this analysis, please post your views.

The two major points raised were:

a) Quantization (in layman's terms, rounding error: truncating past a certain decimal place). I've seen this in a review of scanners (printers): reviewers complained about a typo in the stated 48-bit resolution, which I believe was actually 32 bits. For any bit depth, the value of the least significant kept bit (the 24th bit in this case) is greater than the sum of all the bits below it, even if those continued to infinity. (It works the other way as well: the fourth bit, with integer value 8, is greater than the sum of the three lesser bits, which total seven.) The maximum loss of amplitude resolution from the truncated bits is about 0.000006% in the 24-bit case. Other problems are technically more significant than this, I would assume, and few people could tell the difference or hear the distortion even with the most expensive speaker system.

b) Frequency jitter. This is probably at the same level of distortion as quantization (rounding) error. Crystals can be very accurate (do a Google search on crystal tolerance and crystal accuracy) if the proper capacitors are used: type NP0, which are designed not to drift with temperature. Again, there are other, more significant errors from other sources that would be more of a concern, I would assume. Perhaps another way to compensate is to use a crystal at 100 to 10,000+ times the maximum frequency needed and divide it down with a counter, which should further reduce the jitter; the crystal would have to be at a multiple of all the frequencies desired on the low end, or a more complex circuit/divider counter would be required. One would have to check the sound board's specification to see what is listed as the crystal tolerance.
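
The bit-weight claim is easy to check in a couple of lines of Python (my own sketch; bit numbering follows the post, with bit 1 as the least significant of the kept word):

```python
# the value of bit n exceeds the sum of all lesser bits by exactly 1
for n in (4, 16, 24):
    bit_value = 1 << (n - 1)          # e.g. bit 4 -> 8
    lesser_sum = (1 << (n - 1)) - 1   # e.g. bits 1..3 -> 7
    assert bit_value == lesser_sum + 1
    print(f"bit {n}: {bit_value} > {lesser_sum}")

# and the quoted truncation bound: the worst-case relative error from
# dropping everything below the 24th bit is about 1/2**24
print(f"{100 / 2**24:.7f} %")   # ~0.0000060 %
```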

I was also looking for a higher sampling rate, not more resolution (bits per sample). The reason: if one converts between music formats with different sampling rates, the music can be distorted by the conversion. The only way I know to compensate is to record at, say, 192 kHz; when the music is then reformatted to a lower-rate playback format, the distortion is apt to be minimized over the entire range. I read an article about this quite some time ago, about the music industry changing the sample rate and format for music distributed on CD at the time, and the distortion that comes from taking CDs recorded at one sampling rate and reformatting them for playback at a different rate than the original format.
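
For what it is worth, that round trip is easy to simulate. A hedged scipy sketch (assuming numpy and scipy are installed, and with a plain 1 kHz test tone standing in for real music):

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi = 192_000
t = np.arange(fs_hi) / fs_hi              # 1 second at 192 kS/s
x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone

# 192k -> 44.1k -> 192k; the exact ratio is 44100/192000 = 147/640
down = resample_poly(x, up=147, down=640)
back = resample_poly(down, up=640, down=147)

err = back[1000:-1000] - x[1000:-1000]    # skip filter edge effects
print(f"round-trip peak error: {20 * np.log10(np.max(np.abs(err))):.1f} dBFS")
```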

Approximate wavelength of a 20 kHz sine wave at 20 degrees Celsius (68 degrees Fahrenheit) in plain air, taking the speed of sound as about 343 m/s = 34300 cm/s:

34300 / 20000 = 1.72 cm = 0.677 inch

According to whomper's 40-samples theory, this means that if you want to listen to a 20 kHz sine wave without phase jitter, then during listening you are not allowed to move your head more than:

1.72 cm / 40 = 0.043 cm = 0.017 inch
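
Checking the arithmetic (with the speed of sound taken as roughly 343 m/s at 20 degrees Celsius):

```python
c = 34300.0    # speed of sound, cm/s
f = 20_000     # frequency, Hz

wavelength = c / f
print(f"{wavelength:.3f} cm = {wavelength / 2.54:.3f} in")   # ~1.72 cm
print(f"1/40 wavelength: {wavelength / 40:.4f} cm"
      f" = {wavelength / 40 / 2.54:.4f} in")                 # ~0.017 in
```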

I’m afraid that my own breathing causes the worst phase-jitter of all involved components. I’m also in severe doubt that there exists any microphone or loudspeaker that can record or reproduce 20kHz without volume and/or phase distortion.

I generally agree that the quality degradation of a signal does not start at the Nyquist frequency; it starts much earlier, in my experience at approximately 1/10 to 1/20 of the Nyquist frequency. E.g. with a 44100 Hz sample frequency, everything above ca. 1-2 kHz must already be considered "unreliable" or "not reliably reconstructable". The Nyquist frequency is a technical limit; it does not mean that a signal at "Nyquist minus 1 Hz" can be transmitted in full quality.

The CD sample frequency of 44100 Hz was intentionally set too low, to make it impossible to reconstruct the studio master from a CD and to create an artificial incompatibility with the 48 kHz DVD audio sample frequency. These were purely marketing-oriented decisions and nothing else.

The discussion so far also ignores the fact that the recording path before the A/D and the playback path after the D/A require anti-aliasing and reconstruction filters, which produce much worse phase distortion than all the digital rounding, truncation, and phase-jitter errors combined.

  • edgar

You can test this experimentally with Audacity. Resampling from 44.1 kHz to 48 kHz produces an inaudible but measurable amount of distortion with an essentially flat (±6 dB) frequency spectrum at around -130 dB. Resampling the other way, from 48 kHz to 44.1 kHz, produces significant distortion of frequencies above 21 kHz (if present in the audio being tested).
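
For anyone who wants to reproduce this outside Audacity, a small scipy sketch along the same lines (my own rough approximation of the test; Audacity's actual resampler will differ in detail):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 48_000
t = np.arange(fs) / fs                       # 1 second at 48 kHz
for f in (10_000, 21_500, 23_000):
    x = np.sin(2 * np.pi * f * t)
    y = resample_poly(x, up=147, down=160)   # 48 kHz -> 44.1 kHz
    rms = np.sqrt(np.mean(y[1000:-1000] ** 2))
    print(f"{f / 1000:5.1f} kHz tone -> RMS after resampling: {rms:.3f}")
```

A tone safely below half of 44.1 kHz comes through at its original RMS (about 0.707), while tones near or above 22.05 kHz are attenuated or removed by the conversion filter.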

It's also worth mentioning that consumer and semi-professional sound cards invariably have worse SNR at sample rates above 48 kHz.

That’s true, but modern digital processing allows closer approximation than was originally intended.