Unconfuse me - clipping doesn't matter…

Am I wrong about this?

If you clip a signal at 10 kcps or above in the analog domain, you would generate harmonics
at >20 kcps, >30 kcps, …

These would get filtered out during the A/D/A conversion
(ignoring the space from 20 to 22.05 kcps for simplicity here),

so clipping at 10 kcps or above in the analog domain would not cause distortion
in the final playback.

BUT
in the digital domain, isn't it possible to generate SUBharmonics,
so clipping in the digital domain could cause artifacts you could hear?

I strongly suspect that various f/x could do this.
I'm not sure what clipping by exceeding 0 dBFS really does.
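For anyone who wants to poke at the analog half of this claim outside Audacity, here is a rough NumPy/SciPy sketch, purely illustrative: a 10 kcps tone hard-clipped at a very high sample rate stands in for the analog domain, and an 8th-order 20 kHz lowpass stands in for the anti-alias filter. The rates, filter order and measurement bands are arbitrary choices, not anything stated in this thread.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS_ANALOG = 1_000_000                      # very high rate stands in for "analog"
F0 = 10_000
t = np.arange(FS_ANALOG) / FS_ANALOG       # one second
clipped = np.clip(2.0 * np.sin(2 * np.pi * F0 * t), -1.0, 1.0)   # hard clip

# "Anti-alias filter": steep lowpass at 20 kHz before the (conceptual) A/D stage.
sos = butter(8, 20_000, btype="low", fs=FS_ANALOG, output="sos")
filtered = sosfiltfilt(sos, clipped)

spec = np.abs(np.fft.rfft(filtered * np.hanning(len(filtered))))
freqs = np.fft.rfftfreq(len(filtered), 1 / FS_ANALOG)
peak = spec.max()

# Everything except the 10 kHz fundamental ends up far down, because the
# clipping harmonics (30, 50, 70 kcps, ...) all sit above the 20 kHz cutoff.
for f_lo, f_hi in [(9_000, 11_000), (29_000, 31_000), (49_000, 51_000)]:
    band = spec[(freqs >= f_lo) & (freqs < f_hi)].max()
    print(f"{f_lo//1000}-{f_hi//1000} kHz band: {20*np.log10(band/peak):7.1f} dB")
```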

Good question!

It’s easy to do the experiment. Use Generate->Tone->Sine at 10 kHz and clip the result by any means you wish. If you just Amplify (with clipping allowed), Analyze->Plot Spectrum still shows only the single spectral line at 10 kHz. But running Amplify again reports that the amplitude is already 3 dB above clipping, so apparently no clipping actually happened.

Using Effect->Hard Limiter (-6 dB, wet=1, residue=0) on a sine of amplitude 1 adds new spectral lines at 300, 1300, 1800, 2300, 5900 and 14000 Hz.

Using Effect->Valve Saturation (both sliders at 0.499) on the original sine adds lines at 1800, 4100, 5900, 14000 and 20000 Hz. This is an asymmetric limiter and so produces the even harmonic at 20 kHz. Symmetric clipping of a pure tone produces only odd harmonics. Any clipping of a polyphonic waveform also produces intermodulation products.
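To see the odd-versus-even point in isolation, here is a small NumPy sketch (my own illustration, not an Audacity effect; it uses a 1 kHz sine rather than 10 kHz so the first few harmonics stay below Nyquist and the pattern isn't obscured by aliasing):

```python
import numpy as np

FS, F0 = 44100, 1000
t = np.arange(FS) / FS                       # exactly one second, 1000 whole cycles
x = np.sin(2 * np.pi * F0 * t)

symmetric = np.clip(x, -0.5, 0.5)            # both half-cycles clipped equally
asymmetric = np.clip(x, -1.0, 0.5)           # only the positive half-cycle clipped

def harmonic_db(sig, k):
    """Level of the k-th harmonic relative to the fundamental, in dB."""
    spec = np.abs(np.fft.rfft(sig))
    return 20 * np.log10(max(spec[k * F0], 1e-12) / spec[F0])

for k in range(2, 7):
    print(f"harmonic {k}: symmetric {harmonic_db(symmetric, k):7.1f} dB, "
          f"asymmetric {harmonic_db(asymmetric, k):7.1f} dB")
# Even harmonics (k = 2, 4, 6) only show up for the asymmetric clipper.
```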

A 10 kHz square wave (from Generate->Tone->Square) shows a line every 100 Hz!

So “subharmonic” generation is not what happens, and I’m having a problem explaining the 100 Hz series as aliasing, but audible spectrum pollution definitely happens.

I’m using the Welch window in Frequency Analysis and a 44100 Hz sample rate. I'm not promoting a particular window here, just giving the conditions so others can repeat my experiment. Also, don’t trust the relative amplitudes of single lines in the plot; look at the Peak readout as you move the cursor. The amplitudes in the plot vary a lot when you adjust the width.

While not good for (most) music, some of these sound like useful buzzer effects, but you’d probably want to attenuate the 10 kHz component after distorting. No experiment is a failure unless you don’t learn anything from it (the source of that quote escapes me).

If the track is in 32-bit float format and you are using a recent version of Audacity 1.3, audio can be amplified above 0 dB without clipping (32-bit float format can represent sample values much higher than 0 dB).
If you right-click on the track’s vertical scale after amplifying, you will see the undistorted waveform above 0 dB.
Clipping at 0 dB can be forced by applying the Nyquist command (clip s 1.0) after amplifying:
“Effect menu > Nyquist Prompt”, enter the code and apply.
You will then see the expected harmonics and sub-harmonics.
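Outside Audacity, the same contrast can be sketched in a few lines of NumPy (purely illustrative; plain float arrays stand in for the 32-bit float track, and np.clip stands in for the (clip s 1.0) command):

```python
import numpy as np

FS, F0 = 44100, 10000
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * F0 * t)

amplified = 1.41 * tone                      # ~+3 dB: floats keep the extra headroom
clipped = np.clip(amplified, -1.0, 1.0)      # force clipping at 0 dBFS, like (clip s 1.0)

def spectral_lines(sig, floor_db=-60):
    """Frequencies of spectrum peaks within floor_db of the strongest line."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), 1 / FS)
    strong = spec > spec.max() * 10 ** (floor_db / 20)
    return sorted(set(np.round(freqs[strong], -2)))   # rounded to the nearest 100 Hz

print("amplified only:", spectral_lines(amplified))   # just the 10 kHz line
print("after clipping:", spectral_lines(clipped))     # extra (aliased) lines appear
```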

Clipping a digital signal that is close to the Nyquist frequency will cause aliasing.
Digitising an analogue signal that contains frequencies above the Nyquist frequency will remove those frequencies (the A/D converter's anti-aliasing filter rejects them before sampling).

The original poster of this topic is no longer a member of this forum.

I was so intrigued by the spectrum consisting entirely of multiples of 100 Hz that I put some effort into the problem. What I should have noticed right off is that 100 Hz is the greatest common factor of 10 kHz, its harmonics, and the sampling frequency, so any result made up of differences and integer multiples is constrained to multiples of this common factor. I wrote a spreadsheet to compute the aliases of the odd harmonics up to #127, and with few exceptions their amplitudes in the spectrum correlate monotonically with the harmonic number, regardless of where the alias lands. The few exceptions may be due to higher-order harmonics aliasing to the same frequencies. Mapping of multiple harmonics onto the same alias frequency is guaranteed by the fact that there are only 220 possible alias frequencies and an unbounded number of harmonics. So the conclusion is that every spectral line except 10 kHz is indeed an alias. There are no sub-harmonics.
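The bookkeeping that the spreadsheet did can be sketched in a few lines of Python (my own illustration, not the original spreadsheet): fold each odd harmonic of 10 kHz back into the 0 to 22050 Hz baseband at a 44100 Hz sample rate, and check that every alias lands on a multiple of the 100 Hz common factor.

```python
FS = 44100           # sample rate in Hz
F0 = 10000           # fundamental in Hz

def alias(f, fs=FS):
    """Fold frequency f back into the baseband 0..fs/2."""
    f = f % fs
    return fs - f if f > fs // 2 else f

for n in range(1, 128, 2):               # odd harmonics 1, 3, ..., 127
    a = alias(n * F0)
    assert a % 100 == 0                  # every alias lands on a multiple of 100 Hz
    if n <= 13:
        print(f"harmonic {n:3d}: {n * F0:7d} Hz -> alias {a:5d} Hz")
```

Several of these folded frequencies (5900, 1800, 2300, and 14100, reported above as 14000) line up with the Hard Limiter lines listed earlier in the thread.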

Sub-harmonics do occur in a few physical systems, typically nonlinear mechanical systems with some kind of temporal memory. Example: a particle bouncing on a speaker cone aimed upwards. The particle has in effect a two-state short-term memory: It’s either in freefall or in contact with the cone. The only non-mechanical system I’ve encountered that generated subharmonics contrary to the intent of the design was a magnetic core that saturated on alternate cycles. Generating subharmonics by design, of course, is routine in digital circuitry.

Yes, I believe that is correct. Even “soft clipping” can cause aliasing, though in that case the aliasing can often be substantially reduced by resampling to a higher sample rate before processing (and resampling back down afterwards).
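A rough NumPy/SciPy sketch of that idea (my own illustration, not any Audacity processing chain; the 4x factor, the tanh clipper, the 2x drive level and the below-9-kHz measurement are all arbitrary choices):

```python
import numpy as np
from scipy.signal import resample_poly

FS, F0 = 44100, 10000
t = np.arange(FS) / FS
x = 2.0 * np.sin(2 * np.pi * F0 * t)          # driven to twice full scale

def soft_clip(sig):
    return np.tanh(sig)                        # simple symmetric soft clipper

# Path 1: clip at the base rate; the clipper's harmonics fold into the audio band.
direct = soft_clip(x)

# Path 2: oversample 4x, clip, then band-limit back down to 44.1 kHz.
upsampled = resample_poly(x, 4, 1)
resampled = resample_poly(soft_clip(upsampled), 1, 4)

def worst_alias_db(sig):
    """Strongest spectral line below 9 kHz, relative to the overall peak."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), 1 / FS)
    return 20 * np.log10(spec[freqs < 9000].max() / spec.max())

print(f"worst alias, clipped at 44.1 kHz   : {worst_alias_db(direct):6.1f} dB")
print(f"worst alias, clipped 4x oversampled: {worst_alias_db(resampled):6.1f} dB")
```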

The generation of sub-harmonics in mechanical systems (such as musical instruments) is quite interesting. It’s one of the features of real instruments that is difficult to reproduce in sampled or synthesized instruments. On violins, the bowing technique can change the degree to which subharmonics are produced, which is something that I’ve never yet seen in synthesized violin sounds.

Subharmonics can also be a problem in switching power supplies.

Bull’s Eye! That’s what I was designing when I encountered the inductor problem. It happened in the Spice model and on the breadboard. Control loop instability can cause it too, but in this case Spice showed it happens even with the loop open.

I have a violin question, but will post it in a new thread.