Auto Synchronize two sine waves

OK, this will most likely seem like an odd request. I’m wondering how possible something like this would be, whether there already exists a program that could do this (or if Audacity could), or if it’s just something I should forget about. Here’s what I need:

Say I have a sine wave that was generated by a program. Then I run it out to an analog tape, then record it back into the computer. Now I compare it with the original (no tape). The recording will have small amounts of ‘wow and flutter’, which means the taped sine wave won’t sync perfectly with the non-taped one. Say I want them to sync. Basically, what would be great here is a program that would let me line up the two sine waves at the first cycle or so, then analyze both and automatically resample/stretch the taped one, so that each cycle crosses zero at the same sample position as the non-taped one.
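A rough sketch of that procedure (hypothetical Python; the names and toy signals are invented, and a real tool would need a proper resampling filter rather than linear interpolation): find the upward zero crossings of both waves, then stretch each cycle of the taped copy so its crossings land on the reference’s.

```python
import math

def zero_crossings(samples):
    """Fractional indices where the signal crosses zero going upward."""
    crossings = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < 0.0 <= b:
            # Interpolate linearly for a sub-sample crossing position.
            crossings.append((i - 1) + (-a) / (b - a))
    return crossings

def warp_to_reference(taped, ref_cross, taped_cross):
    """Stretch each cycle of `taped` so its crossings land on `ref_cross`."""
    out = []
    cycles = min(len(ref_cross), len(taped_cross)) - 1
    for k in range(cycles):
        r0, r1 = ref_cross[k], ref_cross[k + 1]
        t0, t1 = taped_cross[k], taped_cross[k + 1]
        n = int(round(r1 - r0))          # samples this cycle should occupy
        for j in range(n):
            # Map each output position back into the taped cycle...
            pos = t0 + (t1 - t0) * j / n
            i = int(pos)
            frac = pos - i
            # ...and linearly interpolate the taped samples there.
            out.append(taped[i] * (1.0 - frac) + taped[i + 1] * frac)
    return out

# Toy demo: a clean 100 Hz tone vs. a copy with a slow 0.5 Hz "wow".
sr = 8000
clean = [math.sin(2 * math.pi * 100 * t / sr) for t in range(sr)]
wowed = [math.sin(2 * math.pi * 100 * (t / sr + 0.001 * math.sin(math.pi * t / sr)))
         for t in range(sr)]
fixed = warp_to_reference(wowed, zero_crossings(clean), zero_crossings(wowed))
```

After warping, the “wowed” copy lines up with the clean tone far more closely than it did before, which is exactly the effect being asked for.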

Is anything like that even remotely possible?

I had a similar idea …

This sounds like a job for an autotune-type effect, but with real-time adjustment of tempo rather than pitch, to correct for the variations in tape speed.

I think it would be possible to correct tape warp using an audio editor like Audacity (particularly if there is a constant reference frequency on the tape, like mains hum), but I don’t know of a plug-in which does that.

There is a free “Turntable Warping” plug-in for Audacity (near the bottom of this page), but using it to manually correct a warped recording would be very difficult.

This looks more promising (not Audacity though) …

The “Dirac” technology comes in a free cross-platform C/C++ object library that exploits the good localization of wavelets in both time and frequency to build an algorithm for time and pitch manipulation that uses an arbitrary time-frequency tiling depending on the underlying signal.

In this case, all it would really boil down to would be adding or subtracting a sample here and there. I would think something like this could be done in a very transparent way. I guess it’s just that there aren’t a ton of uses, which is why it doesn’t already exist. My interest in this is in being able to remove wow and flutter from tape recordings, to the degree that several taped copies of the same sound source could be mixed together with no phase cancellation in the higher frequencies. Then you get no wow/flutter, plus some noise reduction. All you would need to do is record a ‘guide’ sine wave on one track of the tape, while the other track(s) carry whatever audio you want to give a little tape sound (harmonics etc.).

It should be possible to do such a thing, but may be a bit more complicated than it first appears.

For example, if you look at this waveform:
This is a sine wave with a frequency that is approximately 1/10th of the sample rate.
Count the number of samples in the positive half-wave before the cursor and compare it with the number of samples in the negative half-wave after the cursor. They are different, but the difference is not due to wow/flutter; it is due to a slight DC offset (the waveform has been shifted vertically up by 0.05).
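To see that numerically, here is a tiny toy check (hypothetical Python; the sample rate and frequency are arbitrary choices): a perfectly steady tone with a +0.05 offset has unequal numbers of samples in its positive and negative half-waves, with no speed variation at all.

```python
import math

sr = 1000      # samples per second (arbitrary toy rate)
freq = 100     # tone frequency -> exactly 10 samples per cycle
offset = 0.05  # the vertical shift described above
tone = [math.sin(2 * math.pi * freq * t / sr) + offset for t in range(sr)]

positive = sum(1 for x in tone if x >= 0.0)
negative = len(tone) - positive
# Without the offset the two counts would be (nearly) equal; with it,
# the positive half-waves contain more samples than the negative ones.
```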

Similar discrepancies may also occur due to other types of “distortion”.

For gradual speed fluctuations or constant speed differences between different machines, I think that averaging the speed over several cycles of the “synchronisation tone” could work quite well, but for rapid changes (such as “scrape” or “slippage”) the changes would probably be too rapid to be able to measure accurately enough.
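For what it’s worth, the averaging idea could be sketched like this (hypothetical Python; the function name and the 5-cycle window are arbitrary choices): measure each cycle’s period from the spacing of the tone’s zero crossings, then smooth those measurements with a short moving average so a single mis-measured cycle doesn’t cause a jump.

```python
def smoothed_periods(crossings, window=5):
    """Per-cycle periods from zero-crossing positions, moving-averaged."""
    periods = [crossings[i + 1] - crossings[i]
               for i in range(len(crossings) - 1)]
    half = window // 2
    smoothed = []
    for i in range(len(periods)):
        # Clamp the averaging window at the ends of the list.
        lo = max(0, i - half)
        hi = min(len(periods), i + half + 1)
        smoothed.append(sum(periods[lo:hi]) / (hi - lo))
    return smoothed

# One jittery cycle (period 11, then 9) gets averaged back toward 10.
print(smoothed_periods([0, 10, 20, 31, 40, 50]))
```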

Modern cassette players should be able to achieve wow and flutter figures better than 0.1%.

Yeah, I have a cassette deck with pretty low levels of wow/flutter. There is still enough that, even if I record something to tape and play it back at the same time (it’s a 3-headed deck), several recordings of the same sound will not mix together well. The high end is significantly reduced because the phase doesn’t line up properly.

Your example highlights one reason why this couldn’t be done perfectly or easily, but I also agree that ‘averaging the speed over several cycles’ would work at least well enough that the main bulk of each cycle would line up with the original fairly well. I don’t know anything about DSP, so it’s out of my grasp. Someone else posted here about phase-locked loop technology (post deleted or something?). I’ve scoured the internet looking for software that could do this (phase-lock two tones) with no luck, but I have seen patents that seem to describe this exact idea.

I’m pretty sure that it would be possible to create a Nyquist plug-in that could do this, but it would be rather complicated to do so; not a beginner’s project.
However, adjusting manually should be fairly straightforward (but time-consuming). Align the start of the tracks using the “Time Shift Tool” (double-headed arrow <–>), then look ahead for noticeable drift and use the “Change Speed” effect to bring the tracks back into sync.

Yes, I’ve already considered doing it manually and quickly realized that it would be insane. I’m dealing with pretty long files, at 96 kHz too. I might just put this idea on the back burner for now…

Oh man, after a week of searching I found out that Audition has an automatic phase correction tool, so I got the demo and tried it. It almost worked; I guess it probably works fine for its intended purpose, but I need the phase to be exact. The tool was designed to correct drift from analog tape recordings and the like, but the typical material it would be used on is more complex than the simple sine wave I’m dealing with. I think for my very specific needs, just forcing the zero crossings to line up would work best, and the DC offset issue wouldn’t really matter because the sounds I’m dealing with are so simple. I might start looking into Nyquist to get an impression of just how hard this would be.

While I’m here, I guess I should ask if anyone knows for sure whether it would be possible to use Nyquist to force one wav’s zero crossings to match up perfectly with another’s, by adding/removing samples or stretching?

I think the best way of “stretching” would be by resampling.
For example, if you need to stretch 1 second of audio by one sample you could use:

(force-srate (+ *sound-srate* 1) s)
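If it helps to see what that kind of resampling amounts to, here is a rough pure-Python sketch (the function name is made up, and it uses plain linear interpolation where a real resampler would use a proper filter): resampling a stretch of audio to one extra sample spreads the same waveform over N+1 output positions.

```python
def stretch_by_samples(samples, extra):
    """Resample to len(samples) + extra samples via linear interpolation."""
    n_out = len(samples) + extra
    out = []
    for j in range(n_out):
        # Map the output position back into the input's index range.
        pos = j * (len(samples) - 1) / (n_out - 1)
        i = int(pos)
        frac = pos - i
        nxt = samples[min(i + 1, len(samples) - 1)]
        out.append(samples[i] * (1.0 - frac) + nxt * frac)
    return out

# Stretching 4 samples to 5: the ramp 0..3 is redistributed evenly.
print(stretch_by_samples([0.0, 1.0, 2.0, 3.0], 1))
```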

Some useful links for getting started with Nyquist:

You should definitely use Audacity 1.3.12 rather than 1.2.6, as the later version has a much better implementation of Nyquist.

thanks for the help!