First post here.
I produce a podcast which features 4 hosts, all in separate locations.
The podcasters record their podcast over a Skype video call, which we record and use as a sync point.
They each record their mic locally in Audacity. Once everybody, including Skype, is recording, they all do a sync clap and proceed with recording the podcast. When they are done recording, they Dropbox their individual mono WAV files and the Skype file to me. I load them into my DAW and simply line up the clap sync points. Here's where it gets weird:
Podcasters 1, 2 & 3 all sync fine and sound like they are having a conversation in the same room.
Podcaster #4's audio file is always nearly 15 minutes longer (not because he forgot to stop recording).
So when I sync his clap up to his Skype clap, you can almost immediately tell that the Audacity file is behind in time.
By the end of the 60-minute podcast, user #4's audio is ~15 minutes behind all the other files.
I verified with user #4 that the exported file was the same length (in time) on his end and on mine after Dropboxing it.
Everyone is using Windows 10 and the latest version of Audacity and a Fifine USB mic.
I have verified multiple times that everyone's projects and settings match and are as follows:
-44.1 kHz project sample rate
-Export: WAV, signed 24-bit PCM
I have tried opening all the files in 3 different DAWs always with the same result.
It seems like the issue might be at the recording phase?
What am I missing? Three people sync like a dream, and one person's file is 15 minutes longer than reality?
I figured it was something simple, like sample rate, but I can’t seem to figure it out.
Clearly the problem is with user #4, but I can’t figure it out!
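Since sample rate is the main suspect, one quick check is to read the sample rate actually stamped in each exported WAV header. This is a minimal sketch using Python's standard-library `wave` module; the demo file it writes is made up (one second of 24-bit silence matching the export settings above), just to show the output shape:

```python
import os
import tempfile
import wave

def wav_info(path):
    """Return (sample_rate_hz, channels, frames, duration_seconds) from a WAV header."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        channels = w.getnchannels()
        frames = w.getnframes()
    return rate, channels, frames, frames / rate

if __name__ == "__main__":
    # Demo with a throwaway file: 1 second of 24-bit mono silence at 44.1 kHz.
    tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
    tmp.close()
    with wave.open(tmp.name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(3)                    # 3 bytes = 24-bit PCM
        w.setframerate(44100)
        w.writeframes(b"\x00" * 3 * 44100)   # one second of silence
    print(wav_info(tmp.name))                # (44100, 1, 44100, 1.0)
    os.unlink(tmp.name)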
There is one setting that can cause something like this: Sound Activated Recording. Every time the performer takes a breath or pauses, the recording stops. When you compare sound files, that performer will get to the end of the show way before everybody else, because all their gasps and pauses are missing from the file.
I apologize, I can see why that was confusing.
The recording session is about 60 minutes. Three of the hosts' exported files are ~60 minutes (within 1–2 minutes).
User #4's file is about 75 minutes, and it's ~75 minutes on his end when he exports it, too.
I will check into that. That’s a really interesting thought.
That does make me think of this:
When I sync the clap transients (Audacity WAV file and Skype WAV file) and begin playing them together, it's clear that the Audacity WAV file sounds slower. The first couple of words line up, but by the end of the first sentence, the Audacity file is multiple words behind.
Even if I don't use the sync point, let's say I just line up the first few words of a sentence in which he doesn't pause or take a breath, you can clearly hear that the Audacity file is behind.
As someone who troubleshoots for a living, I wouldn't really call that making a pile of assumptions. I would call it a common-sense starting point given the known facts and the troubleshooting I've already done.
I have 3 vocal tracks and a Skype WAV file that sync perfectly for nearly 60 minutes of run time.
I have 1 vocal track that is 25% longer in the time domain than the 4 other files from 4 different sources. It falls out of sync within seconds of the sync point.
If the podcast guys limit their recording session to 60 minutes, and one guy turns in a 75-minute file, wouldn't you start with the 4th person?
All the evidence points to user #4 having an issue.
Also, this occurs every podcast since I started working for these guys. (over a dozen worth)
So, if indeed I am making the wrong pile of assumptions where should I start?
The Skype file is fine and simply serves as a sync point, then I toss it. The other 3 vocal files are fine. If you remove #4’s file from the equation, there are no issues. Add #4 back in and you have a sync issue.
I use Elastic Audio in ProTools to time-stretch (well, shrink) it into alignment.
It gets a little garbled-sounding, considering I'm shrinking it by 25% of the original length.
Which is why I'm trying to fix it at the source.
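For what it's worth, the back-of-envelope arithmetic on that 25% (my own math, using the rough durations above): a 60-minute session that exports as a 75-minute file implies the hardware captured samples faster than the rate written into the header. The implied rate, 55,125 Hz, isn't a standard device rate, so either the durations are rougher than they look or something else is drifting; this only shows the direction of the mismatch:

```python
# Rough arithmetic only; durations are the approximate figures from the thread.
session_min = 60.0      # real-world length of the recording session
file_min = 75.0         # length of user #4's exported file
tagged_rate = 44100     # sample rate written into the WAV header

slowdown = file_min / session_min             # 1.25: the file plays 25% slow
implied_capture_rate = tagged_rate * slowdown # samples/sec actually captured

print(f"slowdown factor:      {slowdown}")                     # 1.25
print(f"implied capture rate: {implied_capture_rate:.0f} Hz")  # 55125 Hz
```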
Everybody records in Audacity, I cut it in ProTools.
Also, I frequent a ton of audio forums, and this is the first instance I felt like a moderator had an attitude responding to a basic troubleshooting question.
I’m just trying to get some help from the Audacity community.
Thank you for your input so far.
The audio file for user #4 sounds perfectly normal.
I'm familiar with this person's voice, pitch, cadence, etc., and nothing about it strikes me as odd.
I verified that the user does not have Sound Activated Recording enabled.
I’m curious if his computer just gets bogged down running Audacity and Skype, causing the recording to be… longer than reality?
If the computer was recording 25% faster or slower than normal, it would be very obvious when listening to the recording.
If #4 was using Sound Activated Recording, their recording would be shorter.
Are you certain that #4 is too long, and it’s not #1, 2, and 3 that are too short?
Looking at your "visual aid", the duration of the "sounds" appears to be the same in both the upper and lower tracks, but some of the "silences" look longer in the lower track. Perhaps someone is playing a practical joke on you.