Why do the audio and video not match?

Please read all the way through before commenting.

I do podcasting, and although my microphone is good, it still picks up sounds that I would rather not have, such as sharp breathing and background noise. I know how to remove those noises.

The issue comes with recording, exporting, and importing.

I tried to record audio directly in Audacity to make editing easy. However, once I exported the audio file (no cuts, just normalization, low- and high-pass filters, and compression), the audio and video files did not match up no matter how much I lined them up. I clapped at the beginning of the recording so it would be easier for me to line them up.

But that didn’t work, so I decided to use the in-video audio.

I exported the audio straight from my editing software (again, no cuts; I deleted the previous one that had cuts and restarted my project). I edited the audio with the same normalization, filters, and compression in Audacity, and then imported it back into my video editing software (I use the CapCut computer app, if that helps). Again, the audio did not match up.

It was the same length, the beginning matched up perfectly, and the end looked like it matched up (there were no bits hanging off; I don't know how else to describe it), but the audio seemed to slow down as it went along. It was obvious to me at the one-minute mark, but it was passable until it reached the three-minute mark. By then, my mouth clearly did not match the words being said.

Why is this? Is the compression messing with the length or speed of my audio files? I know it's not the exporting process (at least not from CapCut to Audacity; maybe from Audacity to CapCut, I'm not sure). I've tried CapCut's built-in audio/video sync feature, but it never works regardless of what I do. I've also tried to match the audio up manually, which I've done successfully before with both Audacity and CapCut, but that was years ago and I don't remember what settings or filters I used, or how I lined it up (I think I ended up splicing and moving the audio around a lot, which I didn't want to do again).

The only thing that works to sync audio up is if I edit the video, make the necessary cuts, export the audio (with the cuts), and then import the edited audio back into CapCut. But that makes a video with a lot of overlays harder to edit since I have to manually go in and select the clips that I need to silence.

Any tips or answers? I want my podcast audio to be elevated, but I can’t seem to figure out how to line up the audio/video.

Audacity's default sample rate is 44100 Hz, whereas video audio is typically 48000 Hz.
Changing Audacity's default project rate to 48000 Hz is a good idea whenever you edit and export audio to and from video.
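To see why a sample-rate mismatch produces exactly the gradual drift described above, here is a rough back-of-the-envelope sketch in Python. The rates and the three-minute mark come from this thread; the function name is just for illustration, and real-world drift can be subtler depending on how the exporter resamples:

```python
# Estimate audio/video drift caused by a sample-rate mismatch.
# If audio captured at one rate ends up interpreted at another,
# its playback duration scales by the ratio of the two rates.

def drift_seconds(true_rate, assumed_rate, minutes):
    """Seconds of audio/video offset after `minutes` of playback."""
    actual = minutes * 60
    perceived = actual * true_rate / assumed_rate
    return abs(perceived - actual)

# 44.1 kHz audio interpreted at 48 kHz: after 3 minutes the tracks
# drift apart noticeably, which matches the symptom described above.
print(round(drift_seconds(44100, 48000, 3), 1))  # → 14.6
```

The key point is that the offset grows linearly with runtime, which is why the sync looks fine at the clap and gets steadily worse toward the three-minute mark.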

If you open the video file in Audacity, sync should be maintained when you export it back.
[* You will need to install FFmpeg for Audacity to extract the audio from a video file.]


I export the audio using CapCut's own settings: I click "Export", then "Export Audio", then import the MP3 file into Audacity. Then I export from Audacity as an MP3 file and import it back into CapCut. I didn't know there were other ways to do this. Thanks! I will look into this!

Don't use MP3 when editing: it's a lossy format. Use WAV format: it's lossless.
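If you switch to WAV, you can also verify the file's sample rate before importing it into the video editor, using nothing but Python's standard-library `wave` module. This sketch writes a one-second 48 kHz silent WAV as a stand-in for a real export (the filename is a placeholder), then reads its header back:

```python
import struct
import wave

RATE = 48000  # the typical video-audio rate mentioned above

# Write a one-second, mono, 16-bit silent WAV as a stand-in file.
with wave.open("check.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)                           # 16-bit samples
    wf.setframerate(RATE)
    wf.writeframes(struct.pack("<h", 0) * RATE)  # 1 s of silence

# Read the header back: this is the check worth doing on a real
# export before dropping it into the video editor.
with wave.open("check.wav", "rb") as wf:
    rate = wf.getframerate()
    duration = wf.getnframes() / rate

print(rate, round(duration, 2))  # → 48000 1.0
```

A quick header check like this catches a 44100/48000 mismatch before it ever reaches the timeline.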


Ah, okay! I did not know this!

MP3 gets its convenient, tiny sound files by rearranging musical tones and throwing some of them away. And you can't stop it.

Koz
