I was part of an all-volunteer team recording a performance at our church, and unfortunately problems with the equipment are threatening to make the audio recording unusable (see the thread “Fixing a Tinny Recording”). The back-up plan is to extract the audio from the 3 video cameras we had running and use that. The challenges there are (1) none of the 3 video cameras were recording the entire time, (2) none of the 3 are professional cameras, and (3) all 3 were just using their cheap built-in microphones.
We had two video cameras in the balcony at the back of the church, and the third was moving around on the main floor along the sides of the orchestra. I’m thinking the camera that was close to the orchestra is going to be “off balance” (heavy on whichever instruments it was close to at the time), so my current thought is to take the audio from the other two video cameras and splice them together.
So I’m thinking step one is to make the audio from those two video cameras sound about the same, so when I splice them together it’s not so noticeable. I’m thinking to plot the spectra and then use the equalizer to make the spectra look as similar as possible, and then normalize so the volume matches. Does that sound like a reasonable plan?
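(In case it helps anyone picture the matching step, here is a rough numerical sketch of the idea using numpy. The two “recordings” are synthetic stand-ins, the 32-band split and the simple frequency-domain gain are arbitrary choices of mine, not what Audacity’s Equalizer actually does.)

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 48000

# Two stand-in "recordings": the same source, but camera B's copy is
# run through a crude high-cut so its spectrum differs from A's.
src = rng.standard_normal(sr * 2)
cam_a = src.copy()
cam_b = np.convolve(src, np.ones(8) / 8, mode="same")  # dulls the highs

def band_spectrum(x, n_bands=32):
    """Average magnitude spectrum in n_bands equal-width frequency bands."""
    mag = np.abs(np.fft.rfft(x))
    return np.array([b.mean() for b in np.array_split(mag, n_bands)])

spec_a = band_spectrum(cam_a)
spec_b = band_spectrum(cam_b)

# Per-band gain that pushes B's spectrum toward A's.
gain = spec_a / np.maximum(spec_b, 1e-12)

# Apply the gains in the frequency domain (a very crude graphic EQ).
fft_b = np.fft.rfft(cam_b)
per_bin = np.concatenate(
    [np.full(len(b), g) for b, g in zip(np.array_split(fft_b, 32), gain)]
)
cam_b_eq = np.fft.irfft(fft_b * per_bin, n=len(cam_b))

# After EQ, B's band spectrum sits much closer to A's.
before = np.abs(spec_b - spec_a).sum()
after = np.abs(band_spectrum(cam_b_eq) - spec_a).sum()
```

In practice you would of course do this by ear with Audacity’s Equalization effect, using Plot Spectrum only as a guide.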
Ah, nothing can be easy, though I guess what fun would it be if it was easy …
Video camera A saves its videos in MOV format. I can import that directly into Audacity; it shows up as 48,000 Hz 16-bit PCM, and I convert stereo to mono since there was only a single microphone anyway.
Video camera B saves its videos in MTS (AVCHD) format. Audacity can’t import that, so I use Pinnacle Studio to open the MTS file and export the audio as a WAV file. The WAV file is sampled at 44,100 Hz, so I have to open it in a new Audacity window, resample to 48,000 Hz, and convert stereo to mono before importing into my main Audacity project.
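(For anyone curious what the two conversions actually do, here is a toy numpy sketch. The linear-interpolation resampler is deliberately crude, real resamplers like Audacity’s band-limit the signal properly; this just shows the sample-count bookkeeping, and the function name is my own invention.)

```python
import numpy as np

def resample_linear(x, sr_in, sr_out):
    """Crude sample-rate conversion by linear interpolation.
    (Real resamplers filter first; this only shows the bookkeeping.)"""
    n_out = int(round(len(x) / sr_in * sr_out))
    t_in = np.arange(len(x)) / sr_in
    t_out = np.arange(n_out) / sr_out
    return np.interp(t_out, t_in, x)

# One second of 44.1 kHz audio becomes 48,000 samples at 48 kHz.
x44 = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
x48 = resample_linear(x44, 44100, 48000)

# Stereo -> mono is just the average of the two channels.
stereo = np.stack([x48, x48], axis=1)
mono = stereo.mean(axis=1)
```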
I can now lay the audio side-by-side and get everything to line up (see attached “5track.png”). Tracks 1 and 2 are from video camera A, tracks 3-5 are from video camera B.
First weird thing - even when I normalize a track (setting maximum amplitude to -1.0 dB) it doesn’t make any visible change in the audio tracks, and the playback doesn’t sound any louder. Do I need to set maximum amplitude to some value besides -1.0?
Then, you can see that for nearly the entire performance, I have audio from both sources. So now I get to choose which source I will rely on, and which source will be the backup (for when the primary source is missing). I’ve attached spectra from the two sources; you can see that camera A (confusingly on the right) truncates at approximately 13 kHz while camera B (on the left) goes all the way to 19 kHz. However, the spikes up there in the upper frequencies (15 kHz and up) make me wonder if camera B just caught a lot of useless noise.
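(One way to put a number on where each camera’s response dies off, instead of eyeballing Plot Spectrum, is to find the highest frequency whose magnitude stays within some threshold of the peak. This is a numpy sketch on synthetic test tones, the 40 dB threshold and the helper name are my own arbitrary choices.)

```python
import numpy as np

def rolloff_hz(x, sr, drop_db=40.0):
    """Highest frequency whose magnitude stays within drop_db of the peak."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    floor = mag.max() * 10 ** (-drop_db / 20)
    return freqs[np.nonzero(mag >= floor)[0][-1]]

sr = 48000
t = np.arange(sr) / sr

# Stand-in for camera A: content only up to 12.5 kHz.
cam_a_like = sum(np.sin(2 * np.pi * f * t) for f in (440, 5000, 12500))
# Stand-in for camera B: the same, plus something up at 19 kHz.
cam_b_like = cam_a_like + 0.5 * np.sin(2 * np.pi * 19000 * t)

a_rolloff = rolloff_hz(cam_a_like, sr)  # ~12.5 kHz
b_rolloff = rolloff_hz(cam_b_like, sr)  # ~19 kHz
```

Whether the energy above 15 kHz on camera B is music or just hiss, this measure can’t tell you - only listening can.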
You don’t need to do that bit. Audacity can handle a mixture of sample rates.
Set the “Project Rate” (bottom left corner of the main Audacity window) to the sample rate that you want the final format to be. Set this after you have imported the first audio track.
“Normalize” sets the “highest peak value” to the specified level. This does not always reflect the “loudness” of the track. “Loudness” is a subjective thing, not an absolute measure, and you want the tracks to “sound” (subjectively) the same loudness. Peak level is only a rough indication; you can’t rely on it to tell you whether it will sound right.
I’d suggest that you initially normalize the tracks to -6 dB (and turn up the volume a bit on your speakers). This will allow you a bit of space to play with so that you can adjust the levels of individual audio segments up or down as required. When you have finished equalizing and balancing the volume (loudness), then you can mix the whole thing down to one track (Tracks menu > Mix and Render), then Normalize the entire thing to -1 dB (and turn your speakers back down).
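(The gain-staging in that workflow - normalize each track to -6 dB for headroom, mix, then normalize the result to -1 dB - can be sketched like this in numpy. The mix here is a plain sum of two synthetic tones, which is an assumption for illustration, not a claim about how Mix and Render weights tracks.)

```python
import numpy as np

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

def normalize_peak(x, db):
    return x * (10 ** (db / 20) / np.max(np.abs(x)))

t = np.arange(48000) / 48000
track1 = normalize_peak(np.sin(2 * np.pi * 440 * t), -6.0)
track2 = normalize_peak(np.sin(2 * np.pi * 660 * t), -6.0)

mix = track1 + track2               # summing can push well above -6 dB...
final = normalize_peak(mix, -1.0)   # ...so normalize once at the very end
```

The -6 dB starting point is what keeps `mix` from clipping before that final normalize.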
Listen carefully to both of them. If possible, listen on good loudspeakers and listen on headphones. Headphones can often reveal more high frequency detail than loudspeakers, but loudspeakers will often give a better overall impression of the sound. Use whichever sounds the “best” as the primary source.
Make a backup of the original WAV format files before you start doing anything else.
If necessary, use Equalization on the “primary source” to get it sounding as good as possible, then do nothing more to it for at least 4 hours - ears need to rest.
Come back to it and compare your Eq’d version with the original - does it still sound better than the original? If not, save the Eq’d version somewhere (in case you change your mind) and go back to the original and have another go.
When you have an Eq’d version of the “primary source” that you are happy with you can start on the other recordings to make them sound as close as possible to the primary.
“Plot Spectrum” is only a rough guide - the important thing is what it sounds like. Always work with the “sound” - graphs and measurements can only be a guide - “hearing” is a lot more sophisticated than measurement.
Make lots of backups as you go.
When an Audacity Project is closed all of the Undo History is deleted. The only way “back” to a previous state after closing a project is if you have an earlier backup.
Murphy’s Law states: Programs become more unstable as the amount of work that could be lost increases.