Workflow Problem - Please help me correct.

Hi,

I’m working on my first video program, consisting of a dozen or so sections, and I’m using Audacity as my audio editor. The results are sound files with noticeably different loudness, which suggests my workflow is flawed or incomplete. Would anyone be so kind as to critique it?

My Recent Audio Workflow:
Import original audio files into video editor from field recorder for rough edit.
Export clipped and trimmed audio file from video editor in .wav format.
Import .wav file into a distinct Audacity project file for improvement.
In Audacity:
Correct mistakes (filter low-frequency breath noise; delete catch breaths, lip smacks, ums and ahs, stutters, etc.)
Apply further EQ for enhanced tonal quality.
Compress (used the default compressor; set threshold at -6 dB from the loudest peaks with a 3:1 ratio, compress based on peaks; used defaults for the other settings: noise floor -40 dB, attack time 0.1 sec, decay time 1.0 sec).
Normalize to -1.0 dB.
Save Project.
Export .wav file to video editor
Rinse and repeat for next section
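As an aside, for anyone wanting to sanity-check those compressor numbers: a static compression curve with that threshold and ratio can be sketched in a few lines of Python (the function name is mine, and a real compressor applies attack/decay smoothing on top of this, so treat it only as the level math):

```python
def compress_db(level_db, threshold_db=-6.0, ratio=3.0):
    """Static compression curve: each dB above the threshold
    comes out as 1/ratio dB above it; below it, nothing changes."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# With a -6 dB threshold and 3:1 ratio, a 0 dB peak comes out at -4 dB,
# while anything at or below -6 dB passes through untouched.
print(compress_db(0.0))    # -4.0
print(compress_db(-12.0))  # -12.0
```

Note that because the threshold was set relative to each file’s own loudest peak, the same settings can land in different places on different files.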

Problem is:
Some .wav masters sound louder than others, when they need to sound consistent to the viewer.

Questions for you:
So, where did I go wrong?
What should I have done differently?
What should I do now to recover?

My thoughts about where I went wrong:
For the threshold recommendation, I believe I followed something I read in the manual or on this forum. The problem may have been that it did not take the existence of multiple files into account. Consequently, I worked with each file independently and applied a different threshold to each. I thought normalizing to the same dB would have taken care of this but, upon further research, I see that loudness is different from volume and is a bit subjective and ambiguous.

Looking back, I think the main place where I went wrong was treating each section independently as a unique project. Would it not have been more appropriate to import all the .wav files output by the video editor onto separate tracks in a single Audacity project? That way I could have done all my corrections track by track but applied the compression and normalization settings across all tracks at once.

Would you agree?

My thoughts about recovery:
Pull each of the master .wav files back onto different tracks in a new Audacity project.
Then:
Adjust volume track by track, by ear, using the “Amplify” effect? or
Compress again? (I’ve just discovered and installed the LADSPA SC4 and Chris’s Compressor plugins, both of which seem to have an experienced fan base. Is one to be preferred?) or
Try the “Replay Gain” effect?

When happy, export each track separately again as .wav files for import back into their respective video projects.

What do you think? Can you share any advice?

thanks in advance,

BillyB :confused:

The basic problem is that normalization works on peaks, but the perception of loudness depends on the average volume (and the frequency content).
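To make that concrete, here is a small Python sketch (using NumPy; the clip contents are made up for illustration). Both clips peak at exactly full scale, so peak normalization treats them identically, yet their average (RMS) levels — the thing that tracks perceived loudness — are more than 40 dB apart:

```python
import numpy as np

# One second of audio at 48000 Hz, two ways:
sparse = np.zeros(48000)
sparse[100] = 1.0                 # a single full-scale click in silence

t = np.arange(48000) / 48000
dense = np.sin(2 * np.pi * 220 * t)
dense /= np.abs(dense).max()      # scale so its peak is also exactly 1.0

def rms_db(x):
    """Average (RMS) level in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Identical peaks...
print(np.abs(sparse).max(), np.abs(dense).max())   # 1.0 1.0
# ...but very different average levels:
print(round(rms_db(sparse), 1))   # about -46.8
print(round(rms_db(dense), 1))    # about -3.0
```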

The general procedure would be:

  1. Normalize everything. This gives you a starting point.

  2. Choose the quietest sounding track as your reference.

  3. Adjust the other tracks down as necessary (by ear) to match your reference track.


    Alternatively, you can use compression and/or leveling to boost-up the quiet sounding tracks.
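The three steps above could be roughly sketched in Python like this, using RMS as a crude stand-in for perceived loudness (it ignores frequency content, so your ears still get the final say; the function names are mine):

```python
import numpy as np

def rms_db(x):
    """Average (RMS) level in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def peak_normalize(x, peak_db=-1.0):
    """Step 1: scale so the loudest sample sits at peak_db dBFS."""
    return x * (10 ** (peak_db / 20) / np.abs(x).max())

def match_to_quietest(tracks):
    """Steps 2-3: normalize all tracks, take the quietest-sounding
    (lowest-RMS) one as the reference, and turn the others DOWN
    to its average level."""
    tracks = [peak_normalize(t) for t in tracks]
    ref = min(rms_db(t) for t in tracks)
    return [t * 10 ** ((ref - rms_db(t)) / 20) for t in tracks]
```

Turning tracks down rather than up is the safe direction: boosting the quiet tracks to match a louder reference could push their peaks past full scale.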

ReplayGain may also work, but you may end up reducing the volume of ALL tracks, even the quietest-sounding ones. ReplayGain is nice when you have a huge library of tracks and no reference track.

Compress (used the default compressor; set threshold at -6 dB from the loudest peaks with a 3:1 ratio, compress based on peaks; used defaults for the other settings: noise floor -40 dB, attack time 0.1 sec,

If that’s improving the sound, fine. But in general, compression shouldn’t be used blindly. It should be used with particular settings to solve a particular problem or to get a particular effect with a particular source. (Same with EQ or any other effect that affects the character of the sound.)

You have to do this by listening. As much as we would like it, there’s no button you can push to make it perfect. The blue waves and the bouncing sound meters will only get you in the ballpark. They won’t produce a finished, balanced show.

treating each section independently as a unique project.

Right. You can do that if your next step is to hand all the clips off to the real editor who balances them into a final show for delivery. You missed the balancing step. Many Editor Humans will insist that you not do anything at all to the clips because it’s possible the editor will have to undo something you did thinking you were helping.

So: after you make safety copies of everything (no producer wants to hear that you damaged an original actor or performer shoot; that’s grounds for the firing squad in Hollywood), import them all into one Audacity project and, listening to everything critically on good speakers or good headphones, apply effects, balance volume, equalization, etc., etc., until you get a completed show. Then Export as WAV for delivery, and then Save a Project in case they want to change anything.

A note: Projects do not save UNDO.

compression shouldn’t be used blindly.

As DVDdoug says above, you should be ready at any time to explain in detail what you’re doing. Nobody is going to give you points for using the maximum number of effects.

I’m working on my first video program

You should be working at 48000, 16-bit, Stereo. Not 44100. Most video editors are 48000 native and “put up” with 44100 if they have to. They will also “put up” with Mono tracks, but they really like Stereo. Never do production in MP3.

Koz

Thanks for your reply Doug!

Succinctly put. Got it Doug. Thanks!

Good basic guidelines. Thanks!


Might this then not be easily corrected with the “Amplify” effect applied to all tracks?

I have 13 tracks of narration of 8 minutes each on average. Would you call that a huge library?
If I were to follow your first recommendation above and adjust manually, I suppose my quietest track would become my reference track, correct?
If I were to try to make the quiet tracks louder to match a louder track, that would become my reference track, right?
Is this a good time for ReplayGain? (Not sure if my circumstances match your recommendation for when to use it or not.)

This is a female voice recorded with a wired Audio-Technica cardioid condenser lavalier mic. I thought the sound was a bit thin. That is possibly due in part to the low gain settings used during recording, in a failed attempt to address what was only later identified as plosives caused by poor mic placement, as Koz pointed out in an earlier post. After applying a “High Pass” filter to resolve the breath noise as best I could, I used compression to add a fuller body and decrease the dynamic range a bit. Do those sound like appropriate reasons? Should I have been more specific yet in determining the compression settings?

Sounds like sage advice. It does help color my approach. Thanks!

Except for the talent, I’m a one man show but the advice is still relevant. The key to it is that I did miss the “balancing” step as you point out.

I have copies, but it’s probably worth hearing this to reinforce their importance.

Thanks for stating what might just be assumed by anyone with experience! Would you stack the tracks on top of each other, soloing those you want to listen to, or would you sequence them on the timeline?

Good to know. Thanks!

Koz, please see my reply to DVDdoug on this point and let me know if there is anything more I should consider or think about differently regarding compression.

The video project is done in Camtasia Studio because it includes screen capture. The Camtasia editor only handles and publishes up to 44100, and they recommend against mixing in 48000 recordings. I’m told the distinction is not perceptible to most people. If you feel differently, I might need to seriously reconsider the project software next time.

Got that.

Thanks for all your help! That high pass filter and mic placement tip you gave me on a previous post was a life saver.

BillyB