Hello - I'm a new Audacity user and this is my first post. I've read posts and watched tutorials and have a few questions about setting up my podcast editing process.
Situation: my partner records each podcast section as a separate .aup file using Audacity on his PC. He then sends all of these files (about 10 tracks) to me to edit using Audacity on my PC. My current process is: I open each .aup file and export it as a WAV file (so that I can combine all the tracks in a single project window, because I can't find a way to do that with .aup files). Then I open a new project window, import all the WAV files, edit out the unwanted segments, and put them in sequence using the Time Shift tool. Then come other edits to fix the sound, etc.
Questions:
1. Is the above process the best way to work with multiple tracks to create a single finished track?
2. Would it be better to first combine them all into a single track instead, using Align?
3. Next I need to sync the volumes of all the tracks - I've read about Normalize vs Amplify vs Compressor for this. For narration (vs music), which is the best tool to use? Or something else?
4. Should question 3's step come last? On individual tracks, or after they're combined? Before or after adding the music track?
5. I don't want to make this overly complicated for myself, but are there any other basic steps that I'm missing? (I'm still reading up on clipping and plosives - what a great word!)
Thanks for the suggestion - I just figured out how to install this plug-in and am now off to learn more about how to use it. I’m guessing that all the tracks should be combined into a single track before I use this, rather than leaving them in separate tracks in the project window?
I’m guessing that all the tracks should be combined into a single track before I use this,
No… Normalization makes ONE adjustment to the entire file, so the dynamics are not affected (loud parts remain relatively loud and quiet parts remain relatively quiet).
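To illustrate what "one adjustment" means, here's a minimal Python/NumPy sketch of the RMS-normalize idea (this is just the general concept, not Audacity's actual plug-in code, and the -20 dB target is an arbitrary example):

```python
import numpy as np

def rms_normalize(samples: np.ndarray, target_db: float = -20.0) -> np.ndarray:
    """Scale the whole recording by ONE gain factor so its RMS level
    lands at target_db (dBFS)."""
    rms = np.sqrt(np.mean(samples ** 2))   # current RMS level of the file
    gain = 10 ** (target_db / 20) / rms    # a single gain for the whole file
    # Every sample gets the same multiplier, so loud parts stay
    # relatively loud and quiet parts stay relatively quiet.
    return samples * gain
```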
I’m still a little confused, being new to this. My understanding is that RMS Normalize will help make the voice volumes relatively consistent, which is what I need, since the separate tracks that I receive (which will become a single finished project) tend to have varying levels of loudness. I assumed I’d have to combine all of the tracks into a single track (Tracks > Align Tracks > End to End) so that the effect could be applied to all of them relative to each other. Are you saying to leave all 10 tracks as separate tracks in the project window and apply RMS Normalize to each of them one by one? Will that make the volume roughly the same across all of them? How does the effect know what’s on the other tracks, or does that not matter? And then combine them into a single track after this?
Also, is there any reason I need to combine all tracks into a single track as above, other than that it seems simpler to work with one track rather than having so many different ones open?
Love the visual learning - makes it so easy for me to see what the steps are. And it also reminds me how much of this I don’t understand because I realize that my last question wasn’t even accurate. Thank you!!
OK, I do have another question about RMS Normalize. Once I’ve edited all of the narration tracks of the podcast, is it smarter/better to apply RMS Normalize at this point, or is it better to apply it after I add in the music track? In other words, should this effect only be used once, and at the end? (I did try to find the answer to this, but there is so much terminology that I can’t even begin to understand at this stage, and I’m trying to keep this editing process as simple and as repeatable as possible for now while I’m learning.) Thank you for so generously sharing your time and wisdom, forum folks!!
If the narration & music overlap, you should consider auto-ducking, which automatically turns down the music volume when the person speaks; this increases intelligibility & maintains a more constant overall volume.
Steve made a plug-in called dynamic-mirror which can auto-duck. I find it easier to use than Audacity’s native Auto Duck.
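For anyone curious what ducking actually does, here is a rough Python/NumPy sketch of the idea (dynamic-mirror itself is a Nyquist plug-in, and the threshold, gain, and window values below are made-up illustrations; a real ducker also smooths the gain changes with attack/release times):

```python
import numpy as np

def duck_music(voice: np.ndarray, music: np.ndarray, sr: int,
               threshold: float = 0.02, duck_gain: float = 0.3,
               window_s: float = 0.05) -> np.ndarray:
    """Turn the music down wherever the voice track is active.
    Assumes voice and music are the same length and sample rate."""
    win = max(1, int(sr * window_s))
    # Short-window RMS envelope of the voice = a crude voice-activity detector
    envelope = np.sqrt(np.convolve(voice ** 2, np.ones(win) / win, mode="same"))
    # Full volume while the voice is silent, duck_gain while it is speaking
    gain = np.where(envelope > threshold, duck_gain, 1.0)
    return music * gain
```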
Do I first apply RMS Normalize just to the narrated track, and are you then saying to bring in the music track and use dynamic-mirror? I’m trying to understand the order of the steps you’re suggesting. Also, I currently use the Envelope tool for the music because it’s only at the beginning and end of the podcast. Is the dynamic-mirror plug-in a better choice? Thanks!
My suggestion is to apply RMS Normalize to all the tracks, music* or voice; they will then be at approximately the same volume.
If two “RMS Normalized” tracks are playing at the same time, their combined volume will exceed the RMS level you’ve set.
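To put a rough number on that: two uncorrelated tracks each at -20 dB RMS mix to about -17 dB, roughly 3 dB hotter, because their powers add. A quick toy check with noise in Python/NumPy (assuming the two signals are uncorrelated):

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_at(target_db, n=48000):
    """One second of noise scaled to the given RMS level (dBFS)."""
    x = rng.standard_normal(n)
    return x * (10 ** (target_db / 20) / np.sqrt(np.mean(x ** 2)))

a, b = noise_at(-20.0), noise_at(-20.0)          # two tracks, each -20 dB RMS
mix_db = 20 * np.log10(np.sqrt(np.mean((a + b) ** 2)))
print(round(mix_db, 1))                          # ~ -17.0: the mix is ~3 dB hotter
```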
If someone is talking over the music, ducking the music under the speech makes the voice more intelligible.
(You can use the Envelope tool & dynamic-mirror together if you want.)
If the music and voice in your project do not overlap, then dynamic-mirror won’t be of any help.