Hi All:
I’ve gone through the “mix music and voice for podcasting” tutorial to record a short standard intro for my podcast. It’s well presented, but I have a few questions about the workflow for compression, gain adjustment, and normalisation. The workflow in the tutorial is:
1. Record narration.
2a. Edit narration.
2b. Use the compressor on the narration.
3. Import music.
4. Time-shift the tracks.
5. Use the Envelope Tool or Auto Duck to reduce the music under the narration.
6. Fade the music in before and out after the narration.
7. Adjust gain on both tracks if clipping occurs.
8. Save and export. Optional: Mix and Render, then Normalise.
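To make sure I understand steps 7 and 8, here is a rough sketch of my mental model (my own toy Python, nothing from Audacity, and made-up sample values) of what “adjust gain if clipping occurs” and “Normalise” amount to on raw samples:

```python
# Toy model: samples are floats in [-1.0, 1.0]; anything at or over
# full scale (1.0) counts as clipped.

def has_clipping(samples, limit=1.0):
    """Return True if any sample reaches or exceeds full scale."""
    return any(abs(s) >= limit for s in samples)

def normalise(samples, target_peak=0.891):  # 0.891 is roughly -1 dBFS
    """Scale the whole track so its loudest sample lands on target_peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)
    gain = target_peak / peak
    return [s * gain for s in samples]

mix = [0.2, -0.5, 1.1, 0.7]        # pretend mix with one clipped sample
print(has_clipping(mix))            # clipping detected
fixed = normalise(mix)
print(max(abs(s) for s in fixed))   # peak now sits at the target
```

Is that roughly right, i.e. normalising is just one uniform gain change for the whole track, while compression changes loud and quiet parts by different amounts?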
My questions:

1. In my case the imported music track has some clipping. The tutorial compresses the voice track, but not the imported music track. Why is that?
2. The tutorial has Mix and Render and normalisation as optional. Why would you not do that?
I will prepend the intro to each narrated podcast. The podcast itself will be a straight voice recording. I’m unclear on how compression and normalisation should be used in that case. I can think of two options:
(a) Compress the podcast track only. Prepend the previously compressed and normalised intro. Normalise the rendered mix again.
(b) Compress the podcast track only. Prepend an un-normalised intro. Normalise the rendered mix.
In short, I’m not sure how much compression, gain adjustment, and normalisation (and other processing) should be applied to the separate tracks before they are mixed together, and how much to the final mix.
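To make the question concrete, here is a toy sketch (made-up sample values, peak normalisation only, my own Python, not Audacity) of how I picture options (a) and (b) differing:

```python
def normalise(samples, target=0.891):  # ~ -1 dBFS
    """One uniform gain so the loudest sample lands on target."""
    peak = max(abs(s) for s in samples)
    return [s * target / peak for s in samples] if peak else list(samples)

intro   = [0.3, -0.4, 0.35]   # quiet pretend intro
podcast = [0.6, -0.8, 0.5]    # louder pretend narration

# (a) normalise the intro first, then normalise the rendered mix
mix_a = normalise(normalise(intro) + podcast)
# (b) prepend the un-normalised intro, then normalise the mix
mix_b = normalise(intro + podcast)

# Both mixes end up with the same overall peak, but in (a) the intro
# was boosted on its own first, so its level relative to the podcast
# is different from (b).
print(max(abs(s) for s in mix_a), max(abs(s) for s in mix_b))
```

If that model is right, the two options give the same final peak but a different balance between intro and narration. Which balance is the one I should want?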
Thank you for your time.
Garry