Best Practices for Concert Recordings

I have been recording concerts since the late '90s and have amassed a collection of over 100 live sets from bands, mostly rock, at small clubs around Cleveland. I’ve been using Audacity for years to touch up these recordings before sharing them with the world. My processing of these concerts typically amounts to adjusting the EQ a bit, raising the volume, and calling it a day.

I am not an engineer (just a music enthusiast). I come to you for your thoughts on order of operations and general guidance on features that could be applied to a live concert recording. I’m not talking about removing elements of the recording or adding flashy effects. I just want to make the recording as a whole more listenable in headphones or on my speakers at home.

Can you suggest what you would consider doing to a live recording if one were given to you? What would you evaluate first? What actions are important to take first, second, third, etc.? For example, should I apply EQ first and then amplify the total volume? Some features I’ve been using to this point are:

  • EQ
  • Normalize
  • Compression
  • Amplify
  • Fade
  • Envelope (I’m not familiar with this feature, but have seen it discussed here)

Thanks all for your help! With this discussion I hope to improve the processing of my future recordings and revisit some of my past recordings to improve their presentation!

If any are mono, pseudo-stereo is worth considering.
Voxengo’s Stereo Touch plugin is free … https://youtu.be/5TM8Y19DEWg?t=65


It’s not easy to get a good live recording of a “rock band”. Usually there is too much reverb or room sound… The amount of reverb that sounds great live tends to sound unnatural through a pair of speakers in your living room (or through headphones). And often the sound quality is less than perfect… If you tap into the PA mixer, you miss the sound coming directly from the stage. Pros usually multi-track (similar to studio recordings) with some extra microphones/tracks for the room & audience.

Normalize last. EQ can boost the peaks into potential clipping (distortion), but Audacity uses floating-point internally, which has no upper limit, so if you Normalize after adjusting (but before exporting), it will bring the volume back down to a safe level.

Otherwise, Normalize will boost the volume until the peaks are at (or near) 0dB — i.e. it “maximizes” the volume. If the peaks are already at 0dB, Normalize won’t do anything (and if you run it twice, it won’t do anything the 2nd time).

When you run Amplify, Audacity has pre-scanned the file, and Amplify defaults to whatever gain or attenuation is needed for 0dB peaks. If you accept the default, it’s the same as normalizing (except Audacity’s Normalize defaults to -1dB and has a couple of additional features).

Amplify and Normalize are both simple linear volume adjustments, like adjusting the volume control. (The last one “sticks”.)
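To make the “simple linear adjustment” point concrete, here’s a rough numpy sketch of what peak normalization / Amplify-at-the-default do to the samples (this is an illustration, not Audacity’s actual code — it assumes floating-point audio in the -1.0…+1.0 range):

```python
import numpy as np

def peak_normalize(x, target_db=-1.0):
    """Scale the whole signal so its largest peak lands at target_db dBFS.

    A single linear gain applied to every sample -- like Amplify with the
    suggested default. Because each pass just rescales, only the last
    adjustment "sticks", and repeating it changes nothing.
    """
    peak = np.max(np.abs(x))
    if peak == 0:
        return x                       # silence: nothing to scale
    target = 10 ** (target_db / 20)    # e.g. -1 dBFS ~= 0.891 linear
    return x * (target / peak)

signal = np.array([0.1, -0.5, 0.25])
out = peak_normalize(signal, target_db=0.0)
# the largest peak is now exactly 1.0 (0 dBFS);
# running peak_normalize on `out` again leaves it unchanged
```

Running it a second time is a no-op, which matches the “won’t do anything the 2nd time” behavior described above.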

Fading is up to you.

The Envelope tool allows you to fade-up or fade-down certain parts. The “trick” is to leave the end-points unchanged so the volume doesn’t suddenly jump up or down. It works best on short selections… Or rather, it gets difficult to get the settings right on longer selections.
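In Audacity the envelope is drawn by hand, but the underlying idea is just a gain curve multiplied into the audio. A hedged numpy sketch (illustration only), showing the end-point trick — gain stays at 1.0 at both ends so the volume doesn’t jump:

```python
import numpy as np

def apply_envelope(x, points):
    """Multiply the audio by a piecewise-linear gain curve.

    points: list of (sample_index, gain) pairs. Keeping gain = 1.0 at the
    first and last points means there's no sudden level jump where the
    envelope begins or ends.
    """
    idx, gains = zip(*points)
    curve = np.interp(np.arange(len(x)), idx, gains)
    return x * curve

x = np.ones(100)
# dip the middle to half volume; both end-points stay at full volume
y = apply_envelope(x, [(0, 1.0), (50, 0.5), (99, 1.0)])
```

The same shape scaled up is why long selections get fiddly: more control points to place, and the ramps between them get long.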

Dynamic compression (unrelated to file compression like MP3) reduces the dynamic range or “dynamic expression” by making the loud parts quieter and/or the quiet parts louder.

Limiting is a kind of fast compression that pushes down the peaks. Automatic volume control is a kind of slow compression.

You may not want compression because it reduces the musical dynamic expression, and it brings up the background noise level.

In practice, compression/limiting is normally used to push down the loud parts/peaks. Then make-up gain is used to bring up the overall average volume/loudness.
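The push-down-then-make-up workflow can be sketched in a few lines of numpy. This is a deliberately crude zero-attack hard limiter for illustration — real limiters (including Audacity’s) use look-ahead and smoothed gain reduction to avoid distortion:

```python
import numpy as np

def limit_and_makeup(x, threshold_db=-6.0, makeup_db=4.0):
    """Hard-limit samples above the threshold, then apply make-up gain.

    First the peaks are pushed down to the threshold; then the whole
    signal (quiet parts included) is raised by the make-up gain, so the
    overall average loudness goes up without the peaks exceeding 0 dBFS.
    """
    threshold = 10 ** (threshold_db / 20)       # -6 dBFS ~= 0.501 linear
    limited = np.clip(x, -threshold, threshold) # push down the peaks
    return limited * 10 ** (makeup_db / 20)     # bring up the average

x = np.array([0.05, 0.9, -0.95, 0.2])
y = limit_and_makeup(x)
# the quiet samples come out louder than before, while the former
# peaks sit safely below full scale
```

Note the trade-off mentioned above is visible here: the quiet sample (and any background noise with it) is raised along with everything else.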

If your recording seems too quiet compared to your other music, compression/limiting is the “fix”. Limiting is easier to use, and you are less likely to get unwanted side effects.

All popular commercial recordings have compression & limiting to “win” the Loudness War. (It’s not used as much on classical & jazz recordings.) IMO it’s usually over-done and the constant-loudness music is boring but that’s a matter of taste.

You may also want to do some editing to reduce gaps between songs, “excessive talking”, etc. I don’t do much live recording, but I have lots of concert videos, and I usually make audio copies and do some editing.

You can split the recordings into sections and then re-join with a crossfade to smoothly blend the applause/crowd noise so the edit isn’t noticeable.
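One common way to do that blend is an equal-power crossfade. A hedged numpy sketch (Audacity’s Crossfade Tracks/Clips effects offer this among other curve shapes) — the sine/cosine curves keep the perceived loudness roughly steady through the splice, which is what hides the edit in applause/crowd noise:

```python
import numpy as np

def crossfade(a, b, overlap):
    """Join clip a to clip b, blending the last `overlap` samples of a
    with the first `overlap` samples of b using equal-power curves."""
    t = np.linspace(0, np.pi / 2, overlap)
    fade_out = np.cos(t)   # a fades from 1 down to 0
    fade_in = np.sin(t)    # b fades from 0 up to 1
    blended = a[-overlap:] * fade_out + b[:overlap] * fade_in
    return np.concatenate([a[:-overlap], blended, b[overlap:]])

a = np.ones(1000)
b = np.ones(1000)
joined = crossfade(a, b, overlap=200)
# total length: 1000 + 1000 - 200 = 1800 samples
```

Equal-power curves suit uncorrelated material like crowd noise; for two copies of the same signal, a linear (equal-gain) crossfade avoids a level bump in the middle.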

And, I usually make a full-concert track plus separate tracks for each song. For these I’ll often blend-in some applause from different parts of the recording, and occasionally from a different concert (depending on what I have to work with). I’ll usually fade-in the applause/crowd for 1 or 2 seconds at the beginning and 10-15 seconds of applause fading-out at the end.

P.S.
Loudness Normalization is different from regular peak Normalization. With Loudness Normalization you have to be careful because you can push the peaks into clipping.
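Why clipping can happen: loudness normalization sets the gain from the *average* level, not the peaks. Audacity’s Loudness Normalization actually targets LUFS; the sketch below uses plain RMS as a simplified stand-in, just to show the failure mode — a quiet recording with one big transient needs a large gain to hit the loudness target, and that gain pushes the transient past 0 dBFS:

```python
import numpy as np

def loudness_gain_db(x, target_rms_db=-14.0):
    """Gain (dB) needed to bring the signal's RMS level to the target.

    RMS here is a simplified stand-in for LUFS: the gain is chosen from
    the average level, so peaks are not considered at all.
    """
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    return target_rms_db - rms_db

# a quiet recording (-34 dB-ish RMS) with one big transient at 0.5
x = np.full(48000, 0.02)
x[1000] = 0.5
gain_db = loudness_gain_db(x)             # a large positive gain
peak_after = 0.5 * 10 ** (gain_db / 20)   # where the transient would land
# peak_after comes out well above 1.0 (full scale) -> it would clip;
# limit the peaks first, or check and back off after normalizing
```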

Thanks for the explanations and tips. I’m comfortable with the live recording piece. I’ve got my process/gear/location figured out for the clubs I go to. But I always just “wing it” with the processing afterwards. Not that there is anything wrong with that - I do what sounds good, and undo if I make a bad choice. But knowing a little more about these tools will help me approach my recordings a little more systematically. Thanks again.

I do typically record in Mono. I’ll have a look at that plug-in. Thanks for the suggestion.

Bear in mind you have to create a dual-mono version of your recording for pseudo-stereo plugins to do their thing:
Duplicate your mono track, then join them together as a stereo pair
https://manual.audacityteam.org/man/splitting_and_joining_stereo_tracks.html
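In sample terms, dual-mono is just the identical signal on both channels. A small numpy sketch of the duplicate-and-join step (the plugin then decorrelates the two copies to create the stereo impression):

```python
import numpy as np

def mono_to_dual_stereo(x):
    """Duplicate a mono signal into a 2-channel (L, R) array.

    Equivalent to duplicating the mono track in Audacity and joining the
    pair as stereo: both channels start out sample-identical.
    """
    return np.stack([x, x], axis=0)

mono = np.array([0.1, 0.2, 0.3])
stereo = mono_to_dual_stereo(mono)   # shape (2, 3): left row, right row
```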
