Will a Master volume control ever be added?

I can’t imagine it would be overly difficult and can be extremely useful. Thanks!

You mean in addition to the Playback Volume Slider?

Well yes, of course. That’s just playback volume, for which you can use the Windows Mixer if you care to adjust it (or your external amp, as would likely be the case if you’re using WASAPI or ASIO out). I mean a control for the audio which would actually be processed in the 32-bit floating-point space and exported.

I’m still not sure what you mean.
How does your suggestion differ from the Playback Volume Slider?
How does your suggestion differ from the Amplify / Normalize effects?

The Playback Volume Slider only affects the volume of what you’re listening to within the software - essentially the digital monitoring level - which is why I said it’s no different than adjusting it in the Windows Volume Mixer.

It’s actually no different than using the Amplify effect to apply gain, except that it would apply to all the tracks simultaneously, without first having to mix them down (which essentially makes the relative balance of the tracks and effects unalterable) - applied at what would effectively already be the mastering stage. Otherwise, any time you have to make a change, you have to reprint and renormalize the mixed-down track before soloing and exporting it. Quite uncomfortable, inconvenient, and inefficient.

It would essentially regulate the level shown on the master meter, prior to the signal going in/out of the final bus.

The Amplify effect is applied to all selected tracks.
If you select all tracks (“Ctrl + A” on Windows / Linux) and then use the Amplify effect, all tracks are amplified by the same amount.

If you play the multi-track project and it shows a peak level of -6 dB in the playback meter, then selecting all of the tracks and amplifying by +6 dB will bring the mix level up to 0 dB while retaining the relative levels of each of the tracks.
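To put the arithmetic behind that in concrete terms, here is a minimal Python sketch (illustrative only, not Audacity code; the track peak values are made up) of why applying the same dB gain to every track preserves their relative levels:

```python
def db_to_gain(db):
    """Convert a dB change to a linear amplitude factor."""
    return 10 ** (db / 20.0)

# Hypothetical peak levels (dBFS) of three tracks in a mix
# whose overall playback peak happens to be -6 dB.
track_peaks_db = [-6.0, -10.0, -18.0]

# Amplifying every track by +6 dB multiplies each sample by the
# same factor (~2.0), so the differences between tracks survive.
gain = db_to_gain(6.0)                      # ~1.995
new_peaks_db = [p + 6.0 for p in track_peaks_db]
print(new_peaks_db)                         # [0.0, -4.0, -12.0]
```

The hottest track reaches 0 dB while the others stay 4 dB and 12 dB below it, exactly as before.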

Audacity isn’t a “DAW” and doesn’t have “buses”. There is just one, stereo, playback stream, which may be a mix from multiple tracks.

I understand it doesn’t have buses. I’m just using the terminology so it’s clear what I’m referring to, in the traditional sense of a DAW. Yes, I’m asking for a volume control on the final output stream (prior to it leaving the program).

The problem with using Amplify like that is that you’re physically altering the tracks, and the adjustment wouldn’t always be one-way. Bringing the level back up after lowering it reduces fidelity and raises the noise floor, and it’s inconvenient to do on the fly. It also alters the levels of the tracks if you want to export them separately (also not ideal). I otherwise already use it to some degree for such purposes.

So long as you are working in 32-bit float format (the default), that is not a problem. Assuming that the amount of amplification is less than +/- 100 dB, the theoretical loss is insignificant (much less than the smallest 16-bit value).

Unfortunately, when mixing tracks, it is impossible to determine what the mixed level will be until the tracks are mixed. If you have two tracks, both with peak levels of -6 dB, the level of the mix could be anywhere between 0 dB (full scale) and -infinity (silence).
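A quick Python sketch (illustrative only; the signals are synthetic sine waves, not project audio) shows the two extremes: the same pair of -6 dB tracks can sum to roughly 0 dB when in phase, or cancel to silence when out of phase:

```python
import math

# Two sine "tracks", both peaking at -6 dBFS (amplitude ~0.5).
N = 1000
amp = 10 ** (-6.0 / 20.0)               # ~0.501
t = [2 * math.pi * i / N for i in range(N)]

in_phase  = [amp * math.sin(x) + amp * math.sin(x) for x in t]
out_phase = [amp * math.sin(x) + amp * math.sin(x + math.pi) for x in t]

peak = lambda s: max(abs(v) for v in s)
print(peak(in_phase))    # ~1.0  -> mix peaks at roughly 0 dBFS
print(peak(out_phase))   # ~0.0  -> complete cancellation (silence)
```

Real material falls somewhere in between, which is exactly why the mix level cannot be predicted before mixing.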

So basically you want on-the-fly amplification during export, affecting the exported data but not the project data.
How would you (or Audacity) know how much amplification to apply?

I think I understand the problem that you want to address - achieving a reasonable level in the exported file without messing with the project.
The way I handle this is:

  1. “Ctrl + A” (Select All)
  2. “Tracks menu > Mix > Mix and Render to new track” (if you have the “full” set of shortcuts, the shortcut is “Ctrl+Shift+M”)
  3. Select the new mixed track and apply the Normalize effect.
  4. Solo the “Mix” track
  5. Export
  6. Delete or Undo the “Mix” track.

The steps you laid out are what I already referenced earlier.
There’s no substitute for what I’m asking for. That’s part of the reason I was more technical with the terms earlier - to be precise about the desired function, without the workarounds. The workarounds are what prompted me to come here and ask about it.

PS- The bit depth / floating-point space doesn’t really address the problem of lowering the audio’s resolution by 1 bit when you lower it 6 dB and then bring it back up. That’s not theoretical; that’s 1 bit less. Point is, there are too many cons and inconveniences to doing it this way when a simple master volume control could be created.

But I don’t see how your suggestion could work practically. How would you (or Audacity) know how much to amplify by, without first mixing down to see what the level of the mix is?

Something which has been suggested before, which I think is both practical and useful, would be a “master” volume control that, when adjusted, moves all track Gain sliders by the same amount. Would that satisfy your needs?

If you are working in 32-bit float format (which is the default), then amplifying up and back down, or down and then back up, is lossless. This is why Audacity uses 32-bit float by default - it is incredibly accurate (thousands of times more accurate than 16-bit). Unless you are amplifying by many hundreds of dB, amplifying when in 32-bit float format is harmless.
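A rough pure-Python illustration of the difference (Audacity’s actual processing is optimized C++; the sample value here is arbitrary): a down-then-up gain round trip in floating point is numerically negligible, whereas in 16-bit integer arithmetic the lowest bit really is lost:

```python
sample = 12345                      # an arbitrary 16-bit sample value
x = sample / 32768.0                # the same value as a float

# Float path: -6 dB then +6 dB is (numerically) a round trip.
g = 10 ** (-6.0 / 20.0)
roundtrip = (x * g) / g
print(abs(roundtrip - x))           # ~1e-16: far below audibility

# Integer path: halving truncates the low bit, so doubling
# cannot restore it -> one bit of resolution is genuinely lost.
half = sample // 2                  # 6172 (the odd low bit is gone)
back = half * 2                     # 12344, not 12345
print(sample - back)                # 1
```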

Actually, you might be right about the audio resolution - that is, if Audacity actually converts the tracks to 32-bit floating point, rather than simply mixing them in that space. I was under the impression that 16-bit tracks physically remain 16-bit but are just mixed in a 32-bit space, meaning that if they’re pushed into the red, you can mix them down, use Amplify to bring the gain down, and the details would still be there rather than cut off/clipped. For the 16-bit files to actually occupy the full 32-bit bit depth, wouldn’t the new files have to be stored somewhere in the project rather than just the original files being referenced? Otherwise, when you make a 6 dB adjustment to the waveform, you still go from 16-bit to 15-bit audio resolution.

Regarding having a group slider, where all the tracks can be affected at once, yeah, that would work perfectly fine. It’s the same thing actually. I thought this would actually be harder to implement than just a master fader/slider as I was suggesting. It would literally be the same as the Playback Volume Slider except it would work “pre-fader” rather than “post-fader”.

I actually wanted to suggest the group thing separately, as I found it would be useful in situations for selecting particular sets of tracks which you have set up relative to one another. That would be great!

So, will it be considered? It would certainly be more than appreciated, and I’m sure not only by me.

Not to insinuate anything further, but part of the reason I came on here to ask is because I always just figured it was a basic/integral part of any audio manipulation software.

I’ve logged your interest in this feature. It is a fairly popular request so I hope that it will be implemented in some form, but Audacity only has a very small team of developers and there’s a lot of feature requests, so there’s no guarantee that it will be implemented soon.

It is indeed a very common part of any DAW, but not common in audio editors - then again, it is not common for audio editors to support multiple tracks at all.
Probably the most obvious place for a master volume control would be in the Mixer window (which is where most DAWs have their master volume), but Audacity’s mixer is still rather clunky and could do with a lot of work.

There’s an important distinction:
“Tracks” are not “files”. If a 16-bit WAV file is imported, then by default the data is copied into Audacity to create a 32-bit float format track. However, if the “Quality” settings in Preferences are set to 16-bit as the default format, then a 16-bit WAV file will be imported (copied) into a 16-bit track. The “track” format is shown in the panel on the left end of the track.

If the “track” is 16-bit, then yes you will quickly lose bits when processing (which is why 32-bit is the default).

I see. Okay, thank you! We both hope the same thing then XD. Perhaps you’re right that it’s not actually common specifically in most audio editors. The other thing desperately needed/missing is any sort of crossfading capability :confused:

I understand the distinction between the file and the track. Just based on how Audacity works, I figured the tracks are directly being sourced from the files. Here’s what I’m confused about then, from what you’re saying, there would only be the loss if the files were exported back into 16-bit, so my question is, if the waveform gets upscaled real-time (or upon importing) to 32-bit floating point, where is that stored? In RAM or some sort of temporary cache in a system folder?

Also, if that’s how it’s done, wouldn’t 0 dBFS still be the threshold beyond which the waveform would clip and couldn’t be restored by bringing it back down (or perhaps I’m misunderstanding the difference between regular 32-bit and 32-bit floating point)? I’ve actually always wondered this. Thanks!

Ideally we would like to have “non-destructive” crossfades, where audio clips may be overlapped on the same track, and the overlapping region crossfades. Audacity tracks do not currently support overlapping audio clips, so that would need to be added first.

In the meantime we have these two plug-ins:

That is possible, but is not the default. See: Read uncompressed audio files from original location (faster). Note that when this is enabled, the original source file is accessed in read only mode. When the audio is modified, the selected audio is copied from the file and it is the copy that is modified, not the original file. By default the copy will be 32-bit float (as per “Preferences > Quality”)

It is written to disk as “block files” (file extension: “.au”).
For a new project, the data is written to Audacity’s temporary folder (Directories Preferences - Audacity Manual).
For projects that have been saved, the data is written to the project’s “_data” folder. Every saved project has a “project file” (“.aup”) and a data folder (the folder name ends with “_data”).

For integer formats, 0 dBFS is an absolute threshold. It is not possible to represent values greater than 0 dB with integer format.

For floating point formats, values far in excess of 0 dB may be represented. The range 0 dB to -infinity dB is equivalent to sample values in the range +/- 1.0. 32-bit float can represent both extremely large and extremely small numbers, so audio data can be amplified way over 0 dB and back down to the “legal” range below 0 dB without damage. This is a major advantage to using 32-bit float format.
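A small Python sketch of that headroom behavior (illustrative values, not Audacity internals): float samples can sit well above +/- 1.0 (0 dBFS) and be scaled back down intact, while a fixed-point pipeline would have clipped the peaks first:

```python
# A float mix that peaks a few dB over full scale.
over = [1.6, -1.3, 0.4]

# Bring it back into the legal range: nothing was lost.
gain = 1.0 / max(abs(v) for v in over)
legal = [v * gain for v in over]
print(legal)                         # ~[1.0, -0.8125, 0.25]

# An integer pipeline would have clipped first, destroying the peaks:
clipped = [max(-1.0, min(1.0, v)) for v in over]
print(clipped)                       # [1.0, -1.0, 0.4]
```

After clipping, no amount of attenuation can recover the original -1.3 peak; in float, simple normalization restores everything.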

Ah, I see. Thanks for the details! If I may summarize: yes, the files are upconverted to 32-bit (by default, as set in Preferences and shown in each track’s info panel) and saved to the hard drive (in one place or another depending on whether the project has been saved or the audio altered). Floating point further allows for dynamic bit representation (taking the existing data and putting it into the full range), serving as a kind of headroom against potential “clipping”, which only actually occurs once the audio is placed/played back in a non-floating data structure/bit depth.

I’d still argue that, as opposed to having a fader, apart from (more prominently) the mentioned inconveniences, it’s still a destructive method to alter the waveform (even if imperceptible). In any case, I appreciate your patience!

PS- More nondestructive stuff would be great! The existing crossfade plug-ins are, uh… less than ideal, one might be inclined to say, but I appreciate the referral. Thanks again!

Registering another vote for both of these things:

  • a master gain control - which I’d rather see presented as a separate gain stage than as a control that moves all the individual sliders, but either would do

  • the ability to gang sliders so they move together and keep a group of tracks at the same relative levels

They should be two separate things, so a project could have several gangs of tracks, and still have a single master gain control that affects all of them at once.

I found this thread looking for a simple solution for my own problem:
I normalize all the tracks before I mix down, and this causes the audio output of my computer to distort no matter what volume I set the computer to. While everything is 32-bit float internally, it’s converted to whatever the audio output needs by clipping.
Obviously I can (and do) export the mixdown to a 32-bit float WAV, import that into Audacity, and normalize it so it is no longer over full scale and listen to that. What a PITA - please add some way to scale the output of Audacity to the PC’s audio system so it doesn’t distort.

A simple way to deal with that:

  1. Mix down within the Audacity project (“Ctrl + A” to select All, then “Tracks menu > Mix > Mix and Render”)
  2. Normalize the mixed track.
  3. Export.

If you want to retain a pre-mix copy of the project, you could either save a backup copy of the project before step 1, or use “Ctrl + Z” (undo) to undo the mix after step 3, and then save the project.

You misunderstand - my main want is to be able to listen to the project play while I adjust track volumes and pans and NOT hear distortion caused by the 32-bit floating point to 24-bit (my audio card’s default?) fixed-point conversion. At the very least there should be a parameter in Audacity to attenuate the output by a fixed amount (6 dB would work, but user-selectable would be better) before converting it, by clipping, to 24-bit fixed point.
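For illustration only, here is a Python sketch of the requested behavior; `monitor_out` and `atten_db` are hypothetical names, and real playback code would process buffers in C++, but it shows a fixed attenuation applied before the float-to-24-bit conversion so that clipping becomes a last resort rather than the norm:

```python
def monitor_out(samples, atten_db=-6.0):
    """Hypothetical monitoring path: attenuate, then convert to 24-bit."""
    g = 10 ** (atten_db / 20.0)
    out = []
    for v in samples:
        v *= g                               # pre-conversion attenuation
        v = max(-1.0, min(1.0, v))           # clip only as a last resort
        out.append(int(v * 8388607))         # scale to 24-bit integer range
    return out

hot_mix = [1.8, -1.5, 0.9]                   # float peaks above 0 dBFS
print(monitor_out(hot_mix))                  # no value pinned at full scale
```

With -6 dB of pre-attenuation, a mix peaking up to about +6 dBFS passes through the conversion without hitting the integer ceiling.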