Mixing down multiple channels

Hi all.

I’m stuck on an audio programming problem. I have written a program which takes a score file and mixes down multiple wav sample files in the correct time sequence into a single .wav file.

Most of the time it works just fine; however, at points where more than 5 or so inputs are being mixed, the sound becomes very distorted. I have tried a few different solutions, but none of them are exactly what I'm after.

The first, and most obvious, was to make the input channels quieter by multiplying them by a constant as they were being mixed. If I multiplied each sample by 0.2 there was usually no distortion; however, at points in the output stream where only one input channel was being mixed it would be very quiet and lose a lot of definition. Even at this low volume, if the score file requires something ridiculous like 40+ input channels at once it still distorts.

The second thing I tried was to halve both the input and output samples as I added to the mixdown. Here is a snippet of the code…

for (int i = 0; i < active.size(); i++) {
    int frames = active.read(i, buffer, time, off);
    for (int j = 0; j < frames * 2; j++) {
        // halve the running mix and the new channel before summing
        output[j] = output[j] / 2 + buffer[j] / 2;
    }
}
This eliminates all distortion; however, the input channels that were added to the output stream first become virtually inaudible as more and more channels are added, and the last ones to be added often drown out the rest.

How does Audacity manage to accomplish this? Surely there must be some technique that all audio editors/sequencers use for this type of thing.

Any help would be very very appreciated… this has had me stumped for far too long.

Thanks. :slight_smile:

I don’t know how it is done in Audacity, but in Nyquist you can use the “scale” function on each sound,

then use the “sim” function to combine them.

Thanks. It’s the fundamentals of how the sim function works that I’m after. I’ve downloaded the source code, so I guess it’s just a case of trawling through it to find the answer… could take some time.

You need a compressor. Might be able to find code for one somewhere. Seems like it wouldn’t be terribly difficult to code. Although you’d probably benefit from a multi-band compressor which I think is a bit more difficult to code.

I don’t think that’s really what myxit is after. A compressor changes (reduces) the dynamics (the range between quiet and loud), but all that is being asked for is to reduce the levels of each track so that the sum is below 0dB.

That’s easy to do: just attenuate the tracks to 1/n their volume, where n is the number of tracks you’re mixing, or do a real-time normalization. But he’s already tried something similar and complains that although the parts with lots of stuff going on are fine, the passages of the music where only one track is playing are too quiet. Sounds like reduction of dynamics is specifically what he’s asking for.