Hi all.
I’m stuck on an audio programming problem. I have written a program which takes a score file and mixes down multiple wav sample files in the correct time sequence into a single .wav file.
Most of the time it works just fine; however, at points where more than 5 or so inputs are being mixed, the sound becomes very distorted. I have tried a few different solutions, but none of them is exactly what I’m after.
The first, and most obvious, was to make the input channels quieter by multiplying them by a constant gain as they were mixed. If I multiplied each sample by 0.2 there was usually no distortion; however, at points in the output stream where only one input channel was playing, the result was very quiet and lost a lot of definition. Even at this low volume, if the score file calls for something ridiculous like 40+ input channels at once, it still distorts.
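For reference, that first attempt looks roughly like this (a sketch, not my actual code; the function name and the std::vector buffers are just for illustration):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of the fixed-gain approach: attenuate each channel by a
// constant gain while summing it into the running mix.
void mixWithGain(std::vector<int16_t>& output,
                 const std::vector<int16_t>& input,
                 double gain) {
    for (std::size_t j = 0; j < input.size() && j < output.size(); ++j) {
        // Sum in a wider type, but store straight back into 16 bits:
        // with enough simultaneous channels the sum still exceeds the
        // int16_t range, which is the distortion that remains even at
        // gain 0.2.
        int32_t sum = output[j] + static_cast<int32_t>(input[j] * gain);
        output[j] = static_cast<int16_t>(sum);
    }
}
```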
The second thing I tried was to halve both the input and output samples as I added to the mixdown. Here is a snippet of the code…
    for (int i = 0; i < active.size(); i++) {
        // Read the next block of stereo frames from input channel i.
        int frames = active.read(i, buffer, time, off);
        for (int j = 0; j < frames * 2; j++) {
            // Halve the running mix and the new channel before summing.
            output[j] = output[j] / 2 + buffer[j] / 2;
        }
    }
This eliminates all distortion; however, the input channels added to the output stream first become virtually inaudible as more and more channels are added, and the last ones added often drown out the rest.
How does Audacity manage to accomplish this? Surely there must be some technique that all audio editors/sequencers use for this type of thing.
Any help would be very much appreciated… this has had me stumped for far too long.
Thanks.