I’ve been diving into audio-related technologies for a while now, and recently I’ve been focusing on audio and vocal separation. It’s been really exciting, but I’ve found it hard to find communities or discussions about this kind of work.
I use a field recorder and a special stereo cable, plugging two lavalier microphones into it to get separation: one voice recorded on the left track and the other voice on the right track.
But I did learn that this makes the recording sound odd to listeners. It's like sitting in the middle of a big padded room with two people talking to each other from opposite sides.
I learned to split the stereo track into two mono tracks, one panned all the way left and the other all the way right (Audacity's Split Stereo to Mono does this for me), and then pan them back to only 70% left and 70% right; that remix sounds better to people.
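The partial-pan trick above is just a weighted re-mix of the two mono tracks. Here's a minimal NumPy sketch of one linear-pan interpretation, where each voice keeps 70% of its own side and bleeds 30% into the other (the function name and the linear pan law are my own assumptions, not anything Audacity does internally):

```python
import numpy as np

def remix_stereo(stereo, pan=0.7):
    """Remix a stereo buffer of shape (N, 2) so each channel sits
    at `pan` on its own side with (1 - pan) bled into the opposite
    side, instead of hard left/right. Linear pan, for illustration."""
    left, right = stereo[:, 0], stereo[:, 1]
    out = np.empty_like(stereo, dtype=np.float32)
    out[:, 0] = pan * left + (1.0 - pan) * right  # mostly voice A
    out[:, 1] = pan * right + (1.0 - pan) * left  # mostly voice B
    return out
```

With `pan=1.0` you get the original hard-panned recording back; lowering it toward 0.5 pulls both voices to the center.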
Interested. I record my garage band practices and a weekly solo acoustic virtual Open Mic over Zoom. I'm getting closer to having the band room set up so I won't need to tweak levels with separation. The Zoom jam is tougher, since it depends on the setups of about 10 different mics, instruments, and operators. I handle about 20 tracks a week and don't separate them all, but when something is interesting I split out the vocal/instrument and adjust levels, EQ, etc.

OpenVINO comes pretty close (close enough) in 2-track mode. Splitting a 6-piece band into 4 buckets is spotty: sometimes the bass disappears into Other Instruments or Drums. We have a harp player, and the harp seems to get split almost randomly between vocals and instruments. And there is no good way to balance Other Instruments when guitar, keys, and harp share a track.
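Once a separator has produced the four buckets, rebalancing is a weighted sum of the stems. A minimal sketch, assuming the stems have already been exported as equal-length mono arrays (the function, stem names, and gains here are hypothetical, not any particular plugin's API):

```python
import numpy as np

def rebalance(stems, gains):
    """Mix separated stems (dict of name -> mono np.array, same length)
    back together with per-stem linear gain, then peak-normalize only
    if the result would clip past 1.0."""
    mix = sum(gains.get(name, 1.0) * audio for name, audio in stems.items())
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Hypothetical 4-bucket output: e.g. lift a buried bass, tame drums.
stems = {name: np.zeros(48000) for name in ("vocals", "drums", "bass", "other")}
mix = rebalance(stems, {"bass": 1.5, "drums": 0.8})
```

This doesn't fix stems that were mis-bucketed in the first place (the wandering harp), but it does let you save a mix where one bucket came out too quiet.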
I am not a pro, or targeting release to Spotify etc.; just a hobby musician and recording guy. All in all, the AI separation is remarkable! Even as it is, I am able to preserve some live recordings in a way that is at least listenable. I look forward to it growing; it seems a more practical approach than investing in hardware to record 10-12 tracks at a time.
Haha, I think either I didn’t explain it clearly or you misunderstood what I meant. I was actually referring to using AI-based audio separation — for example, the common approach of separating vocals from the background music — rather than adjusting the stereo channels.
That’s awesome to hear! I’ve actually been researching and experimenting with audio separation for quite a while now, and I totally agree — the results from AI separation can be really impressive. It still has its quirks, but the progress is pretty amazing. Once I get everything fully sorted out with the model I’m working on, I’ll definitely share it with the community.
