I’m on v.2.4.2.
Just to give some context: I have a screen recording in which I recorded my voice through my earpiece microphone, and the screen recording software also captured my PC sound. For the first two minutes I'm able to hear the PC sound (the person I'm speaking with) clearly, and that audio is recorded, so I have it in case it could be helpful. After that, though, I switched my PC sound to a different audio device and my screen recording software didn't capture it. It turns out my PC sound still played (I could still hear the person I'm speaking to in my earpiece), and I can even hear the sound bleed out of the earpiece into the earpiece mic if I listen very carefully. It's hard to make out what the other person is saying this way, but it's definitely clear that they are saying something.
Is it possible to recover what they're saying, maybe through some spectral filtering approach? Since I have their voice recorded clearly in the first two minutes, could I build a profile of the higher-power frequencies from that segment and then filter the rest of the recording to only those frequencies? Or maybe some other approach? I've already tried the Graphic EQ, cutting everything below 1 kHz and boosting everything above it, but the result is too static-y and it's still pretty difficult to make out what exactly the other person is saying. I've tried Normalize as well, but that doesn't help in this case either.
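To make the idea concrete, here's a rough Python sketch (using numpy/scipy) of what I mean by filtering the later audio down to the clean segment's high-power frequencies. The synthetic signals, STFT settings, and the 10% threshold are all placeholders, not my actual recording:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000  # placeholder sample rate
t = np.arange(2 * fs) / fs

# Stand-ins: "clean" plays the role of the first two minutes where the
# other voice is clearly recorded; "noisy" plays the role of the later
# faint bleed-through (weak voice component plus unrelated low hum).
clean = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
noisy = 0.05 * np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 60 * t)

# Average magnitude spectrum of the clean segment, then keep only the
# frequency bins where that voice had most of its energy.
f, _, Z_clean = stft(clean, fs=fs, nperseg=512)
profile = np.mean(np.abs(Z_clean), axis=1)
mask = profile > 0.1 * profile.max()  # hard binary mask; threshold is a guess

# Apply that mask to the noisy segment and resynthesize.
f, _, Z_noisy = stft(noisy, fs=fs, nperseg=512)
Z_filtered = Z_noisy * mask[:, None]
_, recovered = istft(Z_filtered, fs=fs, nperseg=512)
```

In practice I'd load the two segments from exported WAVs instead of synthesizing them, and I suspect a soft mask (or proper spectral subtraction) would sound less static-y than a hard binary one, but this is the shape of what I'm asking about.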
I’ve attached a couple of audio samples: one without amplification (the original audio) and one with amplification (so you can hear that someone else is speaking without turning your computer volume up to max). Any ideas to try would be appreciated.
With amplification.aup (2.44 KB)
Without amplification.aup (2.39 KB)