Below is a sample from an interview that had lousy sound (I won’t bore you with all the background angst). The original file was recorded over Google Voice (again, not my choice, but I had to make do). Working with a couple of sound engineers, and spending hours on this on my own, I’ve gotten it to the point you hear below.
Is there any chance of improving this, even marginally? Obviously I’m not looking for pristine sound, but it’s right at the point where I could consider using it, and if there’s anything that can be done to get it even a little better, I’d really appreciate it. This is a really important interview for me.
Probably not. The other sound people have done all they can with the sound between the words. You have actual word damage where it sounds like four people talking at the same time. Audacity can’t split a performance into individual voices, instruments or sounds.
I think this is what happens when somebody tries to give an interview by speaking in the general direction of their phone in a noisy environment. I had a very annoying call with someone recently which had an extreme example of this. It was almost unusable because I couldn’t tell who I was talking to and the person doing it can’t hear what’s happening, so they think this is all perfectly normal.
The digital artefacts can be made less conspicuous by shaving off some of the high end (above 3500 Hz) with an equalizer.
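Shaving off the high end amounts to a low-pass filter. As a rough sketch of the same idea outside Audacity (assuming Python with NumPy and SciPy, and audio already loaded as a float array — the filter order and exact cutoff are my own choices, not prescribed values):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lowpass(audio, sample_rate, cutoff_hz=3500, order=4):
    """Attenuate content above cutoff_hz with a Butterworth low-pass filter."""
    # Second-order sections are numerically safer than (b, a) coefficients.
    sos = butter(order, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Example: filter one second of white noise sampled at 44.1 kHz.
rate = 44100
noise = np.random.randn(rate)
filtered = lowpass(noise, rate)
```

A gentle (low-order) filter like this tames hiss and codec fizz above the cutoff while leaving the speech band, which mostly sits below 3500 Hz, largely intact.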
Then IMO intelligibility can be improved by dynamic-range compression, e.g. the LevelSpeech2 plug-in.
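LevelSpeech2 is a Nyquist plug-in for Audacity, so the sketch below is not its actual algorithm — just a bare-bones illustration of what downward compression does to quiet-vs-loud speech, assuming Python with NumPy; the threshold and ratio values are arbitrary examples:

```python
import numpy as np

def compress(audio, threshold=0.3, ratio=4.0):
    """Hard-knee downward compressor.

    Samples whose magnitude exceeds `threshold` are scaled so the excess
    is reduced by `ratio`, narrowing the gap between loud and quiet words.
    """
    mag = np.abs(audio)
    gain = np.ones_like(mag)
    over = mag > threshold
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    return audio * gain

# Example: a quiet sample passes through, loud peaks are pulled down.
out = compress(np.array([0.1, 0.5, -0.9]))
```

Real speech levelers work on a smoothed envelope with attack/release times rather than per-sample like this, which is why a purpose-built plug-in sounds far less pumpy than this toy version.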