I have been out interviewing people on my iPhone - and more recently using lav and shotgun mics - for the last month.
To my dismay, when I listened to the recordings - which seemed to have been executed properly - there was this horrible crackling noise almost like an old-time radio broadcast.
I was devastated, because I thought all of my interviews were ruined.
Then I invested over $1,000 in prosumer lav and shotgun mics, and I still had the same issue in most cases.
Yesterday, I Googled this topic, and the first result was a YouTube video showing that if you go into Audacity and change Preferences > Devices > Latency > Buffer Length from “0” to, say, “20”, it resolves this issue.
I was ecstatic!!!
Can someone help me understand why I was having these issues, and why this settings change made all the difference?
(My goal is to become a semi-independent sound engineer since I have high aspirations for doing video and audio for my startup business, and I want to really learn the sound and Audacity fundamentals so I get the best results!!)
Computers are constantly switching between different tasks - reading from disk, writing to disk, checking the network, writing to RAM, reading from RAM, crunching some numbers, sending a screen refresh to the graphics card, …
On the other hand, audio is a continuous stream of data, so there needs to be some way to avoid losing data while the computer is busy doing something else. This is accomplished with “buffers”.
When you record audio, the data does not stream directly from the sound card onto the hard drive - that would not work because as soon as the computer switches to another task you would lose some of the audio data. Instead, the audio streams into a special memory area called a “buffer”. The computer can then read from the buffer when it has time to do so, “emptying” the buffer, and the sound card can be continuously writing to the buffer “filling” the buffer.
Clearly there has to be some kind of synchronisation to ensure that there is always some data in the buffer when the computer wants to write data to disk, and there must be sufficient space in the buffer to provide somewhere that the sound card can dump its data. Thus the size of the buffer is important. If it is too small it could become full, and the sound card may have nowhere to put its data. This is called “buffer overflow” and causes bits of the data to be lost.
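To make the producer/consumer idea concrete, here is a toy sketch in Python (not how Audacity or Core Audio actually implement it - the buffer size and sample values are made up for illustration). The “sound card” keeps pushing samples and cannot wait; the “computer” only drains the buffer when it gets scheduled. When the buffer is full, new samples are simply lost - that lost data is the dropout you hear as crackle:

```python
from collections import deque

# Hypothetical, deliberately tiny buffer so overflow is easy to see.
BUFFER_SIZE = 8

buffer = deque()  # the shared audio buffer
dropped = 0       # count of samples lost to overflow

def sound_card_write(sample):
    """Producer: runs continuously and cannot wait for the computer."""
    global dropped
    if len(buffer) >= BUFFER_SIZE:
        dropped += 1            # buffer overflow: this sample is lost
    else:
        buffer.append(sample)

def computer_read(n):
    """Consumer: empties the buffer whenever the OS gets around to it."""
    taken = []
    for _ in range(min(n, len(buffer))):
        taken.append(buffer.popleft())
    return taken

# The sound card delivers 12 samples while the computer is busy elsewhere...
for s in range(12):
    sound_card_write(s)

# ...then the computer finally drains the buffer.
recorded = computer_read(BUFFER_SIZE)
print(recorded)  # → [0, 1, 2, 3, 4, 5, 6, 7]
print(dropped)   # → 4  (four samples lost = an audible glitch)
```

A bigger buffer gives the computer more slack before samples start getting dropped, which is why buffer size matters.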
Large buffer size is generally not much of a problem on Windows or Linux, other than increasing the delay between receiving data and writing it to disk, but it is more of a problem for Mac computers as the macOS sound system is designed for relatively small audio buffers.
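The delay trade-off mentioned above is just arithmetic: buffer length in milliseconds converts directly to a number of samples at a given sample rate. A quick back-of-the-envelope sketch (44.1 kHz is assumed; Audacity's Buffer Length setting is in milliseconds):

```python
SAMPLE_RATE = 44_100  # samples per second (CD-quality rate, assumed)

def latency_ms(buffer_samples, rate=SAMPLE_RATE):
    """Delay added by a buffer of the given size, in milliseconds."""
    return 1000 * buffer_samples / rate

# How many samples do 100 ms and 20 ms buffers hold at 44.1 kHz?
samples_100ms = SAMPLE_RATE * 100 // 1000
samples_20ms = SAMPLE_RATE * 20 // 1000

print(samples_100ms, latency_ms(samples_100ms))  # → 4410 100.0
print(samples_20ms, latency_ms(samples_20ms))    # → 882 20.0
```

So shrinking the buffer from 100 ms to 20 ms cuts the stored backlog from 4410 samples to 882, at the cost of giving the computer less slack before an overflow.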
I don’t know. It’s one of the mysteries of macOS. We know that on macOS a smaller buffer size often helps, but personally I don’t know why that is. My “educated guess” is that it is some sort of synchronization problem, but I’m not an expert on the internals of Core Audio.
Well, changing “Buffer Length” from 100 ms to 20 ms seems to have fixed the “crackle” I was getting in my interviews recorded on my iPhone and also on my separate audio track recorded on my Zoom H6 - which is a major relief, as I thought I had lost three weeks’ worth of interviews!
As far as my other thread, where when I record Internet radio shows on rMBP1, Audacity is dropping files, I’m still not sure what causes that…