Audio Peaks at -0.5 to 0.5 Instead of -1 to 1 Vertical Scale

I currently have two microphones: the Blue Yeti (USB) and the Audio Technica AT3035 (XLR, connected to a Focusrite 2i2 audio interface). When I use the Blue Yeti, the waveform looks normal and peaks at -1 and 1 on the vertical scale (https://gyazo.com/38aa8bca12f2d6ae617d2e25f5533445). But whenever I use the AT3035, for some reason, the wave only gets as big as -0.5 to 0.5 on the vertical scale before peaking (https://gyazo.com/b7f64f645a3c3137ecff2ab31f79d20e). My question is: why does this happen, and is there a way to fix it?

Thanks!

You’re only using half of your Focusrite interface… If you plug in a second mic you can get to “100%”.

You can either boost the volume after recording (the Amplify effect) or you can record in stereo, and then delete the unused channel after recording.
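If you’d rather script either workaround outside Audacity, here’s a minimal Python sketch. It assumes the recording is loaded as float samples in the -1.0 to 1.0 range, and the file names are hypothetical:

[code]
import numpy as np
import soundfile as sf  # third-party: pip install soundfile

# Workaround 1: amplify the half-scale mono recording by a factor of 2
# (+6 dB), which is what Audacity's Amplify effect would do.
mono, rate = sf.read("half_scale_mono.wav")
sf.write("amplified.wav", np.clip(mono * 2.0, -1.0, 1.0), rate)

# Workaround 2: record in stereo, then keep only the channel the mic
# is actually plugged into.
stereo, rate = sf.read("stereo_take.wav")  # shape: (samples, 2)
sf.write("mono_from_left.wav", stereo[:, 0], rate)  # mic on input 1
[/code]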

See the [u]Audacity FAQ[/u]

Focusrite also makes the Scarlett Solo, which I believe mounts naturally as mono instead of stereo, so it wouldn’t do that. Focusrite may also provide a driver that fixes this on the 2i2.

Koz

Post back how you worked around it.
Koz

It is as you said, DVDoug: as soon as I switched to stereo recording, the wave reached 100% (-1 to 1 instead of -0.5 to 0.5). So it looks like the culprit is the audio interface I bought. I guess a solution would be to record in stereo and then convert the track to mono via Audacity’s “Stereo Track to Mono” function, right? As for recording in mono and then amplifying the audio: will amplifying a smaller mono wave result in reduced sound quality compared to a regular-sized wave?

Appreciate the help!

Kozikowski, I have the latest drivers installed for the 2i2 and mono recordings still have smaller waves. You’re probably right about the Scarlett Solo being a better purchase for mono voice recording. There are workarounds with the 2i2 (amplifying the smaller wave after recording, or recording in stereo and converting to mono), but it’s still annoying.

Alright, so the best solution I’ve found so far with the 2i2 is to record in stereo, click “Split Stereo to Mono” in the stereo track’s drop-down menu (which splits the left and right channels into individual mono tracks), and then delete the channel you’re not using. It’s an annoying extra step, but it seems to do the job. Not sure if doing this will result in any slight loss in sound quality compared to recording with a mono-friendly audio interface, though.
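For what it’s worth, I convinced myself the split step itself is lossless with a quick Python check (hypothetical file names): taking one channel is a pure sample copy.

[code]
import numpy as np
import soundfile as sf

stereo, rate = sf.read("stereo_take.wav")  # the stereo recording
mono, _ = sf.read("mono_from_left.wav")    # the split-off channel
print(np.array_equal(stereo[:, 0], mono))  # True: a pure sample copy
[/code]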

Kozikowski, I have the latest drivers installed for the 2i2 and mono recordings still have smaller waves.

It’s not a bug, it’s intentional… (There may be other ways they could have handled the situation, but this seems to be the usual solution…)

Virtually all interfaces have either LED level meters or an LED [u]clipping[/u] (distortion) indicator for each input/channel. (I believe the LEDs on your interface turn red if you “overload” the inputs.)

…Let’s say you’re recording in mono and singing into channel one, and the levels are “green” but just below clipping. Now your backup singer starts singing into channel two, and those levels are also OK. The way the interface works, everything will be fine and the mixed (summed) signal will stay below 1.0.

If each channel alone were allowed to go to 1.0, the combined-mono signal could clip without you ever seeing a red LED warning on your interface.
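A toy Python example of that, with made-up levels: two in-phase channels that each stay below clipping on their own will clip when summed at full scale, but not after each input is halved.

[code]
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
ch1 = 0.9 * np.sin(2 * np.pi * 440 * t)  # channel 1: "green", below clipping
ch2 = 0.9 * np.sin(2 * np.pi * 440 * t)  # channel 2: also below clipping

print(np.max(ch1 + ch2))              # ~1.8 -> clipped, with no red LED
print(np.max(0.5 * ch1 + 0.5 * ch2))  # ~0.9 -> safe after halving each input
[/code]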

I see, that makes sense, thank you for the thorough explanation.

Not sure if doing this will result in any slight loss in sound quality

I expect Just-Left and Intentional-Mono to give you exactly the same quality.

How handy this is depends on your show. If you have live production, such as Skype or game production, this stereo dance isn’t going to work very well.

Of course, there’s always mud in the game. The two cousin Behringer interface units aren’t simply one the double of the other: the two-channel one has very slightly better microphone electronics. So in that particular case, you would get slightly better sound by splitting stereo than by using the mono unit.

!@#$%^&

Koz

will amplifying a smaller mono wave result in reduced sound quality compared to a regular-sized wave?

“Technically”, you’re losing half of your digital resolution. (If you’re recording at 16 bits, you’re only using 15 bits.)

But practically speaking, your resolution is limited by the analog & acoustic noise. If you double the volume electronically or digitally before or after recording, you’ll double the signal and the noise together.
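A quick Python sketch of that, with made-up signal and noise levels: doubling the gain leaves the signal-to-noise ratio exactly where it was.

[code]
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 48000, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)   # half-scale recording
noise = 0.001 * rng.standard_normal(t.size)  # analog/room noise floor

def snr_db(s, n):
    return 10 * np.log10(np.mean(s**2) / np.mean(n**2))

print(snr_db(signal, noise))          # ~51 dB with these made-up levels
print(snr_db(2 * signal, 2 * noise))  # identical: the gain cancels out
[/code]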

And if you record at 24 bits (which your interface is capable of) it won’t make any difference, because 24-bit analog-to-digital converters are only accurate to around 20 bits anyway. Pros often record at 24 bits at around -18dB (about 12% of maximum), either to allow headroom for unexpected peaks, or as “tradition” to allow headroom for mixing & effects. (Headroom for mixing & effects is not necessary with modern software, because almost all audio software, including Audacity, uses floating point internally, and once you’re in the digital domain you have virtually unlimited headroom. DACs, ADCs, and “normal” WAV files are hard-limited to 0dB, but you can temporarily go over 0dB “inside” of Audacity.)
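Incidentally, the “-18dB is about 12% of maximum” figure falls straight out of the amplitude-decibel formula, ratio = 10^(dB/20):

[code]
# Amplitude ratio from decibels: ratio = 10 ** (dB / 20)
print(10 ** (-18 / 20))  # ~0.126, i.e. -18 dB is about 12.6% of full scale
[/code]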