How to Fix vertically shifted waveforms recorded on different devices?

I’m putting together audio from two different devices and trying to make them as similar as possible, but it’s obvious just from looking at the waveforms that one is shifted vertically rather than centered. I tried Effect > Normalize, but it seems to have no effect.
vertical offset.jpg
The top track shows the audio all together, and the bottom two tracks show the portions recorded on each device. Both use high-quality microphones, but one is attached to a MacAir and the other to a Windows PC.

How can I make both look (be) the same?
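(For reference: a waveform that sits above or below the centre line is usually DC offset, and Audacity's Normalize effect only corrects it when its "Remove DC offset" option is ticked. Numerically, that correction amounts to subtracting the mean sample value. A minimal NumPy sketch of the idea, using a synthetic 440 Hz tone with an artificial +0.1 offset as a stand-in for the shifted recording:)

```python
import numpy as np

# Synthetic stand-in for a recording whose waveform sits above the
# centre line: a 440 Hz tone with a +0.1 DC offset added.
sr = 8000
t = np.arange(sr) / sr
recorded = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.1

# "Remove DC offset" amounts to subtracting the mean sample value,
# which re-centres the waveform on zero.
centred = recorded - recorded.mean()

print(round(recorded.mean(), 3))       # 0.1  -> waveform shifted up
print(round(abs(centred.mean()), 3))   # 0.0  -> re-centred on zero
```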

If you mean that the waveform is asymmetrical, that does not make any difference to how it sounds.

If they sound different, it is because of different frequency content, which can be corrected with equalization.
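(At its core, equalization just scales the amplitudes of different frequency bands. A minimal FFT-domain sketch, not the Audacity effect itself — the two-tone signal and the 1 kHz / −12 dB cut are arbitrary values chosen for illustration:)

```python
import numpy as np

# Two-tone test signal: 300 Hz and 3000 Hz components, equal amplitude.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 3000 * t)

# Equalization in its simplest form: scale per-frequency amplitudes.
# Here, cut everything above 1 kHz by 12 dB (a gain of about 0.25).
spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / sr)
gain = np.where(freqs > 1000, 10 ** (-12 / 20), 1.0)
y = np.fft.irfft(spectrum * gain, n=len(x))

# The 3 kHz component is now about a quarter of its original amplitude.
mag = np.abs(np.fft.rfft(y)) / (len(x) / 2)
print(round(mag[300], 2), round(mag[3000], 2))  # 1.0 0.25
```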

Thanks for the link – very interesting stuff. Based on this part of the link you sent

To talk or sing, we have to breathe out, and to play a trumpet, we have to blow air through the tubing. So, in these examples, there is inherently more energy available for the compression side of the sound wave than there is for the rarefaction side

I’m guessing that one of the speakers just somehow generates a greater difference between the compression and rarefaction sides of the wave. I can accept that.
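(You can see numerically that this kind of asymmetry is not a DC offset: a wave can have a mean of exactly zero yet reach different peak levels on its compression and rarefaction sides. A sketch with a synthetic "voice-like" wave — the particular fundamental and harmonic are assumptions for illustration:)

```python
import numpy as np

# Asymmetric wave: a 100 Hz fundamental plus a phase-shifted harmonic.
# Its mean is zero (no DC offset), yet the positive (compression) side
# and negative (rarefaction) side reach different peak levels.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.cos(2 * np.pi * 200 * t)

print(abs(round(x.mean(), 6)))               # 0.0  -> no DC offset
print(round(x.max(), 2), round(x.min(), 2))  # 0.75 -1.5 -> unequal peaks
```

Normalize's "Remove DC offset" does nothing here, because there is no offset to remove: the mean is already zero.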

But getting back to my question and your suggestion, is there anything I can do to make the 2 sound as similar as possible? I want them to sound like they are in the same room.

Here are the results of Analyze > Plot Spectrum for representative samples of each voice:
2.jpg
3.jpg

There’s an Audacity plug-in which can be used to create matching frequency-analyses, but it’s not user-friendly.
If the rooms have different reverberation (e.g. different sizes, furnishings, flooring, etc.),
then you will still be able to spot the join, even if the frequency analyses are made identical.
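(The idea behind matching frequency analyses can be sketched crudely: measure the average energy of both recordings in coarse frequency bands, then boost or cut the source per band until it matches the target. This is only an illustration of the concept, not the plug-in mentioned above — a real matching EQ smooths and limits the gains:)

```python
import numpy as np

def match_spectrum(source, target, sr, nbands=32):
    """Boost/cut `source` in coarse frequency bands so its average band
    energy matches `target`'s.  Both signals must have the same length."""
    S = np.fft.rfft(source)
    T = np.fft.rfft(target)
    freqs = np.fft.rfftfreq(len(source), d=1 / sr)
    gains = np.ones(len(S))
    edges = np.linspace(0, sr / 2, nbands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        if not band.any():
            continue
        src_e = np.sqrt(np.mean(np.abs(S[band]) ** 2))
        tgt_e = np.sqrt(np.mean(np.abs(T[band]) ** 2))
        if src_e > 1e-9:
            gains[band] = tgt_e / src_e
    return np.fft.irfft(S * gains, n=len(source))

# Illustration: a "dull" take (strong 300 Hz, weak 3 kHz) matched to a
# "bright" take with the opposite balance.
sr = 8000
t = np.arange(sr) / sr
dull = np.sin(2 * np.pi * 300 * t) + 0.2 * np.sin(2 * np.pi * 3000 * t)
bright = 0.2 * np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 3000 * t)

matched = match_spectrum(dull, bright, sr)
mag = np.abs(np.fft.rfft(matched)) / (sr / 2)
# Component amplitudes now follow the bright take: ~0.2 at 300 Hz, ~1.0 at 3 kHz.
print(round(mag[300], 2), round(mag[3000], 2))
```

As the second answer says, though, this only matches tonal balance; it cannot make two different rooms' reverberation sound the same.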