No, quite the opposite.
It means that when recording with a high bit depth (24 or 32 bit) you can afford to leave lots of headroom without sacrificing any dynamic range.
You should aim to have your analogue hardware (including the analogue inputs of your sound card) working within the range it is designed for. Once you hit the digital domain, however, the dynamic range of 32-bit float is so huge that even if your highest peak is at -12 dB, there are still more than enough bits below that to reproduce every detail accurately.
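As a rough sketch of that arithmetic, using the common ~6 dB-per-bit rule of thumb for integer PCM (the specific figures below are illustrative assumptions, not from the post; 32-bit float behaves differently again, but 24-bit integer already makes the point):

```python
# Approximate dynamic range of integer PCM: ~6.02 dB per bit.
# This is a simplified model for illustration only.
def integer_dynamic_range_db(bits):
    """Theoretical dynamic range of an integer PCM format, in dB."""
    return 6.02 * bits

peak_db = -12  # suppose the loudest peak sits 12 dB below full scale

for bits in (16, 24):
    total = integer_dynamic_range_db(bits)
    below_peak = total + peak_db  # range still available under the peak
    print(f"{bits}-bit: ~{total:.0f} dB total, ~{below_peak:.0f} dB below a {peak_db} dBFS peak")
```

Even at 24-bit, roughly 132 dB remains below a -12 dBFS peak, which is far more than any analogue chain can deliver.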
There are really two parts to this subject - the design sensitivity of the analogue equipment, and then the digital realm.
In the old days of audio tape, it was necessary to try to squeeze every last dB possible onto the tape by running as close to the red as possible. That was because of the limited dynamic range of audio tape. By recording as loud as possible, the noise floor of the tape (background hiss) would be relatively quieter.
Today, with 32-bit digital recording, even if we “waste” the loudest 12 dB possible, the digital noise floor is still way below the noise floor of any of the analogue equipment being recorded.
However, that does not mean that you can just record everything quietly, boost it up with Normalization, and expect a top quality recording. Microphones, pre-amplifiers, mixing desks and sound card inputs all have their own noise floor. Each link in the audio chain should ideally be running within its design parameters.
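Those per-device noise floors combine as powers, which is why the noisiest link dominates the whole chain. A small sketch of the standard power-sum (the example levels are invented for illustration):

```python
import math

# Independent noise sources add as powers, not as dB values:
# convert each level to linear power, sum, convert back to dB.
def combined_noise_db(levels_db):
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# Hypothetical chain: mic self-noise -70 dB, preamp -80 dB, converter -110 dB.
total = combined_noise_db([-70, -80, -110])
print(round(total, 1))  # ~ -69.6: barely worse than the mic alone
```

The converter's -110 dB floor contributes almost nothing; the microphone's self-noise sets the limit, exactly as in the wrist-watch example below.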
Some years ago, I wanted to record the ticking of a wrist watch. At that time, the best microphone that I had was a Shure SM58 dynamic vocal microphone and a cheap Realistic mixer. Even if I had recorded onto a state-of-the-art digital recorder, my results would have been poor, simply because the self noise of the microphone and mixing desk was almost as loud as the ticking watch. What I would have needed was a very sensitive, low noise microphone, and a high gain, low noise pre-amplifier.
Similarly, if you try to record a bass drum from a rock drum kit with a highly sensitive condenser microphone, it is very likely that irrespective of what volume you set the microphone pre-amplifier at, the recording will still come out sounding distorted because the microphone is being overdriven.
When recording, I try to use the right microphone for the job. Assuming that I have it plugged directly into a mixing desk, I adjust the input gain so that the pre-fade (input) level peaks close to the optimum level (my mixing desk will happily go up to +12 dB before clipping, so I can drive the input close to 0 dB with most material). I also adjust the output from the mixing desk so that it suits the input of the recorder, which in the case of my sound card means peaking no higher than -3 dB as an absolute maximum. Finally I set the recording levels for the digital audio recorder (Audacity), and here I can finally relax a bit: with a recording level at around -16 dB, I have lots of headroom, and the (very low) self noise of the analogue equipment is still faithfully reproduced.
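That gain staging can be summarised as a peak level and a clip point per stage, with headroom being the margin between them. A quick sketch using the figures from the chain above (the stage names and clip points are as described; treating them as a simple table is my own framing):

```python
# Each stage: (name, typical peak level in dB, clip point in dB).
# Headroom at a stage is simply clip_point - peak_level.
stages = [
    ("mixer input (pre-fade)", 0.0, 12.0),   # desk clips at +12 dB
    ("sound card input",       -3.0, 0.0),   # peak no higher than -3 dBFS
    ("Audacity recording",    -16.0, 0.0),   # relaxed digital headroom
]

for name, peak, clip in stages:
    print(f"{name}: peak {peak:+.0f} dB, headroom {clip - peak:.0f} dB")
```

Notice the headroom grows at each step down the chain: tight where the analogue gear must run near its design level, generous once the signal is safely in the digital domain.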