The Audacity input slider “should” be tied to any level slider for the turntable that appears in Windows “Sound” - moving one slider should move the other slider. If that is not the case, the Audacity input slider should be greyed out.
The Default Format in Windows “Sound” does not override your choice of 32-bit for Default Sample Format in Audacity’s Quality Preferences. You will record in 16-bit (presumably the only bit depth the turntable supports) and Audacity will expand this to 32-bit float. This doesn’t expand the recorded dynamic range, of course, but 32-bit float internal calculations are faster and more precise.
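A quick sketch of what that expansion amounts to (plain Python for illustration; this is not Audacity’s actual code, and Python floats are double precision, but the idea is the same):

```python
# A 16-bit sample "expands" to float by scaling 1/32768, mapping the
# int16 range onto [-1.0, 1.0). No dynamic range is gained.
samples_16bit = [-32768, -1, 0, 1, 32767]
as_float = [s / 32768.0 for s in samples_16bit]

# Converting back recovers the originals exactly -- the expansion is
# just a change of representation, not added resolution.
restored = [round(f * 32768.0) for f in as_float]
assert restored == samples_16bit
```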
Even if a recording device actually records in 24-bit, choosing “MME” or “Windows DirectSound” host in Audacity will still force 16-bit input into Audacity, due to limitations in MME and in PortAudio’s support for Windows DirectSound (PortAudio is a third-party audio I/O library that Audacity uses). The only 24-bit recording Audacity can currently do is recording computer playback using WASAPI Loopback (and only if you actually have a 24-bit sound card).
Strictly speaking, on Windows Vista and later under MME and DirectSound, Windows will upconvert to 32-bit float to do any processing necessary (for similar reasons that Audacity by default does calculations in 32-bit float) before converting back to 16-bit.
There’s no point in recording in floating point. Audio ADCs (analog-to-digital converters) and DACs are integer-based. If you record in floating point, your driver is simply making a conversion.
Like most audio editors and DAWs, Audacity works internally in floating point. This makes the DSP math “easier”, and you can temporarily go over 0 dB internally without clipping.
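Here’s a toy demonstration of that headroom (plain Python, illustrative only, not how any particular editor implements gain):

```python
# A +6 dB gain (x2) clips in 16-bit integer but survives in float,
# where a later gain reduction recovers the waveform intact.
def to_int16(x):
    return max(-32768, min(32767, round(x * 32767)))

signal = [0.9, -0.9]                     # near full scale

# Integer path: the boosted samples slam into the int16 limits.
boosted_int = [to_int16(s * 2.0) for s in signal]   # clipped

# Float path: 1.8 is "over 0 dB" but is stored intact...
boosted_float = [s * 2.0 for s in signal]
# ...so halving the gain afterwards gets the original back unclipped.
recovered = [s * 0.5 for s in boosted_float]
```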
Sometimes there is a benefit to creating a temporary 32-bit floating-point file if you are going to come back and work on it later, or if you are going to move your work to a different audio editor. Before you export your file to its final “release” format, you should reduce your peaks to 0 dB or less, because integer formats can clip.
The pro studio standard is 24-bit/96kHz. If your hardware supports 24 bits, you can record in 24-bit (integer) resolution. But that’s total overkill, especially for vinyl, or if you are going to downsample to 16/44.1 for CD anyway. (Because of the noise floor, vinyl has far less than 16 bits of accuracy or resolution.)
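The back-of-envelope math uses the standard PCM dynamic-range formula (roughly 6 dB per bit; the exact constants are textbook values, not from this thread), versus vinyl’s real-world S/N of very roughly 60–70 dB:

```python
# Theoretical dynamic range of N-bit PCM (sine-wave signal-to-
# quantization-noise): ~6.02*N + 1.76 dB.
def pcm_dynamic_range_db(bits):
    return 6.02 * bits + 1.76

print(pcm_dynamic_range_db(16))  # ~98 dB -- already past vinyl's S/N
print(pcm_dynamic_range_db(24))  # ~146 dB
```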
There’s no harm in it, but there’s also no need to dither. Dithering is “standard practice” when you reduce the bit depth, but the truth is you can’t hear dither at 16 bits (or the effects of dither) under “normal” listening conditions anyway. And you certainly aren’t going to hear it with vinyl noise in the background. (Dither is noise, and adding a tiny bit of dither-noise to an already noisy vinyl recording isn’t going to make any difference.)
The guys at [u]HydrogenAudio[/u] who’ve done blind listening tests will tell you there’s no audible difference between 16-bit/44.1kHz and anything “better”. If you have some time to kill and some high-resolution files to downsample, you can repeat these blind listening tests yourself.
Are there many practical devices that do record in floating point (32-bit or 16-bit)? If not, are you arguing for setting Audacity Default Sample Format to 16-bit, and turning dither “off” (because if you don’t, 16-bit export will add it, due to a bug)?
Given Audacity does process internally in 32-bit float, isn’t it still better to keep dither on?
You could certainly hear the mis-shaped stereo dither (except for FLAC) in Audacity prior to 2.0.5.
I think you can still hear cumulative dither now in very quiet music (given removal of vinyl noise), but it may be less bad than the effects of not dithering if you are doing repeated processing in a 16-bit project.