I am importing audio from DAT tapes into Audacity 2.0.3 over a TOSlink digital fiber connector. Which setting of the sample format parameter will give me an exact digital copy? I want to create a .wav file with the identical sample values as on my DAT tape.
Since the DAT tapes are in 16-bit PCM format, should I set the sample format to 16-bit integer? Or should I accept the 32-bit floating point default? In either case, does Audacity convert the signal to 32-bit float internally and then reconvert it to integer to export as a .wav?
Which setting would give me an exact digital copy? Does the choice of sample format make any difference at all?
I am running on a late-2012 Mac Mini with Mac OS 10.8.3. My TOSlink source signal comes from a Tascam DA-20 MK II DAT deck. My DAT tapes have a sampling rate of 48,000 samples/second.
So you will probably need to set the “Project Rate” to 48000. You can do that directly in the lower left corner of the main Audacity window, or, if you are going to do a lot of this, you can set the default to 48000 in “Edit > Preferences > Quality”.
You can leave the sample format at the default of 32-bit float.
Assuming that you are not going to be processing the audio in any way, temporarily turn off “dither” in
“Edit > Preferences > Quality: High Quality Conversion: dither = None”
This will ensure that the conversion from 32-bit float to 16-bit when you export is bit-perfect.
Note that if you are processing the audio in any way, it is generally better to have dither enabled (“shaped” is usually the best option), as dither prevents harmonic distortion due to quantization errors when converting from a higher bit-depth format to a lower one. There is not actually much audible difference, but dithered conversion is a bit more “pure” than undithered conversion.
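Dither is just a little noise added before the samples are rounded to the lower bit depth. A rough numpy sketch of the idea (the quiet test sine, the scale factor, and plain TPDF dither are my own illustrative assumptions; Audacity’s “shaped” option is more sophisticated than this):

```python
import numpy as np

rng = np.random.default_rng(0)

# A very quiet 1 kHz sine in 32-bit float, only a few 16-bit steps tall,
# so the effect of requantization is easy to see.
t = np.arange(48000) / 48000.0
signal = (0.0001 * np.sin(2 * np.pi * 1000.0 * t)).astype(np.float32)

scale = 32767.0

# Undithered: plain rounding.  The rounding error is correlated with the
# signal, which is what produces harmonic distortion on low-level material.
undithered = np.round(signal * scale).astype(np.int16)

# TPDF ("triangular") dither: add about +/-1 LSB of triangular noise before
# rounding.  The error becomes uncorrelated noise instead of distortion;
# "shaped" dither additionally pushes that noise toward less audible frequencies.
tpdf = (rng.uniform(-0.5, 0.5, signal.size) +
        rng.uniform(-0.5, 0.5, signal.size)).astype(np.float32)
dithered = np.round(signal * scale + tpdf).astype(np.int16)
```

Either way the output is 16-bit integers; the difference is only in what the rounding error sounds like.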
Thank you very much for your suggestions and guidance. I had already turned off dithering and set the default project rate to 48,000. I am curious what happens when I use the 16-bit integer sample format instead. I ran a test and the results look quite similar, if not identical. Does Audacity convert everything to 32-bit float internally anyway, even if no processing is being performed?
Actually, I am not certain there is no processing: I set the input channel to “MONO”. However I am providing a TOSlink input signal that has both a left and right channel that are nominally the same, although not digitally identical. Does Audacity add or average the left and right input channels to produce the mono result?
I don’t think that Audacity does either.
Audacity requests audio data from the sound system according to the default settings in Edit > Preferences > Quality. The sound system then communicates with the audio device drivers which in turn grab audio data from the hardware, pass it to the sound system and deliver it to Audacity. Audacity then attempts to make sense of what comes back (and usually succeeds) but has little control over what data it actually gets. If Audacity requests data in a format that is not supported by the hardware, then somewhere along the line it will be converted or you will get an error message saying “Error while opening sound device. Please check the input device settings and the project sample rate.”
Conversion from 16 or 24 bit to 32-bit float is lossless (perfect). Converting back again in the way described previously is also perfect as long as the data has not been changed at all (the audio samples still have exact integer representations). All processing in Audacity is done in 32-bit float format for maximum precision.
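You can convince yourself of the round-trip with a short numpy sketch. Dividing by 32768 is one common scaling convention, my assumption here rather than necessarily Audacity’s exact internal scaling; the point is that every 16-bit integer has an exact 32-bit float representation, so nothing is lost either way:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary 16-bit PCM samples, as they would come off the DAT tape.
pcm16 = rng.integers(-32768, 32768, size=48000, dtype=np.int16)

# Promote to 32-bit float.  32768 is a power of two, so the division
# is exact for every possible 16-bit value.
as_float = pcm16.astype(np.float32) / 32768.0

# Convert back with plain rounding (no dither).  Because the samples were
# not modified in between, the round trip is bit-perfect.
back = np.round(as_float * 32768.0).astype(np.int16)

assert np.array_equal(pcm16, back)
```

The assertion at the end holds for every possible input; it is any *processing* done while in float that breaks exactness, not the format conversion itself.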
That’s what I was saying: I don’t think that Audacity does decide - the decision is made before the audio gets to Audacity. Typically with better quality sound cards, when recording from analogue inputs, the left input channel is used. With typical “on-board” sound cards it could be anything. A user reported recently that on his Mac, the right channel is used (very unusual).
As Gale wrote, if you want to record a mono track with control over which channel(s) are used, record in stereo and then either “Tracks > Stereo Track to Mono” if you want an average of left and right channels, or “Track drop-down menu > Split Stereo To Mono” if you want just the left or just the right channel (and delete the unwanted track by clicking on the [X]).
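The two outcomes can be sketched in numpy (the sample values are made up for illustration; mixing in float is my assumption about how an averaging mix-down would be done):

```python
import numpy as np

# Two nominally identical but not bit-identical channels, as on the tape.
left = np.array([100, -200, 300], dtype=np.int16)
right = np.array([101, -199, 299], dtype=np.int16)

# "Tracks > Stereo Track to Mono": average of left and right, computed
# in float.  The result generally lands between the two originals and
# is no longer bit-identical to either channel.
averaged = (left.astype(np.float32) + right.astype(np.float32)) / 2.0

# "Split Stereo To Mono" and deleting one track: just keep one channel,
# which stays bit-identical to what was on the tape.
left_only = left.copy()
```

So for an exact digital copy of one channel, splitting and deleting is the bit-preserving route; averaging is a (mild) form of processing.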
DAT speaks AES/EBU 48000, 16-bit, stereo. It’s possible to connect one of these things to broadcast television station digital video recorders, station program routing systems and satellite links. The grown-up version of these player/recorders has RS-422 machine control just like everything else in the station. Press Play from the control room on the fourth floor and it doesn’t matter where the DAT machine is. The control cable is designed to run hundreds of feet through walls, ceilings and raceways.
There is no mono unless the recorder decided to only put something on one channel. Another option is to put the same thing on both channels – two-channel mono – but you always get two channels, as far as I know.
I guess it’s possible to have odd duck signal management in post production, but a mono DAT tape would not be compatible with other DAT machines.
And yes, it’s always a juggling act to stop Audacity from helping you through bit depth conversions. If all you did was capture and produce a sound file, it should not make any difference that Audacity is doing a bit depth conversion (two actually) and you can safely turn dither off. If you start messing with the show by adding filters or effects, that will take you straight out of bit-for-bit land. There will be conversion errors and your job is to make them as insignificant as possible.
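That boundary between bit-for-bit land and conversion errors can be sketched in numpy. The -1 dB gain below is a stand-in for any filter or effect, and the scaling is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
pcm16 = rng.integers(-16384, 16384, size=1000, dtype=np.int16)

# Capture-and-export only: the float round trip stays bit-perfect.
untouched = np.round(pcm16.astype(np.float32)).astype(np.int16)
assert np.array_equal(pcm16, untouched)

# Any processing -- here a hypothetical -1 dB gain -- moves the samples
# off exact integer values, so converting back to 16-bit must round.
gain = np.float32(10.0 ** (-1.0 / 20.0))  # about 0.891
processed = pcm16.astype(np.float32) * gain
requantized = np.round(processed).astype(np.int16)

# The residual rounding error is at most half a 16-bit step per sample;
# dither trades it for benign noise, it cannot make it go away.
error = processed - requantized
```

That residual error is the “conversion error” referred to above: unavoidable once you process, but bounded, and made insignificant (turned into plain noise) by dither.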