Sample Data Export/Import is “bit-perfect”/lossless? [SOLVED]

Hello,


I would just like to check if Sample Data Export/Import is “bit-perfect”/lossless?

I took into account this webpage:
http://manual.audacityteam.org/man/sample_data_import.html

Example:
– I export a sound at 32-bit, 48000 Hz as a RAW file;
– I export the same sound with Sample Data Export;
– I import the previously generated text file with Sample Data Import;
– I export the result to a new 32-bit, 48000 Hz RAW file.

→ the first RAW file = the second RAW file? (I checked their SHA-512 hashes and they are identical, but I would like your confirmation.)





EDIT – [Answer] Yes, Sample Data Export/Import can be “bit-perfect”/lossless:


sample-data-export.ny

→ if desired, back up the file (copy it, for example) before modifying it

  • In the file, you have to replace:
(setq *float-format* "%1.5f")               ; 5 decimal places

to →

;; (setq *float-format* "%1.5f")               ; 5 decimal places
(setq *float-format* "%.100g")               ; 100 digits maximum. Non-significant zeroes are not kept

;maxlen 1000001


to →

;; maxlen 1000001

(if (> number 1000000)
    (add-error "Cannot export more than 1 million samples."))


to →

;;  (if (> number 1000000)
;;      (add-error "Cannot export more than 1 million samples."))
  • If you run into anything strange, read the entire thread before doing any other research.
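  • As a quick sanity check after editing, you can confirm in Audacity’s Nyquist Prompt that the new format keeps every significant digit. A minimal sketch; the sample value is only an illustration:

(setf *float-format* "%.100g")
(display 0.80000001192092896)   ; prints 0.800000011920928955078125 - the exact binary value, nothing rounded away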

No, not bit-perfect for 32-bit samples. Sample Data Export writes the sample values with 5 decimal places, whereas 32-bit float precision is around 15 decimal places. In other words, Sample Data Export rounds the sample values to 5 decimal places. 5 decimal places are almost enough for 16-bit precision.
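For example, in the Nyquist Prompt (a minimal sketch; the value is arbitrary):

(setf *float-format* "%1.5f")       ; the format used by Sample Data Export
(display 0.80000001192092896)       ; prints 0.80000 - everything past 5 decimal places is lost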

Oh… you are right…

– Is it possible to increase this value to be “bit-perfect”/lossless? If so: you said “32-bit float precision is around 15 decimal places”; do you have a more precise figure?
– Is it possible to increase the limit of 1 million samples?

I think I have found these two parameters in “sample-data-export.ny”, but maybe I will break something.

The “ny” file is just a plain text file. Make a backup copy of it before you start modifying it, then there’s no need to worry about breaking it.

Fortunately on Linux there is no shortage of good text editors, but for any Windows users reading this, I’d recommend Notepad++
(On Linux I use Scite, which has built-in syntax highlighting for LISP)

Yes, but be careful. Many applications will choke on huge text files.

(defun checknumber ()
  (setq number (min number len))   ; limit to the length of the selection
  (if (< number 1)
      (add-error "No samples selected."))
  (if (> number 1000000)           ; this is the 1 million sample limit
      (add-error "Cannot export more than 1 million samples."))
  (setq number (truncate number))) ; ensure an integer sample count



It could certainly be made bit-perfect for 16-bit or even 24-bit. I’m not sure about 32-bit float, but I think it will probably be bit-perfect with around 16 decimal places.

No, because we’re talking about converting between binary floating point numbers and decimal floating point. For example, decimal 0.1 in binary is 0.00011001100110011…
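You can see this from the Nyquist Prompt. A minimal sketch; the printed digits assume IEEE double precision, which Nyquist uses for its FLONUMs:

(setf *float-format* "%.20g")
(display 0.1)   ; prints 0.10000000000000000555 - the nearest binary value, not exactly 0.1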

(setq *float-format* "%1.5f")               ; 5 decimal places

Try something like:

(setq *float-format* "%.16f")               ; 16 decimal places

Note that the more decimal digits, the bigger the file size. Each digit is 1 byte, so 16 decimal places, plus a leading “0.” or “-0.” and a newline character, is 19 or 20 bytes per sample, so 1 million samples will create a file that is close to 20 MB.
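To put a number on it (a rough estimate in the Nyquist Prompt; real files vary slightly because positive and negative values differ in length):

(display (* 20 1000000))   ; 20000000 bytes, about 19 MiB for one million samples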

The text editors are fine, thank you for your suggestions!

So in the exported text file, I have values like this:

0.62876382284369708500000000000000000000000000000000

When there are at least two 0 digits at the end (here there are many more), it means I am “bit-perfect”/lossless, isn’t it? You suggested 16 decimal places; I used 50 (and do not worry about the size of the text file, I have a lot of space). All seems fine!

Here is a very small 32-bit float number as a decimal to 50 decimal places:
0.00000002683188249363865907071158289909362792968750

You might want to read a bit about the IEEE 754 standard to get an appreciation of why this is complicated:
https://en.wikipedia.org/wiki/Single-precision_floating-point_format

I am sure I did not quite understand your link. With your example and your link, it seems the number of “decimal places” can be unbounded, so I wonder whether I can get “bit-perfect”/lossless output with Sample Data Export at all.

But in my case, I always have the same number of 0 at the end.

Why do you need “bit perfect”?
Where do the original “bits” come from?

Why should I edit without “bit-perfect” in mind?
I run tests with loopback recording; it depends on the files: 16-bit and 24-bit. But I also run tests with Audacity’s Tone generator, and I assume that is 32-bit (and I save it as a 32-bit file). I play them with VLC, with its resampling disabled.

“Bit perfect” is one of those phrases that is easy to grasp, but not really very meaningful.

32-bit → 8-bit is obviously not bit perfect, right?
but
8-bit → 32-bit → 8-bit may be lossless.

24-bit → 64-bit → 24-bit → laptop / iPod / other portable player, is probably less than 16-bit in terms of signal to noise (last time I looked, Apple were quoting “SNR > 90 dB” and “THD+N < -75 dB”)

The smallest, non-zero, 32-bit float value is 1.40130×10⁻⁴⁵ (about 0.00000000000000000000000000000000000000000000014), but sound cards are 24-bit at best, so that will be converted to zero + noise, where “noise” will be millions of times larger than this tiny value.
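A minimal sketch of that collapse, assuming (as in Audacity’s build of Nyquist) that sound samples are stored as 32-bit floats; snd-from-array and snd-fetch are standard Nyquist functions:

(setf s (snd-from-array 0 44100 (vector 1e-50)))  ; far below the smallest 32-bit subnormal
(setf *float-format* "%.17g")
(display (snd-fetch s))   ; prints 0 - the value underflowed to zero in the 32-bit sample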

Yes I understand this.

You mean “0.00000000000000000000000000000000000000000000014” is not the exact value, and it can be, for example, “0.000000000000000000000000000000000000000000000140000000…” with more digits? Or is 1.40130×10⁻⁴⁵ the exact smallest possible value?

For loopback recording, I do not use the “Stereo mix” of the sound card because I know it is not “bit-perfect”; I use WASAPI loopback (I could try ALSA, JACK, and PulseAudio forwarding under GNU/Linux), which can be “bit-perfect”. But this part is not really related to the subject. I simply want to configure Sample Data Export to export “bit-perfect” sound from Audacity. By the way, I know there is a delay between recording and playback, so I added silence at the beginning and end of the test files to select the sound precisely. With the same steps as in my first post, their SHA-512 hashes are identical (with 50 decimal places; I even tested 200, and I still get 0 for everything after the first 15/16 digits).

I wasn’t sure if Nyquist supported this notation, but it appears that it does:

(setf *float-format* "%.100g")

I think this should ensure that enough digits are printed in all cases.

“.100g” means 100 digits will be printed after the dot, right?

Yeah, now it is a pretty big number! So as long as I do not get around 100 digits printed after the dot (I assume I can even increase the value if necessary), the exported text file will contain the sound “bit-perfect”/lossless?

Not quite.
It means: use the shortest representation, %e or %f, with up to 100 significant digits, counting digits on both sides of the dot,
where %f is fixed-point notation and %e is scientific notation (mantissa/exponent).
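A minimal sketch of the difference, for the Nyquist Prompt (the test value is arbitrary):

(setf test 0.000012345)
(setf *float-format* "%.6f")
(display test)   ; fixed-point: 0.000012
(setf *float-format* "%.6e")
(display test)   ; scientific: 1.234500e-05
(setf *float-format* "%.6g")
(display test)   ; shortest form, trailing zeros dropped: 1.2345e-05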

32-bit floating point numbers, and hence sample values, may be greater than 1; in fact, very much greater, up to about 340282300000000000000000000000000000000 (3.4028235×10³⁸).
By my reckoning, 100 digits should be sufficient to uniquely identify every value, and thus give back the same 32-bit sample value after the two-way conversion; in other words, 100 digits*2 is more than enough for a “bit-perfect” decimal representation of any sample value*1 in any format supported by Audacity (up to 32-bit float).


*1 One thing that we haven’t mentioned, though in normal use they should not occur, is inf, -inf and NaN: positive/negative infinity, and “Not a Number” (see NaN on Wikipedia).

*2 I think that far fewer than 100 digits are actually required (IEEE 754 guarantees that 9 significant decimal digits are enough to round-trip any 32-bit float), but the ‘100’ ensures that the number of digits has a limit.

Hmm yes, I took too big a shortcut, sorry.

I will handle that kind of case if I come across it, no problem.

With this last configuration, I just tested changing the last digit of one line in the exported text file. Once imported into Audacity, then exported to a 32-bit RAW file, the SHA-512 hashes of both RAW files are identical, even with the modified value… Does Sample Data Import truncate or round the values when importing? Or Audacity itself? (In Audacity, my project settings are 32-bit, 48000 Hz.)

Converting floating point values to decimal is handled by Nyquist. You would need to dig into the Nyquist programming language sources (the CMU Computer Music Project software) to find out exactly what happens there.

I tried

sample-data-import.ny, line 183:

((numberp val)  (float val)) ;valid.

replaced by

((display val)  (float val)) ;valid.

So, in the input file the first line is:

0.80000001192092896

and with the “Debug” button before importing, it shows:

0.8

but if I change the first line, for example to

0.85445676892092896

now it shows:

0.854457

EDIT: But what I do not understand at all is that even if Audacity apparently imports “0.8”, when I export that imported data, the new exported file is identical to the original input file.

Try running this with debug:

(setf test 0.123456789)
(display test)                  ; default format "%g": prints 0.123457
(setf *float-format* "%.10f")
(display test)                  ; now prints 0.1234567890

float-format sets the format for displaying numbers. It does not affect the actual value.

I tested

(setf *float-format* "%.10f")

and even better:

(setf *float-format* "%.1000g")

You are right, the full value was there.
Back to the file: it contains, multiple times (generated by “Tone”, 23000 Hz, 0.8 amplitude):

0.80000001192092896
-0.80000001192092896

I changed them to

0.80000001192092895
-0.80000001192092895

Now when I import, I can see

0.80000001192092896

:confused:
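This is actually expected: 0.80000001192092895 and 0.80000001192092896 are both within half a unit in the last place of the same binary value, 0.800000011920928955078125 (which is exactly 0.8 rounded to a 32-bit float), so both decimal strings read back as the same number. A minimal sketch for the Nyquist Prompt:

(setf *float-format* "%.17g")
(display 0.80000001192092895)   ; prints 0.80000001192092896
(display 0.80000001192092896)   ; prints 0.80000001192092896 - the same binary value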