bit depth


I was wondering, is there an algorithm or pseudocode for reducing the bit depth of a .wav file? I'm new to audio programming, so I might need detailed information.

Thanks in advance

In PCM audio, the amplitude of each sample is defined on a linear scale.
For 32-bit float format, the maximum sample value is +1.000 and the minimum value is -1.000.
For integer formats: signed 16-bit has a maximum of +32767 and a minimum of -32768, and signed 8-bit has a range of +127 to -128.
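To make those ranges concrete, here is a minimal sketch (my own illustration, not part of any standard library) of scaling a float sample in [-1.0, +1.0] to the signed integer ranges above. Note the clamp: scaling +1.0 by 32768 gives 32768, which is one more than the maximum a signed 16-bit integer can hold.

```python
def float_to_int16(x: float) -> int:
    """Scale a float sample in [-1.0, +1.0] to signed 16-bit."""
    v = round(x * 32768)               # +1.0 would scale to 32768, out of range
    return max(-32768, min(32767, v))  # clamp so +1.0 becomes +32767

def float_to_int8(x: float) -> int:
    """Scale a float sample in [-1.0, +1.0] to signed 8-bit."""
    v = round(x * 128)                 # +1.0 would scale to 128, out of range
    return max(-128, min(127, v))      # clamp so +1.0 becomes +127
```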
Take the example of converting from 16-bit to 8-bit PCM: divide each sample value by 32768 and multiply by 128 (equivalently, divide by 256), then round the result to the nearest integer.

Unfortunately this has an unwanted side effect: gradual changes from one 16-bit value to another become repetitive steps when rounded to the nearest 8-bit value, and this stepping sounds like a rather unpleasant (though fairly low-level) form of distortion. To avoid it, it is usual to add a small random value before rounding, so that some values round up and others round down, with a higher probability of rounding to the closer integer. This process is called "dither", and essentially what it does is randomize the quantization errors so they turn into noise rather than distortion.

It is also important that neither rounding nor dithering causes values to "wrap" between positive and negative. For example, when converting the 16-bit value 32767 to 8-bit, it must be clamped to 127 (01111111) and not rounded up to 128 (10000000), because 10000000 in signed 8-bit notation is -128.
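The whole process above (scale, dither, round, clamp) can be sketched like this. This is only an illustration of the idea, assuming the input is a plain list of signed 16-bit integers; the dither here is the sum of two uniform random values (so-called triangular, or TPDF, dither of about one output LSB), which is a common choice but not the only one.

```python
import random

def convert_16_to_8(samples_16):
    """Convert signed 16-bit PCM samples to signed 8-bit with dither.

    Dividing by 32768 and multiplying by 128 is the same as dividing
    by 256. The dither value randomizes the rounding so quantization
    error becomes low-level noise instead of step distortion.
    """
    out = []
    for s in samples_16:
        scaled = s / 256.0
        dither = random.random() - random.random()  # triangular PDF in (-1, +1)
        q = round(scaled + dither)
        # Clamp so rounding/dither can never wrap positive to negative:
        q = max(-128, min(127, q))
        out.append(q)
    return out
```

For example, the input value 32767 scales to about 127.996, so even with dither it always rounds to 127 or is clamped back down to 127, never wrapping to a negative value.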