“Encoding” is the technically correct word in that context, but it could easily be confused with “encoding and decoding” as used in connection with formats such as MP3, AAC, etc.
The “encoding” for PCM format (“WAV”, “AIFF” and “RIFF” all use “PCM encoding”) is an extremely simple system. Files of these types usually have some sort of “header” information, and may include “metadata”, but the actual audio data is really just a list of numerical values. Each numeric value represents one point on the waveform (a “sample”).
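To make that concrete, here is a minimal sketch (plain Python, no audio library) of what “a list of numerical values” means: the sample values of one cycle of a sine wave, which is all the audio data in a PCM file really is.

```python
import math

# A PCM "waveform" is just a list of sample values.
# One cycle of a sine wave, 8 samples per cycle (values are approximate
# where floating-point rounding applies):
samples = [math.sin(2 * math.pi * n / 8) for n in range(8)]
# roughly: [0.0, 0.707, 1.0, 0.707, 0.0, -0.707, -1.0, -0.707]
```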
Formats such as MP3, AAC, OGG, WMA etc. do much more complex things with the audio data - they “encode” the sample values to create a smaller set of numeric values. Playing back one of these types of files requires that the data be “decoded” to produce (approximately) the audio data that you started with. PCM formats do not require “encoding” / “decoding” in this sense because the numeric values are just stored as a sequence of numbers.
Within Audacity, whether editing or processing, audio is just a bunch of numbers. The “valid” range of sample values is from -1 to +1 (as shown on the vertical scale of an audio track). As you are no doubt aware, computers work in binary, so the range of possible values for audio samples needs to be represented as binary values. For “16 bit integer” binary data, the possible range of binary values is 0000000000000000 to 1111111111111111. In decimal (two's complement) that is a range of -32768 to +32767 (65,536 distinct values). The binary numbers are “normalized” to a range of -1 to +1 by dividing the binary value by 32768.
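That normalization step can be sketched in a few lines of Python using the standard library's `struct` module: pack some 16 bit integer samples the way a PCM WAV stores them (little-endian), then divide by 32768 to get the -1 to +1 range.

```python
import struct

# Four 16-bit integer samples, stored little-endian as in PCM WAV data:
ints = [0, 16384, 32767, -32768]
raw = struct.pack("<4h", *ints)            # 8 bytes of raw PCM audio data

# Reading them back and "normalizing" by dividing by 32768:
decoded = struct.unpack("<4h", raw)
normalized = [v / 32768 for v in decoded]  # [0.0, 0.5, ~0.99997, -1.0]
```

Note the slight asymmetry: +32767 normalizes to just under +1, while -32768 hits -1 exactly.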
In contrast, “32 bit float” is a rather more complex form of binary number which can itself represent fractional (“floating point”) values, thus the “valid” range for audio samples of -1 to +1 can be represented directly using 32 bit float numbers, and there are billions of distinct values. 32 bit float values can also represent values outside of the range +/- 1, which is very handy for us because it means that if we accidentally cause values to go outside of the “valid” range, then we can recover from that (by amplifying to a lower level) without damage (only applies to “floating point” data).
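Here is an illustrative sketch of that recovery (plain Python floats standing in for 32 bit float samples - the principle is the same): integer storage clips anything beyond full scale and the information is gone, whereas float storage keeps the out-of-range values, so amplifying to a lower level restores a valid waveform.

```python
# Two of these samples are outside the "valid" -1..+1 range:
samples = [0.5, 0.9, 1.4, -1.2]

# Integer (16-bit) storage clips anything beyond full scale - data is lost:
def to_int16(x):
    return max(-32768, min(32767, round(x * 32768)))

clipped = [to_int16(x) / 32768 for x in samples]   # 1.4 and -1.2 are destroyed

# Float storage keeps the out-of-range values, so amplifying down recovers them:
peak = max(abs(x) for x in samples)                # 1.4
recovered = [x / peak for x in samples]            # back inside -1..+1, undamaged
```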
What libsndfile does is to handle converting between “raw numbers” and “audio files”. When an audio file of a supported type is read from disk, libsndfile will extract the raw numbers from it. Similarly, when writing an audio file, libsndfile will convert the raw number data into a “file stream” which can then be written to disk.
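As a rough illustration of that round trip (using Python's standard-library `wave` module as a stand-in for libsndfile - this is a sketch of the idea, not libsndfile's actual API), here is raw number data being written out as a PCM “file stream” and read back in, using an in-memory buffer instead of a file on disk:

```python
import io
import struct
import wave

# Raw numbers: four 16-bit integer samples.
ints = [0, 12000, -12000, 32767]

# Writing: convert the raw numbers into a PCM WAV file stream.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(44100)
    w.writeframes(struct.pack("<4h", *ints))

# Reading: extract the raw numbers back out of the file stream.
buf.seek(0)
with wave.open(buf, "rb") as r:
    frames = r.readframes(r.getnframes())
decoded = list(struct.unpack("<4h", frames))   # the same raw numbers again
```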
Any of that useful? Does that give you a better idea of what is going on?