Analog to digital conversion process

Can anyone explain how one individual section, sample, or single unit of information within a sound file can contain several different sounds, an entire orchestra at once if required? Put very simply, surely the information distinguishing the sound of a trombone from that of a timpani, or a snare from a snore in the audience, must be stored somewhere in the sound file… but where? How?
Every time I search for this information, all I get is frequency, sample rate, and location in the stereo spectrum.
Thanks in advance

It’s a very clever trick performed by your brain.
“Sound” is just “vibration” - typically, “vibrations in air”. When you hear a recording of a trombone, you are not hearing “a trombone”. Your ears respond to vibrations in the air and create electrical impulses in your brain. Your brain is able to recognise patterns in the auditory information, and if it has sufficient experience of “trombone sound”, it is able to match the patterns within the information with the concept of “a trombone”.

Thanks Steve. I can understand that in principle alright, but what I have difficulty with is this: in digital terms, is this information (full orchestra etc.) stored in just one frequency reference point?

No. The “full orchestra etc” is a conceptual connection created by the brain. The digital data is just meaningless data until the mind gives it meaning.

The individual points in the sound file correspond to the pressure of the air around the original microphone at a particular instant in time. It takes a great many of these points (44,100 for every second) to represent a full orchestra.
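A tiny sketch (in Python, with made-up frequencies standing in for "the orchestra") of how several simultaneous sounds collapse into a single stream of sample values. Each sample is one number: the sum of every sound's pressure at that instant.

```python
import math

SAMPLE_RATE = 44100  # samples per second, as mentioned above

def tone(freq, t):
    """Air-pressure contribution of one pure tone at time t (seconds)."""
    return math.sin(2 * math.pi * freq * t)

# Hypothetical "orchestra" of three simultaneous pitches (roughly A3, C#4, E4).
freqs = [220.0, 277.18, 329.63]

# One second of audio: each sample is just ONE number, the instantaneous
# sum of all three tones. The individual "instruments" are not stored
# separately anywhere in the list.
samples = [
    sum(tone(f, n / SAMPLE_RATE) for f in freqs)
    for n in range(SAMPLE_RATE)
]

print(len(samples))  # 44100 numbers for one second of sound
```

Pulling the three tones back out of that single list of sums is the hard inverse problem the later replies are talking about.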

I think I see the problem. The blue waves on the Audacity timeline represent the back and forth motion of the air around your ears. If the air changes direction between twenty times a second and twenty thousand times a second, we hear that as sound. Microphones change that air motion into electrical motion, and the digitizer changes that electrical motion into numbers.

You can watch Audacity doing it. Attached. If you magnify the blue waves enough, they will show you the sample points. That’s those little blue dots. Each dot is a binary number.
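If it helps to see what "each dot is a binary number" means, here is a minimal sketch, assuming a 16-bit file (the function name and the simple round-and-clip scheme are illustrative; real encoders also dither):

```python
def to_16bit(sample):
    """Quantize a sample in the range -1.0..1.0 to a signed 16-bit integer.

    In a 16-bit sound file, each of those little blue dots is stored as
    an integer between -32768 and 32767.
    """
    clipped = max(-1.0, min(1.0, sample))  # anything louder just clips
    return int(round(clipped * 32767))

print(to_16bit(0.5))                  # the dot's value as a plain integer
print(format(to_16bit(0.5), '016b'))  # the same value as a binary number
```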

Assume you sang a song and recorded it into the computer. Collect all those binary numbers in a basket and play them out one after the other, really fast. The computer turns them back into blue waves, the soundcard turns them into electrical variations, and the speaker turns them back into sound—air movement.
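The "basket of numbers" can be made literal with Python's standard `wave` module. This sketch (sample values are made up) writes three numbers into a WAV file in memory and reads them straight back out—the file really is nothing more than stored numbers plus a header saying how fast to play them:

```python
import io
import struct
import wave

numbers = [0, 16384, -16384]  # three hypothetical 16-bit sample values

# Put the numbers into the "basket" (a mono, 16-bit, 44.1 kHz WAV file).
buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 2 bytes = 16 bits per sample
    w.setframerate(44100)  # play back 44,100 of them every second
    w.writeframes(struct.pack('<3h', *numbers))

# Take them back out again.
buf.seek(0)
with wave.open(buf, 'rb') as r:
    frames = r.readframes(r.getnframes())
back = list(struct.unpack('<3h', frames))

print(back)  # the same basket of numbers, unchanged
```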

Please note that nowhere in this process does the computer ever figure out which of those number collections is a trumpet. That’s what’s so hard to explain to people when they want us to “separate the violins from the vocals” in a song. The computer doesn’t know violins. It’s just a basket of numbers.

Screen Shot 2014-12-14 at 22.16.54.png

It’s not just digital—it’s the same in good old analog too. Think about it: what is represented at any one particular point in a vinyl LP groove, or at a single point on a tape? :nerd: