I am trying to learn more about frequency and sound, and I came up with the thoughts below.
Frequency is defined as the number of repetitions per second.
For example, 1Hz is 1 repetition of the same wave pattern over 1 second, 40Hz is forty repetitions of the same wave pattern over 1 second, 100Hz is one hundred repetitions per second, and so on. If I understand correctly, the smallest wave pattern would be some change in air pressure that propagates through the air. For a 40Hz signal to happen, the air pressure would change in the same fashion 40 times over 1 second.
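To check my understanding I made a small numpy sketch (the sampling rate of 8000 samples per second is just my assumption): it generates one second of a 40Hz sine wave and counts how many times the pressure swings from negative back up through zero, i.e. how many cycles fit into that second.

```python
import numpy as np

fs = 8000                       # assumed sampling rate: 8000 samples per second
t = np.arange(fs) / fs          # sample instants covering 1 second
x = np.sin(2 * np.pi * 40 * t)  # a 40 Hz sine: the "wave pattern" repeats 40 times

# Count upward zero crossings: each one marks the start of a new cycle.
crossings = int(np.sum((x[:-1] <= 0) & (x[1:] > 0)))
print(crossings)  # -> 40
```

So "40Hz" really does mean the same pressure pattern occurring 40 times in one second.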
When 40Hz is recorded over 0.5s, we can still hear the 40Hz sound, just for a shorter time.
Can we say that it is still a 40Hz signal, consisting of the specific wave pattern that makes up the 40Hz frequency?
Can 1/40 of that signal (a single cycle) be called a 40Hz signal?
Can we say that the 40Hz signal is that one specific change of air pressure, and that it does not matter whether it happens 40 times per second or just once: is it still considered a 40Hz signal?
Now, let's say we have three different signals coming from different locations: 40Hz, 100Hz and 10000Hz. If we record them separately, each would look like a repeating wave with the specific wave pattern that makes up its frequency. If we record all three sounds together, the recorded waveform would be the sum of all three signals, creating a much more complicated wave pattern and therefore different air pressures propagating through the air.
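Here is how I picture that summing (the sampling rate is my assumption): the microphone only ever sees one pressure value at each instant, and that value is simply the pointwise sum of the three component waves.

```python
import numpy as np

fs = 44100               # assumed sampling rate
t = np.arange(fs) / fs   # one second of sample instants

s40    = np.sin(2 * np.pi * 40    * t)
s100   = np.sin(2 * np.pi * 100   * t)
s10000 = np.sin(2 * np.pi * 10000 * t)

# The air carries only one pressure at a time, so the recording
# is the elementwise sum of the three pressure signals.
combined = s40 + s100 + s10000
```

The `combined` array no longer looks like any single repeating pattern, even though it is built from three perfectly regular ones.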
When Audacity performs spectrum analysis of the recorded combined signal, does it use the Fourier transform to decompose the sum into its individual frequencies? In primitive terms, does it scan for patterns repeating over a one-second time frame and then assign each one to a specific frequency based on its number of repetitions per second?
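I tried the decomposition myself with numpy's FFT (this is my own sketch, not Audacity's actual code): given exactly one second of the combined signal, each FFT bin corresponds to a whole number of repetitions per second, so the bin index is the frequency in Hz.

```python
import numpy as np

fs = 44100              # assumed sampling rate
t = np.arange(fs) / fs  # exactly 1 second of samples
combined = (np.sin(2 * np.pi * 40 * t)
            + np.sin(2 * np.pi * 100 * t)
            + np.sin(2 * np.pi * 10000 * t))

# The FFT measures how strongly each "k repetitions per second"
# pattern is present in the signal. With a 1 second window,
# bin k corresponds directly to k Hz.
spectrum = np.abs(np.fft.rfft(combined))
peak_bins = np.flatnonzero(spectrum > spectrum.max() / 2)
print(peak_bins)  # the three component frequencies reappear as spikes
```

The three spikes land exactly at bins 40, 100 and 10000, recovering the components from the messy-looking sum.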
Can we say that the recorded signal, in its finest form (which depends on the sampling frequency), is just a sequence of amplitude (air pressure) samples, one per sample instant?
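The way I understand it (sampling rate again an assumption on my part), sample number n is nothing more than the measured pressure amplitude at time n / fs seconds:

```python
import numpy as np

fs = 1000                       # assumed rate: 1000 amplitude readings per second
t = np.arange(5) / fs           # the first five sample instants: 0 ms .. 4 ms
samples = np.sin(2 * np.pi * 40 * t)

# Each entry is just the (scaled) air pressure at one instant;
# sample n was taken at time n / fs seconds.
for n, a in enumerate(samples):
    print(f"t = {n / fs:.3f} s  amplitude = {a:+.4f}")
```

Everything else, including the notion of frequency, is reconstructed from this bare list of amplitude-versus-time numbers.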