Hey everybody, Tobi here, and it's the first time I'm opening a thread here. I have used Audacity for some audio manipulation, but nothing special at all. You all probably do really fancy and impressive stuff. Anyway, that's the reason why I'm asking you about the following «interesting» topic.
I work in the engineering field of numerical simulation (computational fluid dynamics), so I deal with fluid flows in a lot of different areas such as aerodynamics, heating and ventilation systems, and so on.
In a wide range of simulations there are unsteady phenomena that lead to vortex shedding. Another situation would be a “crash” simulation in which material gets deformed rapidly. By solving the flow equations I get a three-dimensional pressure field, so I could record the pressure at some defined point. This pressure signal should then correspond to some sound (whatever it would be). The pressure would be in Pa and time-dependent.
The question now is: Based on this information, could we create a corresponding sound out of that data?
I could also run an FFT analysis on the pressure signal, but I'm not sure whether that would help?
I'm not a sound producer, so I have no idea whether something like that would work, but off the top of my head it should somehow be possible to create some noise from raw pressure data (or its fluctuations) using, e.g., Audacity?
Any feedback is warmly welcomed and sorry if the thread is not located properly.
Thanks in advance,
If you understand the FFT and how it relates to sound, I'd say you have a good start. People doing scientific work often use Matlab, which can do all kinds of analysis and manipulation. I've never used it, but it can read & write audio files. I'm not sure if it works in real time (if that's important to you).
Audacity does NOT work in real time. There is a special programming language for Audacity called [u]Nyquist[/u] that's mostly used for effect plug-ins and audio analysis. But I think you'd need your data in an “audio format” first, and you're going the other way around.
Or of course you could write a custom program in any programming language. But even if you already know how to program, learning “audio programming” is a rather big task, and you might want to get some help with that part of the project. (I've never done audio programming myself, but I've “studied” a little audio & DSP programming.)
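For what it's worth, here is a minimal sketch of that “custom program” route in Python, using only the standard-library `wave` module. It's just an illustration under my own assumptions: the function name, the output filename, and the 440 Hz test signal are all made up; your real input would be the pressure trace from the probe point.

```python
import wave
import struct
import math

def pressure_to_wav(pressure, filename, sample_rate=44100):
    """Convert a sequence of pressure samples (Pa) to a mono 16-bit WAV file.

    Only the fluctuation around the mean is audible, so the mean
    (e.g. the ambient/atmospheric pressure) is subtracted and the
    remainder is normalized to full scale.
    """
    mean = sum(pressure) / len(pressure)
    fluct = [p - mean for p in pressure]
    peak = max(abs(x) for x in fluct) or 1.0      # avoid divide-by-zero
    samples = [int(32767 * x / peak) for x in fluct]

    with wave.open(filename, "wb") as wf:
        wf.setnchannels(1)             # mono
        wf.setsampwidth(2)             # 16-bit signed
        wf.setframerate(sample_rate)
        wf.writeframes(struct.pack("<%dh" % len(samples), *samples))

# Illustration: a 440 Hz "pressure oscillation", 1 s at 44100 samples/s
data = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
pressure_to_wav(data, "probe_point.wav")
```

The resulting WAV opens directly in Audacity, so you could do this conversion outside and then use Audacity for the listening/filtering part.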
Note that each data point is one audio sample. By default, Audacity spaces the samples at intervals of 1/44100 of a second (the default “sample rate” is 44100 Hz).
If your data sampling frequency is not 44100 samples per second, you can change the track sample rate to match your sampling frequency (see “Rate” in the track’s drop-down menu: https://manual.audacityteam.org/man/audio_track_dropdown_menu.html#rate), though note that if the sample rate is below a few hundred samples per second, you won’t hear very much because the frequency is too low.
If you are sampling at a lower rate than the track’s sample rate, then effectively you are representing the sound “speeded up”.
If you grab a numeric value once every second and put the values into a track with a sample rate of 44100 Hz, then 44100 seconds' worth of values will occupy 1 second of track time. That is, the signal will play 44100 times faster.
If you grab 1 sample every millisecond (1000 per second), then in a track with a sample rate of 44100 Hz, it will play approximately 44 times faster (44.1 to be exact).
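The arithmetic above boils down to a single ratio; a one-liner (the function name is mine) makes the two examples explicit:

```python
def playback_speedup(data_rate_hz, track_rate_hz=44100):
    """Factor by which a signal plays back faster (or slower, if < 1)
    when data sampled at data_rate_hz is put into a track whose
    sample rate is track_rate_hz."""
    return track_rate_hz / data_rate_hz

print(playback_speedup(1))     # 1 value per second  -> 44100.0x faster
print(playback_speedup(1000))  # 1 value per ms      -> 44.1x faster
```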
The data I can generate are time-dependent pressure values at an arbitrary sampling rate. 44100 samples per second is very high for a simulation, but for about 3 s of signal it might be feasible.
The exact rate isn’t too important; Audacity can handle higher or lower sample rates, and it’s just “processing numbers” which (usually) happen to represent audio samples.
Note that Audacity is intended for audio/sound, and people sometimes run into problems/limitations when using it for non-audio scientific applications. You can also have problems if you try to record or play back signals/frequencies above or below the audio range. The hardware (microphone/soundcard/speakers), and of course your ears, are generally limited to the audio range of 20 Hz to 20 kHz.
But if you already have the data, you can manipulate it into audio data that can be played as sound, perhaps by simply speeding it up or slowing it down, or perhaps with fancier FFT manipulation, etc. But I’m not exactly sure what you’re trying to do or whether any of this will be useful. Most “data” doesn’t convert to audio in any useful/meaningful way, but if you’re dealing with vibration data, that’s a good start. And air-pressure vibrations/variations in the audible frequency range ARE sound (in liquid too, if your head is under water :) ).
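If you do go the FFT route, a quick sanity check is whether the probe signal contains one dominant tone (e.g. a vortex-shedding frequency): find the largest DFT bin and convert its index back to Hz. Below is a naive pure-Python sketch of that idea (names are mine, and the 50 Hz test signal is made up); for real data sets you'd use something like `numpy.fft.rfft` instead of this O(n²) loop.

```python
import math

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the largest-magnitude DFT bin,
    ignoring the DC component (bin 0). Naive O(n^2) DFT - fine for
    short probe signals, far too slow for long recordings."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n   # bin index -> Hz

# Illustration: a shedding-like pressure trace, 50 Hz sampled at 1 kHz
sig = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(1000)]
print(dominant_frequency(sig, 1000))  # -> 50.0
```

Knowing the dominant frequency also tells you how much you'd need to speed the signal up to bring it into the audible range.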