So, I’m asking for some help with Audacity’s spectrogram view. The goal of my AV project is to generate a spectrogram of a specific WAV file (generated earlier in Python) and save it as a PNG. I could, of course, stick with Python and one of the available spectrogram libraries. The thing is, I have absolutely loved Audacity’s spectrogram view ever since my linguistics undergrad. It is visually gorgeous - which does matter for the project.
Unfortunately, going into Audacity, selecting spectrogram view for the WAV file, and then screenshotting is out of the question - the WAV file will be edited constantly in real time, and I need this to work in code, not in Audacity. The idea is to create a live, shifting spectrogram driven by me changing the source WAV file in Python (well, actually using some lighting-design software, but that’s irrelevant).
Yes, I did check the Audacity code. But I’m more or less completely lost. I’d appreciate any amount of help: from pointing me in the right direction to helping out a bit. Once the project is done it should look very, very nice - that is, if I manage to make this work with Audacity.
That is likely to be very difficult.
There’s another app called “Friture” that is written in Python / QML that includes a spectrogram view that is very similar to Audacity. GitHub - tlecomte/friture: Real-time audio visualizations (spectrum, spectrogram, etc.)
Hey, thanks a lot for the answer. I’ve checked the app and it’s not quite Audacity in terms of spectrogram view. But I suppose the code might be helpful nevertheless.
But back to Audacity: how “very difficult” do you think it might be? I’ve read through what I think are the spectrogram source files. Yes, I’m lost, but I can learn. Step one is knowing whether it’s theoretically feasible for me to take that code and make it work - you know, separate it from the Audacity interface and run it standalone while still being able to edit the spectrogram settings. I do suspect it will be hard, but hearing that it’s at least possible would help me a lot. Alternatively, let me know if it’s impossible - because, I don’t know, the code might be too closely tied to the rest of Audacity - so I don’t waste time on this.
EDIT: I’ve also checked Sonic Visualizer and ofc Praat, but neither of them come close to the beauty of Audacity’s spectrograms.
What is it in particular that you like about Audacity’s spectrogram view? If it’s the colours, then that’s just a customization that you could make to any spectrogram plotting code. (Audacity currently provides two colour maps and two greyscale maps, selectable with the “Scheme” setting; see Spectrograms Preferences - Audacity Manual.)
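If it really is mostly the colours, that part is easy to replicate in any plotting code. A minimal sketch with matplotlib - note the colour stops below are rough guesses for illustration, not Audacity’s actual palette:

```python
# Sketch: apply a custom colormap to a spectrogram-like image with matplotlib.
# The colour stops are illustrative guesses, NOT Audacity's real "Scheme" palette.
import matplotlib
matplotlib.use("Agg")  # headless-safe backend
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LinearSegmentedColormap

# Hypothetical dark -> purple -> red -> white ramp
audacity_like = LinearSegmentedColormap.from_list(
    "audacity_like",
    ["#000000", "#1f0d57", "#a01f8f", "#e34a33", "#ffffff"],
)

rng = np.random.default_rng(0)
spec_db = rng.uniform(-80, 0, size=(257, 100))  # placeholder dB spectrogram

plt.imshow(spec_db, origin="lower", aspect="auto", cmap=audacity_like)
plt.colorbar(label="dB")
plt.savefig("spectrogram.png", dpi=150)
```

The same colormap object can be passed to any matplotlib-based spectrogram call (e.g. librosa’s display helpers), so the palette is decoupled from the FFT code entirely.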
Audacity uses a standard FFT library, but its implementation is specific to Audacity’s use of blockfiles rather than normal audio files. The implementation is so specific to Audacity’s code structure that the only part that is really portable to other applications is the FFT library, which is pretty much the same as any other FFT library.
I can’t really point out the specifics, to be fair. It’s definitely the colours as well, but I’ve managed to get those with e.g. the librosa Python library. What I’ve failed to get across the different Python libraries is, I don’t know, the overall resolution and structure? For instance, my profile pic is a spectrogram from Audacity. I’ve plotted spectrograms with several different libraries using the same WAV source file, but I’ve never achieved the detailed structure present in Audacity’s spectrogram view. The librosa package does come fairly close in terms of colours and a few other attributes, but the resolution absolutely sucks. I’ve tried to duplicate the Audacity parameters, but the result is still vastly different.
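For what it’s worth, the “resolution” gap is often a rendering issue rather than an FFT issue: matplotlib downsamples the spectrogram image to the figure’s pixel size by default, while Audacity draws at screen resolution. A hedged sketch that renders exactly one pixel per STFT cell - the Hann window and 2048-sample FFT size are assumed to mirror Audacity’s defaults, and the test tone is just a stand-in for your WAV data:

```python
# Sketch: high-detail STFT rendered one pixel per time/frequency cell,
# avoiding the blurry downsampling of matplotlib's default figure size.
import matplotlib
matplotlib.use("Agg")  # headless-safe backend
import matplotlib.pyplot as plt
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)  # test tone

n_fft, hop = 2048, 512            # assumed Audacity-like analysis settings
window = np.hanning(n_fft)
frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
spec = np.abs(np.fft.rfft(frames * window, axis=1)).T          # (freq, time)
spec_db = 20 * np.log10(np.maximum(spec, 1e-10))
spec_db = np.clip(spec_db, spec_db.max() - 80, spec_db.max())  # 80 dB range

# Size the figure so each STFT cell maps to one output pixel.
h, w = spec_db.shape
fig = plt.figure(figsize=(w / 100, h / 100), dpi=100)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis("off")
ax.imshow(spec_db, origin="lower", aspect="auto", cmap="magma")
fig.savefig("spec_hires.png", dpi=100)
```

Bumping `n_fft` trades time resolution for frequency resolution, and shrinking `hop` adds time detail - the same trade-offs Audacity’s “Window size” setting exposes.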
Another very valuable feature is the spectrogram settings. As I’ve said, my project is meant to end up as a projection of a spectrogram changing in time. I’ve prepared several parameters that can do this in the Python code that generates the sine wave / WAV file, but the parameters in Audacity’s spectrogram settings can likewise induce interesting changes. Overall, it’s hard to pinpoint exactly what I value about Audacity spectrograms, since I’m judging them on a purely visual level that doesn’t really translate to any particular parameter. I just know I’ve never managed to come close to them using other apps and Python packages.
This brings me to another way of thinking. You’ve said Audacity uses a standard FFT library. In that case, all I need to do is identify where it diverges from the other apps (Praat, Sonic Visualizer, Friture, MATLAB, plus all the Python libraries I’ve used so far). So for starters, there’s this question: what do you mean by Audacity’s use of blockfiles?
When you import an audio file, Audacity copies the audio data into the project (Audacity does not operate directly on the file).
Whether imported or recorded, audio in an Audacity project is saved as small chunks of data of about 1 MB each. These are called blockfiles. An audio track is not one continuous file, but many of these small blockfiles. A record of where each blockfile belongs is stored as XML within the project’s database.
To create the spectrogram view, Audacity has to process each blockfile in the sequence, and must correctly handle the FFT windows at the boundaries between adjacent blocks.
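That boundary handling can be illustrated independently of Audacity’s code. A minimal sketch (not Audacity’s actual implementation) that streams an STFT over fixed-size chunks, carrying leftover samples forward so FFT windows straddling a chunk boundary are still computed:

```python
# Sketch: streaming STFT over fixed-size chunks (loosely analogous to
# Audacity's blockfiles). Leftover samples from each chunk are carried into
# the next buffer so windows spanning a boundary are not lost.
import numpy as np

def streaming_stft(chunks, n_fft=2048, hop=512):
    window = np.hanning(n_fft)
    carry = np.empty(0)
    for chunk in chunks:
        buf = np.concatenate([carry, chunk])
        n_frames = (len(buf) - n_fft) // hop + 1
        if n_frames > 0:
            frames = np.lib.stride_tricks.sliding_window_view(buf, n_fft)[::hop]
            yield np.abs(np.fft.rfft(frames[:n_frames] * window, axis=1))
            carry = buf[n_frames * hop:]  # keep samples for boundary windows
        else:
            carry = buf  # chunk too short for even one window; accumulate

# The chunked result matches a one-shot STFT over the whole signal:
x = np.sin(np.linspace(0, 1000, 44100))
chunked = np.concatenate(list(streaming_stft(np.array_split(x, 7))))
whole = np.concatenate(list(streaming_stft([x])))
assert np.allclose(chunked, whole)
```

The final `assert` is the point: splitting the signal into arbitrary chunks changes nothing about the output, because the carry buffer preserves every window position across boundaries.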