I'm thinking about an audio quality measuring project. The idea is to play a sound file and record it on another device. Kind of a DSP scenario, but I don't have one. Is there a way to compare two audio files and create a difference diagram, or some other result I could use to "correct" the sound processing? The comparison should focus on real-life audio (which frequency band differs) and not on encoding specifics like, say, 128k vs 256k. Maybe one of those "easy" questions with face-palm character.
You could open both files in Audacity (one above the other), select one and apply Effect > Invert. Then use Tracks > Mix > Mix and Render to a New Track.
That new track will be the difference between the first two. Then you can use any of the analysis tools on the new track.
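For what it's worth, the invert-and-mix trick is just sample-by-sample subtraction, so you can also script it instead of doing it by hand. A minimal Python sketch, assuming both files are 16-bit WAVs with the same sample rate and already time-aligned (the file names are placeholders):

```python
import numpy as np
from scipy.io import wavfile

rate_a, a = wavfile.read("original.wav")
rate_b, b = wavfile.read("recorded.wav")
assert rate_a == rate_b, "sample rates must match"

a = a.astype(np.float64)
b = b.astype(np.float64)
n = min(len(a), len(b))  # trim to the shorter file

# Inverting one track and mixing them is the same as subtracting samples.
diff = a[:n] - b[:n]
diff = np.clip(diff, -32768, 32767)  # keep within 16-bit range
wavfile.write("difference.wav", rate_a, diff.astype(np.int16))
```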
The problem with this approach is that it's too good. Any difference in simple volume between the two original tracks, for example, will show up in the difference track. Also, none of this is automatic: you have to apply each analysis tool manually and interpret the result yourself. It doesn't automatically give you a list of corrections, filters, and effects to turn one track into the other.
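One partial workaround for the volume issue is to match the RMS levels before subtracting, so a plain level difference doesn't dominate the difference track. A hedged sketch along the same lines as above (file names are again placeholders):

```python
import numpy as np
from scipy.io import wavfile

rate, a = wavfile.read("original.wav")
_, b = wavfile.read("recorded.wav")
a, b = a.astype(np.float64), b.astype(np.float64)
n = min(len(a), len(b))

# Scale the recording so both signals have the same RMS level,
# then subtract as before.
scale = np.sqrt(np.mean(a[:n] ** 2)) / np.sqrt(np.mean(b[:n] ** 2))
diff = a[:n] - b[:n] * scale
```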
This is a cousin of the problem people have when they try to use cancellation to repair interference damage. You can't buy a clean copy of Mantovani's Strings On Fire and subtract it from your restaurant interview to get rid of the background music.
The "Plot Spectrum" tool (under the Analyze menu) shows the frequency content of the selected audio: https://manual.audacityteam.org/man/plot_spectrum.html
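If you'd rather compute the per-band difference directly (closer to what you asked, i.e. which frequency band differs), here's a sketch using Welch's method to estimate both spectra and plot the level difference in dB. The file names are placeholders, and it assumes mono WAVs at the same sample rate:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch
import matplotlib.pyplot as plt

rate, a = wavfile.read("original.wav")
_, b = wavfile.read("recorded.wav")

f, psd_a = welch(a.astype(np.float64), fs=rate, nperseg=4096)
_, psd_b = welch(b.astype(np.float64), fs=rate, nperseg=4096)

# Level difference in dB per frequency bin: positive means the
# original has more energy in that band than the recording.
eps = 1e-20  # avoid log(0) on silent bands
diff_db = 10 * np.log10((psd_a + eps) / (psd_b + eps))

plt.semilogx(f, diff_db)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Level difference (dB)")
plt.title("Original minus recorded, per band")
plt.show()
```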
If you want to do numerical analysis, Plot Spectrum can export the raw data (Click the “Export” button).
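Once you have two exported spectra, subtracting the levels per frequency gives the band-by-band difference. A sketch, assuming each export is a whitespace-separated text file with one header row and two columns, frequency (Hz) and level (dB); check your exported file first, since the exact layout may vary between Audacity versions:

```python
import numpy as np

# Placeholder file names; skiprows=1 skips the assumed header row.
orig = np.loadtxt("original_spectrum.txt", skiprows=1)
rec = np.loadtxt("recorded_spectrum.txt", skiprows=1)

freq = orig[:, 0]
level_diff_db = orig[:, 1] - rec[:, 1]  # positive = original is louder here

for f, d in zip(freq, level_diff_db):
    print(f"{f:8.1f} Hz  {d:+6.2f} dB")
```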