Is there a feature in Audacity which I can use to import a text file, similar to the “spectrum.txt” file generated when you analyse a waveform via the Analyse > Plot Spectrum menu? In other words, I can create a list of frequencies and levels, import them into Audacity, and then be given a choice as to what type of waveform (sine, square, etc.) to generate. For example: Frequency (Hz) Level (dB)
No, there is no feature that does that; however, it is relatively simple to do from the Nyquist Prompt.
The Nyquist Prompt is in the Effect menu, and to enable it you need to have part of an audio track selected.
The following code posted into the Nyquist Prompt box will generate the required tone into the selected track.
(setf data (list
'(440 -12) ; example data pairs - replace with your own
'(880 -18)
))

(simrep (i (length data))
  (scale (db-to-linear (second (nth i data)))
         (hzosc (first (nth i data)))))
Additional values may be added to the list provided that they are in the form:
'(440 -12)
where the first character is a single quote,
the first number is the frequency in Hz,
and the second number is a negative number giving the amplitude in dB.
This is a minor variation of the previous code that will probably make it easier to copy and paste your data:
(setf data (list
440 -12 ; paste your data pairs here
880 -18
))

(simrep (i (/ (length data) 2))
  (scale (db-to-linear (nth (+ 1 (* i 2)) data))
         (hzosc (nth (* i 2) data))))
In this version no special formatting is required - just paste the list of data pairs in the appropriate place.
Note that this will work for a large number of frequencies (over 100) but if you try using thousands of data pairs it will generate a stack overflow error.
As you’ve not said what you want this for, I don’t know if that will be a problem or not.
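For anyone who wants to experiment outside Audacity, the same additive idea can be sketched in a few lines of Python. This is only a rough stand-in for the Nyquist code above - `db_to_linear` and `synthesize` are illustrative names of my own, and the dB conversion assumes the usual 20·log10 amplitude convention:

```python
import math

def db_to_linear(db):
    # 0 dB -> amplitude 1.0; -20 dB -> 0.1 (amplitude convention, 20*log10)
    return 10.0 ** (db / 20.0)

def synthesize(pairs, sample_rate=44100, duration=0.01):
    # Sum one sine per (frequency Hz, level dB) pair,
    # every sine starting at 0 degrees - like the Nyquist version above.
    n = int(sample_rate * duration)
    out = [0.0] * n
    for freq, db in pairs:
        amp = db_to_linear(db)
        for i in range(n):
            out[i] += amp * math.sin(2 * math.pi * freq * i / sample_rate)
    return out

samples = synthesize([(440, -12), (880, -18)])
```

As in the Nyquist code, all components start in phase, which is why the beating described further down this thread is so pronounced.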
An interesting concept.
I’ve used “Plot Spectrum” to analyse some piano notes, then used a modified form of the code from the previous message to synthesize the piano tones, and finally applied the original amplitude envelope to the synthesized tones. This is a rather crude form of synthesis, but the result is undeniably piano-like.
Here’s a brief recording of the original piano notes followed by the synthesised notes.
The difference between the code used to synthesize this and the previously posted code is that this version applies a random phase shift to each of the generated frequencies rather than each frequency starting at 0 degrees.
;; Synthesise sound from spectrum analysis
;; data is a flat list of number pairs:
;; frequency (Hz) and amplitude (dB)
(setf data (list
440 -12 ; example pair - replace with your own data
))
(setf out (s-rest 1)) ;initialise output
;; function to generate random phase shift
(defun rand ()
(* (rrandom) 360))
;; loop through each data pair
(dotimes (i (/ (length data) 2))
  (let ((phase (rand))
        (amp (db-to-linear (nth (+ 1 (* i 2)) data)))
        (freq (nth (* i 2) data)))
    ;; add each new sine to the output
    (setf out (sim out
                   (scale amp (hzosc freq *sine-table* phase))))))
out ; return output
This is the code that copies an amplitude envelope from the left channel of a stereo pair and applies it to the right channel.
;; Apply amplitude envelope from left channel to right channel
;; Caution - there is no checking for silence
;; (inverse of silence is infinite).
;; calculate envelopes at 20 Hz
(setq block (truncate (/ *sound-srate* 20)))
(setf ltops (snd-avg (aref s 0) block block op-peak))
(setf rtops (snd-avg (aref s 1) block block op-peak))
(mult ltops ; left channel envelope
(aref s 1) ; original right channel
(s-exp (diff (s-log 1.0)(s-log rtops)))) ; inverse of right channel envelope
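In plain Python the same envelope transfer might look like this (a sketch only - `block_peak` is my stand-in for Nyquist's `snd-avg` with `op-peak`, and, like the code above, it does no silence checking, so a silent block in the right channel divides by zero):

```python
def block_peak(samples, block):
    # Peak absolute value per block - a rough stand-in for
    # (snd-avg sound block block op-peak) in Nyquist.
    return [max(abs(x) for x in samples[i:i + block])
            for i in range(0, len(samples), block)]

def transfer_envelope(left, right, block):
    # Divide out the right channel's own envelope, then
    # apply the left channel's envelope in its place.
    l_env = block_peak(left, block)
    r_env = block_peak(right, block)
    return [x * l_env[i // block] / r_env[i // block]
            for i, x in enumerate(right)]

# Toy example: a constant-level right channel shaped by a decaying left channel.
left = [0.8, -0.8, 0.4, -0.4]
right = [1.0, -1.0, 1.0, -1.0]
shaped = transfer_envelope(left, right, 2)
```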
Applying the averaged frequency response of the six note phrase to each of the synthesised notes is why some of them sound very unnatural. The longest (final) note is the most faithful: because of its duration more of its frequency response has made it into the Frankensteinian equalisation used to create the synthesised phrase.
If you’d used the frequency analysis from just one note to produce a synthesised version of that one note, the result would be more faithful, although you would still be applying an averaged frequency response to the whole note, so it would still sound a little unnatural. (As a natural piano note tails off, the high frequencies die away first, but that doesn’t happen with this type of synthesis.)
I noticed that the synthesised version has much less noise than the original.
Another reason for the “unnatural” sound is that in the real piano note there are specific phase relationships between the harmonics. That information is not available from the spectrum analysis, but if you try the earlier code examples you will see (hear) how important that is. In the first examples, the synthesized sine tones all start at 0 degrees, consequently there are very pronounced beats as the tones move in and out of phase with each other. To reduce this unwanted effect, the later code sets a random phase shift to each tone - this totally ignores the actual phase relationships, but as that information is not available it is a better compromise than having them all start exactly in phase.
A more “natural” synthesis could be achieved by analysing the original sound in smaller overlapping chunks. The posted sample was analysed in just 7 chunks, roughly following the notes. Unfortunately I don’t know how to analyse the frequencies (FFT analysis) in Nyquist, so I’m not able to automate the analysis/synthesis. Nevertheless there are some interesting possibilities for synthesizing sounds if “natural” is not an important consideration. For example, scaling the values in the frequency list produces a pitch shift, which need not be linear - the harmonic breadth (frequency range) could be widened or narrowed to produce new sounds.
Another limitation of the synthesis method used is that the envelope tracking is done very roughly - 20 Hz is not high enough to reproduce the hard attack of piano notes.
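To put a number on that: assuming a 44.1 kHz project rate (my assumption), the `(truncate (/ *sound-srate* 20))` in the envelope code gives one envelope point every 50 ms, while a piano hammer strike is over in a few milliseconds:

```python
sample_rate = 44100            # assumed project rate
block = sample_rate // 20      # as in (truncate (/ *sound-srate* 20))
ms_per_point = 1000 * block / sample_rate
# block = 2205 samples, i.e. one envelope value every 50 ms -
# far too coarse to track a piano's attack transient
```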
I deliberately added a little noise to the piano sample to avoid absolute silence, which would produce infinite gain during the envelope tracking - not a good way to do it.
Wow! This is a complete blast from the past for me.
In 1974-76 I worked with John Keeler at the University of Waterloo following up on his “Piecewise periodic analysis of almost periodic sounds and musical transients”. If you do a search for “keeler piecewise periodic” you’ll find lots of references to his 1972 paper, but all of them require subscriptions to read the full text.
Keeler was analysing organ sounds, but the technique could be expanded to other instruments. He does explicitly specify “non-percussive musical transients” in the abstract, and the piano is actually a percussion instrument, so I’m not sure if he was saying that the technique would not work with piano sounds.
But I remember how it worked.
You needed to know the fundamental frequency of the sound (note played on the instrument). The software then did an FFT analysis on each period of the sound, and output the level and phase of as many harmonics as you wanted, versus time. The output was two graphs: amplitude versus time of the fundamental and harmonics, and phase versus time of the fundamental and harmonics.
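If I've understood the method correctly, the per-period analysis can be sketched like this in Python (names and scaling are my own): given the fundamental, a DFT over exactly one period puts harmonic k in bin k, yielding an amplitude and phase per harmonic, which you would then plot against time for successive periods.

```python
import cmath, math

def harmonic_analysis(period, n_harmonics):
    # DFT of exactly one period of the waveform: bin k is harmonic k.
    # Scaled so that a unit-amplitude sine reports amplitude 1.0.
    n = len(period)
    result = []
    for k in range(1, n_harmonics + 1):
        c = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(period))
        result.append((2 * abs(c) / n, cmath.phase(c)))
    return result

# One period of a fundamental plus a half-amplitude 2nd harmonic.
n = 64
period = [math.sin(2 * math.pi * i / n) + 0.5 * math.sin(4 * math.pi * i / n)
          for i in range(n)]
harmonics = harmonic_analysis(period, 3)
```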
[As an aside, the sounds were first recorded on an Ampex reel-to-reel (in the field for the organ, or in the anechoic chamber for the guitar), then captured using a 12-bit A/D converter connected to a DEC PDP-11 and written to tape. That tape was then taken to the IBM 360 operators, who would mount it for you so you could run the FORTRAN analysis program.]
Anyway, someone with the requisite programming skills and familiarity with FFT algorithms could easily reproduce this system today. Any takers?
I did give the code above a shot (the envelope follower will come in handy), but I can’t get anything musical from the synthesis code;
most of my attempts sound like an electric toothbrush…
That particular sound is quite fussy - errors in both the synthesis and in the envelope follower tend to compound on each other.
Because of the random factor in the synthesis you will get a slightly different version each time.
In this sample the envelope was created manually, so we’re only looking at the frequency synthesis.
If you listen to the background hiss in the two samples you can clearly hear how the frequency content in the original varies over time, but in the synthesized version it doesn’t.
Takes me back to some work I did on granular synthesis back when I was at college.
How much do you know about FFT? I’ve got some information about implementing FFT in Nyquist, but I don’t have enough depth on FFT theory.
Apologies if we’ve hijacked your original question - feel free to jump in and say more about what you were planning.