EQ question
Forum rules
This forum is for Audacity on Windows.
Please state which version of Windows you are using,
and the exact three-section version number of Audacity from "Help menu > About Audacity".
Audacity 1.2.x and 1.3.x are obsolete and no longer supported. If you still have those versions, please upgrade at https://www.audacityteam.org/download/.
The old forums for those versions are now closed, but you can still read the archives of the 1.2.x and 1.3.x forums.
EQ question
I think my question is this: can I analyze the timbre of one instrument (in this case, an analog piano with a very old-time feel and room ambience) and then apply those timbre qualities to a digital piano? I want to augment the progression and continue with that piano sound using a synth, but I don't know how to do it. I was wondering if I can analyze the original piano sound and duplicate its sound on the synth piano? I don't care about it being exact, but I don't like the high contrast of having this old-time timbre and then going to such a clean digital sound.
Last edited by Jebbers on Wed Mar 25, 2020 5:08 pm, edited 1 time in total.
Re: EQ question
There are ways of "sampling" the piano sound and using that for MIDI, but that's about as much as I know...
Otherwise you won't get them to sound alike. You can play the same note on both instruments and use Plot Spectrum to see the difference in the frequency spectrum, and then you can make some EQ adjustments. Adjust the levels to approximately match first (so the graphs line up better) and use broad adjustments, because you don't want to match just one note.
You can add reverb for "room sound", but you'll just have to experiment by ear (and again, it's not going to match exactly).
There are "matching EQ" plug-ins. iZotope Ozone has one, but it's not free. And, there are other differences besides EQ.
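To make the "play the same note and compare spectra" idea concrete, here is a minimal sketch in Python/NumPy. It is not what Audacity or any matching-EQ plug-in does internally; the two "pianos" are synthetic stand-ins (a fundamental plus two harmonics) so the example is self-contained, and the band edges are arbitrary choices:

```python
import numpy as np

def band_levels(samples, rate, bands):
    """Average magnitude per frequency band, in dB -- roughly the kind of
    information Audacity's Plot Spectrum gives you."""
    mag = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return [20 * np.log10(mag[(freqs >= lo) & (freqs < hi)].mean() + 1e-12)
            for lo, hi in bands]

rate = 44100
t = np.arange(rate) / rate
# Two synthetic "pianos": same 220 Hz fundamental, different harmonic balance.
warm   = np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*440*t) + 0.1*np.sin(2*np.pi*880*t)
bright = np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*440*t) + 0.6*np.sin(2*np.pi*880*t)

bands = [(150, 300), (300, 600), (600, 1200)]
diff_db = [b - a for a, b in zip(band_levels(warm, rate, bands),
                                 band_levels(bright, rate, bands))]
# diff_db says how many dB louder each band is on the "bright" piano; cutting
# each band by roughly that much is the broad EQ match described above.
print([round(d, 1) for d in diff_db])
```

In this toy case only the top band differs (by about 15.6 dB), which is exactly the kind of broad difference you would read off two Plot Spectrum graphs before reaching for the EQ.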
-
kozikowski
- Forum Staff
Re: EQ question
Many people record the target piano and plug the recordings into the synth. There's even a way to cheat by using a real piano every so many notes and use pitch shifting to get the others. You can tell when Promotion and Publicity had a hand in it because the notes at the extremes start sounding funny.
My current keyboard has a different tactic. They use what sounds like the real thing in the normal instrument range and then pitch shift to get the rest of the keyboard. That's not realistic, but it can be musically interesting. I like using Voice Choral and Violin Choral way lower than normal in a song. You almost can't tell what I'm doing. It's the home version of the cathedral 16-foot long "flute."
Do you have the target piano? You can't do these jobs from a recorded song. You can't even cheat. You need pure piano notes by themselves to do a good job.
Koz
room ambience
That will be hard. Any time the work order says "...and produce an infinite number of changing overtones and harmonics," you know you're in trouble. That's why "Echo" and "Reverb" always sound a little funny. That's also one reason we can't take echoes out of a performance. It's the upside-down version of that problem.
Koz
Re: EQ question
I think I'm in over my head. I'm not that sophisticated, yet. The synth I am using is just the Musescore synth. I have no keyboard or software. I guess I'd just like to know how to read the waveform, spectrogram, and analysis readouts. I know that actual sound waves are three-dimensional. I know that amplitude is the height of a wave, and pitch is the longitude of the wave. I don't know the term for a group of waves, such as a note. I suppose I can call that a note. The longitude of a note indicates its note value and can indicate the meter. How do I read other information, such as the timbre of the instrument? What can I read from the spectrogram that I cannot read from the waveform? I may want to begin getting acquainted with that.
I'm looking at other options to complete this piano progression. I can edit the progression and possibly duplicate chords and paste them in where I need them. I still need to know this other stuff for future projects.
Re: EQ question
With any visual display you are limited to what you can see and what your brain can interpret. i.e., you can't look at the waveform (or spectrogram, etc.) and tell what song is playing. If you have some samples/examples, you might see the difference between a piano and a trumpet, or something like that.
I believe most modern MIDI is sampled (recorded) from real instruments. In the early synthesizer days the sounds were actually synthesized by trying to electronically duplicate/approximate the harmonic structure and the ADSR of various instruments. This is advanced stuff! A lot of scientists & engineers have spent lots of time trying to re-create the sound of real instruments and I'm sure books have been written on the subject.
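The ADSR (attack, decay, sustain, release) idea mentioned above can be sketched in a few lines. This is a deliberately crude illustration of early-synthesizer-style shaping, not how any real instrument or MIDI engine is implemented; the segment durations and sustain level are made-up values:

```python
import numpy as np

def adsr(n, rate, attack=0.01, decay=0.1, sustain=0.6, release=0.3):
    """Piecewise-linear ADSR envelope: the classic synthesizer loudness shape."""
    a, d, r = int(attack * rate), int(decay * rate), int(release * rate)
    s = max(n - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0, 1, a, endpoint=False),        # attack: silence -> full
        np.linspace(1, sustain, d, endpoint=False),  # decay: full -> sustain level
        np.full(s, sustain),                         # sustain: held level
        np.linspace(sustain, 0, r),                  # release: fade to silence
    ])
    return env[:n]

rate = 44100
n = rate  # one second
t = np.arange(n) / rate
# A crude "synthesized" tone: a fundamental plus one harmonic, shaped by the envelope.
tone = (np.sin(2*np.pi*220*t) + 0.4*np.sin(2*np.pi*440*t)) * adsr(n, rate)
```

Matching a real piano this way is exactly the hard part: a piano's harmonics each have their own envelope and drift over time, which is why the simple one-envelope model above never sounds quite right.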
Or, purely electronic sounds were generated that didn't sound like any existing "real" instruments. That's a lot easier!
Technically, sound is defined 2-dimensionally as amplitude and time, like you see in the standard waveform view. Digital audio is simply a series of amplitude samples at a known sample rate. (If you double the sample rate, you double the pitch and the tempo, just like if you speed up a vinyl record or analog tape.)
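You can demonstrate the "samples at a known rate" point directly. In this sketch (synthetic data, made-up helper name) the very same array of samples represents a 110 Hz tone at one sample rate and a 220 Hz tone an octave higher when you merely claim it was recorded at double the rate:

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 110 * t)  # one second of 110 Hz, sampled at 8000 Hz

def dominant_freq(samples, rate):
    """Frequency bin with the most energy, converted to Hz for the given rate."""
    mag = np.abs(np.fft.rfft(samples))
    return int(np.argmax(mag)) * rate // len(samples)

print(dominant_freq(tone, 8000))   # 110 -- the pitch as recorded
print(dominant_freq(tone, 16000))  # 220 -- same samples "played" at double rate
```

Nothing about the samples changed; only the playback rate did, which doubles both the pitch and the tempo at once, just like speeding up a record.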
But, our brain doesn't perceive frequency/pitch as time-related so we usually talk about frequency, amplitude, and time. And when you look at a waveform you can't see frequency/pitch until you zoom-in to the point where you're looking at a fraction of a second of audio.
The spectrum view shows the frequency spectrum over a selected period of time, so you lose the time information. A spectrogram does show the "3 dimensions" of amplitude, frequency, and time.
It might be "educational" to use the Generate function to generate some sine waves (and maybe square waves) at different known-frequencies and then compare the various visual representations (as well as listening).
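If you want to try that experiment numerically rather than in Audacity's Generate menu, here is one way to sketch it (synthetic signals; the helper function is made up for this example). A sine has a single frequency, while a square wave at the same fundamental carries a whole series of odd harmonics, which is why the two look and sound so different:

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate
sine   = np.sin(2 * np.pi * 100 * t)   # pure 100 Hz tone
square = np.sign(sine)                 # square wave, same 100 Hz fundamental

def top_harmonics(samples, k):
    """The k strongest frequency bins (here 1 s of audio, so bin index == Hz)."""
    mag = np.abs(np.fft.rfft(samples))
    return sorted(int(i) for i in np.argsort(mag)[-k:])

print(top_harmonics(sine, 1))    # [100] -- a sine is a single frequency
print(top_harmonics(square, 4))  # [100, 300, 500, 700] -- odd harmonics
```

Plot Spectrum on each generated wave shows the same thing graphically: one spike for the sine, a decaying ladder of odd harmonics for the square.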
A note is defined by its frequency. We don't directly hear "frequency" (cycles per second), but we perceive "pitch", which is related to the lowest and most dominant frequency at that moment. (Similar to how our eyes see color, which is related to wavelength.) ...Technical guys talk about frequency and musicians talk about notes & pitch.
Re: EQ question
OK. Looking at the waveform while zoomed in, I can see the different relative pitches because their lengths are shorter or longer than each other. As I zoom out, I notice that at one point the waveform appears to have a weave of waves, as if harmonics are being represented. What is this actually? How should the weave be interpreted?
Re: EQ question
It's virtually impossible to determine what a waveform will sound like just from looking at the waveform.
As an illustration, consider these two waveforms:
What do you expect will be the difference in the two sounds?
Type your reply before listening to the two sounds.
- Attachments
- Audio Track-2.wav (430.81 KiB)
- Audio Track.wav (430.81 KiB)
9/10 questions are answered in the FREQUENTLY ASKED QUESTIONS (FAQ)
Re: EQ question
I'll listen to them in a minute. I guess that they sound the same, because of the pointed peaks? They are similar to the square shape.
Re: EQ question
They sound identical because they are harmonically identical (even though the waveforms look quite different).
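The point about "harmonically identical but visually different" comes down to phase. A small sketch (synthetic partials, illustrative values only): build two waves from the same harmonic amplitudes, shifting only the phase of each partial. The spectra match while the waveforms do not:

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate
partials = [(200, 1.0), (400, 0.5), (600, 0.25)]   # (frequency in Hz, amplitude)

# Same harmonic amplitudes; wave_b just shifts the phase of each partial.
wave_a = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)
wave_b = sum(a * np.sin(2 * np.pi * f * t + 1.0 + i)
             for i, (f, a) in enumerate(partials))

mag_a = np.abs(np.fft.rfft(wave_a))
mag_b = np.abs(np.fft.rfft(wave_b))

print(np.allclose(mag_a, mag_b))    # True:  spectra (what we hear) match
print(np.allclose(wave_a, wave_b))  # False: waveforms (what we see) differ
```

Our ears are largely insensitive to the relative phase of steady harmonics, so the two attached files sound the same even though the traces in the track look quite different.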
Re: EQ question
They both look like square forms to me, and when I listened to them they sounded the same.