I think my question is this: can I analyze the timbre of one instrument (in this case, I have an analog piano with a very old-time feel and room ambience) and then apply those timbre qualities to a digital piano? I want to extend the progression and continue that piano sound using a synth, but I don’t know how to do it. I was wondering if I can analyze the original piano sound and duplicate its sound on the synth piano? It doesn’t have to be exact, but I don’t like the high contrast of going from this old-time timbre to such a clean digital sound.
There are ways of “sampling” the piano sound and using that for MIDI, but that’s about as much as I know…
Otherwise you won’t get them to sound alike. You can play the same note on both instruments and [u]Plot Spectrum[/u] to see the difference in the frequency spectrum, and then you can make some EQ adjustments. Adjust the levels to approximately match first (so the graphs line up better) and use broad adjustments, because you don’t want to match just one note.
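If you want to see the idea behind comparing broad EQ bands, here’s a rough numpy sketch. The two “pianos” below are stand-in test tones rather than real recordings, and the band edges are arbitrary choices, just to show the shape of the comparison:

```python
import numpy as np

def band_levels_db(signal, rate, bands):
    """Average spectrum magnitude (in dB) inside each frequency band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)
    levels = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        levels.append(20 * np.log10(spectrum[mask].mean() + 1e-12))
    return levels

rate = 44100
t = np.arange(rate) / rate
# Stand-ins for the two pianos: same fundamental, different harmonic balance.
old_piano = np.sin(2 * np.pi * 220 * t) + 0.50 * np.sin(2 * np.pi * 440 * t)
synth     = np.sin(2 * np.pi * 220 * t) + 0.05 * np.sin(2 * np.pi * 440 * t)

bands = [(100, 300), (300, 600), (600, 1200)]
diff = [a - b for a, b in zip(band_levels_db(old_piano, rate, bands),
                              band_levels_db(synth, rate, bands))]
for (lo, hi), d in zip(bands, diff):
    print(f"{lo}-{hi} Hz: adjust the synth by {d:+.1f} dB")
```

In this toy case only the middle band differs (the old piano has a much stronger second harmonic), so the printout suggests boosting the synth there by about 20 dB. Real recordings won’t be this clean, which is why broad bands beat trying to match individual spikes.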
You can add reverb for “room sound”, but you’ll just have to experiment by ear (and again, it’s not going to match exactly).
There are “matching EQ” plug-ins. iZotope Ozone has one, but it’s not free. And there are other differences besides EQ.
Many people record the target piano and plug the recordings into the synth. There’s even a way to cheat by recording a real piano note every so many notes and using pitch shifting to fill in the others. You can tell when Promotion and Publicity had a hand in it, because the notes at the extremes start sounding funny.
My current keyboard has a different tactic. They use what sounds like the real thing in the normal instrument range and then pitch shift to get the rest of the keyboard. That’s not realistic, but it can be musically interesting. I like using Voice Choral and Violin Choral way lower than normal in a song. You almost can’t tell what I’m doing. It’s the home version of the cathedral 16-foot long “flute.”
room ambience
That will be hard. Any time the work order says “…and produce an infinite number of changing overtones and harmonics,” you know you’re in trouble. That’s why “Echo” and “Reverb” always sound a little funny. That’s also one reason we can’t take echoes out of a performance. It’s the upside-down version of that problem.
Do you have the target piano? You can’t do these jobs from a recorded song. You can’t even cheat. You need pure piano notes by themselves to do a good job.
Koz
I think I’m in over my head. I’m not that sophisticated, yet. The synth I am using is just the Musescore synth. I have no keyboard or software. I guess I’d just like to know how to read the waveform, spectrogram and analysis readouts. I know that actual soundwaves are three dimensional. I know that amplitude is the height of a wave, pitch is the longitude of the wave. I don’t know the term for a group of waves, such as a note. I suppose I can call that a note. The longitude of a note indicates its note value and can indicate the meter. How do I read other information, such as the timbre of the instrument? What can I read from the spectrogram that I cannot read from the waveform? I may want to begin getting acquainted with that.
I’m looking at other options to complete this piano progression. I can edit the progression and possibly duplicate chords and paste them in where I need them. I still need to know this other stuff for future projects.
I think what you’re trying to do is impossible…
I’d just like to know how to read the waveform, spectrogram and analysis readouts. I know that actual soundwaves are three dimensional. I know that amplitude is the height of a wave, pitch is the longitude of the wave
With any visual display you are limited to what you can see and what your brain can interpret. i.e. You can’t look at the waveform (or spectrogram, etc.) and tell what song is playing. If you have some samples/examples you might see the difference between piano and a trumpet, or something like that.
I believe most modern MIDI sounds are sampled (recorded) from real instruments. In the early synthesizer days the sounds were actually synthesized by trying to electronically duplicate or approximate the harmonic structure and the [u]ADSR[/u] of various instruments. This is advanced stuff! A lot of scientists and engineers have spent lots of time trying to re-create the sound of real instruments, and I’m sure books have been written on the subject.
Or, purely electronic sounds were generated that didn’t sound like any existing “real” instruments. That’s a lot easier!
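As an illustration of the ADSR idea, here’s a minimal sketch of a piecewise-linear attack/decay/sustain/release envelope applied to a sine wave. The times and levels are made-up values for illustration, not any real instrument’s:

```python
import numpy as np

rate = 8000  # samples per second (arbitrary for this sketch)

def adsr(attack, decay, sustain_level, sustain_time, release):
    """Piecewise-linear ADSR amplitude envelope (times in seconds)."""
    a = np.linspace(0, 1, int(attack * rate), endpoint=False)       # rise to full level
    d = np.linspace(1, sustain_level, int(decay * rate), endpoint=False)  # fall to sustain
    s = np.full(int(sustain_time * rate), sustain_level)            # hold
    r = np.linspace(sustain_level, 0, int(release * rate))          # fade to silence
    return np.concatenate([a, d, s, r])

env = adsr(0.01, 0.10, 0.6, 0.50, 0.30)        # made-up, vaguely piano-ish shape
t = np.arange(len(env)) / rate
note = env * np.sin(2 * np.pi * 440 * t)       # enveloped 440 Hz sine
```

A fast attack and long decay already reads as “struck string” to the ear; the hard part, as noted above, is the harmonic structure, not the envelope.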
I know that actual soundwaves are three dimensional. I know that amplitude is the height of a wave, pitch is the longitude of the wave. I don’t know the term for a group of waves, such as a note. I suppose I can call that a note.
Technically, sound is defined 2-dimensionally as amplitude and time like you see in the standard waveform view. [u]Digital audio[/u] is simply a series of amplitude-samples at a known sample rate. (If you double the sample rate, you double the pitch and the tempo, just like if you speed-up a vinyl record or analog tape.)
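A quick way to convince yourself of that sample-rate point, sketched in numpy (the rates and tone frequency here are arbitrary):

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 100 * t)   # a 100 Hz tone recorded at 8000 samples/s

# Reinterpret the very same samples at 16000 samples/s: every period now
# takes half as long, so the pitch (and the tempo) doubles.
freqs = np.fft.rfftfreq(len(tone), 1.0 / 16000)
peak = freqs[np.abs(np.fft.rfft(tone)).argmax()]
print(peak)   # 200.0
```

Nothing about the stored numbers changed, only the rate at which they are played back, which is exactly the sped-up vinyl record effect.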
But, our brain doesn’t perceive frequency/pitch as time-related so we usually talk about frequency, amplitude, and time. And when you look at a waveform you can’t see frequency/pitch until you zoom-in to the point where you’re looking at a fraction of a second of audio.
The spectrum shows the frequency content over a selected period of time, so you lose the time information. A spectrogram does show the “3 dimensions” of amplitude, frequency, and time.
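If you’re curious what a spectrogram is doing under the hood, here’s a bare-bones sketch of the same windowed-FFT idea. The frame and hop sizes are arbitrary choices, and the signal is two synthetic notes back to back:

```python
import numpy as np

rate = 8000
t = np.arange(2 * rate) / rate
# Two one-second notes: 440 Hz for the first second, 880 Hz for the second.
signal = np.where(t < 1.0, np.sin(2 * np.pi * 440 * t),
                           np.sin(2 * np.pi * 880 * t))

frame, hop = 1024, 512                  # analysis window and step, in samples
freqs = np.fft.rfftfreq(frame, 1.0 / rate)
peaks = []                              # (time, strongest frequency) per frame
for start in range(0, len(signal) - frame, hop):
    windowed = signal[start:start + frame] * np.hanning(frame)
    spectrum = np.abs(np.fft.rfft(windowed))
    peaks.append((start / rate, freqs[spectrum.argmax()]))

# Early frames peak near 440 Hz, late frames near 880 Hz. A spectrogram
# just draws every bin of every frame as a colored column, left to right.
```

That’s the trade the spectrogram makes: each column only covers a short slice of time, so you keep the time axis but smear the frequency detail a little.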
It might be “educational” to use the Generate function to generate some sine waves (and maybe square waves) at different known-frequencies and then compare the various visual representations (as well as listening).
I suppose I can call that a note.
[u]A note is defined by its frequency[/u]. We don’t directly hear “frequency” (cycles per second), but we perceive “pitch”, which is related to the lowest and most-dominant frequency at that moment. (Similar to how our eyes see color, which is related to wavelength.) …Technical guys talk about frequency and musicians talk about notes and pitch.
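For the curious, the standard equal-temperament relationship between note names and frequencies fits in a couple of lines. This assumes the usual A4 = 440 Hz reference and MIDI note numbering:

```python
def note_frequency(midi_note):
    """Equal temperament: each semitone multiplies frequency by 2**(1/12)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)   # MIDI note 69 is A4 = 440 Hz

print(note_frequency(69))             # 440.0  (A4)
print(note_frequency(81))             # 880.0  (A5, one octave up, double the frequency)
print(round(note_frequency(60), 2))   # 261.63 (middle C)
```

So “a note” on the technical side is just a frequency picked off this ladder, plus everything the instrument stacks on top of it.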
OK. Looking at the waveform while zoomed in, I can see the different relative pitches because their lengths are shorter or longer than each other. As I zoom out, I notice that at one point the wave form appears to have a weave of waves, as if harmonics are being represented. What is this actually? How should the weave be interpreted?
It’s virtually impossible to determine what a waveform will sound like just from looking at the waveform.
As an illustration, consider these two waveforms:
What do you expect will be the difference in the two sounds?
Type your reply before listening to the two sounds.
I’ll listen to them in a minute. I guess that they sound the same? Because of the pointed peaks. They are similar to the square shape.
They sound identical because they are harmonically identical (even though the waveforms look quite different).
They both look like square forms to me, and when I heard them they sounded the same.
Let’s take this another way. This is the analysis of one single piano note, “G” somewhere on the left.
Pitch increases from left to right. See 440Hz? 440Hz is that single oboe note when the orchestra is warming up. Earthquakes and thunder are on the left, and dog whistles and hummingbird conversations are on the right. 3000Hz is a baby screaming.
All those spikes at all those different pitches, tones and sizes combine to make one single piano note. That’s one of the 88 keys, and you only get that particular combination in one room. I used to have a display with two piano notes at once. That just turned into a blizzard of different spikes and tones.
Analyzing is a nightmare. That’s why recording the actual piano clean and using those notes is so attractive. You don’t have to analyze anything.
There is a wacko/exotic item, too. When you tune a piano (I know you do that all the time, right?) you only get one tuning fork. You tune one note in the middle of the keyboard exactly right and from there you go a fifth up and a fourth down and listen for the subtle error between the tones. The people who don’t go screaming down the street become piano tuners. Those errors are one reason pianos sound like they do.
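That “subtle error” can actually be put in numbers. Stacking twelve pure 3:2 fifths overshoots seven octaves by the so-called Pythagorean comma, which is why tempered fifths are deliberately tuned slightly flat. A back-of-the-envelope sketch:

```python
import math

# Twelve pure 3:2 fifths, then back down seven octaves:
comma = (3 / 2) ** 12 / 2 ** 7             # ≈ 1.01364, the Pythagorean comma
comma_cents = 1200 * math.log2(comma)      # ≈ 23.46 cents of total overshoot

# Equal temperament hides the error by flattening every fifth a little:
pure_fifth_cents = 1200 * math.log2(3 / 2)   # ≈ 701.96 cents
tempered_fifth_cents = 700                   # exactly 7 semitones
print(pure_fifth_cents - tempered_fifth_cents)   # ≈ 1.96 cents flat per fifth
```

About two cents per fifth is right at the edge of what the ear notices as beating, which is exactly the wobble piano tuners are listening for.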
I never liked grand pianos very much until I met one in the enormous, open production storage area at Channel 5 in Washington after hours. It sounded amazing. I put it together. This is what it’s supposed to sound like. OK, maybe not with the paint buckets and canvas scraps on the floor, but the room.
Honkytonk pianos hang a layer of flexible metal staples between the hammers and the strings. That’s how they get that tinkly sound. You can identify one of those from down the street; it’s totally distinctive. That would be a nightmare to do electronically.
Nobody said you have to tune all three piano strings to the same pitch. Having one of the three strings a little off pitch is another distinctive sound. “I found this old piano in the attic and it still works!!”
So no, we can’t do that.
Koz
Koz - I don’t get your presentation. The whole point of having graphic representations of sound is so we can view and interpret. I can see loads of information in the wave forms.
I was able to correctly guess what the two forms that you posted would sound like, relatively speaking. I posted my answer before I downloaded your files. I waited about 15 minutes so you could verify that the files had not yet been downloaded, just in case we are not on the honor system. My interpretation went like this: “I have seen a square wave form before. I have not seen the wave form Koz has posted. However, the outline of a square can easily be imagined from viewing this form. The parabolic curve is almost entirely enclosed by the square. The little amplitude that exceeds the outline of the square is minuscule compared to the lower part in which the curve is submerged. I think these wave forms sound the same.” And they did sound the same when I listened to them.
I am wondering where the information regarding timbre appears in the wave form? I understand that real physical waves are 3D and that these graphics are only two dimensional representations. My best guess yet is that, since the timbre is in the overtones, the weaving in the wave at mid level zoom is the first and most obvious indicator of timbre. I see more of what I am looking for in your graph on the note G. That might be what I need to learn next.
OK. I think I’m getting it now. Frequency Analysis is what I want to look at. Yes, I understand the physics of sound. I know that any note is composed of many tones that are not so apparent to the ear. The string or tube vibrates at differently positioned nodes, so the note contains the fundamental, the octave, the fifth, etc.
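The harmonic series you’re describing is easy to write out. This assumes an arbitrary 110 Hz fundamental (A2), just for illustration:

```python
import math

f0 = 110.0                                    # pick a fundamental just for illustration
harmonics = [n * f0 for n in range(1, 7)]
# [110, 220, 330, 440, 550, 660]: the 2nd harmonic is the octave,
# the 3rd is an octave plus a fifth, the 4th is two octaves, and so on.
intervals = [1200 * math.log2(f / f0) for f in harmonics]   # cents above f0
print(intervals[1])   # 1200.0 cents = exactly one octave
```

The relative strengths of those harmonics (and how they change over time) are the timbre, which is why Frequency Analysis shows one spike per harmonic.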
The square waves are not from me. That’s classic overtone analysis: how many pure sine waves do you need to add, and when, to make a square wave. Timing matters, too. Somewhere there’s an actual graphic of all the sines adding up.
That’s why when you do energetic tone controls, you can get unpredictable waveforms. Some of those sines are there to cancel. That’s their job, and if you take them out by accident, waves can actually get bigger.
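Here’s that classic graphic in code form: summing odd harmonics with 1/n amplitudes approaches a square wave, and deleting one of the “cancelling” sines really does make the wave bigger in places. A numpy sketch:

```python
import numpy as np

t = np.linspace(0, 1, 4096, endpoint=False)
square = np.zeros_like(t)
for n in range(1, 40, 2):                  # odd harmonics 1, 3, 5, ... 39
    square += np.sin(2 * np.pi * n * t) / n
square *= 4 / np.pi                        # classic Fourier-series scaling

i = 1024                                   # t = 0.25, middle of the "high" half
# Mid-plateau the sum sits very close to +1. Now pull out the 3rd harmonic:
missing3 = square - (4 / np.pi) * np.sin(2 * np.pi * 3 * t) / 3
# At the same spot the wave now bulges well past 1.4, because that
# sine's whole job was to cancel part of the others.
```

Which is exactly why heavy-handed EQ on a complex wave can make peaks grow instead of shrink.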
Fourier analysis? That’s been a while.
Koz
I think I made a mistake. The weave I was talking about is in the tones generated in Musescore. I see the weaving when I load an exported Musescore file into Audacity and zoom in on the track.