Well, let's get started.
For today and the next two lectures, we are going to be studying Fourier series. Today will be an introduction explaining what they are and how to calculate them, but before we do that I thought I ought to at least give a couple of minutes of overview of why and where we're going with them, and why they're coming into the course at this place at all.
So, the situation up to now is that we've been trying to solve constant coefficient second-order equations of the form y double prime plus a y prime plus b y equals f of t, where f of t is the input. So, we are considering inhomogeneous equations. And the response is the corresponding solution y of t, maybe with some given initial conditions to pick out a special one, which we call the response to that particular input. Now, over the last few days, the inputs have been extremely special: the basic input has been an exponential, or sines and cosines.
And we learned how to solve those. But the point is that those inputs seem extremely special. Now, the point of Fourier series is to show you that they are not as special as they look.
The reason is, let's put it this way: any reasonable f of t which is periodic (it doesn't even have to be very reasonable; it can be somewhat discontinuous, although not terribly discontinuous) with period two pi, maybe not the minimal period, but some period two pi. Of course, sine t and cosine t have the exact period two pi, but if I change the frequency to an integer frequency like sine 2t or sine 26t, two pi would still be a period, although it would not be the period.
The period would be shorter. The point is, such a thing can always be represented as an infinite sum of sines and cosines. So, it's going to look like this.
There's a constant term you have to put out front. And then the rest is rather long to write unless you use summation notation: it's a sum from n equal one to infinity, over integer values of n, of a cosine and a sine. It's customary to put the cosine first, an cosine nt, where the n indicates the frequency of the thing; and the sine term is bn sine nt. Now, why does that solve the problem of general inputs for periodic functions, at least if the period is two pi or some fraction of it?
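Written out (this is just a transcription of the series being described on the board), the claim is that

    f(t) = c_0 + \sum_{n=1}^{\infty} ( a_n \cos nt + b_n \sin nt )

for suitable constants c_0, a_n, and b_n.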
Well, you could think of it this way. I'll make a little table: over here the input, and here the response. Okay, suppose the input is the function sine nt. Well, in other words, if you just solve the problem with a sine nt here, you know how to get the answer; find a particular solution, in other words.
In fact, you do it by converting this to a complex exponential, and then all the rigmarole we've been going through. So, let's call the response something; let's call it y. I'd better index it by n because it, of course, is a response to this particular periodic function. So, yn of t. And if the input is cosine nt, that also will have a response, yn. Now, I really can't call them both by the same name, so why don't we put a little s up here to indicate that that's the response to the sine, and here I'll put a little c to indicate the response to the cosine.
You feed in cosine nt; what you get out is this function. Well, by the way, notice that if n is zero, it's going to take care of a constant term, too.
In other words, the reason there is a constant term out front is because that corresponds to cosine of zero t, which is one. Now, suppose I input instead an times cosine nt: all you do is multiply the response by an. Similarly for bn times sine nt: you multiply the corresponding response by bn. That's because the equation is a linear equation.
And now, what am I going to do? I'm going to add them up. If I add them up, taking account also of the n equals zero term corresponding to this first constant term, the sum of all these, according to my Fourier formula, is going to be f of t. What's the sum of the corresponding responses? Well, that's going to be summation an ync of t plus bn yns of t, the response to the sine.
That will be the sum from one to infinity, and there will be some sort of constant term here. Let's just call it c1.
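In the same notation (writing ync and yns for the responses to cosine nt and sine nt, and c1 for the response to the constant term, as on the board), the response being described is

    y(t) = c_1 + \sum_{n=1}^{\infty} ( a_n y_n^c(t) + b_n y_n^s(t) ).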
So, in other words, if this input produces that response, and these are things which we can calculate, then we're led by this formula, Fourier's formula, to the response to things which otherwise we would not have been able to calculate, namely, any periodic function of period two pi. The procedure will be: you've got a periodic function of period two pi; find its Fourier series, and I'll show you how to do that today; and then the response to that general f of t will be this infinite series of functions, where these are things you already know how to calculate.
They are the responses to sines and cosines, and you just form the sum with those coefficients. Now, why does that work? It works by the superposition principle. The reason I can do the adding and the multiplying by constants is the superposition principle: if this input produces that response, then the sum of a bunch of inputs produces the sum of the corresponding responses. And why can I use the superposition principle? Because the ODE is linear.
That's what makes all this work. Now, so what we're going to do today is I will show you how to calculate those Fourier series. I will not be able to use it to actually solve any differential equation. It will take us pretty much all the period to show how to calculate a Fourier series. And, okay, so I'm going to solve differential equations on Monday.
I probably won't even get to it then because the calculation of a Fourier series is a sufficient amount of work that you really want to know all the possible tricks and shortcuts there are. Unfortunately, they are not very clever tricks. They are just obvious things. But, it will take me a period to point out those obvious things, obvious in my sense if not in yours. And, finally, the third day, we'll solve differential equations. I will actually carry out the program. But the main thing we're going to get out of it is another approach to resonance because the things that we are going to be interested in are picking out which of these terms may possibly produce resonance, and therefore a very crazy response.
Some of the terms in the response suddenly get a much bigger amplitude than you would normally have thought they had, because the equation is picking out resonant terms in the Fourier series of the input. Okay, well, that's a big mouthful. Let's get started on calculating.
So, the program today is: calculate the Fourier series. Given f of t periodic, having two pi as a period, find its Fourier series. How, in other words, do I calculate those coefficients an and bn?
Now, the answer is not immediately apparent, and it's really quite remarkable. I think it's quite remarkable, anyway. It's one of the basic things of higher mathematics.
And, what it depends upon are certain things called the orthogonality relations. So, this is the place where you've got to learn what such things are. Well, I think it would be a good idea to have a general definition, rather than immediately get into the specifics. So, I'm going to take two functions, u of t and v of t; since Fourier analysis is most often applied when the variable is time, I think I will stick to the independent variable t all period long, if I remember to, at any rate. So, these are two continuous, or not very discontinuous, functions on minus pi to pi. Let's make them periodic; let's say two pi is a period.
So, functions, for example, like those guys: sine t, sine nt, sine 22t, and so on; say two pi is a period. Well, I want them really on the whole real axis, not just there: defined for all real numbers. Then, I say that they are orthogonal, perpendicular. But nobody says perpendicular.
Orthogonal is the word: two functions are orthogonal on the interval minus pi to pi if the integral from minus pi to pi of u of t times v of t, the product, is zero. That's called the orthogonality condition on minus pi to pi. Now, well, it's just a definition. I would love to go into a little song and dance now on what the definition really means and why the word orthogonal is used, because it really does have something to do with two vectors being orthogonal in the sense in which you learned it in 18.02. I'll have to put that on ice for the moment, and whether I get to it or not depends on how fast I talk.
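As a displayed formula, the condition is

    \int_{-\pi}^{\pi} u(t)\, v(t)\, dt = 0.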
But, you probably prefer I talk slowly. So, let's compromise. Anyway, that's the condition. And now, what I say is that that blue Fourier series, what finding the coefficients an and bn depends upon, is this theorem about the collection of functions sine nt, for any value of the integer n; of course, I can assume n is a positive integer, because sine of minus nt is just minus sine of nt, so negative n gives nothing new. And cosine mt; let's give it a different letter, because I don't want you to think it's necessarily the same integer.
So, this is a big collection of functions, as n runs from one to infinity. Here, I could let m run from zero to infinity, because cosine of zero t means something: it's a constant, one. The theorem is that any two distinct ones (two distinct, you know; how can two things be anything but different? You talk about two coincident roots; I'm just doing a little overkill), any two distinct members of this collection, are orthogonal on this interval.
Of course, they all have two pi as a period for all of them. So, they form into this general category that I'm talking about, but any two distinct ones are orthogonal on the interval for minus pi to pi. So, if I integrate from minus pi to pi sine of three t times cosine of four t dt, answer is zero. If I integrate sine of 3t times the sine of 60t, answer is zero. The same thing with two cosines, or a sine and a cosine.
The only time you don't get zero is if you make the two functions the same. Now, how do you know that you could not possibly get the answer zero if the two functions are the same? If the two functions are the same, then I'm integrating a square. A square is always nonnegative, and not identically zero, and therefore I cannot get the answer zero. But in the other cases, I might get the answer zero.
And the theorem is you always do. Okay, so, why is this? Well, there are three ways to prove this.
It's like many fundamental facts in mathematics: there are different ways of going about it. By the way, along with the theorem, I probably should have included, and might as well include now, because we're going to need it: what happens if you use the same function? If I take u equal to v, in that case, as I've indicated, you're not going to get the answer zero. But what will you get? In other words, I'm asking: what is the integral of sine nt squared?
That's a case where two of them are the same. I use the same function. Well, the answer is, it's the same as what you will get if you integrate the cosine, cosine squared n t dt. And, the answer to either one of these is pi. That's something you know how to do from 18.01 or the equivalent thereof. You can integrate sine squared. It's one of the things you had to learn for whatever exam you took on methods of integration.
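For the record, the standard computation (using the half-angle identities sin^2 x = (1 - cos 2x)/2 and cos^2 x = (1 + cos 2x)/2) gives

    \int_{-\pi}^{\pi} \sin^2 nt \, dt = \int_{-\pi}^{\pi} \cos^2 nt \, dt = \pi, \qquad n = 1, 2, 3, \ldots

since the cos 2nt part integrates to zero over a full period.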
Anyway, so I'm not going to calculate this out; the answer turns out to be pi. All right, now, the ways to prove the theorem: you can use trig identities, and I'm asking you, in one of the early problems in the problem set, to use the identities for the product of sines and cosines, expressing the product in a form in which it's easy to integrate, and you can prove it that way. Or, if you have forgotten the trigonometric identities and want to get some more exercise with complex numbers, you can use complex exponentials.
So, in another part of the same problem, I'm asking you to do one of these, at any rate, using complex exponentials. And now, I'm going to use a mysterious third method. I'm going to use the ODE. I'm going to do that because this is the method: it's not just sines and cosines which are orthogonal. There are masses of orthogonal functions out there.
And, the way they are discovered, and the way you prove they're orthogonal, is not with trig identities and complex exponentials, because those only work with sines and cosines. It is, instead, by going back to the differential equation that they solve. And that's the method I'm going to use here, because this is the method which generalizes to many other differential equations beyond the simple ones satisfied by sines and cosines.
But anyway, that is the source. So, here is the way the proof of these orthogonality conditions goes. And I'm going to assume that m is different from n, so that I'm not in either of these two cases.
What it depends on is: what's the differential equation that all these functions satisfy? Well, it's a different differential equation depending upon the value of n, but they all look essentially the same. Let's call the function u; it's going to look better if you let me call it u.
The functions sine nt and cosine nt satisfy u double prime plus n squared times u equals zero. In other words, the frequency is n, and the square of the frequency is what you put in that coefficient. What these functions have in common is that they satisfy differential equations that look like that; the only thing that's allowed to vary is the frequency, which sits in this coefficient of u. Now, the remarkable thing is, that's all you need to know. The fact that they satisfy the differential equation is all you need to know to prove the orthogonality relationship.
Okay, let's try to do it. Well, I need some notation. So, I'm going to let un and vm be any two of the functions. In other words, I'll assume m is different from n.
For example, this one could be sine nt and that could be sine of mt, or this could be sine nt and that could be cosine of mt. You get the idea: the subscript indicates what the n or the m in the function is. Any two, and I mean really two distinct ones; well, if I say that m is not n, then they positively have to be different. So, again, a little overkill.
And, what I'm going to calculate, well, first of all, from the equation, I'm going to write the equation this way. It says that u double prime is equal to minus n squared u. That's true for any of these guys. Of course, here, it would be v double prime is equal to minus m squared times v. You have to make those simple adjustments. And now, what we're going to calculate is the integral from minus pi to pi of un double prime times vm dt.
Now, just bear with me. Why am I going to do that? I can't explain yet why, but you won't need to ask the question in five minutes. The point is, this is highly unsymmetric: the u is differentiated twice, and the v not at all. But there is a way of turning it into an expression which looks extremely symmetric, where the two functions are treated the same. And the way to do that is to get rid of one of these primes here and put one on there.
The way to do that is if you want to integrate one of these guys, and differentiate this one to make them look the same, that's called integration by parts, the most important theoretical method you learned in 18.01 even though you didn't know that it was the most important theoretical method. Okay, we're going to use it now as a basis for Fourier series. Okay, so I'm going to integrate by parts.
Now, the first thing you do, of course, when you integrate by parts is you just do the integration, not the differentiation. So, the first term looks like this, and that's to be evaluated between negative pi and pi; in doing integration by parts between limits, you evaluate this term between the limits. Minus the integral of what you get by doing both: both the integration and the differentiation.
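In symbols, the integration by parts being carried out is

    \int_{-\pi}^{\pi} u_n'' \, v_m \, dt = \Big[ u_n' \, v_m \Big]_{-\pi}^{\pi} - \int_{-\pi}^{\pi} u_n' \, v_m' \, dt,

and the bracketed boundary term is what the next argument shows is always zero.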
Now, I'm just going to BS my way through this. This boundary term is zero. I don't care which un you picked and which vm you picked; the answer here is always going to be zero.
Instead of wasting six boards trying to write out the argument, let me wave my hands. Okay, it's clear, for example, that if the v is a sine, sine mt, of course the term is zero, because the sine vanishes at both pi and minus pi. If the un were a cosine, after I differentiate it, it becomes a sine, and so now it's this side that's zero at both ends. So, the only case in which we might have a little doubt is if this v is a cosine and, after differentiation, this un prime is also a cosine; in other words, the product looks like cosine nt times cosine mt.
But, I claim that that's zero, too, because the cosines are even functions, and therefore they have the same value at both ends. So, if I take the value at pi and subtract the value at minus pi, I again get zero, because it has the same value at both ends. So, by this entirely convincing argument, no matter what combination of sines and cosines I have here, that boundary part will always be zero. This is calculation, but thought calculation: it's a waste of time to write anything out; you stare at it until you agree that it's so. And now, by this integration by parts, I've taken this highly unsymmetric expression and turned it into something in which the u and the v are treated exactly alike.
Well, good, that's nice, but why? Why did I go to this trouble? Okay, now we're going to use the fact that un satisfies the differential equation; in other words, that un double prime is equal to minus n squared un. I'm sorry, I should have subscripted this; you have to put in the subscript, because otherwise the n wouldn't matter. All right, I'm now going to take that expression and evaluate it differently. The integral of un double prime times vm dt is equal to, well: un double prime, because it satisfies the differential equation, is equal to minus n squared un. So this is minus n squared times the integral from negative pi to pi of un times vm dt; I'm replacing un double prime by minus n squared un and pulling the minus n squared out.
Now, that's the proof. What do you mean that's the proof?
Okay, well, I'll first state why, intuitively, that's the end of the argument, and then I'll spell it out in a little more detail; although the more detail you put into this, the more obscure it gets. Look, I just showed you that this is symmetric in u and v, after you massage it a little bit. Here, I'm calculating it a different way.
Is this symmetric in u and v? Well, no, because of the n: the n favors u. So we have what is called a paradox. This thing is symmetric in u and v, because I can show it is. And it's not symmetric in u and v, because I can show that, too: it favors the n. Now, there's only one possible resolution of that paradox. Both would be consistent if what were true? All right, let me write it this way. Okay, never mind. You see, the only way this can happen is if this expression is zero.
In other words, the only way something can be both symmetric and not symmetric is if it's zero all the time. And that's what we're trying to prove: that this integral is zero. But, instead of doing it that way, let me show you. This is equal to that; and, according to Euclid, two things equal to the same thing are equal to each other. So this equals that, which, in turn, equals what I would have gotten if I had done this calculation with the roles of u and v reversed; I'm just saying the symmetry a different way. And that turns out to be minus m squared times the integral from minus pi to pi of un vm dt.
So, these two are equal because they are both equal to this. This is equal to that. This equals that. Therefore, how can this equal that unless the integral is zero? Remember, m is different from n. So, what this proves is, therefore, the integral from negative pi to pi of un vm dt is equal to zero, at least if m is different from n.
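Spelled out: write I = \int_{-\pi}^{\pi} u_n v_m \, dt. The two evaluations give

    -n^2 I = -m^2 I, \quad \text{so} \quad (m^2 - n^2) I = 0,

and since m is different from n, the factor m^2 - n^2 is not zero, which forces I = 0.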
Now, there is one case I didn't include. Which case didn't I include? Un times un is not supposed to give zero, so that case I don't have to worry about. But there is a case I didn't cover: for example, something like the cosine of nt times the sine of nt. Here, the m is the same as the n.
Nonetheless, I am claiming that this is zero, because these aren't the same function: one is a cosine. Why is that zero? Can you see mentally that that's zero? Well, in another life, this product is trying to be one half the sine of two nt, right?
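That is, using the double-angle identity sin nt cos nt = (1/2) sin 2nt,

    \int_{-\pi}^{\pi} \sin nt \, \cos nt \, dt = \frac{1}{2} \int_{-\pi}^{\pi} \sin 2nt \, dt = \left[ -\frac{\cos 2nt}{4n} \right]_{-\pi}^{\pi} = 0.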
And obviously the integral of sine of two nt is zero between minus pi and pi: you integrate it to a cosine, which has the same value at both ends. Well, that was a lot of talking. If this proof is too abstract for you, I won't ask you to reproduce it on an exam. You can go with the proofs using trigonometric identities and/or complex exponentials. But you ought to know at least one of those, and for the problem set I'm asking you to fool around a little with at least two of them. Okay, now, what has this got to do with the problem we started with originally?
The problem is to explain this blue series. So, our problem is: how, from this, am I going to get the terms of this blue series? So, given f of t with two pi as a period, find the an and the bn.
Okay, let's focus on the an. The bn is the same.
Once you know how to do one, you know how to do the other. So, here's the idea. Again, it goes back to something you learned at the very beginning of 18.02, but I don't think it took. But maybe some of you will recognize it. So, what I'm going to do is write the series out.
Here's the term we're looking for here, this one. Okay, and there are others. It's an infinite series that goes on forever.
And now, to make the argument, I've got to put in one more term here. So, I'm going to put in ak cosine kt; I don't mean to imply anything by its position, since k could be more than n, in which case I should have written it over here. I could equally well have used bk sine kt, and I could have put it there. This is just some other term. This, an cosine nt, is the one we want, and this is some other term.
Okay, all right, now, to get the an, what you do is multiply everything through by cosine nt. You focus on the one you want, so it's dot, dot, dot, and you multiply by cosine nt. So this term becomes ak cosine kt times cosine nt.
Of course, that term gets multiplied, too. But the one we want also gets multiplied: an cosine nt becomes, when I multiply by cosine nt, an cosine squared nt. And now, I hope you can see what's going to happen.
Now, oops, I didn't multiply the f of t, sorry. It's the oldest trick in the book.
I now integrate everything from minus pi to pi. So I don't endlessly recopy, I'll indicate the integration by putting it up in yellow chalk, and you are left to your own devices; this is definitely a colored-pen type of course. Okay, so, we integrate from minus pi to pi.
Just integrate everything on the right-hand side, also, from minus pi to pi. Plus dots, just to indicate that the other terms are out there, too. And now, what happens? Every other term is zero because of the orthogonality relations: they are all of the form a constant times cosine nt times something different from cosine nt, whether sine kt, cosine kt, or even that constant term. All of the other terms are zero, and the only one which survives is this one. And, what's its value?
The integral from minus pi to pi of cosine squared nt: I put that up somewhere; it's right up there. So, this term turns into an times pi. The an gets dragged along, and the integral of the square of the cosine turns out to be pi. And so, the end result is that we get a formula for an: an times pi, all these other terms are zero, and nothing is left but the left-hand side.
And therefore, an times pi is the integral from negative pi to pi of f of t times cosine nt dt. But, that's an times pi. Therefore, if I want just an, I have to divide it by pi. And, that's the formula for the coefficient an.
The argument is exactly the same if you want bn, but I will write it down for the sake of completeness, as they say, and to give you a chance to digest what I've done; you know, 30 seconds to digest it. The argument is the same, and the integral of sine squared nt is also pi, so there's no difference there. Now, there's only one little caution: you have to be a little careful.
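Collected in one place, the formulas just derived are

    a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos nt \, dt, \qquad b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin nt \, dt.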
These hold for n equal one, two, and so on; and unfortunately, the constant term is a slight exception. We'd better look at that specifically, because if you forget it, you can be led into gross, gross errors. How about the constant term? Suppose I repeat the argument for it in miniature. There is a constant term, plus other stuff; a typical piece of the other stuff is an cosine nt, let's say.
How am I going to get that constant term? Well, think of the constant as being multiplied by cosine of zero t, which is one. So that suggests I should multiply through by one.
In other words, what I should do is simply integrate this from negative pi to pi, f of t dt. What's the answer?
Well, this integrated from minus pi to pi is how much? It's c zero times two pi, right? And, the other terms all give me zero.
Every other term is zero, because if you integrate cosine nt or sine nt over a complete period, you always get zero: there is as much area above the axis as below. Or you can just calculate it. Anyway, you always get zero; it's the same thing with the sine terms. So, the answer is that c zero is a little special. You don't just put n equals zero in the formula, because then you would be off by a factor of two.
So, c zero should be one over two pi times this integral. Now, there are two kinds of people in the world: the ones who learn two separate formulas, and the ones who just learn two separate notations. So, what most people do is say: look, I want this to be the formula for the coefficients always, even when n is zero.
Well, then you are not going to get the right leading term: instead of getting c zero, you're going to get twice it. And therefore the Fourier series isn't written this way.
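Written out, the convention being settled on is: keep the single coefficient formula above for all n greater than or equal to zero, and write the series as

    f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} ( a_n \cos nt + b_n \sin nt ),

so that the leading term a_0 / 2 equals the c_0 computed with the 1/(2\pi) factor.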
If you want an a zero there, calculate it by this same formula; then you've got to write not c zero, but a zero over two. If I have to give you advice, I think you'll be happiest remembering a single formula for the an's and bn's, in which case you have to remember that the constant leading term is a zero over two. Otherwise, you have to learn a special formula for the leading coefficient, namely one over two pi instead of one over pi. Well, am I really going to calculate a Fourier series in four minutes?
Not very likely, but I'll give it a brave college try. Anyway, you will be doing a great deal of it, and your book has lots and lots of examples, too many, in fact. It ruined all the good examples by calculating them for you. But, I will at least outline. Do you want me to spend three minutes outlining a calculation just so you have something to work on in the next boring class you are in? Let's see, so I'll just put a few key things on the board.
I would advise you to sit still for this. Otherwise you're going to hack at it and take twice as long as you should, even though I know you've been up until 3:00 in the morning doing your problem set. I got up at 6:00 to make up the new one, so we're even.
This should be zero here. So, here's minus pi. Here's one, and here's negative one.
The function starts out like that, and now, to be periodic, it has to continue on in the same way. So, I think that's enough of its path through life to indicate how it runs. This is a typical square wave function, as it's sometimes called. It's an odd function: it goes equally above and below the axis. Now, the integrals, when you calculate them: the an's are going to turn out to be zero, and you will get that with a little hacking.
I'm much more worried about what you'll do with the bn's. Also, next Monday you'll see intuitively that the an is zero, in which case you won't even bother trying to calculate it. How about the bn, though? Well, you see, because the function is discontinuous, so, this is my input. My f of t is that orange discontinuous function.
The bn is going to be one over pi times an integral that I have to break into two parts. In the first part, the function is negative one, and there I will be integrating, from minus pi to zero, minus one times the sine of nt dt. And then there's the other part: I integrate from zero to pi of what?
Well, f of t is now plus one, and so I simply integrate sine nt dt. Now, each of these is a perfectly simple integral; the only question is how you combine them. So, the first one, after you calculate it, will be (one minus cosine n pi) over n, and this second part will turn out to be (one minus cosine n pi) over n also. And therefore bn will be one over pi times two times (one minus cosine n pi) over n.
Now, what's cosine n pi? It's minus one if n is odd; it's plus one if n is even. Now, either you can work with it this way, or you can combine the two cases into a single expression.
Minus one to the nth power takes care of both of them. But the way the answer is normally expressed: the factor (one minus cosine n pi) is two if n is odd and zero if n is even. So bn is four over n pi if n is odd, and zero if n is even. And the final series is a sum of those coefficients times the appropriate, cosine or sine?
Sine terms, because the cosine coefficients all turned out to be zero. I'm sorry I didn't have the chance to do that calculation in detail.
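Assembled, with the one over pi from the coefficient formula carried along, the series for this square wave is

    f(t) = \frac{4}{\pi} \left( \sin t + \frac{\sin 3t}{3} + \frac{\sin 5t}{5} + \cdots \right),

the standard series for the odd square wave of height one.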
But, I think that's enough sketch for you to be able to do the rest of it yourself.
PROFESSOR: Welcome. One quick announcement: if you have not yet picked up your graded exams, you can do so by seeing the TAs after the hour. So today I want to continue to think about what we started last week, thinking about Fourier series. The idea is to develop a theory that lets us look at signals on the basis of frequency content, much as we looked at frequency responses as a characterization of systems, according to the way they process frequencies. And we saw last time that there were a number of kinds of signals, for example, musical signals, where that kind of an approach, thinking about the signal according to the frequencies that are in it, makes a lot of sense and can lead to insight. We also developed some formalism.
We figured out how you can break a signal into components and then assemble the components to generate the signal. And what I want to mention at the beginning of the hour today is just how to think about this operation in a more familiar way. We do this kind of a thing, breaking something into components all the time. One of the more familiar examples might be thinking about 3-space, right? The Cartesian analysis of 3-space is based on the idea that you can think about a vector location in 3-space as having components.
There's a component in the x direction, the y direction, the z direction. That's completely analogous to the way we're thinking about Fourier representations for signals. So just like we would think about synthesizing the location of a point by adding together three pieces, and we would think about analyzing a point to figure out how big the components are in each of those directions, it's exactly the same when we think about Fourier series. We think about representing a signal as a sum of things. So the sum is precisely the same.
This one happens to have an infinite number of terms; the top one has three terms.
The principles are very similar. So we think about representing a signal as a sum of components, we think about representing a point in 3-space as a sum of components, and we think about analyzing the signal or the vector in 3-space so that we figure out what each of those components is. And we do it in an operation where it's actually very convenient to think about the decomposition of the Fourier components using precisely the same language that we would use for thinking about vector spaces. In the Fourier case, integrating over the period sifts out a component.
The analogous operation for 3-space is to think about a dot product. The way you take a vector and figure out the component in the x direction is to dot it with the unit vector in that direction: the dot product. In the Fourier case, we think about it as being an inner product. The idea is completely analogous. So we think about having the inner product of two things: the reference direction and the vector.
So reference direction and vector: we think about it exactly the same way, except now it's an inner product, which means that after we've multiplied, we have to integrate. That's the only difference; an inner product implies a sum or an integral after you've done the multiplication. So we do exactly the same thing, except that now we think about the inner product of a and b. That's just the integral, where we take the complex conjugate of one of the signals, because by defining it with a complex conjugate there, we set up the inner product so that the answer is zero unless we take the inner product of two things in the same direction. By putting the minus sign there, if the two reference directions are the ones characterized by k and m, the inner product will be zero as long as k is not equal to m; k equals m is the only nonzero case. OK, is that all clear?
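In symbols (with T the period, following the normalization on the slide), the inner product being described is

    \langle a, b \rangle = \frac{1}{T} \int_{T} a^*(t) \, b(t) \, dt,

and for the complex exponential "directions" it sifts as claimed:

    \frac{1}{T} \int_{T} \left( e^{j 2\pi k t / T} \right)^{*} e^{j 2\pi m t / T} \, dt = \frac{1}{T} \int_{T} e^{j 2\pi (m - k) t / T} \, dt = \begin{cases} 1, & k = m \\ 0, & k \neq m. \end{cases}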
So to make sure that it's clear, here's a question. How many of the following pairs of functions are orthogonal in T equals 3? Part of the goal of the exercise is to figure out what the little caveat in T equals three means. So look at your neighbor, say hello, figure out a number between 0 and 4. SIDE CONVERSATIONS OK, so how many signals, how many of the pairs are orthogonal to each other? Raise your hand with some number between 0 and 4, unless you're completely bizarre and raise five, I mean.
OK, come on, come on. Higher, so I can see them. Remember if you're wrong. It's your partner's fault, it's not your fault.
OK, not quite. A lot of bad partners, no, no. Let's do the first one. Is the cos of 2 pi t orthogonal to the sine of 2 pi t over the interval capital T equals 3? I haven't a clue. I don't care.
No, no, no, no. You all care, no.
Are they orthogonal? So what do you- how do I formally ask the question are they are orthogonal? OK, so it's either the last slide or the next slide. So go back to the last slide. What's it mean if they're orthogonal? AUDIENCE: INAUDIBLE PROFESSOR: So how do I take the dot product?
What do I do? AUDIENCE: INAUDIBLE conjugate. PROFESSOR: Conjugate one. So I'm thinking about 1 over T, the integral over T, of a star of t, b of t, dt. So the capital T comes in here, right: I'm integrating over a period T. So I take the two functions and I multiply them together.
So I have this function, and I have that function. I multiply them together. If you multiply two sinusoids of the same frequency but different phase, what do you get?
Another sinusoid, right? So you all know all these complicated trig relationships, right? Here's one of them. If you multiply cos of 2 pi t times the sine of 2 pi t, you get half the sine of double the frequency. You don't need to memorize that. You just look at this picture, you look at that picture.
This one over the interval 3 has 3 periods; there are 3 periods of that waveform over the interval of capital T. There's an integer number, 3; same here. Here, how many periods? Exactly six. So you get a pure sinusoid.
You get an integer number of periods. You integrate over an integer number of periods, you get 0. They're orthogonal. Had I chosen the period differently, they may not have been orthogonal.
It depends on the period. So the inner product depends on the period, because the inner product has something to do with an integral or a sum, and so the range over which you sum or integrate matters.
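As a quick numerical sanity check of that dependence (a minimal sketch; the signal pair and the interval T = 3 are from the quiz, everything else here is illustrative):

```python
import numpy as np

T = 3.0                            # interval from the quiz, capital T = 3
t = np.linspace(0.0, T, 300001)    # fine grid over the interval

a = np.cos(2 * np.pi * t)          # first signal of pair (a)
b = np.sin(2 * np.pi * t)          # second signal of pair (a)

# Inner product over the interval; these signals are real, so no conjugate is needed.
inner = np.trapz(a * b, t) / T
print(f"<cos 2pi t, sin 2pi t> over T = 3: {inner:.2e}")   # ~0, so orthogonal

# Swapping in sin(pi*t), whose period 2 does not divide T = 3, gives a
# clearly nonzero inner product, matching the argument later in the lecture.
b2 = np.sin(np.pi * t)
print(f"<cos 2pi t, sin pi t>  over T = 3: {np.trapz(a * b2, t) / T:.3f}")
```

Changing T changes the verdict, which is exactly the point of the caveat in the question.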
How about cos 2 pi T cos 4 pi T? AUDIENCE: Yes. PROFESSOR: And the reason is? AUDIENCE: So think if you wrapped INAUDIBLE together, then there's a lot of symmetry that goes on INAUDIBLE is going to be 0.
PROFESSOR: So now we've got two different frequencies. But we still get these funny cosine relationships that have to do with sums and differences. And the sums and differences both happen to be periodic over the interval capital T equals 3, right? So we still get the property that the average value here, which is what the integral was pulling out, is 0.
So they're also orthogonal. How about cos 2 pi T sine pi T? OK, I've asked two questions, and they were both yes. So I'm getting bored at this point, so by the theory of questions in lecture, the answer is? LAUGHTER Now, wait a minute.
I'm not that boring. So is this periodic over a capital T equals 3? Ah, excuse me, I didn't ask the right question, sorry. Does this function have an integer number of periods in the time interval capital T equals 3? What's the period, what's the fundamental period, of this waveform?
PROFESSOR: 1. So it has 3 periods over the interval cap T equals 3. What about this one? PROFESSOR: Period is 2. How many periods are there in the time interval capital T equals 3? AUDIENCE: INAUDIBLE PROFESSOR: A period is 2. How many periods are there- 1 and 1/2, not an integer.
Bad news, right? So there, an integer number of periods, three; here, not an integer number.
If you were to integrate this over the interval T equals 3, if I didn't multiply them, if I just thought about that integral, I wouldn't get 0, right? There's more positive area than there is negative. And when I multiply them, the same sort of thing happens: I get two big peaks down and only one big peak up. It's because the resulting waveform no longer has an integer number of periods in the interval capital T equals 3. Last one: cos 2 pi t, e to the, whoops.
Is that what I actually said? Good, I forgot the j. Because without the j, they would obviously not be orthogonal. Obviously, right? I didn't mean to ask something quite that obvious.
So what about cos 2 pi T and e to the j 2 pi T? No, I'm as clueless as I was on part A.
No, no, no, no, you're not. No, you're not. No, you're not. So how do you think about that?
You can use Euler's expression. And if there had been a j there, this would have been a correct expression. It's not quite a correct expression because I forgot to put the j there. But had there been a j there, it would have been cos 2 pi T plus j sine 2 pi T. And the awkward thing is that the cos and the cos are obviously not orthogonal with each other.
A signal is not orthogonal with itself, OK? So because part of this signal is that signal, those two signals are not orthogonal. OK, so that's kind of- so that's the idea of orthogonality. It's a very good way to think about decompositions.
And even though we only spent about half an hour last time, and only about 15 minutes this time, that is the whole theory of Fourier series. That doesn't mean we can't ask hard questions. There were a couple of questions.
Yes, you were first. AUDIENCE: Is there a way to think about orthogonality using the Fourier INAUDIBLE. PROFESSOR: Well, the Fourier coefficients are the result of orthogonality. I don't think you can tell- if I just told you a bunch of Fourier coefficients, I don't know if you can tell me something about the orthogonality of the underlying signals or not.
AUDIENCE: What if INAUDIBLE. PROFESSOR: Excuse me? AUDIENCE: INAUDIBLE the period and the Fourier INAUDIBLE. PROFESSOR: Let's see, I'm not completely sure I know what you're asking. Certainly if you tell me that the Fourier coefficients are blah, blah, blah: 3, 2, 7, and 16.
And if you tell me that you're working with a simple Fourier series periodic in 3, then you've told me everything. And so there's a way for me to backtrack that it was orthogonal. I am not sure if I'm connecting with you, so if I'm not, ask me after lecture to make sure that- AUDIENCE: INAUDIBLE PROFESSOR: Sure. AUDIENCE: I think he's saying if you have two signals INAUDIBLE coefficients INAUDIBLE two signals, can I tell if those two signals are orthogonal INAUDIBLE the coefficients are orthogonal INAUDIBLE. PROFESSOR: If they have components in common, they couldn't possibly be orthogonal. So I would answer yes to that question.
So if that's what you were- so I think that's probably right. Does that sound right? AUDIENCE: I'm awfully confused by the complex conjugate, the INAUDIBLE.
PROFESSOR: Yes, yes, yes. AUDIENCE: So does that mean we're taking the complex conjugate of a and we're applying it to b? PROFESSOR: We're taking the complex conjugate of the entire function.
At every point in time, we take the complex conjugate of it. And it's especially useful to think about if you're doing something of the form, if a of t were e to the j 2 pi mt and if b of t were e to the j 2 pi lt. The only thing we're trying to do, and this comes up quite frequently, is that when you conjugate one of these, you rig it so that when you add the exponents, the result goes to 0, by putting the minus up there. AUDIENCE: It doesn't seem like we had to do any of that for the example we just worked on. It seems like there were just, like, signals INAUDIBLE.
PROFESSOR: Oh, interesting. That's a very good point. That's interesting.
So I didn't intend to throw you a ringer. These signals, all of these except that one, are real functions of time.
That's why the complex conjugate didn't come up. So I apologize. I wasn't trying to make it seem tricky.
OK, so it's because this function of time is everywhere real that we didn't need to rehearse this. We did have to do it in that one.
OK, so the point is that we've already covered, even though we've only done a little bit of work in lecture, we've already covered all of the theory. What remains though is to do some practice. And also what remains is to understand how this is useful. So it's not just music. The example that I want to talk about today is speech.
The same sort of thing that we could do with music last time, we can do with speech. And here are some utterances.
AUDIO PLAYBACK - Bat, bait, bet, beet, bit, bite, bought, boat, but, boot. END PLAYBACK PROFESSOR: All right, it was just intended to be a bunch of sounds that we can analyze with Fourier analysis to get some insight into how to think about, in particular, speech recognition and speech synthesis. So we can take those utterances, and all I did was write a little Python program to do the decomposition that I showed on the previous slides, so that I could break these time waveforms.
Here I'm illustrating one, two, three, four, five, six periods. So I took one period of that sound and ran it through that kind of an integral to break it into Fourier components, which are shown here. And what I want you to see is, just like you could have recognized a pattern here, you might try to recognize which vowel is which by the signature in time.
An alternative, and far more useful way of thinking about it, is to try to recognize the pattern in frequency. So there are characteristic differences in the sounds, and we'll look at the basis for why there are. There are characteristic differences in the sound that can help us to identify automatically, by a machine, what was being said. And so what we want to do is learn to think about a pattern that characterizes ah, ee, oo in the frequency domain, as opposed to the time domain. AUDIO PLAYBACK - Bat beet, boot. END PLAYBACK PROFESSOR: So there's something different about those sounds that manifests a difference in this Fourier signature. So that's one of the useful applications of this.
And we'd like to understand that better. There's a really good physical reason why that happens. And it has to do with the way we produce speech. So you can think about speech as being generated by some source.
Ultimately, the source of my speech is somewhere down here, which always amuses me when I see the cut-off heads talking, like at Halloween and on some cartoon shows. Because you can't do that, right? Because the source is down here someplace, right?
My lungs push in. That pushes air through something and starts making noise somehow. I'm going to focus today on what we call voiced sounds. Ah: a voiced sound is caused by vibrations of the vocal cords. So if you were to stick a camera down someone's throat, this is the sort of thing that you would see. It's an enormously complex structure whose mechanics are extremely difficult to understand. Because what happens is, when you want to make a high sound, you tense the structure.
You pull on some muscles that pull the cords. The cords are normally rattling pretty fast. And what you do is, you pull on a muscle that tenses them to make it go higher. But your intuition should say, now wait a minute- you're making it longer to make it higher? And your intuition would be right. Normally, long organ pipes are higher or lower in frequency?
AUDIENCE: Lower. PROFESSOR: Lower. So you have to do a lot of mental calculations in order to move these muscles correctly. So that the resulting frequency of the vibration comes out right. It's not obvious, because two things happen as you tense the muscle.
The folds- the vocal cords get longer which you would think would make the frequency lower, but they get tighter, which, of course, goes the other direction, right? So it's a very complicated thing. And in fact, it's something that goes bad with professional speakers. But even more, professional singers often have a lot of trouble with the enormous stress that happens on this structure with repeated use and repeated overuse.
Anyway, this takes a real beating. But that's ultimately the source of speech. But if that were all you had, it wouldn't sound much like speech. A lot of the interesting stuff comes from these cavities that you intentionally manipulate as you're speaking to make the different characteristic sounds. So the idea then is that you have a source that contains information like frequency.
What's the pitch of the utterance? But you have this other thing, which is acting like a filter. If you think about the whole thing as a system, we have a block which represents a filter, which is the thing that has a frequency response. The frequency response depends on how I've put my tongue in my mouth and how I've opened my lips and stuff like that. We'll see that in a minute. But it also depends on how the vocal folds- it has an input, which is the vocal folds.
So the idea then is the same kind of a source filter idea that we motivated last time by way of the RC filter example. If you put a resistor and a capacitor together with a source, a convenient way to think about that is as a low pass filter. We think about it having a frequency response.
So the system, just the RC part, has a frequency response which we can characterize by a Bode diagram. So we can think about- we did this last time- so we can think about the low frequencies go through without attenuation. The gain is 1, and the phase is 0. So basically low frequencies go through the filter without any change. High frequencies are attenuated. The higher the frequency, the more the attenuation, and phase shifted by lagging pi over 2.
So that's a way of thinking about the RC circuit as a low pass filter. And it gives us insight in the kinds of signals that go through and don't go through. So that, if we think about a signal like a square wave having a Fourier series decomposition, it only has odd components and the odd components fall with k. The magnitude of the component is inverse with k. So we get components that, if I plot on a log scale, the reciprocal relationship of the weight of the components, the magnitude of the components, makes it a straight line with a slope of minus 1. And now we can think about putting this signal into the RC filter and thinking about what the output should look like.
If the fundamental frequency of the square wave, 2 pi over capital T, is some frequency that's low compared to the corner frequency of the low pass filter, basically the output of the filter, which is shown in green, overlaps the input, which is shown in red. You can't tell the difference, because all the components have the same magnitude and phase as the input. But if you change the frequency of the square wave so that the fundamental is higher, some of the higher frequencies are attenuated and phase shifted. The shape of the waveform, shown in green, starts to deviate. If you go to still higher frequencies, the deviation is even greater. And if you go to high enough frequencies, all the components are in the region where the magnitude is being attenuated, so the dependence of 1 over k becomes 1 over k squared, and it goes from being a square wave to a triangle wave. So that's a way of thinking about the signal transformation in terms of a filter. We did that last time. What's going on with speech is exactly the same thing.
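Here is a small sketch of that filtering computation (the RC value and the fundamental frequency are made-up illustrative numbers; the 1-over-k coefficient law for the square wave is from the lecture):

```python
import numpy as np

RC = 1.0                 # assumed filter time constant (illustrative)
w0 = 10.0 / RC           # assumed square-wave fundamental, well above the corner 1/RC

k = np.arange(1, 12, 2)              # the square wave has odd harmonics only
c_in = 1.0 / k                       # input coefficients fall as 1/k (up to a constant)

H = 1.0 / (1.0 + 1j * k * w0 * RC)   # first-order low-pass frequency response
c_out = H * c_in                     # each component is attenuated and phase shifted

for kk, mag in zip(k, np.abs(c_out)):
    print(f"harmonic {kk}: |output coefficient| = {mag:.5f}")   # falls roughly as 1/k^2
```

With the fundamental well above the corner, every harmonic picks up an extra attenuation of about 1/k, which is the 1-over-k-squared falloff that turns the square wave into a triangle wave.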
What we want to do is think about- the glottis makes some kind of a sound that goes into a filter. The filter is this thing that is controlled by my tongue's position and my jaw position and my lip position and stuff like that. And what comes out is speech. To demonstrate that, here's a film that was made by Ken Stevens. Ken Stevens was a professor in this department. He just recently retired. This was done when he was a graduate student.
It's very hard to see because the contrast is not great. But you have to take into consideration this was made with X-rays. OK, we probably wouldn't do this today.
It was a relatively large exposure to x-rays, which we sort of frown on these days. Just so you're not too worried, Ken Stevens, when he retired, had the longest teaching career in our history. He was a lecturer who actively lectured for 50 years.
So he seemed to have done OK. So you don't need to worry about what happened to him.
But we probably wouldn't repeat this. It's a little hard to see.
The bone is easy, right, because x-rays don't go through bones very well. What you can just barely see is his lips. And it's important to watch the lips, too. It's also important that his chin is on a chin rest to simplify analysis. The idea of this was to get quantitative measurements to fit the source filter idea.
OK, so now I'm going to play a film, a recording of him. VIDEO PLAYBACK - Test. The tongue. The tongue. The INAUDIBLE. The INAUDIBLE. The neck. INAUDIBLE. Clock. The INAUDIBLE. The INAUDIBLE. Two. INAUDIBLE. Tech. INAUDIBLE. Why did INAUDIBLE set the INAUDIBLE on top of his desk? I have put blood under two clean yellow shoes. END VIDEO PLAYBACK LAUGHTER PROFESSOR: OK, so what you were supposed to see is that the thing that we associate with speech is only a small part of it. His lips were obviously moving; that's what we see. But if you were paying attention, his tongue was going up and down not a little bit, but a lot. So the gap between his tongue and the roof of his mouth was going from 0 to about that far.
The velum back here was opening very broadly on occasion. So there was a significant variation in the shape of the structure through which the glottis wave form was passing. And that's the basis of the filtering that gives rise to the different speech sounds. So to convince you of that, here I have a carefully machined item. I don't want this one. I want this one. So this is a Japanese oo.
OK, now I don't know Japanese, so I have to just sort of trust the guy who made this that it actually sounds like a Japanese oo. The second one I'll show is a Japanese ee, which actually sounds more like an ee to me. But anyway, this model was made from measurements of the type that I just showed with Ken.
So the idea was to estimate the size of those cavities through which the air was passing, and then make, by machining in Plexiglas, a structure that has that shape. So this was an early test of whether the source filter idea works. So if that is the explanation for how speech is generated, then I ought to be able to take a boring sound of the type that's generated by the glottis- BUZZING SOUND And put it through this, and it should sound more like a vowel.
Know what I'm talking about? So this is a Japanese oo. BUZZING SOUND COMBINES BUZZING SOUND WITH 'OO' SOUND I don't know if anybody knows Japanese.
I don't know if that sounds like an oo. Does anybody know Japanese, and does that sound like an oo or not? OK, I'll pass.
BUZZING SOUND This is an ee. OK, now notice that the ee looks very different. The question is whether that's a big enough difference to make the difference between an oo and an ee. I'm pressing the same button, nothing up my sleeve, nothing- OK, so same button. COMBINES BUZZING SOUND WITH 'EE' SOUND OK, so what you're supposed to be convinced of is there is enough information in the shape of the vocal structures to account for the difference in the sounds. Now of course, we don't really care about the acoustics if we're trying to, for example, synthesize or analyze speech. We don't particularly care about that.
We do like to know that there is a theory that underlies it, right? And there's a very sound physical basis for why we should think about the source filter idea. When I say source filter, source filter- so everybody calls it the source filter model of speech. So is there any good physical reason for why that should be true? Of course, what we care about is the frequency response.
So here what's shown is measurements of frequency responses taken from speakers. So now we don't do the x-ray thing. All we do is record somebody saying heed, had, hood, haw'd, who'd. And we look at men, women, and children, and we characterize how their frequency responses change when they make those different sounds. So what's shown here is that you get a relatively good fit by thinking about the frequency response having three formants.
The formants are the peak frequencies. There's a theory, which I won't go into, for how you take this shape and turn it into a formant frequency. And given just the formant frequencies, or given the frequency response measured at uniform spacing across frequencies, there is a theory for how you can generate the smooth line - this is an 11th-order fit, which means that there are 11 poles and no zeros. So what you do to get this shape is take the locations and amplitudes of the formant frequencies and do a fit using poles. And so here's a table showing measured formant frequencies, F1, F2, and F3, for six different sounds for three different categories of speakers.
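To make that "poles only" fit concrete, here is a minimal Python sketch - not the program used in lecture - that fits an 11-pole, all-pole model to a short voiced frame by the standard autocorrelation (Levinson-Durbin) method and then evaluates its smooth frequency response. The frame below is synthetic, built from two made-up resonances, since no course data is assumed.
PYTHON SKETCH -
import numpy as np
from scipy.signal import lfilter, freqz

def lpc(frame, order):
    # Autocorrelation-method LPC via the Levinson-Durbin recursion.
    n = len(frame)
    r = np.correlate(frame, frame, mode='full')[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a  # denominator polynomial: 1/A(z) is the all-pole model

# Synthetic voiced frame: a 100 Hz pulse train through two made-up resonances.
fs = 8000
pulses = np.zeros(400)
pulses[::80] = 1.0
frame = pulses
for fc, bw in [(700, 100), (1200, 150)]:
    rr = np.exp(-np.pi * bw / fs)
    th = 2 * np.pi * fc / fs
    frame = lfilter([1.0], [1.0, -2 * rr * np.cos(th), rr * rr], frame)

a = lpc(frame * np.hamming(len(frame)), order=11)
w, h = freqz(1.0, a, worN=512, fs=fs)
# abs(h) is the smooth 11-pole envelope; its peaks sit near the resonances.
END PYTHON SKETCH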
And that's kind of a complete analysis then in terms of the source filter idea. So this figure summarizes the idea. We think about source filters. So the source is the glottis. The filter is the formants created by the throat. And speech is the thing that comes out of the source filter.
The source is some periodic waveform caused by the banging together of the vocal folds. The filter is the frequency response of the throat. And the result, then, is just passing this glottis waveform through that filter - so this is measured by sticking a microphone in somebody's throat; this is a measurement of what the glottis acoustics looks like.
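That whole pipeline is easy to mimic in a few lines. Here is a rough Python sketch of the source filter picture: a periodic pulse train standing in for the glottis, passed through a cascade of two-pole resonators standing in for the formants. The formant values below are common textbook figures for an ah-like vowel, not measurements from the slide, and the output file name is made up.
PYTHON SKETCH -
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

fs = 16000                      # sample rate, Hz
f_pitch = 110                   # glottis repetition rate, Hz
n = fs // 2                     # half a second of signal

# Source: impulse train at the pitch period.
source = np.zeros(n)
source[::fs // f_pitch] = 1.0

# Filter: one two-pole resonator per formant (center frequency, bandwidth in Hz).
speech = source
for fc, bw in [(730, 90), (1090, 110), (2440, 170)]:
    r = np.exp(-np.pi * bw / fs)
    th = 2 * np.pi * fc / fs
    speech = lfilter([1.0], [1.0, -2 * r * np.cos(th), r * r], speech)

# Normalize and listen: it should sound vowel-like rather than like a buzz.
speech = 0.9 * speech / np.abs(speech).max()
wavfile.write('vowel.wav', fs, speech.astype(np.float32))
END PYTHON SKETCH
Changing just the three (fc, bw) pairs while keeping the same source is the software version of pressing the same button with a different Plexiglas tube on top.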
This is a Fourier decomposition of that periodic waveform. Then this is the frequency response of that filter. And this is the Fourier coefficients of the output for different sounds. So the same glottis signal underlies an ee sound and an ah sound and generates two different spectra. We call that combination of magnitudes and angles the Fourier spectrum. So you get two different spectra, depending on the filter shape. And that's the basis - this theory, this source filter idea - is the basis of the current technology for speech recognition and speech production. So I actually cheated.
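In code, the Fourier spectrum of a periodic signal is just the magnitudes and angles of its harmonic coefficients. A minimal sketch, using a square wave as a stand-in for one glottis period:
PYTHON SKETCH -
import numpy as np

fs = 8000
f0 = 100                                   # fundamental frequency, Hz
t = np.arange(fs // f0) / fs               # exactly one period of samples
x = np.sign(np.sin(2 * np.pi * f0 * t))    # square-wave stand-in for the glottis

coeffs = np.fft.rfft(x) / len(x)           # harmonic k lives at frequency k * f0
magnitudes = np.abs(coeffs)                # spectrum magnitudes
angles = np.angle(coeffs)                  # spectrum angles (phases)
END PYTHON SKETCH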
Those sounds that I played earlier- bit, bat, bought, beat, all those things- those were actually synthetic speech. OK, all I did is I ran a speech synthesizer, and I said, synthesize bit. So that was really a synthesized thing. That was not a real person. And so the synthesizer used this theory in order to generate this synthetic speech. We also use this theory in order to recognize speech.
And you'll do a homework problem in Homework 10, I think it is, in which you'll build the primitive front end of a speech recognizer using this theory; a sketch of the idea follows below. I'll give you a couple of utterances of different vowels and you'll have to classify which vowel is being said, using an automatic speech recognizer based on this theory. The theory is also just fun, because a theory lets us figure out anomalies. So when somebody has a speech impediment - for example, I did when I was a little kid, and I was sent to speech school. Now they do a much better job, because they do analysis to figure out what you're doing wrong, using this sort of source filter idea. We can also use the source filter idea to understand paradoxes.
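Just to give the flavor of that front end - the actual assignment will surely differ in detail - here is a hypothetical sketch: estimate the first two formants from the peaks of an all-pole fit, then pick the nearest vowel in a small reference table. The reference values are rough textbook averages for male speakers, not the course's data, and lpc() is the Levinson-Durbin routine from the earlier sketch.
PYTHON SKETCH -
import numpy as np
from scipy.signal import freqz, find_peaks

REFERENCE = {           # rough (F1, F2) averages in Hz -- illustrative only
    'ee': (270, 2290),
    'ah': (730, 1090),
    'oo': (300, 870),
}

def classify_vowel(frame, fs, order=11):
    # lpc() is the autocorrelation-method routine sketched earlier.
    a = lpc(frame * np.hamming(len(frame)), order)
    w, h = freqz(1.0, a, worN=2048, fs=fs)
    peaks, _ = find_peaks(20 * np.log10(np.abs(h) + 1e-12))
    f1, f2 = w[peaks[:2]]                  # first two envelope peaks = F1, F2
    return min(REFERENCE, key=lambda v: np.hypot(REFERENCE[v][0] - f1,
                                                 REFERENCE[v][1] - f2))
END PYTHON SKETCH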
So, for example, one of those paradoxes: I've told you before that I work on hearing aids - I try to make hearing aids work better. People with hearing deficiencies like mine - I have sort of progressive, age-related hearing loss, because I'm old, right, that's what happens - are losing high frequencies. I'm less sensitive to high frequencies. For people like me, which is the vast majority of people my age, it's easier to understand male speech than female speech. Female voices have higher frequencies.
So those higher frequencies shift some of the important stuff that I should be listening to into frequencies I don't hear anymore. So that's a way of using this theory to try to understand what's wrong with me. But it's not just me. Normal people have trouble distinguishing female speech, especially in taxing environments, and one of those is singing. So if you consider altos and sopranos, sopranos are the worst case, right? Because they are not only female, but they're at the high end of the female range.
And there are those who complain about not being able to understand female singers. OK, so here's a demo that will help us to understand whether that's a valid kind of a criticism or not. So what I've got is a professional singer singing, la, la, la, la- on a scale.
So from low frequency to high frequency, then a different sound, a different sound, a different sound, a different sound. So the first thing that I want to do - I want you to listen to those different sounds as she goes across the scale. Then I'm going to play just the low ones and just the high ones, just the low frequency ones and just the high frequency ones. So first, the different sounds - la, lore, loo, ler, lee, OK?
AUDIO PLAYBACK - La, la, la (up the scale), lore, lore, lore, loo, loo, loo, ler, ler, ler, lee, lee, lee. END PLAYBACK
PROFESSOR: OK, so now what I've done is I've sliced out the lowest frequency, the very first note of the scale, from each of the sounds and pasted them together to get the low frequency run.
And then I took out the high ones and pasted them together. Exactly the same sounds, just played in a different order. So first the low frequency ones.
And the high frequency ones. LAUGHTER PROFESSOR: It's not her fault. She's doing everything right. And you can see that. Here is, again, a Python program analyzing those same segments.
So what's shown here is the filter derived from the ee - by looking at that lee, lee, lee sequence and averaging across the frequencies. So here's the filter. And here's the filtered glottis spectrum for a low pitch, an intermediate pitch, and a high pitch.
What's the difference between the low, middle, and high? What's characteristically different at low and high?
AUDIENCE: INAUDIBLE frequency, like high amplitude. PROFESSOR: So if you look at the low frequency, the low pitch, there are more frequency components in a given range. So if I analyze the frequencies between 0 and 1,000 hertz - 1,000 cycles per second - there are more lines when you have a low frequency. So the density of the lines is greater for the low frequency utterance than it is for the high frequency utterance. The lines of the low frequency utterance are spaced closely enough that you can clearly figure out this pattern from that spacing, because there are multiple lines per peak. The problem is that the speech waveforms have very sharp resonances. The peaks are narrow.
So as you go to a higher frequency, now it's very hard to see. So where there were two lines characterizing this peak, now there's one. And at the highest frequency, there's nothing there. Similarly with these peaks: again, several lines representing each peak, then one line, then nothing representing this peak, nothing representing that peak. There is nothing about ee in that signal. And if you do the same analysis for ah, you get the same result.
There's nothing about ee, and there's nothing about ah. There's just nothing there. There's no way anybody is going to tell those two sounds apart. So if the singer put her voice, her vocal tract, in precisely the right location, there would be no difference between those sounds, OK, regardless of what the director said.
OK, so that's the problem. So that's a way of using Fourier analysis to gain some insight into anomalous situations.
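The line-spacing argument is easy to check numerically. A small sketch, assuming a single made-up narrow resonance: sample its envelope at the harmonics of a low pitch and of a high pitch, and see how much of the formant peak the harmonics actually capture.
PYTHON SKETCH -
import numpy as np

fs = 16000
fc, bw = 2300, 60                      # one narrow, ee-like formant (made up)
r = np.exp(-np.pi * bw / fs)
th = 2 * np.pi * fc / fs

def envelope(f):
    # Magnitude response of the two-pole resonator at frequency f (in Hz).
    w = 2 * np.pi * np.asarray(f, dtype=float) / fs
    den = 1 - 2 * r * np.cos(th) * np.exp(-1j * w) + r * r * np.exp(-2j * w)
    return np.abs(1.0 / den)

for f0 in (110, 880):                  # low pitch vs. soprano-range pitch
    harmonics = np.arange(f0, fs / 2, f0)
    seen = envelope(harmonics).max() / envelope(fc)
    print(f"f0 = {f0:3d} Hz: harmonics capture {seen:.0%} of the formant peak")
END PYTHON SKETCH
With the low pitch, a harmonic lands almost on top of the peak; with the high pitch, the nearest harmonics miss it almost entirely, which is exactly why the ee and the ah become indistinguishable.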
AUDIENCE: Does this have more to do with the rate at which you're sampling? PROFESSOR: No. It has to do only with the frequency content of the glottis waveform. You can think about it as sampling. And that's a good insight, because the Fourier series only has components at integer multiples of a base frequency. So that means we're sampling in frequency, not in time.
So we have this potentially continuous frequency response, which is characterizing this. That is continuous. I could excite this at any frequency that I want to. But the glottis waveform of the singer is only sampling that at particular frequencies- C, C prime, C double prime, B, B prime, B double prime, right?
So there's only certain frequencies at which the singer is sampling this. So there is a way of thinking about it as sampling.
But it's not sampling due to my A-to-D converter or anything like that. It's sampling in frequency. So the point is that this kind of a source filter idea, and more generally the filter idea, is such a powerful representation that next time we'll think about how to do the same sort of thing for non-periodic stimuli.
See you then.