forked from Sasserisop/MATH201
added lec 33, fixed a katex error in system of linear equations, and revised some of power series
This commit is contained in:
parent
21619cacd9
commit
e88b973271
|
@ -152,4 +152,88 @@ His reply was something along the lines of:
|
|||
No! Please don't, you'll put in some arbitrary values with the wrong boundary conditions and arrive at some crap result. You would need to redo the calculations and get completely new eigen values and eigen functions. I know how much you guys love formulas. But you need to understand what's happening, I don't know who thought it would be a good idea to make people memorize formulas, certainly not my idea. I want to be confident in the future engineers and the bridges that are built.
|
||||
...
|
||||
Reminds me of what he said when he was talking about George Green, and how nowadays everything is McDonalds style, even our education. Maybe this is what he is referring to.
|
||||
#end of lec 32
|
||||
#start of lec 33 (Dec 1)
|
||||
#ex
|
||||
imagine along the wire we produce some heat somehow:
|
||||
$\frac{ \partial u }{ \partial t }=\frac{ \partial^2 u }{ \partial x^2 }+\underbrace{ e^x }_{\text{ heat} } \quad 0\leq x\leq 1 \quad t>0$
|
||||
$e^x$ is called a source term, a source of heat.
|
||||
$u(t,0)=-1 \quad u(t,1)=-e \qquad t>0$
|
||||
$u(0,x)=\sin(\pi x)+e^x, \qquad 0\leq x\leq 1$
|
||||
we expect that the solution is a sum of two solutions, combined by superposition.
|
||||
where one of the terms does not depend on time.
|
||||
ie: $u(t,x)=w(t,x)+v(x)$
|
||||
v is the steady state term, w is transient.
|
||||
plug in:
|
||||
$\frac{ \partial w }{ \partial t }=\frac{ \partial^2 w }{ \partial x^2 }+v''(x)+e^x$ ($v$ double prime because $v$ is a function of one variable.)
|
||||
$w(t,0)+v(0)=-1$ $w(t,1)+v(1)=-e$
|
||||
and $w(t,x)\underset{ t\to \infty }{ \to }0$ (transient term decays to zero.)
|
||||
$\frac{ \partial w }{ \partial t }, \frac{ \partial^2 w }{ \partial x^2 }\underset{ t\to \infty }{ \to }0$
|
||||
$\frac{ \partial w }{ \partial t }=\frac{ \partial^2 w }{ \partial x^2 }+v''(x)+e^x\underset{ t\to \infty }{ \to }v''+e^x=0$
|
||||
and the boundaries as $t\to \infty$: $v(0)=-1$ $v(1)=-e$
|
||||
|
||||
$v''=-e^x, \qquad v(0)=-1 \quad v(1)=-e$
|
||||
do you agree? please be awake this is an important class.
|
||||
$w(t,0)=w(t,1)=0$ $t>0$
|
||||
|
||||
integrate twice:
|
||||
$v(x)=-e^x+c_{1}x+c_{2}$
|
||||
$v(0)=-1+c_{2}=-1$
|
||||
$\implies c_{2}=0$
|
||||
$v(1)=-e=-e+c_{1}$
|
||||
$c_{1}=0$
|
||||
|
||||
$v(x)=-e^x$
|
||||
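(a quick check of my own, not from the lecture: sympy reproduces the steady-state computation — integrate twice, then solve for the two constants. variable names are mine.)

```python
# quick check (mine): v'' = -e^x with v(0) = -1, v(1) = -e, done symbolically
import sympy as sp

x, c1, c2 = sp.symbols("x c1 c2")

# integrate v'' = -e^x twice, as in the lecture
v = sp.integrate(sp.integrate(-sp.exp(x), x), x) + c1*x + c2   # -e^x + c1*x + c2

# apply the boundary conditions v(0) = -1, v(1) = -e
consts = sp.solve([sp.Eq(v.subs(x, 0), -1), sp.Eq(v.subs(x, 1), -sp.E)], [c1, c2])
print(v.subs(consts))   # -exp(x)  ->  both constants are 0, so v(x) = -e^x
```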
$u(0,x)=\sin(\pi x)+e^x$
|
||||
$w(0,x)-e^x=\sin(\pi x)+e^x$
|
||||
$w(0,x)=\sin(\pi x)+2e^x$ <- a new condition.
|
||||
we have thus transformed the problem into a problem we can solve with previous tools:
|
||||
$\frac{ \partial w }{ \partial t }=\frac{ \partial^2 w }{ \partial x^2 } \quad 0\leq x\leq 1 \quad t>0$
|
||||
$w(t,0)=w(t,1)=0\quad t>0$
|
||||
$w(0,x)=\sin(\pi x)+2e^x, \quad 0\leq x\leq 1$
|
||||
solve this with separation of variables, solve the eigen value problem.
|
||||
$w(t,x)=T(t)X(x)$
|
||||
$\frac{T'}{T}=\frac{X''}{X}=-\lambda$
|
||||
skip some steps and the eigen values and eigen functions are:
|
||||
$\lambda_{n}=(n\pi)^2, n=1,2,\dots$
|
||||
$X_{n}(x)=\sin(n\pi x)$
|
||||
|
||||
$\frac{T_{n}'}{T_{n}}=-(n\pi)^2\implies T_{n}(t)=b_{n}e^{-(n\pi)^2t}$
|
||||
"each time we introduce the heat equation, we are adding one extra step at a time, really everything else is the same and after you practice 2 or 3 times youll get it.""
|
||||
$w(t,x)=\sum_{n=1}^\infty b_{n}e^{-(n\pi)^2t}\sin(n\pi x)$
|
||||
$w(0,x)=\sin \pi x+2e^x=\sum_{n=1}^\infty b_{n}\sin(n\pi x)$
|
||||
that series is nothing but: $b_{1}\sin(\pi x)+b_{2}\sin(2\pi x)+\dots$
|
||||
move $\sin(\pi x)$ to the RHS:
|
||||
$2e^x=(b_{1}-1)\sin(\pi x)+b_{2}\sin(2\pi x)+b_{3}\sin(3\pi x)+\dots$
|
||||
or you could say:
|
||||
$=\sum_{n=1}^\infty c_{n}\sin(n\pi x)$
|
||||
$c_{1}=b_{1}-1$ $c_{k}=b_{k}, \quad k=2,3,\dots$
|
||||
we do this because, by absorbing the $\sin(\pi x)$ term into $c_{1}$, we get to skip computing one integral.
|
||||
"if you start paniking on the exams youre done. your fried."
|
||||
now we need the Fourier sine series of $2e^x$
|
||||
$c_{n}=2\cdot\frac{2}{1}\int_{0}^1 e^x\sin(n\pi x) \, dx$ we have to solve this by integration by parts
|
||||
how's math 209 going? "Stokes' theorem, I mean for electrical engineers it's a must."
|
||||
$c_{n}=4\left( e^x\cancelto{ 0 }{ \sin(n\pi x) }\Big|_{0} ^1-n\pi \int _{0}^1 e^x\cos(n\pi x)\, dx \right)$
|
||||
$=-4n\pi\left( e^x\cos(n\pi x)|_{0}^1+n\pi \int _{0}^1 e^x\sin(n\pi x)\, dx \right)=-4n\pi(e(-1)^n-1)-4n^2\pi^2\int _{0}^1 e^x\sin(n\pi x)\, dx$
|
||||
$(1+n^2\pi^2)\int_{0}^1 e^x\sin(n\pi x) \, dx=n\pi(1-e(-1)^n)$
|
||||
|
||||
$c_{n}=4\int _{0}^1 e^x\sin(n\pi x)\, dx=\frac{4n\pi(1-e(-1)^n)}{1+(n\pi)^2}$
|
||||
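(not from the lecture, just a sanity check I ran: the closed form for $c_{n}$ agrees with numerical integration. function names are mine.)

```python
# check: c_n = 4 * integral_0^1 e^x sin(n*pi*x) dx against the closed form above
import numpy as np
from scipy.integrate import quad

def cn_numeric(n):
    val, _ = quad(lambda x: np.exp(x) * np.sin(n * np.pi * x), 0.0, 1.0)
    return 4.0 * val

def cn_formula(n):
    return 4 * n * np.pi * (1 - np.e * (-1) ** n) / (1 + (n * np.pi) ** 2)

for n in range(1, 6):
    print(n, cn_numeric(n), cn_formula(n))   # the two columns agree
```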
$b_{1}=c_{1}+1=\frac{4\pi(1+e)}{1+\pi^2}+1$
|
||||
$b_{n}=\frac{4n\pi(1-e(-1)^n)}{1+(n\pi)^2}, \quad n=2,3,\dots$
|
||||
$$u(t,x)=-e^x+w(t,x)=-e^x+\sum_{n=1}^\infty b_{n}e^{-(n\pi)^2t}\sin(n\pi x)$$
|
||||
41 minutes to solve, maybe 5 minutes more for solving the eigen value part. might take 15 in an exam setting.
|
||||
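(a sketch of my own, not shown in class: evaluate the truncated series and confirm it reproduces the boundary conditions exactly and the initial condition at interior points. the number of terms and sample points are arbitrary choices.)

```python
# evaluate u(t,x) = -e^x + sum b_n e^{-(n*pi)^2 t} sin(n*pi*x), truncated to 200 terms
import numpy as np

def u(t, x, terms=200):
    n = np.arange(1, terms + 1)[:, None]                      # one row per mode
    c = 4 * n * np.pi * (1 - np.e * (-1.0) ** n) / (1 + (n * np.pi) ** 2)
    b = c.copy()
    b[0] += 1                                                 # b_1 = c_1 + 1, b_n = c_n otherwise
    w = (b * np.exp(-(n * np.pi) ** 2 * t) * np.sin(n * np.pi * x)).sum(axis=0)
    return -np.exp(x) + w                                     # u = v + w with v(x) = -e^x

xs = np.linspace(0.1, 0.9, 5)
print(u(0.0, xs) - (np.sin(np.pi * xs) + np.exp(xs)))         # ~0 (small truncation error)
print(u(0.5, 0.0), u(0.5, 1.0))                               # -1 and -e, the boundary conditions
```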
back in the day the written exam was 4 hours, 2 to 3 questions, and then there was an oral exam with your professor to explain what you wrote. one time he stayed for 8 hours, being cycled back and forth through the line until the professor was satisfied with Minev's answer.
|
||||
almost everyone would go beyond an undergrad degree, 3 years to finish undergrad + 2 more years total. (if I didn't mishear him)
|
||||
"first year was the most difficult by ffar, a massacure, half of us would fail. your chances in an exam were 1 of 2."
|
||||
ok i spent 5 minutes talking nonsense but
|
||||
you're complaining that your exams are hard, they're not hard. I'm talking about 40 years ago, no computers, no cellphones, and it wasn't so bad because we had to use our brains more.
|
||||
|
||||
now we consider a guitar string:
|
||||
![[Partial differential equations (lec 30-32) 2023-12-01 13.49.58.excalidraw]]
|
||||
assuming the thickness of the string is much smaller than the length of the string, which is true.
|
||||
$\frac{ \partial^2 u }{ \partial t^2 }=\alpha^2 \frac{ \partial^2 u }{ \partial x^2 } \quad 0\leq x\leq L, t>0$
|
||||
^ Reminds me of the wave equation from phys 130.
|
||||
$u(t,0)=u(t,L)=0 \qquad t>0$
|
||||
$u(0,x)=f(x)$ $0\leq x\leq L$
|
||||
$\frac{ \partial u }{ \partial t }(0,x)=g(x)$ $0\leq x\leq L$
|
||||
#end of lec 33
|
|
@ -1,103 +1,111 @@
|
|||
|
||||
|
||||
#start of lec 22
|
||||
Finished chapter 7 of the course textbook. Let's begin chapter 8!
|
||||
# Power series
|
||||
#powseries
|
||||
A power series is defined by:
|
||||
$$\sum_{n=0}^\infty a_{n}(x-x_{0})^n=a_{0}+a_{1}(x-x_{0})+a_{2}(x-x_{0})^2+\dots$$
|
||||
Where $x_{0}$ is a given point of expansion.
|
||||
It is convergent if and only if:
|
||||
$$\sum_{n=0} ^ \infty a_{n}(x-x_{0})^n<\infty \text{ at a given x}$$
|
||||
Otherwise, it is divergent.
|
||||
If $\sum_{n=0}^\infty \mid a_{n}(x-x_{0})^n\mid$ is convergent
|
||||
$\implies\sum_{n=0}^\infty a_{n}(x-x_{0})^n$ is absolutely convergent
|
||||
Just because something is convergent doesn't mean it is absolutely convergent. think of the alternating harmonic series. It is convergent but absolutely diverges.
|
||||
However, if a series is absolutely convergent, then it's definitely convergent as well.
|
||||
|
||||
Theorem: With each $\sum_{n=0}^{\infty}a_{n}(x-x_{0})^n$ we can associate a radius of convergence $\rho$ where $0\leq \rho\leq \infty$.
|
||||
The series is absolutely convergent
|
||||
for all $x$ such that $\mid x-x_{0}\mid<\rho$, and divergent for all $x$ where $\mid x-x_{0}\mid>\rho$
|
||||
"Who keeps stealing the whiteboard erases? (jokingly) It's a useless object, anyways"
|
||||
![[Drawing 2023-10-30 13.12.57.excalidraw.png]]
|
||||
how can we find $\rho$?
|
||||
Definition of ratio test: If $\lim_{ n \to \infty }\mid \frac{a_{n+1}}{a_{n}}\mid=L$
|
||||
then the radius of convergence $\rho$ is: $\rho=\frac{1}{L}$
|
||||
|
||||
## Examples:
|
||||
#ex #powseries
|
||||
Is this infinite series convergent? divergent? and where so?
|
||||
$$\sum_{n=0}^\infty \frac{2^{-n}}{n+1}(x-1)^n$$
|
||||
Determine the convergent set.
|
||||
Use ratio test:
|
||||
$\lim_{ n \to \infty }\mid \frac{a_{n+1}}{a_{n}}\mid=\lim_{ n \to \infty } \frac{2^{-(n+1)}}{n+2} \frac{n+1}{2^{-n}}=\lim_{ n \to \infty }\frac{n+1}{2(n+2)}=\frac{1}{2}\implies \rho=2$
|
||||
It's convergent 2 units away from $x_{0}$
|
||||
So it's convergent on $-1<x<3$, divergent on $\mid x-1\mid>2$
|
||||
But what about on the points $-1$ and $3$? Ratio test tells us nothing for these points.
|
||||
Plug in $x=-1$
|
||||
$\sum_{n=0}^{\infty} \frac{2^{-n}}{n+1}(-2)^n=\sum_{n=0}^\infty \frac{(-1)^n}{n+1}<\infty$ <- That is the alternating harmonic series, it is convergent.
|
||||
plug in $x=3$:
|
||||
$\sum_{n=0}^\infty \frac{2^{-n}}{n+1}2^n=\sum_{n=0}^\infty \frac{1}{n+1}=\infty$ <- harmonic series, this diverges.
|
||||
so the power series is convergent on
|
||||
$$x\in[-1,3)$$ and diverges otherwise.
|
||||
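(my own numerical illustration, not from the lecture: partial sums at the two endpoints behave exactly as claimed — settling down at $x=-1$, growing without bound at $x=3$.)

```python
# partial sums of sum 2^{-n}/(n+1) (x-1)^n at the endpoints x = -1 and x = 3
import numpy as np

def partial_sum(x, N):
    n = np.arange(N)
    return np.sum(((x - 1.0) / 2.0) ** n / (n + 1))

for N in (10, 100, 1000, 10000):
    # x = -1: alternating harmonic-type series, settles down (towards ln 2)
    # x =  3: harmonic series, keeps growing without bound
    print(N, partial_sum(-1.0, N), partial_sum(3.0, N))
```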
</br>
|
||||
## Theorems:
|
||||
Assume that $\sum_{n=0}^\infty a_{n}(x-x_{0})^n$ and $\sum_{n=0}^\infty b_{n}(x-x_{0})^n$ are convergent with $\rho>0$
|
||||
Then:
|
||||
1.) $\sum_{n=0}^\infty a_{n}(x-x_{0})^n+\sum_{n=0}^{\infty}b_{n}(x-x_{0})^n=\sum_{n=0}^\infty(a_{n}+b_{n})(x-x_{0})^n$
|
||||
That has a radius of convergence of at least $\rho$.
|
||||
</br>
|
||||
2.) $\left( \sum_{n=0}^\infty a_{n}(x-x_{0})^n \right)\left( \sum_{n=0}^\infty b_{n}(x-x_{0})^n \right)=\sum_{n=0}^\infty c_{n}(x-x_{0})^n$ (called the Cauchy product)
|
||||
Where $c_n=\sum_{k=0}^n a_{k}b_{n-k}$
|
||||
Here's a demonstration that shows why $c_{n}$ equals the expression above:
|
||||
$=(a_{0}+a_{1}(x-x_{0})+a_{2}(x-x_{0})^2+\dots)(b_{0}+b_{1}(x-x_{0})+b_{2}(x-x_{0})^2+\dots)$
|
||||
$=a_{0}b_{0}+(a_{0}b_{1}+a_{1}b_{0})(x-x_{0})+(a_{0}b_{2}+a_{1}b_{1}+a_{2}b_{0})(x-x_{0})^2+\dots$
|
||||
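(a tiny check I wrote, not from the lecture: the Cauchy-product coefficients $c_{n}=\sum_{k=0}^n a_{k}b_{n-k}$ match ordinary polynomial multiplication of the truncated series. the sample coefficients are arbitrary.)

```python
# c_n = sum_{k=0}^{n} a_k * b_{n-k}, compared with numpy's polynomial multiplication
import numpy as np

def cauchy_product(a, b):
    N = min(len(a), len(b))
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

a = [1, 2, 3, 4]   # a_0 + a_1(x-x_0) + a_2(x-x_0)^2 + ...
b = [5, 6, 7, 8]
print(cauchy_product(a, b))                                # [5, 16, 34, 60]
print(list(np.polynomial.polynomial.polymul(a, b))[:4])    # same leading coefficients
```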
</br>
|
||||
3.) If $\sum_{n=0}^{\infty}a_{n}(x-x_{0})^n$ is convergent with $\rho>0$
|
||||
ie: it's convergent when $\mid x-x_{0}\mid<\rho$
|
||||
Then we can differentiate this infinite sum and get:
|
||||
$\implies y'(x)=\sum_{n=1}^\infty a_{n}n(x-x_{0})^{n-1}$
|
||||
$y''(x)=\sum_{n=2}^\infty a_{n}n(n-1)(x-x_{0})^{n-2}$
|
||||
|
||||
</br>
|
||||
Theorem: If $y(x)$ is infinitely many times differentiable on some interval $\mid x-x_{0}\mid<\rho$
|
||||
then: $y(x)=\sum_{n=0}^\infty \frac{y^{(n)}(x_{0})}{n!}(x-x_{0})^n$ (Taylor series)
|
||||
"believe me, Taylor series is the most important theorem in engineering."
|
||||
"I mean engineering is all about approximations, do you know how your calculator computes [...]? Taylor series!"
|
||||
"Applied mathematics is all about approximating and then measuring how good your approximation is, it's what engineering is all about." -Prof (loosy quotes, can't keep up with how enthusiastic he is!)
|
||||
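(a small sympy illustration of my own: building the Taylor coefficients $\frac{y^{(n)}(x_{0})}{n!}$ directly and letting sympy expand the same series, here for $y=e^x$ about $x_{0}=0$.)

```python
# Taylor series of e^x about x0 = 0, built from y^(n)(x0)/n!
import sympy as sp

x = sp.symbols("x")
y = sp.exp(x)
x0, N = 0, 6

taylor = sum(y.diff(x, n).subs(x, x0) / sp.factorial(n) * (x - x0) ** n for n in range(N))
print(sp.expand(taylor))                   # x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1
print(sp.series(y, x, x0, N).removeO())    # sympy's own expansion, the same polynomial
```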
|
||||
</br>
|
||||
Definition: If $y(x)$ can be represented with a power series on $\mid x-x_{0}\mid<\rho$ then $y(x)$ is an analytic function on $(x_{0}-\rho,x_{0}+\rho)$
|
||||
|
||||
btw analytic functions are very important in complex calculus.
|
||||
</br>
|
||||
### Shifting the index (theorem)
|
||||
$f(x)=\sum_{n=0}^\infty a_{n}(x-x_{0})^n<\infty$
|
||||
$f'(x)=\sum_{n=1}^\infty a_{n}n(x-x_{0})^{n-1}$
|
||||
consider the equation $f'+f=0$
|
||||
$f(x)+f'(x)=\sum_{n=0}^\infty a_{n}(x-x_{0})^n+\sum_{n=1}^\infty a_{n}n(x-x_{0})^{n-1}$
|
||||
let $n-1=k$ in the second sum
|
||||
|
||||
$=\sum_{n=0}^\infty a_{n}(x-x_{0})^n+\sum_{k=0}^\infty a_{k+1}(k+1)(x-x_{0})^{k}$
|
||||
$=\sum_{n=0}^\infty(a_{n}+a_{n+1}(n+1))(x-x_{0})^n=0$
|
||||
Last theorem fo' da day:
|
||||
If $\sum_{n=0}^\infty a_{n}(x-x_{0})^n=0$ for all $x\in(x_{0}-\rho,x_{0}+\rho)$ where $\rho>0$
|
||||
$\implies a_{n}=0$, $n=0,1,2,\dots$
|
||||
This means in the above example $a_{n}+a_{n+1}(n+1)=0$
|
||||
This is called a recursive relation and it will come in handy when solving differential equations.
|
||||
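(not from the lecture: the recursive relation pins down every coefficient from $a_{0}$. Here $a_{n+1}=-\frac{a_{n}}{n+1}$ gives $a_{n}=\frac{(-1)^n}{n!}a_{0}$, i.e. $f(x)=a_{0}e^{-(x-x_{0})}$ — a quick numerical check with arbitrary values of my own choosing:)

```python
# iterate a_{n+1} = -a_n/(n+1) and compare the partial sum with a_0 * e^{-(x - x0)}
import numpy as np

a0, x0, x, N = 2.0, 1.0, 1.7, 30
a = [a0]
for n in range(N - 1):
    a.append(-a[n] / (n + 1))                # the recursive relation

partial = sum(a[n] * (x - x0) ** n for n in range(N))
print(partial, a0 * np.exp(-(x - x0)))       # the two values agree
```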
#end of lec 22 #start of lec 23
|
||||
Mid terms are almost done being marked!
|
||||
|
||||
## Solving DE using series
|
||||
Let's start using power series to solve DEs!
|
||||
No magic formulas to memorize when solving equations using power series (Yay!)
|
||||
#ex
|
||||
$$y'-2xy=0 \qquad x_{0}=0$$
|
||||
Note this is separable and linear, so we can already solve this. This time we do it with power series
|
||||
$y$ should be an analytic function (meaning, infinitely many times differentiable)
|
||||
therefore we should expect we can represent $y$ as a power series:
|
||||
$y(x)=\sum_{n=0}^\infty a_{n}x^n$
|
||||
$y'(x)=\sum_{n=1}^\infty a_{n}nx^{n-1}$
|
||||
plug these into the equation:
|
||||
$\sum_{n=1}^\infty a_{n}nx^{n-1}-\sum_{n=0}^\infty 2a_{n}x^{n+1}=0$
|
||||
if the series is zero on the entire interval, we should expect all the coefficients to equal 0
|
||||
we need to combine the summations.
|
||||
shift the index! (so that the exponents on the x are the same)
|
||||
$k=n-1$ in the first sum, $k=n+1$ in the second
|
||||
$\sum_{k=0}^\infty a_{k+1}(k+1)x^{k}-\sum_{k=1}^\infty 2a_{k-1}x^k=0$
|
||||
|
||||
|
||||
$a_{1}+\sum_{k=1}^\infty (\underbrace{ a_{k+1}(k+1)-2a_{k-1} }_{ =0 })x^k=0$
|
||||
The whole series equals zero, (due to the theorem from last lecture.)
|
||||
so $a_{1}=0$ is the first observation
|
||||
second observation:
|
||||
$a_{k+1}=\frac{2}{k+1}a_{k-1}$ where $k=1,2,3,\dots$ This is called a recursive relation. (if we know one index we can produce some other index recursively)
|
||||
from these observations:
|
||||
$a_{1}, a_{3}, a_{5}, \dots=0$
|
||||
$a_{2k+1}=0, k=0,1,2,\dots$
|
||||
this means half of our power series disappears!
|
||||
|
@ -106,16 +114,18 @@ $a_{2}$ is related to $a_{0}$ from the above formula
|
|||
$a_{2}=\frac{2}{2}a_{0}$ ($k=1$)
|
||||
$a_{4}=\frac{2}{3+1}a_{2}=\frac{a_{0}}{2}$ ($k=2$)
|
||||
$a_{6}=\frac{2}{5+1} \frac{a_{0}}{2}=\frac{a_{0}}{6}$ ($k=3$)
|
||||
$a_{8}=\frac{2}{7+1} \frac{a_{0}}{6}=\frac{a_{0}}{24}$ ($k=4$)
|
||||
$a_{10}=\frac{2}{9+1} \frac{a_{0}}{24}=\frac{a_{0}}{120}$ ($k=5$)
|
||||
(note that the $k$ value here is used differently than the $k$ above.)
|
||||
you might start noticing a factorial-y pattern:
|
||||
$a_{2k}=\frac{1}{k!}a_{0}$ where $k=0,1,2,\dots$
|
||||
|
||||
$a_{0}$ is an arbitrary coefficient! We should expect to get one just like we did when solving with previous techniques.
|
||||
$y(x)=a_{0}\sum_{k=0} ^\infty \frac{1}{k!}x^{2k}=a_{0}\sum_{k=0} ^\infty \frac{1}{k!}(x^2)^k$
|
||||
Does this look like something from math 101?
|
||||
Yes! it looks like the Taylor series of $e^{x^2}$
|
||||
so:
|
||||
$$y(x)=a_{0}e^{x^2}$$
|
||||
"if we are correct--the same is not true in general in real life--but in mathematics if we are correct we should end up with the same solution" -Prof
|
||||
"if we are correct--the same is not true in general in real life--but in mathematics if we are correct we should end up with the same solution had we solved with another method." -Prof
|
||||
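(my own check, not shown in class: iterating the recursive relation really does rebuild $e^{x^2}$. the sample point $x=1.3$ and the cutoff are arbitrary.)

```python
# a_{k+1} = 2 a_{k-1}/(k+1) with a_0 = 1, a_1 = 0, partial sum compared with e^{x^2}
import numpy as np

a = [1.0, 0.0]                           # a_0 arbitrary (take 1), a_1 = 0
for k in range(1, 40):
    a.append(2.0 * a[k - 1] / (k + 1))   # the recursive relation

x = 1.3
partial = sum(a[n] * x ** n for n in range(len(a)))
print(partial, np.exp(x ** 2))           # both print the same value, e^{1.69}
```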
#ex
|
||||
$$z''-x^2z'-xz=0 \qquad \text{about } x_{0}=0$$
|
||||
using regular methods will be problematic,
|
||||
|
@ -124,22 +134,27 @@ if you use laplace transform you will have problems as well.
|
|||
|
||||
let's use power series:
|
||||
assume the solution is analytic:
|
||||
$z(x)=\sum_{n=0}^\infty a_{n}x^n$
|
||||
$z'(x)=\sum_{n=1}^\infty a_{n}nx^{n-1}$
|
||||
$z''(x)=\sum_{n=2}^\infty a_{n}n(n-1)x^{n-2}$
|
||||
plug in:
|
||||
$\underset{ n-2=k }{ \sum_{n=2}^\infty a_{n}n(n-1)x^{n-2} }-\underset{ n+1=k }{ \sum_{n=1}^\infty a_{n} nx^{n+1} }-\underset{ n+1=k }{ \sum_{n=0}^\infty a_{n}x^{n+1} }=0$
|
||||
|
||||
shift the index to equalize the powers.
|
||||
>loud clash of clans log in sound, class giggles, "what's so funny?" :D "I'm not a dictator" something about you are not forced to sit through and watch the lecture if you don't like to, "I don't think everybody should like me."
|
||||
|
||||
$\sum_{k=0}^\infty a_{k+2}(k+2)(k+1)x^k-\sum_{k=2}^\infty a_{k-1}(k-1)x^k-\sum_{k=1}^\infty a_{k-1}x^k=0$
|
||||
just as in the previous example, we take out the first terms so that each index starts at the same number.
|
||||
$2a_{2}+6a_{3}x+\sum_{k=2}^\infty a_{k+2}(k+2)(k+1)x^k-\sum_{k=2}^\infty a_{k-1}(k-1)x^k-a_{0}x-\sum_{k=2}^\infty a_{k-1}x^k=0$
|
||||
$2a_{2}+(6a_{3}-a_{0})x+\sum_{k=2}^\infty (a_{k+2}(k+1)(k+2)-a_{k-1}(k-\cancel{ 1 })-\cancel{ a_{k-1} })x^k=0$
|
||||
|
||||
$\underbrace{ 2a_{2} }_{ =0 }+\underbrace{ (6a_{3}-a_{0})x }_{ =0 }+\sum_{k=2}^\infty \underbrace{ (a_{k+2}(k+1)(k+2)-a_{k-1}k)x^k }_{ =0 }=0$
|
||||
$a_{2}=0 \qquad a_{3}=\frac{a_{0}}{6}$
|
||||
$a_{k+2}=\frac{k}{(k+1)(k+2)}a_{k-1}$ where $k=2,3,4,\dots$
|
||||
Finally, a recursive relation!
|
||||
We easily deduce from the recursive relation and $a_{2}=0$ that:
|
||||
$a_{2}, a_{5}, a_{8}, \dots=0$
|
||||
ie: $a_{3k-1}=0$ where $k=1,2,\dots$
|
||||
$a_{4}=\frac{2}{3\cdot 4}a_{1}$
|
||||
$a_{7}=\frac{5}{6\cdot 7} \frac{2}{3\cdot 4}a_{1}$
|
||||
realize if we multiply here by 5 and 2:
|
||||
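(wherever that last manipulation was heading, here's a quick numerical sketch of my own: iterating the recursive relation with arbitrary $a_{0},a_{1}$ shows the $a_{2},a_{5},a_{8},\dots$ family staying at zero while the other two families survive.)

```python
# a_{k+2} = k a_{k-1}/((k+1)(k+2)) for k = 2,3,..., seeded with a_2 = 0 and a_3 = a_0/6
a0, a1 = 1.0, 1.0                 # arbitrary starting coefficients
a = {0: a0, 1: a1, 2: 0.0, 3: a0 / 6}
for k in range(2, 15):
    a[k + 2] = k * a[k - 1] / ((k + 1) * (k + 2))

for n in sorted(a):
    print(n, a[n])                # a_2, a_5, a_8, a_11, a_14 come out as exactly 0
```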
|
|
|
@ -36,7 +36,7 @@ $=2(s+10)J_{1}+sJ_{3}=60 \frac{1-e^{-s}}{s}$
|
|||
Now for the second eq, hitting it with the LT yields:
|
||||
$-10J_{1}+(s+10)J_{3}=0$
|
||||
Let's isolate $J_{1}$:
|
||||
$\begin{array}{ll} 2(s+10)J_{1}+sJ_{3}=60 \frac{1-e^{-s}}{s} & \qquad\quad\times(s+10) \\ -\quad-10J_{1}+(s+10)J_{3}=0 & \qquad\quad\times s \\ \hline \end{array}$
|
||||
$=\quad2(s+10)(s+10)J_{1}+\cancel{ sJ_{3}(s+10) }+10sJ_{1}-\cancel{ s(s+10)J_{3} }=(s+10)60 \frac{1-e^{-s}}{s}$
|
||||
$=2(s^2+20s+100+5s)J_{1}=(s+10)60 \frac{1-e^{-s}}{s}$
|
||||
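(not from the lecture: a sympy check that the elimination above isolates $J_{1}$ correctly — note $s^2+20s+100+5s=s^2+25s+100$.)

```python
# solve the transformed system for J1 and compare with (s+10)R / (2(s^2+25s+100))
import sympy as sp

s = sp.symbols("s", positive=True)
J1, J3 = sp.symbols("J1 J3")
R = 60 * (1 - sp.exp(-s)) / s                      # right-hand side of the first equation

sol = sp.solve([sp.Eq(2 * (s + 10) * J1 + s * J3, R),
                sp.Eq(-10 * J1 + (s + 10) * J3, 0)],
               [J1, J3], dict=True)[0]

print(sp.simplify(sol[J1] - (s + 10) * R / (2 * (s**2 + 25 * s + 100))))   # 0
```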
|
||||
|
|
|
@ -25,7 +25,7 @@ I have written these notes for myself, I thought it would be cool to share them.
|
|||
[Power series (lec 22-25)](power-series-lec-22-25.html) (raw notes, not reviewed or revised yet.)
|
||||
[Separation of variables & Eigen value problems (lec 26-28)](separation-of-variables-eigen-value-problems-lec-26-28.html) (raw notes, not reviewed or revised yet.)
|
||||
[Fourier series (lec 28-29)](fourier-series-lec-28-29.html) (raw notes, not reviewed or revised yet.)
|
||||
[Partial differential equations (lec 30-33)](partial-differential-equations-lec-30-33.html) (raw notes, not reviewed or revised yet.)
|
||||
</br>
|
||||
[How to solve any DE, a flow chart](Solve-any-DE.png) (Last updated Oct 1st, needs revision. But it gives a nice overview.)
|
||||
[Big LT table (.png)](drawings/bigLTtable.png)
|
||||
|
|