Chapter 4
Series Solution of Linear Equations
The solution of a linear equation with analytic coefficients is analytic. That is a deep
result in the theory of linear differential equations. Although there is no guarantee that
the solution of a singular differential equation can be expanded as a power series, the
Frobenius method provides conditions under which such equations possess solutions in
series form. In the last part of this chapter, we introduce some important equations of
mathematical physics.
4.1 Introduction
4.1.1 Power series
The reader is referred to the appendix of this book for a detailed discussion of the topic.
Here, we assume that the reader is familiar with numeric series
\[ \sum_{n=0}^{\infty} a_n = a_0 + a_1 + a_2 + \cdots, \]
and the notion of convergence and divergence of numeric series. Remember also the ratio
test for the convergence of a numeric series. If
\[ \lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right| < 1, \]
then the series converges absolutely, that is, $\sum_{n=0}^{\infty} |a_n| = \bar{a} < \infty$. If
\[ \lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right| > 1, \]
the series diverges. The general form of a power series centered at a point $x_0$ is
\[ \sum_{n=0}^{\infty} c_n (x - x_0)^n = c_0 + c_1 (x - x_0) + c_2 (x - x_0)^2 + \cdots. \tag{4.1} \]
Definition 4.1. The series $\sum_{n=0}^{\infty} c_n (x - x_0)^n$ is convergent at a point $x = a$ if the numeric
series $\sum_{n=0}^{\infty} c_n (a - x_0)^n$ is convergent.
Example 4.1. Consider the following series
\[ \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + \cdots. \]
Here $c_n = 1$ and $x_0 = 0$. Note that the series is a geometric series for any $|x| < 1$, and thus
the series converges to $\frac{1}{1-x}$.
Let us use the ratio test to determine the values of $x$ for which the power series (4.1)
converges. According to the test, the series is absolutely convergent if
\[ \lim_{k\to\infty} \left| \frac{c_{k+1} (x - x_0)^{k+1}}{c_k (x - x_0)^k} \right| < 1, \]
which implies
\[ |x - x_0| \lim_{k\to\infty} \left| \frac{c_{k+1}}{c_k} \right| < 1, \]
or equivalently
\[ |x - x_0| < \lim_{k\to\infty} \left| \frac{c_k}{c_{k+1}} \right|. \]
The above inequality gives a range of $x$ for which the series converges absolutely, that is,
\[ \sum_{n=0}^{\infty} |c_n (x - x_0)^n| \]
is convergent. The value $L := \lim_{k\to\infty} \left| \frac{c_k}{c_{k+1}} \right|$ is called the radius of convergence of the series,
and thus the series converges for all $x$ in $(x_0 - L, x_0 + L)$. This interval is called the
domain of convergence of the series.
Example 4.2. Consider the series $1 + x + x^2 + \cdots$. The test implies
\[ L := \lim_{n\to\infty} \left| \frac{c_n}{c_{n+1}} \right| = 1, \]
and thus the series converges for all $x$ in the interval $(-1, 1)$. The series represents the
function $f(x) = \frac{1}{1-x}$ expanded at $x_0 = 0$. The radius of convergence of the series $\sum_{n=0}^{\infty} \frac{1}{n!} x^n$ is
\[ L = \lim_{n\to\infty} \left| \frac{1/n!}{1/(n+1)!} \right| = \lim_{n\to\infty} (n + 1) = \infty, \]
and thus the series converges for all $x \in (-\infty, \infty)$. The series represents the function $e^x$
expanded at $x_0 = 0$.
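The ratio-test computation can also be mimicked numerically. The sketch below evaluates $|c_n/c_{n+1}|$ at a large $n$ for the two series of Example 4.2; the helper `radius_estimate` is our own name, not from the text.

```python
from math import factorial

# Numeric sketch of the ratio test: estimate L = lim |c_n / c_{n+1}|
# by evaluating the ratio at a large n (radius_estimate is a hypothetical helper).
def radius_estimate(c, n):
    return abs(c(n) / c(n + 1))

# Geometric series, c_n = 1: the ratio is exactly 1, so L = 1.
print(radius_estimate(lambda n: 1.0, 50))

# Exponential series, c_n = 1/n!: the ratio is n + 1, which grows without
# bound, so L is infinite and the series converges everywhere.
print(radius_estimate(lambda n: 1.0 / factorial(n), 50))  # approximately 51
```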
4.1.2 Analytic functions
Definition 4.2. A real-valued function $f$ is called analytic at $x_0$ if there is an open
interval $I = (x_0 - L, x_0 + L)$, for some $L > 0$, and constants $c_k$ such that
\[ f(x) = c_0 + c_1 (x - x_0) + c_2 (x - x_0)^2 + \cdots, \tag{4.2} \]
for all $x \in I$.
The definition states that for any $a \in I$, the following convergence holds
\[ \lim_{n\to\infty} \left[ c_0 + c_1 (a - x_0) + \cdots + c_n (a - x_0)^n \right] = f(a). \]
Problem 4.1. If a function $f$ is analytic at a point $x_0$ with the domain of convergence $I$, show that
$f$ is analytic at all points $x \in I$.
Hint: Without loss of generality assume $x_0 = 0$ and for $a \in I$ write
\[ f(x) = \sum_{n=0}^{\infty} c_n (x - a + a)^n. \]
Use the binomial formula and write
\[ f(x) = \sum_{k=0}^{\infty} d_k (x - a)^k, \]
where
\[ d_k = \sum_{n=k}^{\infty} c_n \binom{n}{k} a^{n-k}. \]
Show $|d_k| < \infty$ for all $k$.
Theorem 4.1. Assume that $f$ is analytic at $x_0$ and
\[ f(x) = \sum_{n=0}^{\infty} c_n (x - x_0)^n. \]
Then $f$ and $f'$ are continuous at $x_0$, and furthermore
\[ f'(x) = \sum_{n=1}^{\infty} n c_n (x - x_0)^{n-1}. \]
Proof. Without loss of generality, let us assume $x_0 = 0$. Consider the series
\[ f(x) = \sum_{n=0}^{\infty} c_n x^n, \tag{4.3} \]
and assume it converges in $I$. Choose $\delta x$ sufficiently small such that $x + \delta x \in I$. Therefore,
the series
\[ \sum_{n=0}^{\infty} c_n (x + \delta x)^n \tag{4.4} \]
converges to $f(x + \delta x)$ and we have
\[ f(x + \delta x) - f(x) = \sum_{n=1}^{\infty} c_n \left[ (x + \delta x)^n - x^n \right]. \tag{4.5} \]
Use the mean value theorem to write
\[ (x + \delta x)^n - x^n = n (x + \xi_n)^{n-1} \delta x, \tag{4.6} \]
for some $\xi_n$ in the segment between $x$ and $x + \delta x$. Thus
\[ |f(x + \delta x) - f(x)| \leq |\delta x| \sum_{n=1}^{\infty} n |c_n| \, |x + \delta x|^{n-1}. \]
Note that the series on the right-hand side is convergent according to the ratio test.
Therefore, we have
\[ \lim_{\delta x \to 0} f(x + \delta x) - f(x) = 0, \]
and this proves the continuity of $f$. Now, define the function $g$ as
\[ g(x) = \sum_{n=1}^{\infty} n c_n x^{n-1}. \tag{4.7} \]
Clearly $g$ is well defined because the series on the right-hand side is convergent. We have
\[ \left| \frac{f(x + \delta x) - f(x)}{\delta x} - g(x) \right| \leq \sum_{n=2}^{\infty} n |c_n| \, \left| (x + \xi_n)^{n-1} - x^{n-1} \right|. \tag{4.8} \]
Use the mean value theorem again and write
\[ \left| (x + \xi_n)^{n-1} - x^{n-1} \right| \leq (n-1) |x + \delta x|^{n-2} |\delta x|. \tag{4.9} \]
Therefore
\[ \left| \frac{f(x + \delta x) - f(x)}{\delta x} - g(x) \right| \leq |\delta x| \sum_{n=2}^{\infty} (n-1) n |c_n| \, |x + \delta x|^{n-2}. \]
Since the series on the right-hand side converges, we have
\[ \frac{f(x + \delta x) - f(x)}{\delta x} - g(x) \xrightarrow{\;\delta x \to 0\;} 0. \tag{4.10} \]
Therefore, $g(x)$ is the derivative of $f(x)$. The continuity of $g$ is proved by a similar argument. $\square$
Corollary 4.1. If a function $f$ is analytic, then it is differentiable of any order and all
$f^{(n)}$ are analytic with the same domain of convergence.
Example 4.3. The function $f(x) = x |x|$ is not analytic at $x_0 = 0$. In fact, $f''(0)$ does not
exist and then $c_2$ cannot be determined. Note that every analytic function is continuously
differentiable of any order. These functions are generally called $C^\infty$ functions. If $f$
is analytic, it is $C^\infty$; however, not every $C^\infty$ function is analytic. For example, the
function $f(x) = e^{-1/x^2}$ is $C^\infty$ in any open interval around $x_0 = 0$ but it is not analytic at this
point. In fact, if we write
\[ e^{-1/x^2} = c_0 + c_1 x + c_2 x^2 + \cdots, \tag{4.11} \]
then $c_0 = e^{-1/x^2} \big|_{x=0} = 0$, and
\[ c_1 = \frac{d}{dx} e^{-1/x^2} \Big|_{x=0} = 0, \]
and similarly we obtain all $c_n = 0$. Thus, $e^{-1/x^2}$ is not analytic at $x_0 = 0$ even though it is
$C^\infty$ at $x_0 = 0$.
We use the following proposition in our subsequent discussions. The proof is straightforward
and the reader is asked to prove it.
Proposition 4.1. If $f$ and $g$ are analytic functions in an open interval $I$, then the functions
$f \pm g$ and $f \cdot g$ are analytic in $I$ as well.
Theorem 4.2. Assume that $f(x)$ is analytic at $x_0$. Then $c_n$, the coefficients of the series
of $f$ at $x_0$, are given by
\[ c_n = \frac{1}{n!} f^{(n)}(x_0), \tag{4.12} \]
where $f^{(n)}$ stands for the $n$th derivative of $f$. Moreover, $f'(x)$ has the following series
representation
\[ f'(x) = \sum_{n=1}^{\infty} n c_n (x - x_0)^{n-1}. \]
Therefore, an analytic function $f$ at $x_0$ can be represented by the following series
\[ f(x) = \sum_{n=0}^{\infty} \frac{1}{n!} f^{(n)}(x_0) (x - x_0)^n, \tag{4.13} \]
for all $x$ in an open interval $I$ around $x_0$. The above representation is called the Taylor
series of $f$ at $x_0$.
Example 4.4. The function $f(x) = e^x$ is analytic everywhere. At $x_0 = 0$, the function has the
familiar expansion
\[ e^x = 1 + x + \frac{1}{2!} x^2 + \frac{1}{3!} x^3 + \cdots. \]
Note that $f^{(n)}(0) = 1$ and thus $c_n = \frac{1}{n!}$. The radius of convergence of the expansion is
\[ L = \lim_{n\to\infty} \left| \frac{1/n!}{1/(n+1)!} \right| = \infty, \]
and thus the series converges for $x \in (-\infty, \infty)$. Similarly, the function $\sin(x)$ is analytic
everywhere. It is simply seen that
\[ \frac{d^n}{dx^n} \sin(x) \Big|_{x=0} = \begin{cases} (-1)^{(n-1)/2} & n \text{ odd} \\ 0 & n \text{ even} \end{cases}, \]
and thus
\[ \sin(x) = x - \frac{1}{3!} x^3 + \frac{1}{5!} x^5 - \cdots. \]
The function $\cos(x)$ has the following series expansion
\[ \cos(x) = 1 - \frac{1}{2!} x^2 + \frac{1}{4!} x^4 - \cdots. \]
The radii of convergence of the sine and cosine series are both $\infty$.
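The derivative pattern above pins down the sine coefficients $c_n = f^{(n)}(0)/n!$ directly; a short sketch (the helper name `c` is ours, not from the text):

```python
from math import factorial

# Sine Taylor coefficients from the derivative pattern of Example 4.4:
# c_n = (-1)^((n-1)/2) / n! for odd n, and 0 for even n.
def c(n):
    if n % 2 == 0:
        return 0.0
    return (-1) ** ((n - 1) // 2) / factorial(n)

# Reproduces sin(x) = x - x^3/3! + x^5/5! - ...
print([c(n) for n in range(6)])
```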
4.1.3 Partial sums and convergence
Obviously, we are unable to add up infinitely many terms of a series directly and calculate its
value. Therefore, we consider the partial sums of an infinite series
\[ S_n(x) = \sum_{k=0}^{n} c_k (x - x_0)^k. \]
In this way we obtain a sequence of functions $(S_n(x))$, $n = 0, 1, \ldots$, whose convergence
we can study. In addition to the pointwise convergence of the sequence, that is,
\[ \lim_{n\to\infty} S_n(a) = f(a), \]
for any $a$ in the domain of convergence of $f$, we can define the notion of uniform convergence
of the sequence. Let us first see an example.
Example 4.5. Consider the function $f(x) = \frac{1}{1 - x^2}$ in $(-1, 1)$ with the series expansion
\[ \frac{1}{1 - x^2} = 1 + x^2 + x^4 + x^6 + \cdots. \]
Now, consider the following partial sum
\[ S_n(x) = 1 + x^2 + x^4 + \cdots + x^{2n} = \frac{1 - x^{2n+2}}{1 - x^2}. \]
Note that if $|x| < 1$ then $x^{2n+2} \to 0$ as $n \to \infty$. Fig. 4.1 shows the graph of the function $f(x)$ and
$S_n(x)$ for a few values of $n$. Also note that $f(x)$ becomes unbounded at $x = \pm 1$, and no partial
sum (of any number of terms) is able to catch up with the function near these two points. In
other words, for any $n > 0$, there is some point $x_n \in (-1, 1)$ such that
\[ |f(x_n) - S_n(x_n)| > 1. \]
On the other hand, let us restrict the domain to $\left[ -1 + \frac{1}{m}, 1 - \frac{1}{m} \right]$ for any $m > 1$. Then for
any $\varepsilon > 0$, there is $n$ such that
\[ \max_{x \in \left[ -1 + \frac{1}{m},\, 1 - \frac{1}{m} \right]} |f(x) - S_n(x)| < \varepsilon. \]
For example, for $[-0.99, 0.99]$ and $\varepsilon = 0.1$, we can choose $n$ as large as $n = 310$ to make sure
the above inequality holds for all $x$ in the closed interval.
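The value $n = 310$ can be checked directly: by the closed form of $S_n$, the error $|f(x) - S_n(x)| = \frac{x^{2n+2}}{1 - x^2}$ is largest at the endpoints of the interval. A small sketch (the helper name is ours):

```python
# Worst-case error of S_n for f(x) = 1/(1 - x^2) on [-0.99, 0.99]:
# |f(x) - S_n(x)| = x^(2n+2)/(1 - x^2), which is maximal at x = 0.99.
def endpoint_error(n, x=0.99):
    return x ** (2 * n + 2) / (1 - x ** 2)

print(endpoint_error(310))   # roughly 0.097, just below the tolerance 0.1
print(endpoint_error(300))   # roughly 0.118, not yet small enough
```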
Figure 4.1. Graphs of $f(x) = \frac{1}{1 - x^2}$ and the partial sums $S_2(x)$, $S_3(x)$, $S_4(x)$ on $(-1, 1)$.
Example 4.6. The function $\sin(x)$ is analytic everywhere and its power series representation
centered at $x_0 = 0$ is
\[ \sin(x) = x - \frac{1}{3!} x^3 + \cdots + \frac{(-1)^n}{(2n+1)!} x^{2n+1} + \cdots. \]
Fig. 4.2 shows $S_2(x), S_3(x), S_4(x)$ and $S_5(x)$ together with the graph of the original function $\sin(x)$
in the range $[0, 2\pi]$ for the partial sum
\[ S_n(x) = \sum_{k=0}^{n} \frac{(-1)^k}{(2k+1)!} x^{2k+1}. \tag{4.14} \]
Again, we can choose $n$ sufficiently large such that $S_n(x)$ is very close to $f(x)$ in the given
closed interval; however, there is no $n$ for which $S_n$ is close enough to $f(x)$ in the whole
domain $(-\infty, \infty)$.
Figure 4.2. Graphs of $S_n(x)$ for $n = 2, 3, 4, 5$ and $\sin x$ on $[0, 2\pi]$.
Theorem 4.3. Assume that a function $f$ is analytic with the domain of convergence $I$,
and $J \subset I$ is a closed subinterval. Then for any arbitrary $\varepsilon > 0$, there is $N(\varepsilon) > 0$ such that
\[ \max_{x \in J} |f(x) - S_N(x)| < \varepsilon, \]
where $S_N$ is the partial sum of the expansion of $f(x)$ at $x_0$ up to order $N$.
Proof. For any $x \in [a, b]$, the series $S_n(x)$ converges to $f(x)$ and thus there is $N = N(x, \varepsilon)$
such that
\[ |f(x) - S_N(x)| < \varepsilon. \]
Let $(x_n) \subset [a, b]$ be a sequence that maximizes $N$, that is, $N(x_{n+1}, \varepsilon) \geq N(x_n, \varepsilon)$. Since $[a, b]$
is closed, $x_n$ converges to some $\bar{x} \in [a, b]$. On the other hand, $S_n(\bar{x})$ converges to $f(\bar{x})$ and
thus there is $\bar{N}$ such that for all $n > \bar{N}$, we have
\[ |f(\bar{x}) - S_n(\bar{x})| < \varepsilon, \]
and this completes the proof. $\square$
Problems
Problem 4.2. Show that in any closed interval $[a, b] \subset (-1, 1)$, and for any $\varepsilon > 0$, there is $N_0 = N_0(\varepsilon)$
such that
\[ \max_{x \in [a, b]} |(1 - x)^{-1} - S_n(x)| < \varepsilon, \quad \forall n \geq N_0, \]
where $S_n(x) = 1 + \cdots + x^n$.
Problem 4.3. Consider the integral
\[ I(x) = \int_0^x \frac{\sin(t)}{t} \, dt. \tag{4.15} \]
It is known that $I(x)$ cannot be written in terms of elementary functions.
i. By substituting the series of $\sin(t)$ into the integral, find a power series for $I(x)$.
ii. Use the alternating series estimate to find a partial sum $I_n(x)$ such that
\[ |I(1) - I_n(1)| < 10^{-3}. \]
Repeat this for $x = 3$, that is, $|I(3) - I_n(3)| < 10^{-3}$.
Problem 4.4. Use the power series of $e^{-x}$ to approximate the following integral with accuracy $10^{-4}$:
\[ I = \int_1^2 \frac{e^{-x}}{x} \, dx. \]
Problem 4.5. Sketch the graph of each function and determine those that are analytic at $x_0 = 0$. For
each analytic function, obtain the radius of convergence of the associated series.
i. $f(x) = e^{-|x|}$
ii. $f(x) = (4 - x^2)^{-1}$
iii. $f(x) = x^2 |x|$
iv. $f(x) = \sin(|1 + x|)$
Problem 4.6. Find the radius of convergence of each of the following series
i. $\displaystyle \sum_{n=1}^{\infty} \frac{(-1)^n}{n \, 4^n} (x - 1)^n$
ii. $\displaystyle \sum_{n=1}^{\infty} \frac{3^n}{n + 1} (2x + 1)^n$
iii. $\displaystyle \sum_{n=1}^{\infty} \frac{2^n}{n^n} (2x + 1)^n$
Problem 4.7. Show that the series
\[ S(x) = x^2 - \frac{2^3}{4!} x^4 + \frac{2^5}{6!} x^6 - \cdots \]
converges to the function $f(x) = \sin^2 x$. Find a power series expansion for $f(x) = \cos^2(x)$.
Problem 4.8. Here we give another proof of formula (4.12).
a) Write $f$ as
\[ f(x) = \sum_{n=0}^{\infty} c_n x^n. \]
With the aid of the binomial formula
\[ (x + x_0)^n = \sum_{l=0}^{n} \binom{n}{l} x^l x_0^{n-l}, \]
derive
\[ f(x + x_0) = \sum_{n=0}^{\infty} c_n \sum_{l=0}^{n} \binom{n}{l} x_0^{n-l} x^l. \]
b) For $l = 0$ calculate the series and show it is $f(x_0)$.
c) For $l = 1$ calculate the series and show it is $f'(x_0)$.
d) For $l = 2$ calculate the series and show it is $\frac{1}{2} f''(x_0)$.
Problem 4.9. If $f, g$ are analytic functions, prove that $f \pm g$ and $f \cdot g$ are analytic. Hint: you may need
the formula
\[ \left( \sum_{n=0}^{\infty} f_n x^n \right) \left( \sum_{n=0}^{\infty} g_n x^n \right) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} f_k g_{n-k} \right) x^n. \]
Problem 4.10. Plot the function $f(x) = e^{-1/|x|}$ for $-1 \leq x \leq 1$. Is it possible to find a series
representation of $f$ around $x_0 = 0$?
4.2 Linear differential equations: Analytic solutions
4.2.1 Equations with analytic coefficients
Consider the initial value problem
\[ y'' + p(x) y' + q(x) y = 0, \quad y(x_0) = y_0, \; y'(x_0) = y_1. \tag{4.16} \]
As we know, there is no general method to solve the problem in closed form, i.e., in terms of
exponential, trigonometric, or polynomial functions, but this does not mean that the solution
cannot be expressed in terms of a series.
Theorem 4.4. Assume that the functions $p, q$ are analytic at $x_0$ with radius of convergence $L$.
Then problem (4.16) has an analytic solution at $x_0$ with radius of convergence at least $L$.
The theorem states that the problem has a unique solution $y(x)$, and the solution can
be expressed in terms of a power series as
\[ y = \sum_{n=0}^{\infty} y_n (x - x_0)^n. \tag{4.17} \]
Here the coefficients $y_n$ are unknown, and if we are able to determine them somehow, the
true solution $y(x)$ can at least be approximated by a partial sum. The complete proof of
the theorem needs some technical tools that are beyond the scope of this book. However,
the following problem shows how the coefficients can be calculated.
Problem 4.11. Assume that $p, q$ are analytic functions at $x_0$. Consider the problem
\[ y'' + p(x) y' + q(x) y = 0, \quad y(x_0) = y_0, \; y'(x_0) = y_1. \]
We show that the coefficients of the series solution are derived by the formula
\[ y_{n+2} = \frac{-1}{(n+1)(n+2)} \sum_{k=0}^{n} \left[ (k+1) p_{n-k} y_{k+1} + q_{n-k} y_k \right], \quad n \geq 0, \tag{4.18} \]
where $p_n, q_n$ are the coefficients of the series of $p(x)$ and $q(x)$ respectively, i.e.,
\[ \begin{cases} p(x) = p_0 + p_1 (x - x_0) + p_2 (x - x_0)^2 + \cdots \\ q(x) = q_0 + q_1 (x - x_0) + q_2 (x - x_0)^2 + \cdots \end{cases}. \tag{4.19} \]
i. From the relation
\[ y^{(n+2)} = -(p(x) y')^{(n)} - (q(x) y)^{(n)}, \]
conclude
\[ y^{(n+2)}(x) = -\sum_{k=0}^{n} \binom{n}{k} \left[ p^{(n-k)}(x) y^{(k+1)}(x) + q^{(n-k)}(x) y^{(k)}(x) \right]. \]
ii. Now put $x = x_0$ in the above equation and conclude
\[ (n+2)! \, y_{n+2} = -\sum_{k=0}^{n} \binom{n}{k} \left[ (n-k)! \, (k+1)! \, p_{n-k} y_{k+1} + (n-k)! \, k! \, q_{n-k} y_k \right]. \]
iii. Simplify the above identity and conclude (4.18).
Example 4.7. Consider the following initial value problem
\[ y'' + (x + 1) y' + x y = 0, \quad y(0) = 1, \; y'(0) = -1. \tag{4.20} \]
It is simply seen that $y(x) = e^{-x}$ is the unique solution of the problem. Let us derive that
solution by the series method. Since $x_0 = 0$, we write the solution as
\[ y(x) = \sum_{n=0}^{\infty} y_n x^n. \]
Since $p_0 = 1$, $p_1 = 1$, and $p_n = 0$ for $n \geq 2$, and $q_1 = 1$, $q_0 = q_n = 0$ for $n \geq 2$, the summation
in formula (4.18) is nonzero only for $k = n-1, n$, and thus
\[ y_{n+2} = \frac{-1}{(n+1)(n+2)} \left[ (n+1) p_0 y_{n+1} + n p_1 y_n + q_1 y_{n-1} \right], \]
and by substituting $p_0, p_1, q_1$, we obtain
\[ y_{n+2} = -\frac{y_{n+1}}{n+2} - \frac{n y_n}{(n+1)(n+2)} - \frac{y_{n-1}}{(n+1)(n+2)}. \]
The above formula defines a recursion for the coefficients of the series solution of
the given differential equation. Note that $y(0) = y_0 = 1$ and $y'(0) = y_1 = -1$, and thus
\[ y_2 = \frac{-y_1}{2} = \frac{1}{2}, \quad y_3 = \frac{-1}{6}, \quad y_4 = \frac{1}{24}, \quad \ldots. \]
Therefore, the series solution has the form
\[ y(x) = 1 - x + \frac{1}{2} x^2 - \frac{1}{6} x^3 + \frac{1}{24} x^4 - \cdots. \]
The above series is the representation of $y(x) = e^{-x}$ at $x_0 = 0$.
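The recursion can be checked in a few lines. The sketch below (our own helper, not from the text) generates the coefficients and compares them with the Taylor coefficients $(-1)^n/n!$ of $e^{-x}$:

```python
from math import factorial

# Coefficients y_n of the series solution of y'' + (x+1)y' + xy = 0,
# with y(0) = 1, y'(0) = -1, generated by the recursion of Example 4.7.
def coefficients(N):
    y = [1.0, -1.0]                          # y_0 = 1, y_1 = -1
    for n in range(N - 1):
        prev = y[n - 1] if n >= 1 else 0.0   # the y_{n-1} term is absent for n = 0
        y.append(-y[n + 1] / (n + 2)
                 - n * y[n] / ((n + 1) * (n + 2))
                 - prev / ((n + 1) * (n + 2)))
    return y

# The coefficients agree with the Taylor coefficients of e^{-x}: (-1)^n / n!.
for n, yn in enumerate(coefficients(10)):
    assert abs(yn - (-1) ** n / factorial(n)) < 1e-12
```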
Example 4.8. (Cont.) Let us derive the recursion of the previous example by
direct calculation. Write the solution as the series
\[ y(x) = \sum_{n=0}^{\infty} y_n x^n. \]
Since $y(x)$ is analytic, we can differentiate the series term by term (Theorem 4.1) and write
\[ y'(x) = \sum_{n=1}^{\infty} n y_n x^{n-1}, \quad \text{and} \quad y''(x) = \sum_{n=2}^{\infty} n(n-1) y_n x^{n-2}. \]
Note that the series of $y'$ starts at $n = 1$ and that of $y''$ at $n = 2$. Substituting these formulas
into the equation, we reach
\[ y'' + (x + 1) y' + x y = \sum_{n=2}^{\infty} n(n-1) y_n x^{n-2} + (x + 1) \sum_{n=1}^{\infty} n y_n x^{n-1} + x \sum_{n=0}^{\infty} y_n x^n = 0. \]
A simple algebraic simplification gives
\[ \sum_{n=2}^{\infty} n(n-1) y_n x^{n-2} + \sum_{n=1}^{\infty} n y_n x^n + \sum_{n=1}^{\infty} n y_n x^{n-1} + \sum_{n=0}^{\infty} y_n x^{n+1} = 0. \]
Here we have four summations. In order to merge them, we first make the exponents
of $x$ in all summations equal. Taking $m = n - 2$ in the first summation, we reach
\[ \sum_{n=2}^{\infty} n(n-1) y_n x^{n-2} = \sum_{m=0}^{\infty} (m+2)(m+1) y_{m+2} x^m. \]
In the second summation, we do not change the exponent and just take $m = n$. In the
third summation, we take $m = n - 1$ and then
\[ \sum_{n=1}^{\infty} n y_n x^{n-1} = \sum_{m=0}^{\infty} (m+1) y_{m+1} x^m. \]
In the fourth summation, we take $m = n + 1$ and reach
\[ \sum_{m=1}^{\infty} y_{m-1} x^m. \]
Therefore, we obtain
\[ \sum_{m=0}^{\infty} (m+2)(m+1) y_{m+2} x^m + \sum_{m=1}^{\infty} m y_m x^m + \sum_{m=0}^{\infty} (m+1) y_{m+1} x^m + \sum_{m=1}^{\infty} y_{m-1} x^m = 0. \]
Now, the first and third summations start from $m = 0$ while the second and fourth start
from $m = 1$. For this reason, we pull out the $m = 0$ term from the first and third summations
and write
\[ 2 y_2 + y_1 + \sum_{m=1}^{\infty} \left[ (m+2)(m+1) y_{m+2} + m y_m + (m+1) y_{m+1} + y_{m-1} \right] x^m = 0. \]
Since the left-hand side is identically zero for all $x$, we obtain the relations
\[ 2 y_2 + y_1 = 0, \quad \text{and} \quad (m+2)(m+1) y_{m+2} + m y_m + (m+1) y_{m+1} + y_{m-1} = 0. \]
From the first identity we obtain $y_2 = \frac{1}{2}$. From the second identity we obtain the following
recursive formula for $y_m$:
\[ y_{m+2} = -\frac{y_{m+1}}{m+2} - \frac{m y_m}{(m+2)(m+1)} - \frac{y_{m-1}}{(m+2)(m+1)}. \]
Example 4.9. (Cont.) Instead of finding a recursive formula for the equation, we can
calculate a few of the coefficients directly by the following method. Write the solution
as
\[ y(x) = y_0 + y_1 x + y_2 x^2 + \cdots, \]
and substitute $y$ into the equation. We have
\[ y'(x) = y_1 + 2 y_2 x + 3 y_3 x^2 + \cdots, \quad \text{and} \quad y''(x) = 2 y_2 + 6 y_3 x + 12 y_4 x^2 + \cdots, \tag{4.21} \]
and thus
\[ (2 y_2 + 6 y_3 x + 12 y_4 x^2 + \cdots) + (x + 1)(y_1 + 2 y_2 x + 3 y_3 x^2 + \cdots) + x (y_0 + y_1 x + y_2 x^2 + \cdots) = 0. \]
Since the series on the left-hand side is identically zero for all $x$, all coefficients of $x^n$ for
arbitrary $n \geq 0$ must be zero. We have
0-terms. (terms with $x$ to the power zero)
\[ 2 y_2 + y_1 = 0, \]
and thus $y_2 = -y_1/2 = \frac{1}{2}$.
1-terms. (coefficient of $x$)
\[ 6 y_3 + y_1 + 2 y_2 + y_0 = 0, \]
and thus $y_3 = \frac{-1}{6}$.
2-terms. (coefficients of $x^2$)
\[ 12 y_4 + 2 y_2 + 3 y_3 + y_1 = 0, \]
and thus $y_4 = \frac{1}{24}$.
We can continue the calculation to find $y_5, y_6, \ldots$. As observed, the coefficients are as
before.
Example 4.10. Find five terms of the series solution of the following problem
\[ y'' + e^x y' + \sin(x) y = 1 - 2x, \quad y(0) = 1, \; y'(0) = -1. \]
We substitute the series of $y, y', y''$ along with the series of $e^x$, $\sin(x)$ into the equation.
With
\[ \begin{cases} y(x) = y_0 + y_1 x + y_2 x^2 + \cdots \\ y'(x) = y_1 + 2 y_2 x + 3 y_3 x^2 + \cdots \\ y''(x) = 2 y_2 + 6 y_3 x + 12 y_4 x^2 + \cdots \end{cases} \]
and
\[ \begin{cases} e^x = 1 + x + \frac{1}{2} x^2 + \cdots \\ \sin(x) = x - \frac{1}{6} x^3 + \cdots \end{cases}, \]
we reach
\[ (2 y_2 + 6 y_3 x + 12 y_4 x^2 + \cdots) + \left( 1 + x + \tfrac{1}{2} x^2 + \cdots \right)(y_1 + 2 y_2 x + 3 y_3 x^2 + \cdots) + \left( x - \tfrac{1}{6} x^3 + \cdots \right)(y_0 + y_1 x + y_2 x^2 + \cdots) = 1 - 2x. \]
0-terms. (terms with $x$ to the power zero)
\[ 2 y_2 + y_1 = 1 \;\Rightarrow\; y_2 = 1. \]
1-terms. (coefficient of $x$)
\[ 6 y_3 + y_1 + 2 y_2 + y_0 = -2 \;\Rightarrow\; y_3 = -\frac{2}{3}. \]
2-terms. (coefficients of $x^2$)
\[ 12 y_4 + 3 y_3 + 2 y_2 + \frac{3}{2} y_1 = 0 \;\Rightarrow\; y_4 = \frac{1}{8}. \]
Therefore, the series solution is
\[ y(x) = 1 - x + x^2 - \frac{2}{3} x^3 + \frac{1}{8} x^4 - \cdots. \]
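As a sanity check, the truncated series can be compared with a direct numeric integration of the IVP. The sketch below uses a plain fixed-step RK4 loop; the step size, helper names, and the tolerance are our own choices, not from the text:

```python
from math import exp, sin

# Right-hand side of y'' = 1 - 2x - e^x y' - sin(x) y  (Example 4.10).
def f(x, y, yp):
    return 1 - 2 * x - exp(x) * yp - sin(x) * y

# Fixed-step RK4 for the second-order IVP, starting from y(0)=1, y'(0)=-1.
def rk4(x_end, h=1e-4):
    x, y, yp = 0.0, 1.0, -1.0
    while x < x_end - 1e-12:
        k1y, k1p = yp, f(x, y, yp)
        k2y, k2p = yp + h/2*k1p, f(x + h/2, y + h/2*k1y, yp + h/2*k1p)
        k3y, k3p = yp + h/2*k2p, f(x + h/2, y + h/2*k2y, yp + h/2*k2p)
        k4y, k4p = yp + h*k3p, f(x + h, y + h*k3y, yp + h*k3p)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        yp += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        x += h
    return y

# The five-term truncated series derived above.
series = lambda x: 1 - x + x**2 - (2/3)*x**3 + (1/8)*x**4

# Near x = 0 the truncated series tracks the true solution closely.
assert abs(rk4(0.1) - series(0.1)) < 1e-5
```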
Example 4.11. Find a recursive formula for the coefficients of the series solution of the
following problem and verify that the series is convergent:
\[ (1 - x^2) y'' + 2 y = 0, \quad y(0) = y_0, \; y'(0) = y_1. \]
We first note that the coefficient of $y''$, that is $(1 - x^2)$, vanishes at $x = \pm 1$. In other words,
here $q(x) = \frac{2}{1 - x^2}$ is analytic at $x_0 = 0$ with radius of convergence $L = 1$. According to
Theorem 4.4, the minimum radius of convergence of the series solution is $L$. We see
how this domain of convergence shows itself in the series solution. By taking $y$ as
\[ y(x) = \sum_{n=0}^{\infty} y_n x^n, \]
we reach
\[ y''(x) = \sum_{n=2}^{\infty} n(n-1) y_n x^{n-2}. \]
Substituting into the equation, we reach
\[ \sum_{n=2}^{\infty} n(n-1) y_n x^{n-2} - \sum_{n=2}^{\infty} n(n-1) y_n x^n + \sum_{n=0}^{\infty} 2 y_n x^n = 0. \]
By taking $m = n - 2$, we reach
\[ \sum_{n=2}^{\infty} n(n-1) y_n x^{n-2} = \sum_{m=0}^{\infty} (m+2)(m+1) y_{m+2} x^m. \]
Notice that without loss of generality we can assume that the second summation
starts from $n = 0$, that is,
\[ \sum_{n=2}^{\infty} n(n-1) y_n x^n = \sum_{n=0}^{\infty} n(n-1) y_n x^n. \]
Putting it all together, we obtain
\[ \sum_{m=0}^{\infty} \left[ (m+2)(m+1) y_{m+2} - m(m-1) y_m + 2 y_m \right] x^m = 0, \]
and therefore
\[ y_{m+2} = \frac{m - 2}{m + 2} y_m, \quad m \geq 0. \]
By the above recursive formula, we calculate
\[ y_2 = -y_0, \quad y_3 = \frac{-1}{3} y_1, \quad y_4 = 0, \quad y_5 = \frac{1}{5} y_3, \quad y_6 = 0, \quad y_7 = \frac{3}{7} y_5, \quad y_8 = 0, \quad \ldots. \]
Therefore, the truncated solution is
\[ y(x) \approx y_0 (1 - x^2) + y_1 \left( x - \frac{1}{3} x^3 - \frac{1}{15} x^5 - \frac{3}{105} x^7 - \cdots \right). \]
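The recursion is simple enough to run in exact arithmetic. A sketch using Python fractions (helper names are ours, not from the text):

```python
from fractions import Fraction

# Recursion of Example 4.11: y_{m+2} = (m-2)/(m+2) * y_m, exact arithmetic.
def coeffs(y0, y1, N):
    y = [Fraction(y0), Fraction(y1)]
    for m in range(N - 1):
        y.append(Fraction(m - 2, m + 2) * y[m])
    return y

# The y_1-branch reproduces x - x^3/3 - x^5/15 - 3x^7/105 - ...
y = coeffs(0, 1, 8)
assert y[3] == Fraction(-1, 3)
assert y[5] == Fraction(-1, 15)
assert y[7] == Fraction(-3, 105)

# The y_0-branch terminates: the polynomial solution 1 - x^2.
assert coeffs(1, 0, 8)[2] == -1 and coeffs(1, 0, 8)[4] == 0
```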
Let us calculate the radius of convergence of the series with the aid of the recursive formula.
Remember that the radius of convergence is determined by the formula
\[ L := \lim_{n\to\infty} \left| \frac{y_n}{y_{n+1}} \right|. \]
The recursive formula states
\[ \frac{n+2}{n-2} = \frac{y_n}{y_{n+2}} = \frac{y_n}{y_{n+1}} \, \frac{y_{n+1}}{y_{n+2}}, \]
and thus
\[ L^2 = \lim_{n\to\infty} \left| \frac{y_n}{y_{n+1}} \, \frac{y_{n+1}}{y_{n+2}} \right| = \lim_{n\to\infty} \frac{n+2}{n-2} = 1, \]
and thus the series converges in the interval $x \in (-1, 1)$ with radius of convergence $L = 1$,
the same as the radius of convergence of $q(x) = \frac{2}{1 - x^2}$. However, if $y_1 = 0$, then the series
solution reduces to the polynomial solution $y(x) = y_0 (1 - x^2)$, and thus has radius of convergence
$L = \infty$. This justifies the claim that the radius of convergence of the solution is
at least equal to the radius of convergence of $p$ and $q$.
Let us summarize what we discussed in this section. We saw that if $p, q$ are analytic
functions, then problem (4.16) has a unique analytic solution whose radius of convergence
is at least equal to the radius of convergence of $p, q$. In addition, if $y'(x_0) = 0$,
the coefficients $y_n$ depend only on $y_0$, and thus
\[ y(x) = y_0 (1 + c_1 x + c_2 x^2 + \cdots). \]
Similarly, if $y(x_0) = 0$, all coefficients depend on $y_1$, and therefore
\[ y(x) = y_1 (x + d_2 x^2 + d_3 x^3 + \cdots). \]
4.2.2 Linear equations with non-analytic coefficients
We solve here three equations with non-analytic coefficients to illustrate the difficulty that
arises in this case.
Example 4.12. Consider the problem
\[ y'' + |x| y = 0, \quad y(0) = y_0, \; y'(0) = y_1. \tag{4.22} \]
Existence and uniqueness are guaranteed for the problem since the coefficients are
continuous. Note that the function $|x|$ is not analytic at $x_0 = 0$ and for this reason, the
analyticity of the solution may fail. Let us first find a series solution to the
problem for $x > 0$. The equation in this domain reads
\[ y'' + x y = 0, \quad y(0) = y_0, \; y'(0) = y_1. \]
The recursive formula for the coefficients of the series solution is
\[ y_{n+2} = -\frac{1}{(n+2)(n+1)} y_{n-1}, \quad n \geq 1, \]
and $y_2 = 0$. Let us denote this solution by $y_+(x)$:
\[ y_+(x) = y_0 + y_1 x - \frac{y_0}{6} x^3 - \frac{y_1}{12} x^4 + \frac{y_0}{180} x^6 + \frac{y_1}{504} x^7 - \cdots. \]
For $x < 0$, the problem reads
\[ y'' - x y = 0, \quad y(0) = y_0, \; y'(0) = y_1, \]
and the recursive formula changes to
\[ y_{n+2} = \frac{1}{(n+2)(n+1)} y_{n-1}, \quad n \geq 1, \]
and again $y_2 = 0$. Let us denote this solution by $y_-(x)$:
\[ y_-(x) = y_0 + y_1 x + \frac{y_0}{6} x^3 + \frac{y_1}{12} x^4 + \frac{y_0}{180} x^6 + \frac{y_1}{504} x^7 + \cdots. \]
The two solutions $y_+$ and $y_-$ connect at $x_0 = 0$ smoothly to order 2, i.e., $y_+^{(k)}(0) = y_-^{(k)}(0)$ for
$k = 0, 1, 2$ but not for $k \geq 3$. Hence, the derived solution is not analytic at $x_0$. Figure 4.3
shows the solution for $y_0 = 1$ and $y_1 = -1$; the dashed line is the tangent line at $x_0$.
Figure 4.3. The solution of (4.22) for $y_0 = 1$, $y_1 = -1$, with the tangent line at $x_0$ dashed.
Example 4.13. Consider the following problem:
\[ x^2 y'' + (x - x^2) y' - y = 0, \quad y(0) = y_0, \; y'(0) = y_1. \tag{4.23} \]
Apparently the problem does not satisfy the conditions of the existence theorem and
therefore, there is no guarantee that the problem possesses a solution (analytic or non-analytic).
Let us try to find a series solution and see the result. By substituting $y = \sum_{n=0}^{\infty} y_n x^n$
into the equation, we reach
\[ \sum_{n=2}^{\infty} n(n-1) y_n x^n + \sum_{n=0}^{\infty} n y_n x^n - \sum_{n=0}^{\infty} n y_n x^{n+1} - \sum_{n=0}^{\infty} y_n x^n = 0. \tag{4.24} \]
Merging the summations leads to the equation
\[ -y_0 + \sum_{n=2}^{\infty} \left[ (n^2 - 1) y_n - (n-1) y_{n-1} \right] x^n = 0, \tag{4.25} \]
and therefore $y_0 = 0$ and
\[ y_n = \frac{1}{n+1} y_{n-1}, \quad n \geq 2. \tag{4.26} \]
Note that if $y_0 \neq 0$, there is no series solution to the problem. To justify that the series
converges to a true solution, we have to show that the obtained series has a non-zero
radius of convergence. This is justified easily by the relation
\[ R = \lim_{n\to\infty} \left| \frac{y_{n-1}}{y_n} \right| = \lim_{n\to\infty} (n+1) = \infty, \tag{4.27} \]
and thus this series converges to the true solution in $(-\infty, \infty)$. A few terms of the series
are
\[ y_2 = \frac{1}{3} y_1, \quad y_3 = \frac{1}{4} y_2 = \frac{1}{3 \times 4} y_1, \quad y_4 = \frac{1}{5} y_3 = \frac{1}{3 \times 4 \times 5} y_1, \quad \ldots. \tag{4.28} \]
Observe that $y_n = \frac{2}{(n+1)!} y_1$ and thus
\[ y = 2 y_1 \left( \frac{1}{2!} x + \frac{1}{3!} x^2 + \frac{1}{4!} x^3 + \cdots \right). \tag{4.29} \]
Therefore, we are able to find only one analytic solution to the equation
\[ x^2 y'' + (x - x^2) y' - y = 0 \]
at $x_0 = 0$, and it corresponds to $y_0 = 0$. It is simply verified that the series in the bracket is
the expansion of the function
\[ y(x) = \frac{e^x - 1 - x}{x}. \tag{4.30} \]
Observe that
\[ \lim_{x \to 0} y(x) = 0. \]
In order to find a second linearly independent solution, we can use the reduction of order
method and obtain $y(x) = \frac{x + 1}{x}$. Note that this new solution is unbounded at $x = 0$.
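The closed form $y_n = \frac{2}{(n+1)!} y_1$ can be checked against recursion (4.26) in exact arithmetic; a brief sketch (helper names are ours):

```python
from fractions import Fraction
from math import factorial

# Exact check that the recursion (4.26), y_n = y_{n-1}/(n+1), produces
# y_n = 2 y_1 / (n+1)!, the coefficients of (e^x - 1 - x)/x when y_1 = 1.
y = {1: Fraction(1)}
for n in range(2, 10):
    y[n] = y[n - 1] / (n + 1)      # recursion (4.26)

for n in range(1, 10):
    assert y[n] == Fraction(2, factorial(n + 1))
```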
Example 4.14. Consider the following problem
\[ x^3 y'' - y = 0, \quad y(0) = y_0, \; y'(0) = y_1. \tag{4.31} \]
Like the previous example, the problem does not satisfy the conditions of the existence
theorem and thus a solution is not guaranteed. Let us try to find a series
solution to the problem. We have
\[ \sum_{n=2}^{\infty} n(n-1) y_n x^{n+1} - \sum_{n=0}^{\infty} y_n x^n = 0. \tag{4.32} \]
A simplification gives
\[ -y_0 - y_1 x - y_2 x^2 + \sum_{n=3}^{\infty} \left[ (n-1)(n-2) y_{n-1} - y_n \right] x^n = 0, \tag{4.33} \]
and thus $y_0 = y_1 = y_2 = 0$. The recursive formula for $n \geq 3$ is
\[ y_n = (n-1)(n-2) y_{n-1}. \tag{4.34} \]
According to the formula, we derive $y_n = 0$ for all $n \geq 3$ and thus $y_n = 0$ for all $n$. Therefore,
the problem has a solution if and only if $y_0 = y_1 = 0$, and in this case the only possible
solution is the trivial one $y \equiv 0$.
Problems
Problem 4.12. By mathematical induction prove the following formula
\[ (fg)^{(n)} = \sum_{k=0}^{n} \frac{n!}{(n-k)! \, k!} f^{(k)} g^{(n-k)}. \]
Use this formula to find a recursive formula for the series solution to the equation
\[ y'' + x y = 0, \quad y(0) = 1, \; y'(0) = 0, \]
and determine the radius of convergence of the series.
Hint: Write
\[ y^{(n)}(0) = -(x y(x))^{(n-2)} \big|_{x=0}, \]
and use the above formula.
Problem 4.13. Consider the initial value problem
\[ y' + 2x y = 0, \quad y(0) = 1. \]
i. Find the closed form solution to the problem.
ii. Find a series solution and compare it with the closed form solution.
Problem 4.14. Consider the initial value problem
\[ x y' - y = 0, \quad y(1) = 1. \]
i. Find the closed form solution to the problem.
ii. Find a series solution and compare it with the closed form solution.
Problem 4.15. Find a series solution to the problem
\[ y'' + x y' + y = 0, \quad y(0) = 1, \; y'(0) = 0. \]
Verify that the solution is the expansion of $y = e^{-x^2/2}$.
Problem 4.16. Find a series solution to each of the following problems
i. $y'' + 2y' + y = 0$, $\; y(0) = 0, \; y'(0) = 1$
ii. $x y'' + y = 0$, $\; y(0) = 0, \; y'(0) = 1$
iii. $x y'' + y = 0$, $\; y(0) = 1, \; y'(0) = 0$
iv. $x^2 y'' + 4x y' + 2y = 0$, $\; y(0) = y_0, \; y'(0) = y_1$
Problem 4.17. Consider the problem
\[ x^2 y'' - 2y = 0, \quad y(0) = y'(0) = 0. \]
Clearly the problem has the trivial solution $y(x) \equiv 0$. Try to find a series solution to the problem.
Why does the problem have multiple solutions?
Problem 4.18. If $x_0$ is non-zero, it is convenient to shift it to zero. Consider the following problem:
\[ y'' - x y' - y = 0, \quad y(1) = 1, \; y'(1) = 0. \tag{4.35} \]
a) Take $t = x - 1$ and write the equation in terms of $t$.
b) Find a power series solution to the new equation and then rewrite the solution in terms of $x$.
Problem 4.19. The following equation is called a Cauchy-Euler equation
\[ x^2 y'' + x y' - y = 0. \]
This equation has solutions $y_1 = x$ and $y_2 = x^{-1}$, where $y_1(0) = 0$ and $y_2$ is unbounded at $x_0 = 0$.
i. Set appropriate initial conditions (at $x_0 = 0$) such that the problem has a bounded solution at
$x_0 = 0$.
ii. Try to solve the problem (with the initial conditions you set for the equation) by the power
series method.
Problem 4.20. Find four nonzero terms of the power series solution of each of the following equations.
i. $y'' + x y' + e^x y = 0$, $\; y(0) = 1, \; y'(0) = 1$
ii. $y'' - \sin(x) y = \cos(x)$, $\; y(0) = 1, \; y'(0) = -1$
iii. $(1 - x^2) y'' - 2x y' + \sin(x) y = 0$, $\; y(0) = 1, \; y'(0) = 0$
iv. $e^x y'' + 3x y' - \tan(x) y = \sec(x)$, $\; y(0) = 0, \; y'(0) = 1$
v. $\cos(x) y'' + e^x y' + y = \sin(x)$, $\; y(0) = 1, \; y'(0) = -1$
Problem 4.21. Consider the equation
\[ y'' + y = \sin(2x), \quad y(0) = 1, \; y'(0) = 0. \]
i. This is a linear equation with constant coefficients. Find a solution of the equation.
ii. Now, expand $\sin(2x)$ and find a power series solution to the equation, and compare the two
solutions.
Problem 4.22. Consider the equation
\[ y'' + y = \tan^{-1}(x), \quad y(0) = 1, \; y'(0) = 0. \]
Find a recursive formula for the series solution of the equation and calculate 5 non-zero terms.
Problem 4.23. For each of the following problems, find the recursive formula for the power series
solution and write down a series containing at least 5 nonzero terms.
i. $y'' + (1 + x) y = 0$, $\; y(0) = 1, \; y'(0) = 0$
ii. $y'' + x y' + x^2 y = 0$, $\; y(0) = 0, \; y'(0) = 1$
iii. $y'' + x y' + y = 0$, $\; y(0) = 1, \; y'(0) = -1$
Problem 4.24. For the equation
\[ y'' + x y' + y = \frac{1}{1 - x}, \]
a) Show that $y_n$, the coefficients of the power series solution of the equation around $x_0 = 0$,
satisfy the following recursive formula for $n \geq 0$:
\[ y_{n+2} = \frac{1}{(n+1)(n+2)} - \frac{y_n}{n+2}. \]
b) Calculate $y_0$ to $y_4$ for the initial conditions $y(0) = 0, \; y'(0) = 1$.
c) Show that the above formula is equivalent to
\[ y_{n+3} = \frac{n+1}{n+3} y_{n+2} + \frac{n+1}{(n+2)(n+3)} y_n - \frac{1}{n+3} y_{n+1}. \]
d) Find the radius of convergence of the series solution.
Problem 4.25. The application of the power series method to non-linear equations is limited. Consider
the following IVP.
\[ y'' + x \sin(y) = 0, \quad y(0) = 1, \; y'(0) = 0. \]
Linearize the equation around the working point and obtain 5 non-zero terms of the power series
solution. Use computer software to compare the obtained solution with the numeric one given by the
software.
4.3 Singular equations
Definition 4.3. Consider the equation
\[ y'' + p(x) y' + q(x) y = 0. \tag{4.36} \]
If the functions $p, q$ are analytic at $x_0$, then $x_0$ is called a regular point of the equation. If at
least one of $p(x)$ or $q(x)$ is not analytic at $x_0$, the point is called a singular point of the
equation. There are two kinds of singular points:
If the functions $(x - x_0) p(x)$ and $(x - x_0)^2 q(x)$ are analytic at $x_0$, then $x_0$ is a regular-singular
point.
If at least one of the functions $(x - x_0) p(x)$ or $(x - x_0)^2 q(x)$ is non-analytic at $x_0$, the point
is called an essential singular (or irregular-singular) point.
Example 4.15. The point $x_0 = 0$ is a regular point of the equation
\[ (1 + x^2) y'' + x y = 0, \tag{4.37} \]
since $p(x) = 0$ and $q(x) = \frac{x}{1 + x^2}$ are both analytic at $x_0 = 0$. The point $x = -1$ is a regular-singular
point of the equation
\[ (1 - x^2) y'' + \sin(1 - x) y' + (1 - x) y = 0. \tag{4.38} \]
In fact, for $p(x) = \frac{\sin(1 - x)}{1 - x^2}$, $q(x) = \frac{1 - x}{1 - x^2}$, the only singular point is $x = -1$. Note that both
$p(x)$ and $q(x)$ have a removable singularity at $x = 1$, as
\[ \lim_{x \to 1} \frac{\sin(1 - x)}{1 - x^2} = \lim_{x \to 1} \frac{1 - x}{1 - x^2} = \frac{1}{2}. \]
Moreover,
\[ (1 + x) p(x) = (1 + x) \frac{\sin(1 - x)}{1 - x^2} = \frac{\sin(1 - x)}{1 - x}, \]
\[ (1 + x)^2 q(x) = (1 + x)^2 \frac{1 - x}{1 - x^2} = 1 + x, \]
are analytic at $x = -1$. The point $x_0 = 0$ is an essential singular point of the equation
$y'' + |x| y = 0$. In fact, $x^2 q(x) = x^2 |x|$ is not analytic at $x_0$.
4.3.1 Cauchy-Euler equations
The reason for classifying a point will become clear when we discuss the Cauchy-Euler equation. The general form of a homogeneous equidimensional or Cauchy-Euler equation is
\[ (x-x_0)^2\, y'' + a\,(x-x_0)\, y' + b\,y = 0, \tag{4.39} \]
where \(a\) and \(b\) are constants. Here, we have
\[ p(x) = \frac{a}{x-x_0}, \qquad q(x) = \frac{b}{(x-x_0)^2}, \]
and thus \(x_0\) is a regular-singular point for the equation. Fortunately, there is a simple transformation that converts the above equation to an equation with constant coefficients. Let \(x - x_0 = e^t\) for \(x > x_0\). We have
\[ \frac{dy}{dt} = \frac{dy}{dx}\,\frac{dx}{dt} = \frac{dy}{dx}\,(x-x_0), \tag{4.40} \]
\[ \frac{d^2y}{dt^2} = \frac{d}{dt}\!\left[\frac{dy}{dx}\,(x-x_0)\right] = \frac{d}{dx}\!\left[\frac{dy}{dx}\,(x-x_0)\right](x-x_0) = \frac{d^2y}{dx^2}\,(x-x_0)^2 + \frac{dy}{dt}. \tag{4.41} \]
Substituting the above formulas into (4.39) yields
\[ \frac{d^2y}{dt^2} + (a-1)\,\frac{dy}{dt} + b\,y = 0, \tag{4.42} \]
which is a second-order equation with constant coefficients. The characteristic polynomial of the new equation is
\[ f(s) = s^2 + (a-1)\,s + b. \]
Case 1. If \(f(s) = 0\) has two distinct real roots \(s_1, s_2\), then the new equation has two solutions \(e^{s_1 t}\), \(e^{s_2 t}\), and by the transformation \(x - x_0 = e^t\), two solutions for the Cauchy-Euler equation are obtained: \(y_1(x) = (x-x_0)^{s_1}\), \(y_2(x) = (x-x_0)^{s_2}\).

Case 2. If \(f(s) = 0\) has a repeated root \(s_1 = s_2 = s\), then the equation has two solutions \(e^{st}\), \(t\,e^{st}\), and then \(y_1(x) = (x-x_0)^s\), \(y_2(x) = (x-x_0)^s \ln(x-x_0)\).

Case 3. If \(f(s) = 0\) has complex roots \(s = \sigma \pm i\omega\), the equation has two solutions \(y_1(x) = (x-x_0)^{\sigma} \cos(\omega \ln(x-x_0))\), \(y_2(x) = (x-x_0)^{\sigma} \sin(\omega \ln(x-x_0))\).
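The three cases above reduce to one computation: find the roots of \(f(s) = s^2 + (a-1)s + b\) and read off the solution pair. A minimal sketch in Python (the function name is our own, not from the text):

```python
import cmath

def cauchy_euler_roots(a, b):
    """Roots of the characteristic polynomial f(s) = s^2 + (a - 1) s + b
    for the Cauchy-Euler equation (x - x0)^2 y'' + a (x - x0) y' + b y = 0."""
    disc = cmath.sqrt((a - 1) ** 2 - 4 * b)
    return (-(a - 1) + disc) / 2, (-(a - 1) - disc) / 2

# x^2 y'' + x y' - y = 0 has a = 1, b = -1, so the roots are +1 and -1,
# giving the solutions y1 = x and y2 = 1/x (Case 1).
s1, s2 = cauchy_euler_roots(1, -1)
```

Whether the roots are distinct, repeated, or complex then selects Case 1, 2, or 3 respectively.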
Example 4.16. The substitution \(x = e^t\) transforms the equation
\[ x^2 y'' + x y' - y = 0 \]
into the equation \(\frac{d^2y}{dt^2} - y = 0\) with solutions \(y_1 = e^t\) and \(y_2 = e^{-t}\), and thus \(y_1(x) = x\) and \(y_2(x) = x^{-1}\). Note that \(y_2(x)\) becomes unbounded as \(x\) approaches zero. Now, consider the following equation
\[ x^2 y'' - x y' + y = 0. \]
The Cauchy-Euler characteristic equation is
\[ s^2 - 2s + 1 = 0, \]
and thus \(y_1(t) = e^t\) and \(y_2(t) = t\,e^t\) are two solutions of the transformed equation, and therefore \(y_1(x) = x\), \(y_2(x) = x\ln(x)\) are two solutions to the above equation. Note that both solutions vanish at \(x = 0\). Consider the following equation
\[ (x-1)^2\, y'' + (x-1)\, y' + y = 0. \tag{4.43} \]
We take \(x - 1 = e^t\) and obtain the following constant-coefficient equation
\[ \frac{d^2y}{dt^2} + y = 0. \tag{4.44} \]
It is easily seen that the original equation has the following solutions:
\[ y_1(x) = \cos(\ln(x-1)), \qquad y_2(x) = \sin(\ln(x-1)). \]
Example 4.17. Consider the following equation
\[ x^2 y'' - 2x y' + 2y = x^3 e^x. \]
The homogeneous solutions to the equation are \(y_1 = x\) and \(y_2 = x^2\). A particular solution is obtained by the variation of parameters method:
\[ y_p(x) = -x \int \frac{x^2 \cdot x e^x}{x^2}\,dx + x^2 \int \frac{x \cdot x e^x}{x^2}\,dx = x e^x. \]
The general solution is \(y(x) = c_1 x + c_2 x^2 + x e^x\).
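The particular solution can be sanity-checked numerically: with \(y_p = x e^x\) the residual of the equation must vanish identically. A small check (the function name is our own):

```python
import math

def residual(x):
    """Residual of x^2 y'' - 2x y' + 2y = x^3 e^x at y = x e^x,
    for which y' = (1 + x) e^x and y'' = (2 + x) e^x."""
    e = math.exp(x)
    y, yp, ypp = x * e, (1 + x) * e, (2 + x) * e
    return x**2 * ypp - 2 * x * yp + 2 * y - x**3 * e

# the residual is zero (up to rounding) at arbitrary sample points
vals = [abs(residual(x)) for x in (0.5, 1.0, 2.0, -1.5)]
```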
4.3.2 Regular singular equations: I
Assume that \(x_0\) is a regular-singular point for (4.36). If we multiply the equation by \((x-x_0)^2\), we derive the following one:
\[ (x-x_0)^2\, y'' + (x-x_0)^2\, p(x)\, y' + (x-x_0)^2\, q(x)\, y = 0. \]
Let us denote the function \((x-x_0)\,p(x)\) by \(a(x)\) and \((x-x_0)^2\,q(x)\) by \(b(x)\). Then we can write the equation as follows:
\[ (x-x_0)^2\, y'' + (x-x_0)\, a(x)\, y' + b(x)\, y = 0. \tag{4.45} \]
Observe that the obtained equation is very similar to a Cauchy-Euler equation, except that \(a, b\) are not constant in the present case. Since \(a(x), b(x)\) are analytic, we can express them as
\[ a(x) = \sum_{n=0}^{\infty} a_n (x-x_0)^n, \qquad b(x) = \sum_{n=0}^{\infty} b_n (x-x_0)^n, \]
for some constants \(a_n, b_n\). Comparing Eq. (4.45) with the Cauchy-Euler equation, we can write the solution as
\[ y = (x-x_0)^s\, g(x), \]
for an analytic function \(g(x)\), where \(s\) is a constant. The following polynomial is called the characteristic polynomial of Eq. (4.45):
\[ f(s) = s^2 + (a_0 - 1)\,s + b_0, \tag{4.46} \]
where \(a_0, b_0\) are the first coefficients of the series representations of \(a(x)\) and \(b(x)\), respectively.
Theorem 4.5. Assume that \(x_0\) is a regular-singular point of (4.36) and that \(f(s) = 0\) has two real roots \(s_1 \geq s_2\). Then one solution of the equation is
\[ y_1(x) = (x-x_0)^{s_1} \sum_{n=0}^{\infty} c_n (x-x_0)^n. \tag{4.47} \]
The value \(c_0\) is arbitrary and the \(c_n\) are determined by the following recursive formula:
\[ c_n = \frac{-1}{f(s_1+n)} \sum_{k=0}^{n-1} \big( (k+s_1)\, a_{n-k} + b_{n-k} \big)\, c_k. \tag{4.48} \]
Proof. We prove the relation for \(x_0 = 0\). First we rewrite the equation (4.36) as
\[ x^2 y'' + x\,a(x)\,y' + b(x)\,y = 0. \tag{4.49} \]
This is very similar to a Cauchy-Euler equation, and thus it is justifiable to assume the solution is of the following form:
\[ y(x) = x^s g(x), \tag{4.50} \]
for some constant \(s\) and an analytic function \(g\). Without loss of generality, we can assume that \(g(0) \neq 0\); otherwise we can rewrite \(y\) as \(y = x^{s+1}\tilde{g}(x)\) where \(\tilde{g}(0) \neq 0\). Let us substitute (4.50) into (4.49). We have
\[ y'(x) = s\,x^{s-1} g(x) + x^s g'(x), \]
\[ y''(x) = s(s-1)\,x^{s-2} g(x) + 2s\,x^{s-1} g'(x) + x^s g''(x). \]
Substituting the above formulas into the equation gives
\[ x^s \big\{ x^2 g'' + x\,(a(x)+2s)\,g'(x) + F(s,x)\,g(x) \big\} = 0, \]
where
\[ F(s,x) = s^2 + (a(x)-1)\,s + b(x). \]
In order to have the above identity valid in a neighborhood of \(x_0\), it is necessary to have
\[ x^2 g'' + x\,(a(x)+2s)\,g'(x) + F(s,x)\,g(x) = 0. \tag{4.51} \]
Now, let \(x \to 0\) to obtain \(F(s,0)\,g(0) = 0\) and thus \(F(s,0) = 0\). Note that \(F(s,0) = f(s)\), the characteristic polynomial of the equation. For the moment assume that the roots of \(f(s)\) are real (not necessarily distinct). We take the biggest root \(s_1\) for \(s\) in (4.50). Now let us find a series expansion for \(g(x)\). Write it as
\[ g(x) = \sum_{n=0}^{\infty} c_n x^n, \]
where \(c_0\) is arbitrary (we can take it equal to 1), and calculate the \(n\)-th order derivative of both sides of (4.51) at \(x = 0\). We have
\[ (x^2 g'')^{(n)}\big|_{x=0} = -\big(F(s_1,x)\,g\big)^{(n)}\big|_{x=0} - 2s_1\,(x g')^{(n)}\big|_{x=0} - (x\,a(x)\,g')^{(n)}\big|_{x=0}. \tag{4.52} \]
Since
\[ (x^2 g'')^{(n)} = \sum_{k=0}^{n} \binom{n}{k}\, (x^2)^{(n-k)}\, g^{(k+2)} \]
is non-zero at \(x = 0\) only for \(n-k = 2\), we obtain
\[ (x^2 g'')^{(n)}\big|_{x=0} = n(n-1)\, g^{(n)}(0). \tag{4.53} \]
Note that \(g^{(n)}(0) = n!\,c_n\) and then
\[ (x^2 g'')^{(n)}\big|_{x=0} = n(n-1)\, n!\, c_n. \]
Similarly, we have
\[ (x g')^{(n)}\big|_{x=0} = n!\, n\, c_n. \]
For the first term on the right-hand side of (4.52), we have
\[ \big(F(s_1,x)\,g\big)^{(n)} = \sum_{k=0}^{n} \binom{n}{k}\, F(s_1,x)^{(n-k)}\, g^{(k)}. \]
Note that for \(k = n\), the factor \(F(s_1,0)\) is zero. For \(0 \leq k \leq n-1\), we have
\[ F(s_1,x)^{(n-k)}\big|_{x=0} = (n-k)!\,\big(s_1\, a_{n-k} + b_{n-k}\big). \tag{4.54} \]
Therefore
\[ \big(F(s_1,x)\,g\big)^{(n)}\big|_{x=0} = n! \sum_{k=0}^{n-1} \big(s_1\, a_{n-k} + b_{n-k}\big)\, c_k. \tag{4.55} \]
For the last expression in (4.52), we have
\[ (x\,a(x)\,g')^{(n)}\big|_{x=0} = n! \sum_{k=0}^{n-1} (k+1)\, a_{n-k-1}\, c_{k+1}. \tag{4.56} \]
If we shift the summation index by one, we reach
\[ (x\,a(x)\,g')^{(n)}\big|_{x=0} = n! \sum_{k=1}^{n} k\, a_{n-k}\, c_k. \tag{4.57} \]
Substituting all of the above formulas into (4.52) gives the recursive formula (4.48). □
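The recursion (4.48) is easy to implement with exact rational arithmetic. The sketch below is our own (names and conventions are not from the text); it assumes \(a(x)\) and \(b(x)\) are given by finitely many Taylor coefficients, padded with zeros, and it reproduces the coefficients of the equation \(2x^2y'' - x(1+x)y' + y = 0\) treated in the next example:

```python
from fractions import Fraction

def frobenius_coeffs(a, b, s1, N):
    """c_0..c_N of y1 = x^{s1} * sum c_n x^n via the recursion (4.48):
    c_n = -1/f(s1+n) * sum_{k<n} ((k+s1) a_{n-k} + b_{n-k}) c_k,
    where f(s) = s^2 + (a_0 - 1) s + b_0."""
    a = a + [Fraction(0)] * (N + 1 - len(a))   # pad Taylor coefficients
    b = b + [Fraction(0)] * (N + 1 - len(b))
    f = lambda s: s * s + (a[0] - 1) * s + b[0]
    c = [Fraction(1)]                          # c_0 is arbitrary; take 1
    for n in range(1, N + 1):
        acc = sum(((k + s1) * a[n - k] + b[n - k]) * c[k] for k in range(n))
        c.append(-acc / f(s1 + n))
    return c

# 2x^2 y'' - x(1+x) y' + y = 0, i.e. a(x) = -1/2 - x/2, b(x) = 1/2, s1 = 1
c = frobenius_coeffs([Fraction(-1, 2), Fraction(-1, 2)], [Fraction(1, 2)],
                     Fraction(1), 4)
```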
Example 4.18. Let us find a homogeneous solution to the following equation
\[ 2x^2 y'' - x(1+x)\, y' + y = 0, \tag{4.58} \]
for \(x_0 = 0\). Note that zero is a regular-singular point of the equation and
\[ a(x) = -\frac{1}{2} - \frac{1}{2}x, \qquad b(x) = \frac{1}{2}. \tag{4.59} \]
The characteristic polynomial is
\[ f(s) = s^2 - \frac{3}{2}s + \frac{1}{2}, \tag{4.60} \]
with two roots \(s_1 = 1\), \(s_2 = \frac{1}{2}\). According to Theorem 4.5, the equation has one solution
\[ y_1 = x\,(c_0 + c_1 x + c_2 x^2 + \cdots), \tag{4.61} \]
where \(c_n\), \(n \geq 1\), are obtained by (4.48). Note that \(a_n = 0\) for \(n \geq 2\) and \(b_n = 0\) for \(n \geq 1\), and then the terms of the summation vanish for \(k \leq n-2\). We obtain the following recursive formula for the coefficients:
\[ c_n = \frac{s_1+n-1}{2\,(s_1+n-1)\left(s_1+n-\frac{1}{2}\right)}\, c_{n-1}. \tag{4.62} \]
Since \(s_1 = 1\), the above formula reads
\[ c_n = \frac{1}{2n+1}\, c_{n-1}. \tag{4.63} \]
Since \(c_0\) is arbitrary, we can safely take it equal to 1. Calculating a few terms gives the following homogeneous solution to the equation:
\[ y_1(x) = x\left(1 + \frac{1}{3}x + \frac{1}{15}x^2 + \frac{1}{105}x^3 + \cdots\right). \tag{4.64} \]
Example 4.19. Consider the following equation
\[ x^2 y'' - x(1-x)\, y' + y = 0. \tag{4.65} \]
Again \(x_0 = 0\) is a regular-singular point for the equation. With \(a(x) = -1+x\) and \(b(x) = 1\), the characteristic polynomial is \(f(s) = (s-1)^2\) with the repeated root \(s_1 = 1\). Notice that \(a_n = 0\) for \(n \geq 2\) and \(b_n = 0\) for \(n \geq 1\). Thus the summation in (4.48) runs only over \(k = n-1\). The recursive formula (4.48) gives
\[ c_n = -\frac{1}{n}\, c_{n-1}, \tag{4.66} \]
and thus for \(c_0 = 1\) we obtain
\[ y_1 = x\left(1 - x + \frac{1}{2}x^2 - \frac{1}{6}x^3 + \cdots\right) = x\,e^{-x}. \tag{4.67} \]
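As a quick check that the series really is \(x e^{-x}\), the recursion \(c_n = -c_{n-1}/n\) must generate \((-1)^n/n!\), the Taylor coefficients of \(e^{-x}\):

```python
from fractions import Fraction
from math import factorial

# Recursion (4.66): c_n = -c_{n-1}/n, starting from c_0 = 1
c = [Fraction(1)]
for n in range(1, 8):
    c.append(-c[-1] / n)

# Taylor coefficients of e^{-x}: (-1)^n / n!
expected = [Fraction((-1) ** n, factorial(n)) for n in range(8)]
```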
Example 4.20. Consider the following equation
\[ x^2 y'' + x(1-x)\, y' - y = 0. \tag{4.68} \]
Here \(a(x) = 1-x\) and \(b(x) = -1\), and \(x_0 = 0\) is a regular-singular point of the equation. We have \(f(s) = s^2 - 1\) and then \(s_1 = 1\), \(s_2 = -1\). The recursive formula is
\[ c_n = \frac{1}{n+2}\, c_{n-1}, \tag{4.69} \]
and thus
\[ y_1 = x\left(1 + \frac{1}{3}x + \frac{1}{12}x^2 + \frac{1}{60}x^3 + \cdots\right). \tag{4.70} \]
It is seen that the obtained series is the expansion of the function \(y = 2\,\frac{e^x - 1 - x}{x}\).
4.3.3 Regular singular equations: II
The structure of a second linearly independent solution for a regular-singular point depends on the second root of \(f(s)\). For our subsequent discussion, we need to rewrite (4.48) in the following form:
\[ c_n(s) = \frac{-1}{f(s+n)} \sum_{k=0}^{n-1} \big( (k+s)\, a_{n-k} + b_{n-k} \big)\, c_k(s). \tag{4.71} \]
Here the formula emphasizes the dependence on \(s\), which can be either of the two roots of the characteristic polynomial (4.46). Here \(c_n(s_1)\) is the same \(c_n\) we used before. Again we assume that \(s_1, s_2\), the roots of \(f(s)\), are real.

\(s_1 - s_2\) is non-integer.
If \(s_1 - s_2\) is non-integer, a second linearly independent solution \(y_2\) is
\[ y_2 = (x-x_0)^{s_2} \sum_{n=0}^{\infty} c_n(s_2)\,(x-x_0)^n. \tag{4.72} \]
Example 4.21. Consider the equation (4.58). Since \(s_1 - s_2 = \frac{1}{2}\) is non-integer, the second solution is
\[ y_2 = \sqrt{x}\left( c_0\!\left(\tfrac{1}{2}\right) + c_1\!\left(\tfrac{1}{2}\right) x + c_2\!\left(\tfrac{1}{2}\right) x^2 + \cdots \right). \]
The recursive formula for \(c_n\!\left(\frac{1}{2}\right)\) is
\[ c_n\!\left(\tfrac{1}{2}\right) = \frac{1}{2n}\, c_{n-1}\!\left(\tfrac{1}{2}\right). \tag{4.73} \]
Calculating a few terms gives
\[ y_2 = \sqrt{x}\left( 1 + \frac{1}{2}x + \frac{1}{8}x^2 + \frac{1}{48}x^3 + \cdots \right). \tag{4.74} \]
Note that \(y_2'(x)\) becomes unbounded as \(x\) approaches 0.
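The coefficients in (4.74) follow from (4.73) with \(c_0\!\left(\frac{1}{2}\right) = 1\); a three-line check with exact fractions:

```python
from fractions import Fraction

# Recursion (4.73): c_n(1/2) = c_{n-1}(1/2) / (2n), starting from c_0 = 1
c = [Fraction(1)]
for n in range(1, 4):
    c.append(c[-1] / (2 * n))
```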
\(s_1 = s_2\).
If \(f(s)\) has a repeated root \(s_1\), the second solution \(y_2\) is
\[ y_2(x) = y_1(x)\,\ln(x-x_0) + (x-x_0)^{s_1} \sum_{n=1}^{\infty} h_n (x-x_0)^n, \tag{4.75} \]
where the \(h_n\) are determined by the following formula:
\[ h_n = \frac{d c_n}{ds}(s_1). \tag{4.76} \]
Proposition 4.2. A recursive formula for \(h_n\) is as follows:
\[ h_n = -\frac{1}{n^2} \sum_{k=0}^{n-1} \Big\{ \big[(k+s_1)\,a_{n-k} + b_{n-k}\big]\, h_k + a_{n-k}\, c_k \Big\} - \frac{2}{n}\, c_n. \tag{4.77} \]
Proof. By the reduction of order method, if \(y_1(x)\) is a solution of the equation, we can write the second solution \(y_2(x)\) as
\[ y_2 = y_1(x) \int \frac{e^{-\int a(x)/x\,dx}}{y_1^2(x)}\,dx. \tag{4.78} \]
Notice that
\[ \frac{e^{-\int a(x)/x\,dx}}{y_1^2(x)} = \frac{x^{-2s}\, x^{-a_0}\, e^{-\left(a_1 x + \frac{1}{2}a_2 x^2 + \cdots\right)}}{g^2(x)}. \tag{4.79} \]
Since \(a_0 = 1 - 2s\) (note that \(f(s) = 0\) has a repeated root), we have \(x^{-2s}\,x^{-a_0} = x^{-1}\), and then
\[ \frac{e^{-\int a(x)/x\,dx}}{y_1^2(x)} = x^{-1}\,(\alpha_0 + \alpha_1 x + \alpha_2 x^2 + \cdots), \tag{4.80} \]
for some sequence \((\alpha_n)\). Note that \(\alpha_0 = \frac{1}{g^2(0)} \neq 0\). The second solution \(y_2(x)\) is then derived by
\[ y_2 = y_1(x) \int x^{-1}\,(\alpha_0 + \alpha_1 x + \alpha_2 x^2 + \cdots)\,dx = \alpha_0\, y_1(x)\,\ln(x) + y_1(x)\left( \alpha_1 x + \frac{1}{2}\alpha_2 x^2 + \cdots \right). \]
Let us take \(\alpha_0 = 1\) for simplicity and write
\[ y_2 = y_1(x)\,\ln(x) + x^s\left( \alpha_1 x\, g(x) + \frac{1}{2}\alpha_2 x^2 g(x) + \cdots \right). \tag{4.81} \]
Recall that \(y_1 = x^s g(x)\). We write the series in the bracket as
\[ h(x) = \sum_{n=1}^{\infty} h_n x^n, \tag{4.82} \]
and find the \(h_n\). Similar to the proof of Theorem 4.5, if we substitute
\[ y_2 = x^s\,[\,g(x)\ln(x) + h(x)\,], \tag{4.83} \]
into the equation, we reach
\[ 2x g' + (2s + a(x) - 1)\,g + x^2 h'' + x(a(x)+2s)\,h' + F(s,x)\,h = 0. \tag{4.84} \]
In order to find \(h_n\) for \(n \geq 1\), we calculate the \(n\)-th order derivative of the above equation. We have
\[ 2(x g')^{(n)} + \big[(2s + a(x) - 1)\,g\big]^{(n)} + \big[x^2 h'' + x(a(x)+2s)\,h' + F(s,x)\,h\big]^{(n)} = 0. \]
If we follow the proof of Theorem 4.5, we obtain
\[ 2n c_n + \sum_{k=0}^{n-1} a_{n-k}\, c_k + f(s+n)\, h_n + \sum_{k=0}^{n-1} \big[(k+s)\,a_{n-k} + b_{n-k}\big]\, h_k = 0. \]
The above recursive formula is the same as the one given in the proposition after a straightforward simplification. □
Example 4.22. Consider the equation (4.65). Since \(s_1 = s_2\), we can write the second solution as
\[ y_2 = x e^{-x} \ln(x) + x\,(h_1 x + h_2 x^2 + \cdots). \]
The coefficients \(h_n\) can be calculated by formula (4.76) or by the recursive formula (4.77). If we use (4.77), we get
\[ h_n = -\frac{1}{n^2}\,(n h_{n-1} + c_{n-1}) - \frac{2}{n}\, c_n. \tag{4.85} \]
Here are some values of \(h_n\):
\[ h_1 = 1, \quad h_2 = -\frac{3}{4}, \quad h_3 = \frac{11}{36}, \ \ldots \]
and then
\[ y_2 = x e^{-x} \ln(x) + x^2 \left( 1 - \frac{3}{4}x + \frac{11}{36}x^2 - \cdots \right). \tag{4.86} \]
It is interesting to find the second solution by the aid of the reduction of order method. Since \(y_1 = x e^{-x}\), a linearly independent solution \(y_2\) is
\[ y_2 = y_1(x) \int \frac{\exp\!\left(\int \frac{1-x}{x}\,dx\right)}{y_1^2(x)}\,dx = x e^{-x} \int \frac{e^x}{x}\,dx. \]
If we write the integrand as the series
\[ \frac{e^x}{x} = \frac{1}{x} + 1 + \frac{1}{2}x + \cdots, \]
we reach
\[ y_2 = x e^{-x}\ln(x) + x e^{-x}\left( x + \frac{1}{4}x^2 + \frac{1}{18}x^3 + \cdots \right). \]
Finally, if we expand \(e^{-x}\), we obtain
\[ e^{-x}\left( x + \frac{1}{4}x^2 + \frac{1}{18}x^3 + \cdots \right) = \left( 1 - x + \frac{1}{2}x^2 - \cdots \right)\left( x + \frac{1}{4}x^2 + \frac{1}{18}x^3 + \cdots \right) = x - \frac{3}{4}x^2 + \frac{11}{36}x^3 + \cdots. \]
Putting it all together, we obtain the same solution derived above by (4.77).
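Both routes to the \(h_n\) can be verified mechanically; the sketch below runs the recursion (4.85) with \(c_n = (-1)^n/n!\) (the coefficients of \(y_1 = x e^{-x}\)) and reproduces \(h_1 = 1\), \(h_2 = -\frac{3}{4}\), \(h_3 = \frac{11}{36}\):

```python
from fractions import Fraction

# c_n = (-1)^n / n!, generated by c_n = -c_{n-1}/n
c = [Fraction(1)]
for n in range(1, 5):
    c.append(-c[-1] / n)

# Recursion (4.85): h_n = -(n h_{n-1} + c_{n-1})/n^2 - (2/n) c_n, with h_0 = 0
h = [Fraction(0)]
for n in range(1, 4):
    h.append(-(n * h[n - 1] + c[n - 1]) / n**2 - Fraction(2, n) * c[n])
```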
\(s_1 - s_2 = m\), an integer.
Suppose the roots of \(f(s)\) differ by an integer, that is, \(s_1 - s_2 = m \in \mathbb{Z}\). Then a second linearly independent solution is
\[ y_2 = \alpha\, y_1(x)\,\ln(x-x_0) + (x-x_0)^{s_2} \sum_{n=0}^{\infty} e_n\,(x-x_0)^n, \tag{4.87} \]
where the constant \(\alpha\) and the coefficients \(e_n\) are determined by the following formulas:
\[ \alpha = \lim_{s \to s_2}\,(s-s_2)\, c_m(s), \quad \text{where } m = s_1 - s_2, \tag{4.88} \]
\[ e_n = \frac{d}{ds}\Big[ (s-s_2)\, c_n(s) \Big]\Big|_{s=s_2}. \tag{4.89} \]
4.3.4 Regular singular equations: III
If \(f(s)\) has complex roots \(s_{1,2} = \sigma \pm i\omega\), then \(s_1 - s_2\) is non-integer and thus we can write
\[ y_1(x) = x^{\sigma}\, x^{i\omega} \sum_{n=0}^{\infty} c_n(s_1)\,x^n, \qquad y_2(x) = x^{\sigma}\, x^{-i\omega} \sum_{n=0}^{\infty} c_n(s_2)\,x^n. \tag{4.90} \]
It is seen that \(c_n(s_2) = \overline{c_n(s_1)}\), and thus
\[ y_2(x) = x^{\sigma}\, x^{-i\omega} \sum_{n=0}^{\infty} \overline{c_n(s_1)}\,x^n. \tag{4.91} \]
According to the superposition property, we can derive two real solutions \(\frac{1}{2}(y_1+y_2)\) and \(\frac{1}{2i}(y_1-y_2)\) for the equation. It is simply verified that the two real solutions are as follows:
\[ y_1(x) = x^{\sigma} \sum_{n=0}^{\infty} \big\{ \operatorname{Re} c_n(s_1)\, \cos(\omega \ln x) - \operatorname{Im} c_n(s_1)\, \sin(\omega \ln x) \big\}\, x^n \tag{4.92} \]
and
\[ y_2(x) = x^{\sigma} \sum_{n=0}^{\infty} \big\{ \operatorname{Re} c_n(s_1)\, \sin(\omega \ln x) + \operatorname{Im} c_n(s_1)\, \cos(\omega \ln x) \big\}\, x^n. \tag{4.93} \]
Example 4.23. The following equation
\[ x^2 y'' + x(1+x)\,y' + y = 0 \]
has the index equation
\[ s^2 + 1 = 0, \]
and thus \(s_{1,2} = \pm i\). The formula for \(c_n(i)\) is
\[ c_n(i) = -\frac{n-1+i}{n(n+2i)}\, c_{n-1}(i), \]
and by assuming \(c_0 = 1\), we obtain
\[ c_1(i) = -0.4 - 0.2i, \quad c_2(i) = 0.1 + 0.05i, \quad c_3(i) = -0.022 - 0.008i, \ \ldots \]
and thus
\[ y_1(x) = \cos(\ln x) + [-0.4\cos(\ln x) + 0.2\sin(\ln x)]\,x + [0.1\cos(\ln x) - 0.05\sin(\ln x)]\,x^2 + \cdots \]
\[ y_2(x) = \sin(\ln x) + [-0.4\sin(\ln x) - 0.2\cos(\ln x)]\,x + [0.1\sin(\ln x) + 0.05\cos(\ln x)]\,x^2 + \cdots. \]
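The complex recursion is convenient to iterate numerically; the decimal values quoted above follow from three steps of the formula (variable names are ours):

```python
# Coefficients c_n(i) of Example 4.23 from the recursion, with c_0 = 1
c = [1 + 0j]
for n in range(1, 4):
    c.append(-(n - 1 + 1j) / (n * (n + 2j)) * c[-1])
```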
Problems
Problem 4.26. Classify the singular points of each of the following equations:
i. \(x^3 y'' - x\sin(x)\, y' + (1-\cos(x))\, y = 0\),
ii. \(x(1+x)\, y'' + y' - y = 0\),
iii. \((1-x^2)\, y'' + x y' - e^x y = 0\),
iv. \(\sin^2(x)\, y'' + y' + x y = 0\).
Problem 4.27. Find the general solution to the following Cauchy-Euler equations:
i. \(x^2 y'' + 6x y' + 6y = 0\).
ii. \(x^2 y'' + 7x y' + 9y = 0\).
iii. \(x^2 y'' + x y' + 4y = 0\).
iv. \(x^2 y'' + 3x y' + 2y = x\cos(\ln(x))\).
v. \((2x+1)^2 y'' + 2(2x+1)\, y' - 4y = x\).
Problem 4.28. Consider the following equation:
\[ x y'' + 2(1-x)\, y' + (x-2)\, y = 0. \]
a) Find a series solution for the equation around \(x_0 = 0\). Verify that the obtained series is the expansion of \(\varphi_1 = e^x\).
b) Use the reduction of order method to find \(\varphi_2(x)\), the second solution of the equation.
c) Use the variation of parameters method to find the general solution to the following equation:
\[ x y'' + 2(1-x)\, y' + (x-2)\, y = x e^x. \]
Problem 4.29. Consider the following equation:
\[ x y'' - (1-x)\, y' + y = 0. \]
a) Use the Frobenius method to obtain a solution to the problem.
b) Verify that \(y(x) = x^2 e^{-x}\) is a solution to the problem
\[ \begin{cases} x y'' - (1-x)\, y' + y = 0 \\ y(0) = y'(0) = 0 \end{cases}. \]
This implies that the above problem has multiple solutions. Why?
c) Obtain 5 nonzero terms of the second solution.
Problem 4.30. Consider the equation
\[ x^2 y'' + x(1+x)\, y' - (1-2x)\, y = 0. \]
i. Use the Frobenius method and show that one solution is \(y(x) = x e^{-x}\).
ii. Use reduction of order and conclude that the second solution is
\[ z(x) = x e^{-x} \int \frac{e^x}{x^3}\,dx. \]
iii. Expand the integrand and calculate a few terms of the second solution.
iv. Calculate the second solution by the method described in this section and compare the two solutions.
Problem 4.31. Consider the equation
\[ x^2 y'' - x(1-x)\, y' + y = 0. \]
i. Use the power series method and conclude that the first solution is \(y(x) = x e^{-x}\).
ii. Use reduction of order to obtain the second solution as
\[ z(x) = x e^{-x} \int \frac{e^x}{x}\,dx. \]
iii. Expand \(e^x\) and calculate a few terms of the above solution.
iv. Calculate the second solution by the method described in this section and compare the two solutions.
Problem 4.32. For each of the following equations, try to find two solutions. For each solution, calculate a few terms.
i. \(4x^2 y'' - 2x(1+x)\, y' + 2y = 0\).
ii. \(x^2 y'' + x y' + (x^2-1)\, y = 0\).
iii. \(x^2 y'' + x(1+2x)\, y' + (x-2)\, y = 0\).
iv. \(4x^2 y'' + 4x(1+2x)\, y' - y = 0\).
v. \(x^2 y'' - x y' + (1-x)\, y = 0\).
vi. \(x^2 y'' + x y' - \left(x + \frac{1}{9}\right) y = 0\).
vii. \(x^2 y'' + x(x^2+1)\, y' - \frac{1}{4}\, y = 0\).
viii. \(x^2 y'' + x(1-2x)\, y' + \left(x - \frac{2}{9}\right) y = 0\).
ix. \(x^2 y'' + x(1+x)\, y' + \left(\frac{4}{3}x - \frac{1}{9}\right) y = 0\).
Problem 4.33. For the equation
\[ x(1-x)\, y'' + y' + (1-x)\, y = 0, \quad x > 0, \]
a) Find the two roots of the characteristic equation for \(x_0 = 0\).
b) Show that \(y_n\), the coefficients of the first series solution \(y(x)\), are obtained by the following recursive formula, with \(y_1 = -y_0\) and for \(n \geq 1\):
\[ y_{n+1} = \frac{n^2-n-1}{(n+1)^2}\, y_n + \frac{1}{(n+1)^2}\, y_{n-1}. \]
c) Obtain the interval of convergence for \(y(x)\) using the recursive formula.
d) Show that for the second solution \(z(x)\), the coefficients \((d_n)\) of the series solution are obtained through the following recursive formula:
\[ d_n = -\frac{2}{n}\, c_n - \frac{1}{n^2} \sum_{k=0}^{n-1} c_k - \frac{1}{n^2}\left( \sum_{k=0}^{n-1} k\, d_k + d_{n-1} \right). \]
Problem 4.34. Here we obtain the power series solution associated with the complex roots of the characteristic equation. For the equation
\[ x^2 y'' + x(1-x)\, y' + y = 0, \quad x > 0, \]
i. Show that \(s_1 = i\), \(s_2 = -i\) are the roots of the characteristic equation for \(x_0 = 0\).
ii. Show that the coefficients of the power series solution at \(x_0 = 0\) are obtained through the formula
\[ y_n(i) = \left[ \frac{n^2-n+2}{n(n^2+4)} + i\,\frac{2-n}{n(n^2+4)} \right] y_{n-1}(i) = \overline{y_n(-i)}. \]
iii. Find four non-zero terms of each series solution.
4.4 Differential equations of mathematical physics
In this section, we study a few equations that are frequently used in mathematical physics. They appear again in the second part of this book, where we study partial differential equations.
4.4.1 Hermite equation
The general form of the Hermite equation is
\[ y'' - 2x y' + \lambda y = 0, \tag{4.94} \]
where \(\lambda \in \mathbb{R}\) is a parameter. The point \(x_0 = 0\) is a regular point of the equation and the recursive formula for the coefficients is
\[ y_{n+2} = \frac{2n - \lambda}{(n+2)(n+1)}\, y_n, \quad n \geq 0. \tag{4.95} \]
Here the values \(y_0\) and \(y_1\) are arbitrary, and thus the equation admits two linearly independent analytic solutions. Observe that for \(\lambda = 2n\), the coefficient \(y_{n+2}\) is zero and then \(y_{n+2k} = 0\) for any \(k\). Therefore, one of the solutions is a polynomial of order \(n\), which is denoted by \(H_n(x)\) and is called the Hermite polynomial after the French mathematician Charles Hermite (1822-1901).
Example 4.24. Consider the following initial value problem:
\[ \begin{cases} y'' - 2x y' + 10y = 0 \\ y(0) = y_0,\ y'(0) = y_1 \end{cases}. \tag{4.96} \]
The recursive formula (4.95) implies \(y_7 = y_9 = y_{11} = \cdots = 0\), and thus
\[ y = y_0\left( 1 - 5x^2 + \frac{5}{2}x^4 - \frac{1}{6}x^6 + \cdots \right) + y_1\left( x - \frac{4}{3}x^3 + \frac{4}{15}x^5 \right). \tag{4.97} \]
Now, if \(y_0 = 0\), we obtain the polynomial solution
\[ H_5(x) = x - \frac{4}{3}x^3 + \frac{4}{15}x^5. \tag{4.98} \]
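The recursion (4.95) generates both branches of the solution directly; a short sketch with exact fractions (the function name is ours, not from the text):

```python
from fractions import Fraction

def hermite_series(lam, y0, y1, N):
    """Coefficients y_0..y_N from the recursion (4.95):
    y_{n+2} = (2n - lam) y_n / ((n+2)(n+1))."""
    y = [Fraction(y0), Fraction(y1)]
    for n in range(N - 1):
        y.append(Fraction(2 * n - lam, (n + 2) * (n + 1)) * y[n])
    return y

# lambda = 10 with y(0) = 0, y'(0) = 1 gives the odd polynomial branch, H_5
y = hermite_series(10, 0, 1, 7)
```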
Proposition 4.3. (Rodrigues formula) The polynomial solution to the Hermite equation is obtained by the following formula:
\[ H_n(x) = (-1)^n\, e^{x^2}\, \frac{d^n}{dx^n}\, e^{-x^2}. \tag{4.99} \]
Proof. We verify that \(H_n\) satisfies (4.94). For the sake of simplicity, let us use the notation \(D^n = \frac{d^n}{dx^n}\). By substituting \(H_n\) into the equation, we reach
\[ D^{n+2} e^{-x^2} + 2x\, D^{n+1} e^{-x^2} + 2(n+1)\, D^n e^{-x^2} = 0. \tag{4.100} \]
We claim that the above identity is true. We have
\[ D^{n+2} e^{-x^2} = -2 D^{n+1}\big( x\, e^{-x^2} \big) = -2 \sum_{k=0}^{n+1} \binom{n+1}{k}\, D^k x\; D^{n+1-k} e^{-x^2}, \]
and by simplifying the right-hand side, we get
\[ D^{n+2} e^{-x^2} = -2x\, D^{n+1} e^{-x^2} - 2(n+1)\, D^n e^{-x^2}, \tag{4.101} \]
which proves the claim. □
Proposition 4.4. \(H_n(x)\) is an even function if \(n\) is even and odd if \(n\) is odd.
Proof. According to the formula (4.99) and the fact that the derivative of an even function is odd and vice versa, the function
\[ D^n e^{-x^2} = D\big( D^{n-1} e^{-x^2} \big) \tag{4.102} \]
is even if \(D^{n-1} e^{-x^2}\) is odd, and odd if \(D^{n-1} e^{-x^2}\) is even. But \(H_0 = 1\) is even and \(H_1 = 2x\) is odd, and this justifies the claim. □
Remark 4.1. The Hermite equation finds its application in quantum mechanics. Usually, the Hermite equation is written in the following eigenvalue problem form:
\[ \frac{d}{dx}\Big[ e^{-x^2}\, y' \Big] = -\lambda\, e^{-x^2}\, y. \tag{4.103} \]
Here \(\lambda\) is called an eigenvalue, and a non-zero solution of the equation is called an eigenfunction \(\varphi(x)\). Physicists are interested in eigenfunctions with bounded energy, that is,
\[ E[\varphi] = \int_0^{\infty} e^{-x^2}\, \varphi^2(x)\,dx < \infty. \tag{4.104} \]
It is seen that the above integral diverges if \(\lambda \neq 2n\) and converges if \(\lambda = 2n\), and thus \(\varphi(x) = H_n(x)\) are the only acceptable eigenfunctions.
4.4.2 Chebyshev equation
The general form of the Chebyshev equation is
\[ (1-x^2)\, y'' - x y' + \lambda y = 0, \tag{4.105} \]
where \(\lambda \in \mathbb{R}\) is a parameter. The point \(x_0 = 0\) is a regular point for the equation and the convergence interval of the series solution is \((-1, 1)\). The recursive formula for the coefficients of the series solution is
\[ y_{n+2} = \frac{n^2 - \lambda}{(n+2)(n+1)}\, y_n, \quad n \geq 0, \tag{4.106} \]
with \(y_0\) and \(y_1\) arbitrary, and thus two linearly independent analytic solutions. Observe that if \(\lambda = n^2\), then \(y_{n+2} = 0\) and therefore \(y_{n+2k} = 0\) for all \(k \geq 0\). This implies that one solution to the equation is a polynomial of order \(n\). This polynomial is denoted by \(T_n(x)\), and is called the Chebyshev polynomial after the Russian mathematician Pafnuty Chebyshev (1821-1894).
Example 4.25. Consider the following initial value problem:
\[ \begin{cases} (1-x^2)\, y'' - x y' + 4y = 0 \\ y(0) = y_0,\ y'(0) = y_1 \end{cases}. \tag{4.107} \]
The recursive formula implies \(y_2 = -2y_0\) and \(y_{2k} = 0\) for all \(k \geq 2\). The solution is
\[ y = y_0\,(1 - 2x^2) + y_1\left( x - \frac{1}{2}x^3 - \frac{1}{8}x^5 - \cdots \right). \tag{4.108} \]
If \(y_1 = 0\), the above series reduces to the polynomial \(1 - 2x^2\).
Proposition 4.5. The polynomial solution of (4.105) is derived by the formula
\[ T_n(x) = \cos\big( n \cos^{-1} x \big). \tag{4.109} \]
Proof. If we take \(x = \cos(\theta)\) in (4.105), the equation is transformed to the form
\[ \frac{d^2 y}{d\theta^2} + \lambda y = 0. \tag{4.110} \]
Clearly, the above equation has one solution \(y = \cos(\sqrt{\lambda}\,\theta)\). Assuming \(\lambda = n^2\), \(n \in \mathbb{Z}\), we obtain \(y = \cos(n \cos^{-1} x)\). We show that this solution is a polynomial of order \(n\). By the formula
\[ \cos(n\theta) = \frac{1}{2}\big[ (e^{i\theta})^n + (e^{-i\theta})^n \big] = \frac{1}{2}\big[ (\cos\theta + i\sin\theta)^n + (\cos\theta - i\sin\theta)^n \big], \tag{4.111} \]
and the binomial formula
\[ (a+b)^n = \sum_{k=0}^{n} \binom{n}{k}\, a^{n-k}\, b^k, \tag{4.112} \]
we derive
\[ \cos(n\theta) = \sum_{k\ \mathrm{even}}^{n} \binom{n}{k}\, (-1)^{k/2}\, \cos^{n-k}\theta\, \sin^{k}\theta. \tag{4.113} \]
Now replace \(\sin^k\theta = (1-\cos^2\theta)^{k/2}\) for \(k\) even, and \(x = \cos\theta\), to obtain
\[ \cos(n \cos^{-1} x) = \sum_{k\ \mathrm{even}}^{n} \binom{n}{k}\, (-1)^{k/2}\, x^{n-k}\, (1-x^2)^{k/2}, \tag{4.114} \]
which is a polynomial of order \(n\). □
Properties of Chebyshev polynomials.
The \(T_n(x)\) have important properties and are extensively used in the approximation of functions, and also in mathematical physics. We discuss some of these properties below.
1. The solution of a Chebyshev equation becomes unbounded at \(x = \pm 1\) if \(\lambda \neq n^2\). The only possibility for the solution to remain bounded is the case \(\lambda = n^2\) for \(n\) an integer.
2. \(T_n(x)\) is an even function for \(n\) even and an odd function for \(n\) odd.
3. \(T_n(x)\) satisfies the following recursive formula:
\[ T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x), \tag{4.115} \]
where \(T_0(x) = 1\) and \(T_1(x) = x\). In fact, the above formula is another form of the familiar identity
\[ \cos((n+1)\theta) + \cos((n-1)\theta) = 2\cos\theta\,\cos(n\theta). \tag{4.116} \]
The figure (4.4) shows a few of the Chebyshev polynomials.

Figure 4.4. The graphs of the Chebyshev polynomials \(T_1(x)\), \(T_2(x)\), and \(T_3(x)\).
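The recursion (4.115) together with the defining formula (4.109) gives a quick numerical cross-check: \(T_n(\cos\theta)\) must equal \(\cos(n\theta)\). A sketch (the function name is ours):

```python
import math

def chebyshev_T(n, x):
    """T_n(x) via the recursion (4.115): T_{n+1} = 2x T_n - T_{n-1},
    starting from T_0 = 1, T_1 = x."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# check T_n(cos(theta)) == cos(n*theta) at an arbitrary angle
theta = 0.7
vals = [abs(chebyshev_T(n, math.cos(theta)) - math.cos(n * theta))
        for n in range(6)]
```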
4. Since every continuous function can be approximated by polynomials, we can approximate a continuous function defined on \(-1 \leq x \leq 1\) by the \(T_n(x)\), that is,
\[ f(x) \approx c_0 T_0 + c_1 T_1(x) + \cdots + c_N T_N(x). \tag{4.117} \]
The advantage of expanding \(f\) in terms of the \(T_n\) comes from the following orthogonality property:
\[ \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\, T_n(x)\, T_m(x)\,dx = \begin{cases} 0, & n \neq m, \\ \pi, & n = m = 0, \\ \pi/2, & n = m \neq 0. \end{cases} \tag{4.118} \]
The proof is left as an exercise. This property lets us determine the constants \(c_k\) in (4.117) as follows. For \(c_0\), we multiply (4.117) by \(\frac{1}{\sqrt{1-x^2}}\, T_0\) and integrate over the interval \((-1, 1)\). Since \(T_0 = 1\), according to the orthogonality property, we obtain
\[ \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\, f(x)\,dx = c_0 \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\,dx = c_0\,\pi, \tag{4.119} \]
and thus
\[ c_0 = \frac{1}{\pi} \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\, f(x)\,dx. \tag{4.120} \]
Repeating the calculation for \(k \geq 1\), we reach
\[ \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\, f(x)\, T_k(x)\,dx = c_k \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\, T_k^2(x)\,dx. \]
But by the trigonometric substitution \(x = \cos\theta\), we have
\[ \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\, T_k^2(x)\,dx = \int_0^{\pi} \cos^2(k\theta)\,d\theta = \frac{\pi}{2}, \]
and therefore
\[ c_k = \frac{2}{\pi} \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\, f(x)\, T_k(x)\,dx. \tag{4.121} \]
In the problem set, we ask the reader to approximate a continuous function with some polynomials and compare the results.
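In practice the coefficients (4.120)-(4.121) are computed with the same substitution \(x = \cos\theta\) used above, which removes the singular weight: \(c_k = \frac{2}{\pi}\int_0^{\pi} f(\cos t)\cos(kt)\,dt\) (with \(\frac{1}{\pi}\) for \(k = 0\)). A rough numerical sketch (the midpoint rule and the names are our choices, not from the text):

```python
import math

def chebyshev_coeff(f, k, m=2000):
    """Approximate c_k of (4.120)-(4.121) via x = cos(t) and a midpoint rule
    with m subintervals on [0, pi]."""
    h = math.pi / m
    s = sum(f(math.cos((j + 0.5) * h)) * math.cos(k * (j + 0.5) * h)
            for j in range(m))
    return (1 if k == 0 else 2) * s * h / math.pi

# f(x) = x^2 = (T_0 + T_2)/2, so c_0 = 1/2, c_1 = 0, c_2 = 1/2, c_3 = 0
coeffs = [chebyshev_coeff(lambda x: x * x, k) for k in range(4)]
```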
4.4.3 Legendre equation
The general form of the Legendre equation is
\[ (1-x^2)\, y'' - 2x y' + \lambda y = 0, \tag{4.122} \]
where \(\lambda\) is a real value. The point \(x_0 = 0\) is regular and the equation is defined in the interval \(-1 < x < 1\). The recursive formula for the coefficients of the series solution is
\[ y_{n+2} = \frac{n(n+1) - \lambda}{(n+2)(n+1)}\, y_n. \tag{4.123} \]
The radius of convergence of the series is \(L = 1\). If \(\lambda = n(n+1)\) for some positive integer value \(n\), then one solution becomes a polynomial, which is denoted by \(P_n(x)\) and is called the Legendre polynomial after the French mathematician Adrien-Marie Legendre (1752-1833).
Example 4.26. Consider the problem
\[ \begin{cases} (1-x^2)\, y'' - 2x y' + 6y = 0 \\ y(0) = y_0,\ y'(0) = y_1 \end{cases}. \tag{4.124} \]
Here \(\lambda = 6 = 2\times 3\) and then the equation has a polynomial solution. By the recursive formula, we have \(y_2 = -3y_0\), and \(y_{2k} = 0\) for \(k \geq 2\). Therefore, the general solution is
\[ y = y_0\,(1 - 3x^2) + y_1\left( x - \frac{2}{3}x^3 - \frac{1}{5}x^5 - \cdots \right). \tag{4.125} \]
If \(y_1 = 0\), then \(y = 1 - 3x^2\) is the Legendre polynomial solution.
Proposition 4.6. (Rodrigues) The \(P_n(x)\) are derived by the formula
\[ P_n(x) = \frac{(-1)^n}{2^n\, n!}\, \frac{d^n}{dx^n}\,(1-x^2)^n. \tag{4.126} \]
Proof. The factor in front of the derivative is just to normalize the polynomials. First, we have
\[ D^{n+1}\big[ (1-x^2)\, D(1-x^2)^n \big] = \sum_{k=0}^{n+1} \binom{n+1}{k}\, D^k(1-x^2)\; D^{n+2-k}(1-x^2)^n. \tag{4.127} \]
Simplifying the above formula gives
\[ D^{n+1}\big[ (1-x^2)\, D(1-x^2)^n \big] = (1-x^2)\, D^{n+2}(1-x^2)^n - 2x(n+1)\, D^{n+1}(1-x^2)^n - n(n+1)\, D^n(1-x^2)^n. \]
On the other hand, we have
\[ D^{n+1}\big[ (1-x^2)\, D(1-x^2)^n \big] = -2n\, D^{n+1}\big[ x\,(1-x^2)^n \big] = -2nx\, D^{n+1}(1-x^2)^n - 2n(n+1)\, D^n(1-x^2)^n. \]
Equating the two above identities gives
\[ (1-x^2)\, D^2 P_n(x) - 2x\, D P_n(x) + n(n+1)\, P_n(x) = 0, \tag{4.128} \]
which is the Legendre equation. □
Properties of Legendre polynomials.
We use the Legendre equation frequently in the second part of this book. Here, we introduce some of its important properties.
1. The solution of the Legendre equation becomes unbounded at \(x = \pm 1\) except for the polynomial solution obtained when \(\lambda = n(n+1)\). In this case, the equation has a polynomial solution \(P_n(x)\).
2. It is verified immediately from the Rodrigues formula that \(P_n(x)\) is an even function for \(n\) even and odd for \(n\) odd. The figure (4.5) shows the graphs of some Legendre polynomials.

Figure 4.5. The graphs of the Legendre polynomials \(P_1(x)\), \(P_2(x)\), and \(P_3(x)\).
3. The polynomials \(P_n(x)\) satisfy the following orthogonality property:
\[ \int_{-1}^{1} P_n(x)\, P_m(x)\,dx = \begin{cases} 0, & n \neq m, \\ \dfrac{2}{2n+1}, & n = m. \end{cases} \tag{4.129} \]
We can approximate continuous functions by the \(P_n(x)\) in the interval \(-1 \leq x \leq 1\). That is, if \(f\) is continuous on \([-1, 1]\), then
\[ f(x) \approx c_0 P_0(x) + \cdots + c_n P_n(x), \tag{4.130} \]
where by (4.129), the coefficients are determined by the formula
\[ c_n = \frac{2n+1}{2} \int_{-1}^{1} f(x)\, P_n(x)\,dx. \tag{4.131} \]
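The orthogonality relation (4.129) is easy to confirm numerically for the first few polynomials \(P_0 = 1\), \(P_1 = x\), \(P_2 = \frac{1}{2}(3x^2-1)\) (standard normalizations; \(P_2\) is a multiple of the solution \(1-3x^2\) of Example 4.26). A rough check with the midpoint rule (our own sketch, not from the text):

```python
def p(n, x):
    """First three Legendre polynomials."""
    return [1.0, x, 1.5 * x * x - 0.5][n]

def inner(n, m, steps=20000):
    """Midpoint-rule approximation of integral_{-1}^{1} P_n P_m dx."""
    h = 2.0 / steps
    return sum(p(n, -1 + (j + 0.5) * h) * p(m, -1 + (j + 0.5) * h)
               for j in range(steps)) * h

off_diag = inner(1, 2)   # should vanish by (4.129)
norm2 = inner(2, 2)      # should be 2/(2*2+1) = 0.4
```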
4.4.4 Bessel equation
The general form of the Bessel equation is
\[ x^2 y'' + x y' + (x^2 - \lambda^2)\, y = 0, \tag{4.132} \]
where \(\lambda \in \mathbb{R}\) is a parameter. The point \(x_0 = 0\) is a regular-singular point for the equation. In order to keep the solution bounded, we must impose the following initial conditions:
\[ \lim_{x\to 0} y(x)\ \text{bounded}, \quad \text{and} \quad \lim_{x\to 0} y'(x)\ \text{bounded}. \tag{4.133} \]
Note that \(a(x) = 1\) and \(b(x) = x^2 - \lambda^2\), and the characteristic polynomial is \(f(s) = s^2 - \lambda^2\), with roots \(s_1 = \lambda\) and \(s_2 = -\lambda\). Therefore, one solution is
\[ y(x) = x^{\lambda} \sum_{n=0}^{\infty} c_n x^n, \]
where the \(c_n\) are calculated from formula (4.48). A direct simplification yields
\[ c_n = -\frac{1}{n(n+2\lambda)}\, c_{n-2}, \qquad c_0\ \text{arbitrary}, \qquad c_1 = 0. \tag{4.134} \]
Note that \(c_n = 0\) for \(n = 2k+1\), and for \(n = 2k\) we have
\[ c_{2k} = -\frac{1}{2^2\, k(k+\lambda)}\, c_{2k-2}, \quad k = 1, 2, \cdots. \]
In particular, if \(\lambda = m\), an integer, then
\[ c_{2k} = \frac{(-1)^k\, m!}{2^{2k}\, k!\,(k+m)!}\, c_0, \]
and thus
\[ y(x) = c_0 \sum_{k=0}^{\infty} \frac{(-1)^k\, m!}{2^{2k}\, k!\,(k+m)!}\, x^{2k+m}. \]
For \(c_0 = \frac{1}{m!\,2^m}\), we obtain the Bessel function of the first kind:
\[ J_m(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(k+m)!} \left( \frac{x}{2} \right)^{2k+m}. \]
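Truncating the series gives a practical way to evaluate \(J_m\); a short sketch (the truncation level \(K\) is our choice, and the series converges fast for moderate \(x\)):

```python
import math

def J(m, x, K=40):
    """Partial sum (first K terms) of the series for J_m above."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + m))
               * (x / 2) ** (2 * k + m) for k in range(K))
```

For instance, \(J_0(0) = 1\), and the partial sum vanishes near the first zero of \(J_0\) at \(x \approx 2.40483\).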
The second solution of the Bessel equation can be derived by the method outlined in this chapter. Since \(s_1 - s_2 = 2\lambda\), if \(2\lambda\) is not an integer, the second solution is
\[ Y_{\lambda}(x) = x^{-\lambda} \sum_{n=0}^{\infty} c_n(-\lambda)\, x^n, \tag{4.135} \]
where the \(c_n(-\lambda)\) can be calculated by formula (4.71). If \(2\lambda = n\) is an integer, the second solution is determined by the method we explained in the previous section. The second solution \(Y_{\lambda}(x)\) is called the Bessel function of the second kind. The following figure shows \(J_{1/2}(x)\) and \(J_2(x)\) in the same coordinate system.
Figure 4.6. The graphs of \(J_{1/2}(x)\) and \(J_2(x)\).
Observe the quasi-periodicity of the Bessel functions. We can justify this by the following argument. By the substitution \(u = \sqrt{x}\, y\), the equation (4.132) becomes (see the problem set)
\[ u'' + \left( 1 - \frac{\lambda^2 - \frac{1}{4}}{x^2} \right) u = 0. \tag{4.136} \]
When \(x \to \infty\), the equation (4.136) looks like a harmonic oscillator with the solutions \(u = A_0 \sin(x + \varphi_0)\). This justifies the fact that for \(x\) sufficiently large, the solution \(y(x)\) behaves like
\[ y \approx \frac{A_0}{\sqrt{x}}\, \sin(x + \varphi_0). \tag{4.137} \]
4.4.5 Gauss hypergeometric equation
The general form of the Gauss equation is
\[ x(x-1)\, y'' + \big[ (\alpha+\beta+1)\,x - \gamma \big]\, y' + \alpha\beta\, y = 0, \tag{4.138} \]
where \(\alpha, \beta, \gamma\) are constants. Although the form of the equation seems somewhat far-reaching, the reader is asked to verify that all equations we studied above are specific instances of this general equation. Note that the Gauss equation has two regular-singular points, \(x_0 = 0\) and \(x_1 = 1\). At \(x_0 = 0\), we have
\[ x\,p(x) = \frac{(\alpha+\beta+1)\,x - \gamma}{x-1}, \quad \text{and} \quad x^2 q(x) = \frac{\alpha\beta\, x}{x-1}, \tag{4.139} \]
and the characteristic polynomial is
\[ f(s) = s^2 + (\gamma - 1)\,s, \tag{4.140} \]
with roots \(s = 0\) and \(s = 1-\gamma\). If \(\gamma\) is not an integer, then there are two independent solutions:
\[ y_1(x) = \sum_{n=0}^{\infty} y_n(0)\, x^n, \quad \text{and} \quad y_2(x) = x^{1-\gamma} \sum_{n=0}^{\infty} y_n(1-\gamma)\, x^n. \tag{4.141} \]
If we substitute \(y_1(x)\) and \(y_2(x)\) into the Gauss equation, we get the following recursive formulas for the coefficients:
\[ c_n(0) = \frac{(\alpha+n-1)(\beta+n-1)}{n\,(n-1+\gamma)}\, c_{n-1}(0), \tag{4.142} \]
and
\[ c_n(1-\gamma) = \frac{(n+\alpha-\gamma)(n+\beta-\gamma)}{n\,(n+1-\gamma)}\, c_{n-1}(1-\gamma). \tag{4.143} \]
A straightforward calculation gives the following formulas if \(c_0 = 1\):
\[ c_n(0) = \frac{\prod_{k=1}^{n} (\alpha+k-1)(\beta+k-1)}{n! \prod_{k=1}^{n} (\gamma+k-1)}, \tag{4.144} \]
\[ c_n(1-\gamma) = \frac{\prod_{k=1}^{n} (k+\alpha-\gamma)(k+\beta-\gamma)}{n! \prod_{k=1}^{n} (k+1-\gamma)}. \tag{4.145} \]
The series with the coefficients c_n(0) is denoted by F(α, β; γ; x) and is called the hypergeometric function. Accordingly, the series with the coefficients c_n(1 - γ) is written F(α - γ + 1, β - γ + 1; 2 - γ; x). We conclude that for γ a non-integer, the general solution to (4.138) is

φ(x) = c_1 F(α, β; γ; x) + c_2 x^{1-γ} F(α - γ + 1, β - γ + 1; 2 - γ; x).  (4.146)
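The recursion (4.142) makes F(α, β; γ; x) easy to evaluate numerically inside the radius of convergence |x| < 1. The following sketch (the helper name hypergeom_F and the 80-term truncation are our own choices) checks two elementary closed forms, F(1, 1; 1; x) = 1/(1 - x) and x F(1, 1; 2; -x) = log(1 + x):

```python
import math

def hypergeom_F(alpha, beta, gamma, x, terms=80):
    """Partial sum of F(alpha, beta; gamma; x), with the coefficients
    built from the recursion (4.142):
    c_0 = 1 and c_n = (alpha + n - 1)(beta + n - 1) / (n (n - 1 + gamma)) * c_{n-1}."""
    c, total = 1.0, 1.0
    for n in range(1, terms):
        c *= (alpha + n - 1) * (beta + n - 1) / (n * (n - 1 + gamma))
        total += c * x**n
    return total

# Two classical special cases, valid for |x| < 1:
print(hypergeom_F(1, 1, 1, 0.3), 1 / (1 - 0.3))         # geometric series
print(0.3 * hypergeom_F(1, 1, 2, -0.3), math.log(1.3))  # log(1 + x) = x F(1, 1; 2; -x)
```

Running the coefficients through the recursion rather than the product formula (4.144) avoids computing large factorials separately.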
For the solution at x_1 = 1, we take the substitution t = 1 - x to rewrite the equation as

t(t - 1) y'' + [(α + β + 1)t - γ′] y' + αβ y = 0,  (4.147)
where γ′ = α + β + 1 - γ and the primes denote differentiation with respect to t. Thus, for γ′ a non-integer, the solution at x_1 = 1 can be written as

φ(x) = c_1 F(α, β; γ′; 1 - x) + c_2 (1 - x)^{1-γ′} F(γ - β, γ - α; 2 - γ′; 1 - x).  (4.148)
For γ an integer, the second solution is determined by the method we presented in the
previous section.
Problems
Problem 4.35. Verify that the radius of convergence of the series generated by (4.95) is infinity.
Problem 4.36. Find a polynomial solution for the following equation:

y'' - 2xy' + 8y = 0,

and compare it with H_4(x).
Problem 4.37. Show that the H_n(x) satisfy the following recursive formula:

H_{n+1} - 2xH_n + 2nH_{n-1} = 0.
Problem 4.38. Show the following relation for H_n:

(d/dx) H_n = 2nH_{n-1}.
Problem 4.39. Find the radius of convergence of the series generated by (4.106).
Problem 4.40. Use mathematical induction to prove that cos(nθ) is a polynomial in terms of cos(θ), and conclude that one solution of the Chebyshev equation is a polynomial.
Problem 4.41. Show that T_n(x) is even if n is an even number and odd if n is odd.
Problem 4.42. Show that the Chebyshev polynomials satisfy the following recursive formula:

T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x).

Use the above formula to calculate T_n(x) for n = 0, 1, 2, 3.
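As an illustrative numerical aside (not a substitute for the proof asked for in Problem 4.42; the helper name chebyshev_T is our own), the recursion can be checked against the defining property T_n(cos θ) = cos(nθ):

```python
import math

def chebyshev_T(n, x):
    """Evaluate T_n(x) via the three-term recursion T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t = 1.0, x  # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# Compare with T_n(cos(theta)) = cos(n*theta) at an arbitrary angle:
theta = 0.7
for n in range(6):
    print(n, chebyshev_T(n, math.cos(theta)), math.cos(n * theta))
```

The recursion is also the numerically stable way to evaluate T_n for moderate n, since it avoids expanding the polynomial coefficients.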
Problem 4.43. Prove the orthogonality property (4.118).
Problem 4.44. Find two solutions of each of the following equations:

i. (1 - x^2)y'' - xy' + 9y = 0,

ii. (1 - x^2)y'' - xy' + 16y = 0.
Problem 4.45. Find an approximation of the following analytic functions in the interval (-1, 1) in terms of the polynomials (T_0, ..., T_4), and compare them with the approximation by (1, x, ..., x^4):

i. f(x) = e^x,

ii. f(x) = sin(x).
Problem 4.46. Solve the following equation:

(1 - x^2)y'' - 2xy' + 2y = 0.
Problem 4.47. Show that the substitution u = √x y transforms the Bessel equation into the following equation:

u'' + (1 - (λ^2 - 1/4)/x^2) u = 0.

Since the above equation looks like a simple harmonic oscillator for large x, it justifies that y(x) → 0 when x → ∞.
Problem 4.48. Consider the following equation

y'' + cx^m y = 0,

for c > 0 and m ≠ -2.

i. Apply the substitution y = √x u to obtain

x^2 u'' + xu' + (cx^{m+2} - 1/4)u = 0.

ii. Now apply the substitution

t = (2√c/(m + 2)) x^{(m+2)/2},

to obtain

t^2 u'' + tu' + (t^2 - 1/(m + 2)^2)u = 0.

iii. Now solve the following equation using the above substitutions:

y'' + 9x^3 y = 0.
Problem 4.49. Show that the substitution t = 2e^{x/2} transforms the following equation into a Bessel equation:

y'' + (e^x - m^2)y = 0.

Use the above substitution to solve the following equation:

y'' + (e^x - 4)y = 0.
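As a numerical sanity check for Problem 4.49 (our own sketch, not part of the problem): carrying out the substitution t = 2e^{x/2} leads to t^2 ü + t u̇ + (t^2 - 4m^2)u = 0, a Bessel equation of order 2m, so for m = 2 one solution should be y(x) = J_4(2e^{x/2}). The helper names below are ours, and the Bessel function is evaluated from its standard series; the residual y'' + (e^x - 4)y is estimated by a central finite difference:

```python
import math

def bessel_j(lam, x, terms=60):
    """Partial sum of the standard series J_lam(x) = sum_n (-1)^n/(n! Gamma(n+lam+1)) (x/2)^(2n+lam)."""
    return sum((-1)**n / (math.factorial(n) * math.gamma(n + lam + 1)) * (x / 2)**(2*n + lam)
               for n in range(terms))

def candidate(x):
    # Candidate solution of y'' + (e^x - 4) y = 0 obtained via t = 2 e^(x/2):
    return bessel_j(4, 2 * math.exp(x / 2))

# Residual y'' + (e^x - 4) y, with y'' approximated by a central difference:
h = 1e-4
for x in (0.0, 0.5, 1.0):
    ypp = (candidate(x + h) - 2 * candidate(x) + candidate(x - h)) / h**2
    print(x, ypp + (math.exp(x) - 4) * candidate(x))
```

The residuals are at the level of the finite-difference error, consistent with J_4(2e^{x/2}) solving the equation.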
Problem 4.50. Show that the equation

(x - r_1)(x - r_2) y'' + a(x - r_3) y' + by = 0

can be transformed to the Gauss equation through the substitution

x = (r_2 - r_1)z + r_1.

Use the above transformation to solve the following equation:

3x(x - 2) y'' - (x + 3)y' + y = 0.