Chapter 4
1D Linear Second-Order Equations
In this chapter, our primary focus is on introducing the series solution approach for 1D heat and wave differential equations. Building upon the foundations laid in the preceding chapters, where we explored the derivation of these equations in higher dimensions and analyzed their overarching behavior, we now delve into the specific context of solving heat and wave equations within bounded media extending along the x-axis, defined as the interval $(x_0, x_1)$.

As we navigate this chapter, we encounter a crucial factor that heavily influences the nature of the solution: the conditions at the boundary points $x=x_0$ and $x=x_1$. This prompts the emergence of boundary value problems as a pivotal concept in this context. To enhance our understanding, we introduce a renowned theorem, the Sturm-Liouville theorem, which assumes a critical role in our exploration of the eigenfunction expansion method. This method becomes instrumental in representing the solutions of 1D partial differential equations.
4.1 From PDE to eigenvalue problem
This section serves as a bridge between partial differential equations (PDEs) and eigenvalue
problems.
4.1.1 Outline of the method
We'll focus on the context of an open interval $\Omega=(x_0, x_1)$, where we encounter a PDE that governs the function $u=u(x,t)$:
$$u_t = L[u],$$
where $L$ takes the form of a 1D differential operator:
$$L := a(x)\frac{\partial^2}{\partial x^2} + b(x)\frac{\partial}{\partial x} + c(x). \tag{4.1}$$
In our scenarios, $a$, $b$, and $c$ are continuous functions defined within $\Omega$. This equation finds practical applications, like modeling temperature distribution along an extended conductive rod within $\Omega$. However, to establish a complete formulation, we need to determine conditions at both endpoints, $x=x_0$ and $x=x_1$. This is because the differential equation is confined to the open interval $\Omega$.

We proceed by assuming that the function $u$ adheres to a particular condition at these endpoints:
$$\alpha u + \beta u_x = 0.$$
Additionally, the initial temperature distribution, $u(x,0)$, along the rod must be specified. The interplay between the thermal energy distribution along the rod and heat diffusivity generates the dynamic heat behavior of the system. By combining these elements, we present a comprehensive initial-boundary value problem:
$$\begin{cases} u_t = L[u] & \text{on } \Omega\\ \alpha u + \beta u_x = 0 & \text{on } \mathrm{bnd}(\Omega)\\ u(x,0) = f(x) & \text{initial condition} \end{cases}$$
This synthesis of the differential operator $L$, boundary conditions, and initial condition lays the foundation for solving various heat and wave equations, gradually unraveling the intricate dynamics of these systems.
As $L$ is a linear operator that acts solely on the spatial variable $x$, we can look for a solution of the equation $u_t = L[u]$ in the separated form
$$u(x,t) = \varphi(x)\, e^{-\lambda t},$$
where $\varphi(x)$ is an unknown function and $\lambda$ is a constant. To verify its validity, substituting this into the equation gives:
$$-\lambda \varphi(x)\, e^{-\lambda t} = e^{-\lambda t} L[\varphi],$$
which leads to $L[\varphi] = -\lambda \varphi$. To satisfy the prescribed boundary conditions for $u$, we arrive at the subsequent boundary value problem, often termed the eigenvalue problem:
$$\begin{cases} L[\varphi] = -\lambda \varphi & \text{on } \Omega\\ \alpha \varphi + \beta \varphi' = 0 & \text{on } \mathrm{bnd}(\Omega) \end{cases}$$
Remark 4.1. The use of terminology can be justified by drawing parallels with the eigenvalue problem in linear algebra. Remember that a vector $v$ in $\mathbb{R}^n$ is an eigenvector of a linear transformation $T:\mathbb{R}^n\to\mathbb{R}^n$ if $T[v]+\lambda v=0$. In the context of our problem, the operator $L$ acts linearly on the space of smooth functions defined on $\Omega$. Consequently, a function $\varphi:\Omega\to\mathbb{R}$ is termed an eigenfunction if $L[\varphi](x) + \lambda\varphi(x)=0$ holds for all $x$ in $\Omega$. Following the analogy with the eigenvalue problem in linear algebra, the value $\lambda$ is referred to as the associated eigenvalue of the eigenfunction $\varphi$. It's important to note that if $\varphi$ is an eigenfunction of the problem, then $c\,\varphi(x)$ for any non-zero constant $c$ is also an eigenfunction. Just like eigenvectors in linear algebra, eigenfunctions are typically considered as non-trivial functions.
Solving an eigenvalue problem involves determining the eigenfunctions $\varphi(x)$ and eigenvalues $\lambda$. When the problem can be solved, the solution of the equation $u_t = L[u]$ that adheres to the boundary condition $\alpha u + \beta u_x = 0$ can be expressed as $u(x,t) = \varphi(x)\, e^{-\lambda t}$.
Example 4.1. Consider the interval $(0,1)$ and let $\Omega$ represent this interval. Let's explore the eigenvalue problem $\varphi'' = -\lambda\varphi$, where $\varphi(x)$ is defined on $\Omega$. The boundary conditions are set as $\varphi(0) = 0$ and $\varphi(1) = 0$. We want to find values of $\lambda$ and corresponding non-trivial functions $\varphi(x)$ that satisfy this equation.

First, let's show that the problem has a solution only when $\lambda$ is greater than zero. To do this, we multiply the equation by $\varphi(x)$ and integrate it over the interval $(0,1)$. This gives us an equation involving integrals and the boundary term:
$$-\int_0^1 |\varphi'(x)|^2\, dx + \varphi'(x)\varphi(x)\Big|_0^1 = -\lambda\int_0^1 |\varphi(x)|^2\, dx,$$
where we used the integration by parts formula for the integral at the left-hand side. The boundary term $\varphi'(x)\varphi(x)\big|_0^1$ is equal to zero due to the given boundary conditions. This gives the following formula for $\lambda$:
$$\lambda = \frac{\int_0^1 |\varphi'(x)|^2\, dx}{\int_0^1 |\varphi(x)|^2\, dx} \geq 0.$$
If we were to assume $\lambda = 0$, we'd find that $\varphi(x)$ turns out to be a constant function. But considering the boundary conditions and continuity, we'd be left with the trivial function $\varphi(x) = 0$. Therefore, $\lambda$ must be greater than zero.

With this understanding, we proceed to solve the equation $\varphi'' = -\lambda\varphi$ as an ordinary differential equation. The general solution for $\varphi(x)$ can be expressed as:
$$\varphi(x) = A\cos(\sqrt{\lambda}\, x) + B\sin(\sqrt{\lambda}\, x).$$
Plugging in the boundary condition $\varphi(0) = 0$, we find that $A = 0$. Applying the second boundary condition, we determine that the eigenvalues $\lambda$ are given by $\lambda = n^2\pi^2$ for $n = 1, 2, \dots$.

For each eigenvalue $\lambda_n$, we obtain an associated eigenfunction $\varphi_n(x) = B_n\sin(n\pi x)$, where $B_n$ is a non-zero constant. This set of eigenvalues and eigenfunctions forms an infinite collection: $\varphi_1(x) = \sin(\pi x)$, $\lambda_1 = \pi^2$; $\varphi_2(x) = \sin(2\pi x)$, $\lambda_2 = 4\pi^2$; $\varphi_3(x) = \sin(3\pi x)$, $\lambda_3 = 9\pi^2$; and so on. These eigenfunctions provide the building blocks for solving more complex differential equations and understanding the behavior of solutions on the interval $(0,1)$.
Exercise 4.1. Let's explore another approach to demonstrate that $\lambda > 0$ in the previously discussed eigenvalue problem. This will involve direct calculations. Begin by considering the algebraic characteristic equation for the differential equation $\varphi'' = -\lambda\varphi$, which results in $r = \pm\sqrt{-\lambda}$.

Task 1: Suppose $\lambda < 0$. Proceed by establishing the general solution for this ordinary differential equation (ODE). Afterward, apply the given boundary conditions and deduce that the only possible outcome is $\varphi = 0$.

Task 2: Assume that $\lambda = 0$. Derive the general solution for this ODE and arrive at the conclusion that $\varphi = 0$ is again the only feasible solution.

Task 3: Now focus on the case where $\lambda > 0$. Determine the non-trivial eigenfunctions that correspond to these positive eigenvalues.
Exercise 4.2. Building upon the insights gained from the previous example, let's apply that knowledge to draw conclusions about the heat problem given as:
$$\begin{cases} u_t = u_{xx} & x\in(0,1)\\ u(0,t) = u(1,t) = 0 \end{cases}$$
Task: Utilize the outcomes from the previous example and deduce that this heat problem possesses an infinite number of solutions, denoted as $u_n(x,t) = e^{-n^2\pi^2 t}\sin(n\pi x)$ for any positive integer $n = 1, 2, \dots$. These solutions are often referred to as "separated solutions" due to their form.

Additionally, employing the superposition principle, you can express the general solution as:
$$u(x,t) = \sum_{n=1}^{\infty} c_n\, u_n(x,t).$$
Here, the constants $c_n$ can take any arbitrary values, provided that the infinite series converges to a valid function.
4.1.2 Separation of variables
The derivation of the eigenvalue problem discussed above can also be expressed in terms of the separation of variables technique. By taking $u(x,t)$ as the separated function:
$$u(x,t) = \varphi(x)\, U(t),$$
and substituting this into the heat equation, we arrive at:
$$\frac{U'}{U} = \frac{L[\varphi]}{\varphi},$$
where $L[\varphi]$ represents the operator applied to $\varphi(x)$. This equality is possible only when both ratios are equal to the same dimensionless constant. Let's denote this constant by $-\lambda$. The minus sign is used for historical reasons and does not carry a physical meaning, but it will become clear why this choice is more appropriate for subsequent calculations.

By introducing $-\lambda$ as the eigenvalue, we obtain the following ordinary differential equation for $U(t)$:
$$U' = -\lambda U,$$
which is solved as $U(t) = Ce^{-\lambda t}$, where $C$ is a constant. The eigenvalue $\lambda$ and eigenfunction $\varphi(x)$ are determined by solving the second ordinary differential equation:
$$L[\varphi] = -\lambda\varphi,$$
subject to the boundary conditions $\alpha\varphi + \beta\varphi' = 0$ at the boundary points.
The derivation holds true for general 1D partial differential equations. Consider the following second-order PDE for $u(x,t)$ on the interval $x\in[x_0, x_1]$:
$$\begin{cases} a_1(t)\, u_{tt} + a_2(t)\, u_t = L[u]\\ \alpha_1 u(x_0,t) + \beta_1 u_x(x_0,t) = 0\\ \alpha_2 u(x_1,t) + \beta_2 u_x(x_1,t) = 0 \end{cases}$$
where $a_1(t)$ and $a_2(t)$ are continuous functions. The equation is general enough to encompass all types of 1D heat and wave equations. To find a solution to this PDE, we consider the solution in the separated form $u(x,t) = U(t)\varphi(x)$. By substituting this separated function into the PDE, we obtain the equality
$$\frac{a_1(t)\,U'' + a_2(t)\,U'}{U} = \frac{L[\varphi]}{\varphi}.$$
Obviously, the equality holds only if each ratio is a dimensionless constant, since $x$ and $t$ are independent variables. Therefore, we can write
$$\frac{a_1(t)\,U'' + a_2(t)\,U'}{U} = \frac{L[\varphi]}{\varphi} = -\lambda,$$
where $\lambda$ is a constant. Here, the negative sign does not carry any significant physical meaning; it is merely a historical convention. Therefore, the separated form of the solution leads to the following ordinary differential equations:
1. For $U(t)$, we have the equation:
$$a_1(t)\, U'' + a_2(t)\, U' = -\lambda U.$$
2. For $\varphi(x)$, we have the eigenvalue problem:
$$\begin{cases} L[\varphi] = -\lambda\varphi & \text{on } \Omega\\ \alpha_1\varphi(x_0) + \beta_1\varphi'(x_0) = 0\\ \alpha_2\varphi(x_1) + \beta_2\varphi'(x_1) = 0 \end{cases}$$
4.1.3 Series solution
Superposition solutions hold value when the coefficients $c_n$ are carefully chosen to ensure that the infinite series converges into a smooth function $u(x,t)$. The convergence of infinite function series is a complex topic beyond this book's scope. Nevertheless, we provide illustrative examples to convey the concept of function series convergence.
Example 4.2. Consider the interval $(0,1)$, where we delve into the following infinite series:
$$\sum_{n=1}^{\infty} \frac{1}{n}\sin(n\pi x).$$
Upon observation, it's evident that this series doesn't converge absolutely within the interval $(0,1)$. A clear instance arises when $x = \frac{1}{2}$, resulting in the series becoming a numerical series:
$$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{2n-1}.$$
This numerical series exhibits conditional convergence, lacking absolute convergence. Now explore the series:
$$\sum_{n=1}^{\infty} \frac{1}{n^2}\sin(n\pi x).$$
We ascertain that this series demonstrates absolute convergence. This is evident from the inequality:
$$\left|\sum_{n=1}^{\infty} \frac{1}{n^2}\sin(n\pi x)\right| \leq \sum_{n=1}^{\infty} \frac{1}{n^2} < \infty.$$
The figure below depicts the truncated series of the above infinite series up to 100 terms.

[Figure: partial sum of $\sum_{n=1}^{\infty} n^{-2}\sin(n\pi x)$ with 100 terms on $(0,1)$.]

These examples underscore the intricate convergence behaviors of infinite series and their implications on the continuity of functions within a specified interval.
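A few lines of Matlab (an illustrative sketch added here, not from the original text) reproduce the truncated sum shown above.

% Partial sum of sum_{n>=1} sin(n*pi*x)/n^2 with 100 terms on (0,1)
x = 0:0.001:1;
N = 100;
S = zeros(size(x));
for n = 1:N
    S = S + sin(n*pi*x)/n^2;      % add the n-th term
end
plot(x, S), xlabel('x'), ylabel('partial sum')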
In the following sections, we will delve into other types of convergence. However, the discussed form of convergence remains fundamentally crucial for comprehending initial value problems. Consider the subsequent heat problem as an example:
$$\begin{cases} u_t = u_{xx} & x\in(0,1)\\ u(0,t) = u(1,t) = 0 \end{cases}$$
alongside the initial condition $u(x,0) = f(x)$. Given the general series solution for this heat problem as:
$$u(x,t) = \sum_{n=1}^{\infty} c_n\, e^{-\lambda_n t}\sin(n\pi x),$$
the significance of the convergence of this infinite series as $t$ approaches $0$ becomes evident. This convergence property implies:
$$\lim_{t\to 0} u(x,t) = \lim_{t\to 0}\sum_{n=1}^{\infty} c_n\, e^{-\lambda_n t}\sin(n\pi x) = \sum_{n=1}^{\infty} c_n\sin(n\pi x) = f(x).$$
Consequently, the coefficients $c_n$ must be chosen carefully, ensuring that:
$$\lim_{N\to\infty}\sum_{n=1}^{N} c_n\sin(n\pi x) = f(x).$$
As an illustration, consider the case where $f(x) = x(1-x)$. For this scenario, appropriate values for $c_n$ are determined as follows:
$$c_n = \begin{cases} \dfrac{8}{n^3\pi^3} & n \text{ odd}\\[4pt] 0 & n \text{ even} \end{cases}$$
With this specific selection of $c_n$, the series
$$\sum_{n=1}^{\infty} c_n\, e^{-\lambda_n t}\sin(n\pi x)$$
successfully converges to the initial condition as $t$ approaches $0$, aligning with the prescribed values of $c_n$.
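To see this convergence concretely, the sketch below (illustrative, with $\lambda_n = n^2\pi^2$ as in Exercise 4.2) plots the truncated series at $t = 0$ against $f(x) = x(1-x)$.

% Truncated series with c_n = 8/(n^3*pi^3) for odd n, compared with x(1-x) at t = 0
x = 0:0.001:1;
N = 25;                           % number of terms kept
u0 = zeros(size(x));
for n = 1:2:N                     % only odd n contribute
    u0 = u0 + 8/(n^3*pi^3)*sin(n*pi*x);
end
plot(x, x.*(1-x), x, u0, '--')
legend('x(1-x)', 'truncated series at t = 0')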
Exercise 4.3. Let $\Omega$ be the interval $(0,1)$.

a) Solve the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi & \text{on } \Omega\\ \varphi'(0) = \varphi'(1) = 0 & \text{on } \mathrm{bnd}(\Omega) \end{cases}$$
b) Use this eigenvalue problem and solve the following heat problem
$$\begin{cases} u_t = k u_{xx} & \text{on } \Omega\\ u_x(0,t) = u_x(1,t) = 0 \end{cases}$$
for $k > 0$.
Exercise 4.4. Let $\Omega$ be the interval $(0,1)$.

a) Solve the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi & \text{on } \Omega\\ \varphi(0) = \varphi'(1) = 0 & \text{on } \mathrm{bnd}(\Omega) \end{cases}$$
b) Use this eigenvalue problem and solve the following heat problem
$$\begin{cases} u_t = k u_{xx} & \text{on } \Omega\\ u(0,t) = u_x(1,t) = 0 \end{cases}$$
for $k > 0$.

c) Find the steady state solution
$$\lim_{t\to\infty} u(x,t).$$
Exercise 4.5. Let $\Omega$ be the interval $(0,1)$.

a) Solve the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi & \text{on } \Omega\\ \varphi'(0) = \varphi(1) = 0 & \text{on } \mathrm{bnd}(\Omega) \end{cases}$$
b) Use this eigenvalue problem and solve the following heat problem
$$\begin{cases} u_t = k u_{xx} & \text{on } \Omega\\ u_x(0,t) = u(1,t) = 0 \end{cases}$$
for $k > 0$.

c) Find the steady state solution
$$\lim_{t\to\infty} u(x,t).$$
Exercise 4.6. Let $\Omega$ be the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$.

a) Find eigenvalues and eigenfunctions of the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi\!\left(-\frac{\pi}{2}\right) = 0\\ \varphi\!\left(\frac{\pi}{2}\right) = 0 \end{cases}$$
b) Use these eigenfunctions and solve the following wave equation
$$\begin{cases} u_{tt} = c^2 u_{xx} & x\in\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)\\ u\!\left(-\frac{\pi}{2}, t\right) = u\!\left(\frac{\pi}{2}, t\right) = 0\\ u(x,0) = \cos(x) + \sin(2x)\\ u_t(x,0) = 0 \end{cases}$$
4.1.4 Problems
Problem 4.1. Find the eigenvalues and eigenfunctions of the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi\!\left(\frac{1}{2}\right) = \varphi\!\left(\frac{3}{4}\right) = 0 \end{cases}$$
Problem 4.2. Find the eigenvalues and eigenfunctions of the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(-1) = \varphi(1)\\ \varphi'(-1) = \varphi'(1) \end{cases}$$
Problem 4.3. Find the eigenvalues and eigenfunctions of the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi\!\left(-\frac{1}{2}\right) = 0\\ \varphi'\!\left(\frac{1}{2}\right) = 0 \end{cases}$$
Problem 4.4. Find the general series solution of the following wave equation
$$\begin{cases} u_{tt} = c^2 u_{xx} & x\in(0,1)\\ u(0,t) = u(1,t) = 0 \end{cases}$$
Problem 4.5. Consider the following damped wave equation
$$\begin{cases} u_{tt} + 2\xi u_t = c^2 u_{xx} & x\in(0,1)\\ u(0,t) = u(1,t) = 0 \end{cases}$$
a) For what value of the damping factor $\xi$ is the system in the underdamped state? The underdamped state is the state where the $t$-component of the solution is of the form
$$U(t) = e^{\sigma t}\left[A\cos(\omega t) + B\sin(\omega t)\right],$$
for some real parameters $\sigma$ and $\omega$.

b) Find the general series solution of the problem when the system is underdamped.
Problem 4.6. Consider the following eigenvalue problem on $x\in(1,e)$
$$\begin{cases} L[\varphi] = -\lambda\varphi\\ \varphi(1) = 0\\ \varphi(e) = 0 \end{cases}$$
where $L$ stands for the differential operator
$$L := x^2\frac{d^2}{dx^2} + x\frac{d}{dx}.$$
a) Use the transformation $x = e^s$ and convert the given eigenvalue problem to the following one
$$\begin{cases} \phi''(s) = -\lambda\phi(s)\\ \phi(0) = 0\\ \phi(1) = 0 \end{cases}$$
Solve the transformed problem and find eigenvalues and eigenfunctions of the operator $L$.

b) Use these results and solve the following problem
$$\begin{cases} u_t = x^2 u_{xx} + x u_x\\ u(1,t) = 0\\ u(e,t) = 0 \end{cases}$$
Problem 4.7. Consider the following eigenvalue problem on $x\in(0,1)$
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi(1) + \varphi'(1) = 0 \end{cases}$$
a) Prove that the problem is solvable for a non-trivial function $\varphi$ only if $\lambda > 0$.

b) Verify that there are infinitely many eigenvalues $\lambda_1 < \lambda_2 < \cdots$. Find the smallest eigenvalue $\lambda_1$ by a numerical method and the corresponding eigenfunction $\varphi_1(x)$.
Problem 4.8. Consider the following eigenvalue problem on $x\in(0,1)$
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi(1) - \varphi'(1) = 0 \end{cases}$$
a) Prove that the problem is solvable for a non-trivial function $\varphi$ only if $\lambda \geq 0$.

b) Verify that there are infinitely many eigenvalues $0 = \lambda_0 < \lambda_1 < \cdots$. What is the first eigenfunction $\varphi_0(x)$?
Problem 4.9. Consider the following eigenvalue problem on $x\in(0,1)$:
$$\begin{cases} \varphi'' + 2\varphi' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi(1) = 0 \end{cases}$$
Show that the eigenvalues of the problem are strictly greater than 1. The roots of the characteristic equation of the ordinary differential equation $\varphi'' + 2\varphi' + \lambda\varphi = 0$ are
$$r_{1,2} = -1 \pm \sqrt{1-\lambda}.$$
Give an argument why $\lambda > 1$ is the condition for the solvability of the eigenvalue problem for a non-trivial eigenfunction $\varphi(x)$.
Problem 4.10. Consider the eigenvalue problem
$$\begin{cases} \varphi'' + 2\varphi' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi(1) = 0 \end{cases}$$
a) Multiply the equation by $\varphi$ and integrate the equation over $[0,1]$. Use the boundary conditions and show
$$\int_0^1 \varphi'(x)\,\varphi(x)\, dx = 0.$$
b) Part a) gives the result
$$\lambda = \frac{\int_0^1 |\varphi'(x)|^2\, dx}{\int_0^1 |\varphi(x)|^2\, dx}.$$
Prove the inequality
$$\int_0^1 |\varphi(x)|^2\, dx < \int_0^1 |\varphi'(x)|^2\, dx,$$
if $\varphi(0) = \varphi(1) = 0$. Hint: write
$$\varphi(x) = \int_0^x \varphi'(s)\, ds,$$
and conclude
$$|\varphi(x)| \leq \int_0^1 |\varphi'(x)|\, dx.$$
Use the Cauchy-Schwarz inequality
$$\int_0^1 |\varphi'(x)|\, dx \leq \left(\int_0^1 |\varphi'(x)|^2\, dx\right)^{\!1/2},$$
and prove the claim.
4.2 Sturm-Liouville problem
As we observed above, the solution of a general linear 1D partial differential equation strongly relies on the solvability of the following eigenvalue problem
$$\begin{cases} L[\varphi] = -\lambda\varphi\\ \alpha_1\varphi(x_0) + \beta_1\varphi'(x_0) = 0\\ \alpha_2\varphi(x_1) + \beta_2\varphi'(x_1) = 0 \end{cases}$$
where $L$ is the differential operator (4.1).
While there is no general method to solve this problem, the Sturm-Liouville theorem provides valuable insights into its solutions. This theorem is a powerful tool in mathematical physics and plays a significant role in understanding the behavior of differential equations.

To gain a rigorous understanding of the assertions of the theorem, it is essential to familiarize ourselves with some advanced mathematical concepts. These include function spaces, the notion of inner product, and the concepts of convergence of function series or sequences.
4.2.1 Inner product
Recall the concept of the dot product between two vectors in $\mathbb{R}^n$. Extending this operation to functions, we define the inner product as a natural generalization. Let $f(x)$ and $g(x)$ be two real-valued functions defined on the interval $[x_0, x_1]$. The inner product $\langle f, g\rangle$ is defined as the integral
$$\langle f, g\rangle = \int_{x_0}^{x_1} f(x)\, g(x)\, dx. \tag{4.2}$$
The defined inner product exhibits the following properties:

1. Positivity: $\langle f, f\rangle \geq 0$, and $\langle f, f\rangle = 0$ only if $f \equiv 0$.
2. Commutativity: $\langle f, g\rangle = \langle g, f\rangle$.
3. Homogeneity: $\langle\lambda f, g\rangle = \lambda\langle f, g\rangle$ for any $\lambda\in\mathbb{R}$.
4. Additivity: $\langle f, g+h\rangle = \langle f, g\rangle + \langle f, h\rangle$, as long as the associated integrals exist.
To grasp the natural extension of the dot product, let us recall its definition between two arbitrary vectors $\vec{u} = (u_1, \dots, u_n)$ and $\vec{v} = (v_1, \dots, v_n)$, which is given by:
$$\vec{u}\cdot\vec{v} = \sum_{j=1}^{n} u_j v_j.$$
It is worth noting that the Riemann sum of the integral (4.2) can be expressed as:
$$\langle f, g\rangle = \lim_{n\to\infty}\sum_{j=1}^{n} f(x_j)\, g(x_j)\,\delta x_j.$$
Exercise 4.7. Another approach to defining the inner product, which we will utilize in our subsequent discussions, is known as the weighted inner product. In this case, we introduce a positive function $\sigma(x)$. The weighted product, denoted as $\langle f, g\rangle_\sigma$, is defined as:
$$\langle f, g\rangle_\sigma := \int_{x_0}^{x_1} f(x)\, g(x)\,\sigma(x)\, dx.$$
Verify that this weighted inner product also satisfies the properties of positivity, commutativity, homogeneity, and additivity, just like the previously defined inner product.
Now, we can define the notion of orthogonality between two functions using the inner product.

Definition 4.1. Two non-trivial real-valued functions $f$ and $g$ defined on the interval $[x_0, x_1]$ are called orthogonal if their inner product satisfies $\langle f, g\rangle = 0$.
Example 4.3. Consider the set of functions $\left\{\sin\!\left(\frac{n\pi}{L}x\right),\ x\in[0,L]\right\}$ for $n = 1, 2, 3, \dots$, which are defined on the interval $[0,L]$. It is evident that these functions are mutually orthogonal, meaning that their inner product satisfies:
$$\left\langle \sin\!\left(\tfrac{n\pi}{L}x\right), \sin\!\left(\tfrac{m\pi}{L}x\right)\right\rangle = \int_0^L \sin\!\left(\tfrac{n\pi}{L}x\right)\sin\!\left(\tfrac{m\pi}{L}x\right) dx = 0,$$
for $n \neq m$. However, when $n = m$, we have:
$$\left\|\sin\!\left(\tfrac{n\pi}{L}x\right)\right\|^2 = \int_0^L \sin^2\!\left(\tfrac{n\pi}{L}x\right) dx = \frac{L}{2}.$$
Similarly, the functions in the set $\left\{\cos\!\left(\frac{n\pi}{L}x\right),\ x\in[0,L]\right\}$ for $n = 0, 1, 2, 3, \dots$ are also mutually orthogonal. Additionally, on the symmetric interval $[-L,L]$ the inner product satisfies:
$$\left\langle\sin\!\left(\tfrac{n\pi}{L}x\right), \cos\!\left(\tfrac{m\pi}{L}x\right)\right\rangle = 0,$$
for all integer values of $n$ and $m$.
In addition to orthogonality, we use the inner product notion to define the magnitude of a function. Let $f(x)$ be a function defined on the interval $[x_0, x_1]$. The magnitude or norm of $f$, compatible with the inner product $\langle\,,\rangle$, is defined as $\|f\| = \sqrt{\langle f, f\rangle}$. In general, the norm of a function should satisfy the following properties:

1. $\|f\| \geq 0$, and if $\|f\| = 0$, then $f \equiv 0$.
2. $\|\lambda f\| = |\lambda|\,\|f\|$ for any $\lambda\in\mathbb{R}$.
3. $\|f + g\| \leq \|f\| + \|g\|$.
Exercise 4.8. Prove that the functions in the set
$$\{1, \cos(nx), \sin(nx)\}$$
defined on the interval $[-\pi,\pi]$ are mutually orthogonal:
$$\langle\cos(nx), \cos(mx)\rangle = 0, \quad n \neq m,$$
$$\langle\cos(nx), \sin(mx)\rangle = 0, \quad \forall n, m.$$
Find the norm of each function.

Exercise 4.9. Show that the functions in the set
$$\left\{\sin\!\left(\frac{2n-1}{2}\pi x\right)\right\}$$
are mutually orthogonal. What are the norms of these functions?
4.2.2 Approximating a function by orthogonal functions
The significance of the orthogonality condition in our subsequent discussions lies in the following fact: Assume that $f(x)$ is a function defined on the interval $[x_0, x_1]$. Furthermore, assume that the functions in the set $\{\varphi_n(x),\ x\in[x_0,x_1]\}$ are mutually orthogonal with respect to the inner product $\langle\,,\rangle$, and that $f(x)$ is approximated in terms of these functions as:
$$f(x) \approx c_1\varphi_1(x) + c_2\varphi_2(x) + \cdots,$$
for some undetermined coefficients $c_n$. The orthogonality condition enables us to determine these parameters as:
$$c_n = \frac{\langle f, \varphi_n\rangle}{\|\varphi_n\|^2}.$$
Note that for any fixed $j$, we have
$$\langle f, \varphi_j\rangle = c_1\langle\varphi_1, \varphi_j\rangle + \cdots + c_j\langle\varphi_j, \varphi_j\rangle + \cdots + c_n\langle\varphi_n, \varphi_j\rangle + \cdots = c_j\langle\varphi_j, \varphi_j\rangle,$$
and thus
$$c_j = \frac{\langle f, \varphi_j\rangle}{\|\varphi_j\|^2},$$
for any fixed $j$, where $\|\varphi_j\|^2 = \langle\varphi_j, \varphi_j\rangle$.
Example 4.4. The functions in the set $\{\sin(n\pi x)\}_{n=1}^{\infty}$ defined on the domain $x\in[0,1]$ are mutually orthogonal. We can now find the best approximation of the function $f(x) = x$, $x\in[0,1]$, in terms of the functions in the set $\{\sin(n\pi x)\}_{n=1}^{N}$ for some number $N \geq 1$. The best approximation function $\hat{f}(x)$ is given by
$$\hat{f}(x) = \sum_{n=1}^{N} \frac{\langle f, \sin(n\pi x)\rangle}{\|\sin(n\pi x)\|^2}\sin(n\pi x).$$
We have
$$\langle f, \sin(n\pi x)\rangle = \int_0^1 x\sin(n\pi x)\, dx = \frac{-\cos(n\pi)}{n\pi},$$
and
$$\|\sin(n\pi x)\|^2 := \langle\sin(n\pi x), \sin(n\pi x)\rangle = \int_0^1 |\sin(n\pi x)|^2\, dx = \frac{1}{2},$$
and then, the best approximation is obtained as
$$\hat{f}(x) = \sum_{n=1}^{N} \frac{-2\cos(n\pi)}{n\pi}\sin(n\pi x).$$
The figures below show the graph of $f(x)$ and $\hat{f}(x)$ for some values of $N$.

[Figure: $f(x)=x$ on $[0,1]$ and its sine-series approximation $\hat{f}(x)$ for several values of $N$.]
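The sketch below (an illustration added here) computes the coefficients above and plots $\hat{f}$ against $f$ for a chosen $N$.

% Best sine-series approximation of f(x) = x on [0,1] with N terms
N = 10;
x = 0:0.001:1;
fhat = zeros(size(x));
for n = 1:N
    cn   = -2*cos(n*pi)/(n*pi);     % <f, sin(n pi x)> / ||sin(n pi x)||^2
    fhat = fhat + cn*sin(n*pi*x);
end
plot(x, x, x, fhat, '--'), legend('f(x) = x', 'N-term approximation')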
Remark 4.2. (Fourier sine series) In the above example, we used the set of eigenfunctions $\varphi_n = \sin(n\pi x)$ of the eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = \varphi(1) = 0 \end{cases}$$
The series in terms of these eigenfunctions is known as the Fourier sine series. The figure below illustrates some of these orthogonal functions.

[Figure: the first few sine eigenfunctions $\sin(n\pi x)$ on $[0,1]$.]
Example 4.5. The functions in the set $\{\cos(n\pi x)\}_{n=0}^{\infty}$ are mutually orthogonal. The best approximation of the function $f(x) = x$ in the set $\{\cos(n\pi x)\}_{n=0}^{N}$ is given by
$$\hat{f}(x) = \sum_{n=0}^{N} c_n\cos(n\pi x).$$
For $n = 0$, we have
$$c_0 = \int_0^1 x\, dx = \frac{1}{2},$$
and for $n = 1, 2, \dots$, the coefficients $c_n$ are
$$c_n = 2\int_0^1 x\cos(n\pi x)\, dx = \frac{2(\cos(n\pi) - 1)}{n^2\pi^2}.$$
The figures below show the graph of $f(x) = x$ and its best approximation for some values of $N$.

[Figure: $f(x)=x$ on $[0,1]$ and its cosine-series approximation for several values of $N$.]
Remark 4.3. The functions $\varphi_n = \cos(n\pi x)$ that we have used to approximate the function $f(x)$ are the eigenfunctions of the eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi'(0) = \varphi'(1) = 0 \end{cases}$$
The series in terms of these eigenfunctions is known as the Fourier cosine series. The figure below illustrates some of these orthogonal functions.

[Figure: the first few cosine eigenfunctions $\cos(n\pi x)$ on $[0,1]$.]
Exercise 4.10. Express the function $f(x) = x$ defined on $x\in[0,1]$ in terms of the mutually orthogonal functions $\varphi_n = \sin\!\left(\frac{2n-1}{2}\pi x\right)$.

Exercise 4.11. Express the function $f(x) = x$ defined on $x\in[0,1]$ in terms of the mutually orthogonal functions $\varphi_n = \cos\!\left(\frac{2n-1}{2}\pi x\right)$.

Exercise 4.12. Express the function $f(x) = 1 + x$ defined on $x\in[-1,1]$ in terms of the mutually orthogonal functions $\varphi_n = \{1, \cos(n\pi x), \sin(n\pi x)\}$. These functions are the eigenfunctions of the eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(-1) = \varphi(1)\\ \varphi'(-1) = \varphi'(1) \end{cases}$$
The series in terms of these functions is known as the Fourier series.
4.2.3 Sturm-Liouville theorem
The theorem for the eigenvalue problem can be stated as follows:
Theorem 4.1. Consider the eigenvalue problem defined by
$$\begin{cases} a(x)\varphi'' + b(x)\varphi' + c(x)\varphi = -\lambda\varphi\\ \alpha_1\varphi(x_0) + \beta_1\varphi'(x_0) = 0\\ \alpha_2\varphi(x_1) + \beta_2\varphi'(x_1) = 0 \end{cases} \tag{4.3}$$
where $a$, $b$, and $c$ are continuous functions with $a(x) > 0$ in the interval $[x_0, x_1]$.

1. The problem has infinitely many real eigenvalues arranged in increasing order: $\lambda_1 < \lambda_2 < \lambda_3 < \cdots$. Furthermore, the eigenvalues diverge to infinity as $n$ tends to infinity:
$$\lim_{n\to\infty}\lambda_n = \infty.$$
2. For each eigenvalue $\lambda_n$, there exists a unique eigenfunction $\varphi_n(x)$ (up to a constant multiplication).

3. The eigenfunctions $\varphi_n(x)$ for $n = 1, 2, \dots$ are mutually orthogonal with respect to the weight function $\sigma(x)$ defined as
$$\sigma(x) = \frac{1}{a(x)}\, e^{\int\frac{b(x)}{a(x)}\, dx}.$$
This orthogonality is represented by the equation $\langle\varphi_n, \varphi_m\rangle_\sigma = 0$ for $n \neq m$.

4. The set of functions $\{\varphi_n(x)\}$ for $n = 1, 2, \dots$ forms a basis for piecewise continuously differentiable functions defined on the interval $[x_0, x_1]$. This implies that any piecewise continuously differentiable function $f(x)$ defined on $[x_0, x_1]$ can be approximated as
$$\hat{f}_N(x) \approx \sum_{n=1}^{N} \frac{\langle f, \varphi_n\rangle_\sigma}{\|\varphi_n\|_\sigma^2}\,\varphi_n(x)$$
in the following sense:
$$\lim_{N\to\infty} |f(x) - \hat{f}_N(x)| = 0,$$
as long as $x$ is a continuity point for $f$.
This theorem establishes the existence of eigenvalues and eigenfunctions, their orthogonality, and their role in approximating piecewise continuously differentiable functions.
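As a quick illustration of item 3 (a check added here, not part of the original text), take the operator $L[\varphi] = \varphi'' + 2\varphi'$, i.e. $a(x) = 1$ and $b(x) = 2$:
$$\sigma(x) = \frac{1}{a(x)}\, e^{\int\frac{b(x)}{a(x)}\, dx} = e^{\int 2\, dx} = e^{2x},$$
which is exactly the weight function used later for the eigenfunctions $e^{-x}\sin(n\pi x)$ of this operator.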
The result of this theorem is crucial in solving linear second-order PDEs. As an example, let's consider the equation:
$$\begin{cases} u_t = L[u]\\ \alpha_1 u(x_0,t) + \beta_1 u_x(x_0,t) = 0\\ \alpha_2 u(x_1,t) + \beta_2 u_x(x_1,t) = 0 \end{cases}$$
where $L$ is the differential operator (4.1). By assuming a separable solution of the form $u(x,t) = \varphi(x)e^{-\lambda t}$, we arrive at the eigenvalue problem (4.3). For each pair $(\lambda_n, \varphi_n)$, we obtain a solution $u_n(x,t) = e^{-\lambda_n t}\varphi_n(x)$, which is referred to as a separated solution.
Example 4.6. Consider the function $f(x)$ defined as:
$$f(x) = \begin{cases} 1 & \frac{1}{4} \leq x \leq \frac{3}{4}\\ 0 & \text{otherwise} \end{cases}$$
defined for $x\in(0,1)$. We can expand this function using different basis sets generated from various eigenvalue problems. The figure below illustrates the representation of this function using four sets of eigenfunctions, truncated to $N = 10$ terms.

[Figure: four truncated expansions of $f(x)$ with 10 terms each, using the four eigenfunction sets listed below.]

1. The first figure at the left is generated using the set of functions $\{\sin(n\pi x)\}_{n=1}^{10}$, which are eigenfunctions of the following problem:
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = \varphi(1) = 0 \end{cases}$$
The weight function for this set is $\sigma(x) = 1$.

2. The second figure is generated using the set of functions $\{\cos(n\pi x)\}_{n=1}^{10}$, which are eigenfunctions of the following problem:
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi'(0) = \varphi'(1) = 0 \end{cases}$$
The weight function for this set is $\sigma(x) = 1$.

3. The third figure is generated using the set of functions $\{\sin(z_n x)\}_{n=1}^{10}$, where the $z_n$ are the roots of the equation
$$\sin(z) + z\cos(z) = 0.$$
These functions are eigenfunctions of the eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi(1) + \varphi'(1) = 0 \end{cases}$$
The weight function for this set is $\sigma(x) = 1$.

4. The last figure is generated using the set of functions $\{e^{-x}\sin(n\pi x)\}_{n=1}^{10}$, which are eigenfunctions of the following problem:
$$\begin{cases} \varphi'' + 2\varphi' = -\lambda\varphi\\ \varphi(0) = \varphi(1) = 0 \end{cases}$$
The weight function for this set is $\sigma(x) = e^{2x}$.

It is worth noting that in all series representations of the function $f(x)$, the series converge to the average of the right and left limits of $f(x)$ at its discontinuity points located at $x = \frac{1}{4}$ and $x = \frac{3}{4}$. This convergence behavior is a characteristic feature of series expansions for functions with discontinuities.
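For instance, the third expansion can be computed numerically; the sketch below (an illustration added here, in the spirit of the code given later in Problem 4.20) finds the first ten roots $z_n$ with fzero and builds the 10-term expansion of $f$.

% 10-term expansion of the indicator of [1/4,3/4] in the basis sin(z_n*x),
% where z_n solves sin(z) + z*cos(z) = 0 (eigenvalue problem 3 above).
f = @(x) double(x >= 1/4 & x <= 3/4);
g = @(z) sin(z) + z.*cos(z);
N = 10;
z = zeros(N,1);
for n = 1:N
    z(n) = fzero(g, [(n-0.5)*pi, (n+0.5)*pi]);   % one root in each bracket
end
c = integral(@(x) f(x).*sin(z.*x), 0, 1, 'ArrayValued', true) ./ ...
    integral(@(x) sin(z.*x).^2,    0, 1, 'ArrayValued', true);
x = 0:0.005:1;
fhat = sum(c .* sin(z.*x), 1);                   % truncated series
plot(x, f(x), x, fhat)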
Exercise 4.13. Consider the following eigenvalue problem on $x\in(1,e)$
$$\begin{cases} x^2\varphi'' + x\varphi' = -\lambda\varphi\\ \varphi(1) = \varphi(e) = 0 \end{cases}$$
a) Determine the weight function $\sigma(x)$ that makes the eigenfunctions of the problem orthogonal.

b) The eigenfunctions of the problem are $\varphi_n(x) = \sin(n\pi\ln x)$. Verify by direct calculation that these eigenfunctions are orthogonal with respect to $\sigma(x)$.
Exercise 4.14. Consider the following eigenvalue problem
$$\begin{cases} \varphi'' + 2\varphi' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi'(1) = 0 \end{cases}$$
Determine the eigenfunctions of the problem and the weight function $\sigma(x)$ for the orthogonality of these eigenfunctions. Note that the first eigenvalue is $\lambda = 1$, and for $\lambda > 1$ you derive an equation of the form
$$z\cos(z) - \sin(z) = 0,$$
which has infinitely many roots.
4.2.4 Problems
Problem 4.11. There are alternative forms of the inner product beyond the one defined in equation (4.2). Let's consider the set of continuously differentiable functions defined on the interval $[x_0, x_1]$, denoted by $C^1(x_0, x_1)$. We can explore a different definition of the inner product, denoted as $\langle f, g\rangle$, which is given by:
$$\langle f, g\rangle = \int_{x_0}^{x_1} f(x)\, g(x)\, dx + \int_{x_0}^{x_1} f'(x)\, g'(x)\, dx.$$
As an exercise, show that this alternative definition satisfies the fundamental properties of positivity, commutativity, homogeneity, and additivity, just like the inner product defined in equation (4.2).
Problem 4.12. Let $T_2$ be the space of all $2\times 2$ matrices.

i. Show that the following operation is an inner product in $T_2$:
$$\langle A, B\rangle = \mathrm{tr}(A^t B).$$
ii. Generalize this fact for $T_n$, the space of all $n\times n$ matrices.

iii. Is the operation $\langle A, B\rangle = \mathrm{tr}(AB)$ an inner product in $T_n$?
Problem 4.13. Several theorems in plane geometry can be proved by the concept of dot product. Here are two such problems:

a) Prove the cosine law in triangles, i.e.,
$$c^2 = a^2 + b^2 - 2ab\cos(\theta),$$
where $\theta$ is the angle between sides $a$ and $b$.

b) Consider the triangle shown in the figure, where the side $d$ bisects the side $c$. Use vector operations to prove the Apollonius law
$$a^2 + b^2 = \frac{1}{2}c^2 + 2d^2.$$

[Figure: a triangle with sides $a$, $b$, $c$ and median $d$ to side $c$.]
Problem 4.14. Propose a norm for $T_2$, the set of $2\times 2$ matrices.

Exercise 4.15. (Cauchy inequality) Cauchy's inequality states the following:
$$\langle f, g\rangle \leq \|f\|\,\|g\|.$$
To prove this inequality, start by expanding the right-hand side of the following inequality, valid for any real number $\lambda$:
$$0 \leq \langle f + \lambda g, f + \lambda g\rangle.$$
Using this, we can then show the Cauchy inequality. Use the Cauchy inequality to prove the triangle inequality
$$\|f + g\| \leq \|f\| + \|g\|.$$
Problem 4.15. Prove the following version of the triangle inequality for an arbitrary positive number $\varepsilon > 0$:
$$\|f + g\| \leq \frac{1}{\varepsilon}\|f\| + \varepsilon\|g\|.$$
Problem 4.16. If $x_1, \dots, x_n$ are positive numbers, prove the following inequalities:

a) $n^2 \leq (x_1 + \cdots + x_n)\left(\dfrac{1}{x_1} + \cdots + \dfrac{1}{x_n}\right)$

b) $(x_1 + \cdots + x_n)^2 \leq n(x_1^2 + \cdots + x_n^2)$.
Problem 4.17. If the relation $\|\lambda_1 f + \lambda_2 g\| = \|\lambda_2 f + \lambda_1 g\|$ holds for all real numbers $\lambda_1, \lambda_2$, show that $\|f\| = \|g\|$.
Exercise 4.16. Let $C([x_0, x_1])$ be the set of all continuous functions defined on $[x_0, x_1]$.

a) Show that the operation $\|f\|_\infty$ defined as
$$\|f\|_\infty = \max_{x\in[x_0,x_1]} |f(x)|$$
is a norm for $f\in C([x_0, x_1])$. For this, you need to show that this operation satisfies all properties of a norm.

b) For any inner product, show that the following equality holds:
$$\|f + g\|^2 - \|f - g\|^2 = 4\langle f, g\rangle.$$
Use this to prove that there is no inner product with $\sqrt{\langle f, f\rangle} = \|f\|_\infty$.
Problem 4.18. Assume that $\{\varphi_n(x)\}$ for $n = 1, 2, \dots$ is a set of mutually orthogonal functions. The best approximation of a function $f(x)$ in the set $\{\varphi_n(x)\}_{n=1}^{N}$ is defined as
$$\hat{f}(x) = \sum_{n=1}^{N} c_n\varphi_n(x),$$
such that
$$\|f - \hat{f}\| \leq \|f - g\|,$$
for any other $g(x)$ constructed by a linear combination of functions in the set $\{\varphi_n(x)\}_{n=1}^{N}$. If the norm is compatible with the inner product $\langle\,,\rangle$, prove that $\hat{f}$ is the best approximation of $f$ if the $c_n$ are
$$c_n = \frac{\langle f, \varphi_n\rangle}{\langle\varphi_n, \varphi_n\rangle}.$$
Hint: Define $h = g - \hat{f}$, and use the inequality $\|f - g\| \geq 0$.
Problem 4.19. Let $\hat{f}$ be the best approximation of a function $f$ in terms of the mutually orthogonal functions in the set $\{\varphi_n\}_{n=1}^{N}$.

a) Show that $f - \hat{f}$ is orthogonal to all functions $\varphi_n$.

b) For any function $g$ of the form
$$g = \sum_{n=1}^{N} b_n\varphi_n,$$
for arbitrary $b_n$, show that $\langle f - \hat{f}, g\rangle = 0$.
Problem 4.20. As we observed in previous sections, the set of orthogonal functions $\sin\!\left(\frac{n\pi}{L}x\right)$ for $n = 1, 2, \dots$ and $x\in[0,L]$ are eigenfunctions of the eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = \varphi(L) = 0 \end{cases}$$
Similarly, the set of orthogonal functions $\cos\!\left(\frac{n\pi}{L}x\right)$ for $n = 0, 1, 2, \dots$ are generated by the eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi'(0) = \varphi'(L) = 0 \end{cases}$$
Consider the following eigenvalue problem:
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi(1) + \varphi'(1) = 0 \end{cases}$$
a) Use a numerical method and find the first six eigenvalues of the problem, $\lambda_1, \dots, \lambda_6$. You can verify that the eigenfunctions $\varphi_n(x) = \sin(\sqrt{\lambda_n}\, x)$ are mutually orthogonal for the values $\lambda_1, \dots, \lambda_6$.

b) Find the best approximation of the function $f(x) = \sin(\pi x)$ for $x\in[0,1]$ in terms of $\{\varphi_n(x)\}_{n=1}^{6}$. You can use the following code in Matlab:

g=@(s) sin(s)+s.*cos(s);
z=zeros(1,6);
for n=1:6
    z(n)=fzero(g,3*n);
end
C=integral(@(x) sin(pi*x).*sin(z(1:6)'.*x),0,1,'ArrayValued',true)./ ...
  integral(@(x) sin(z(1:6)'.*x).^2,0,1,'ArrayValued',true);
x=0:0.01:1;
fhat=sum(C.*sin(z(1:6)'.*x));
plot(x,sin(pi*x),x,fhat)

In the above code, the entries of z are equal to $\sqrt{\lambda_n}$.
Problem 4.21. In the book of ODEs, we saw that Legendre polynomials $P_n(x)$ are orthogonal functions in $[-1,1]$. Find the best approximation of $f(x) = \sin(\pi x)$, $x\in[-1,1]$, in the set $\{P_n(x)\}_{n=1}^{4}$. You can use the following code in Matlab to draw $f(x)$ and its best approximation in the same coordinate system. The code also provides the error
$$\|f - \hat{f}\| := \sqrt{\int_{-1}^{1} |\sin(\pi x) - \hat{f}(x)|^2\, dx}.$$

N=4;
C=integral(@(x) sin(pi*x).*legendreP(1:N,x),-1,1,'ArrayValued',true).*(2*(1:N)+1)/2;
x=-1:0.01:1;
fhat=0;
for i=1:N
    fhat=fhat+C(i).*legendreP(i,x);
end
plot(x,sin(pi*x),x,fhat);
err=sqrt(trapz(x,(sin(pi*x)-fhat).^2));
title(['error=',num2str(err)])
Problem 4.22. Consider the set of functions $\{\phi_0 = 1, \phi_1 = x, \phi_2 = x^2\}$ defined in the domain $[-1,1]$.

a) Find the best approximation of the function $f(x) = \sin(\pi x)$ in terms of $\{\phi_n\}_{n=0}^{2}$. It is worth noting that these functions are not mutually orthogonal. To solve this, let's write $\tilde{f}_2 = a_0 + a_1 x + a_2 x^2$ and minimize the norm $\|f - \tilde{f}_2\|$ by requiring that $f - \tilde{f}_2$ is orthogonal to the functions in the set $\{\phi_n\}$, i.e.,
$$\int_{-1}^{1}\left(\sin(\pi x) - a_0 - a_1 x - a_2 x^2\right)x^n\, dx = 0,$$
for $n = 0, 1, 2$.

b) Determine the square error defined by the following formula:
$$\int_{-1}^{1} |f(x) - \tilde{f}_2(x)|^2\, dx.$$
c) Repeat parts a) and b) for the set $\{\phi_0 = 1, \phi_1 = x, \phi_2 = x^2, \phi_3 = x^3\}$. As you can see, we need to recalculate all the coefficients. However, this is not the case for orthogonal functions. This is one of the reasons why orthogonality is highly valued in applied sciences.
4.3 From Sturm-Liouville problem to PDEs
4.3.1 Homogeneous 1D problems
To demonstrate the application of the Sturm-Liouville problem in solving partial differential equations, we consider the homogeneous heat equation given by:
$$\begin{cases} u_t = L[u] & x\in(x_0, x_1)\\ \alpha_1 u(x_0,t) + \beta_1 u_x(x_0,t) = 0\\ \alpha_2 u(x_1,t) + \beta_2 u_x(x_1,t) = 0\\ u(x,0) = f(x) \end{cases}$$
where $L$ represents a linear second-order differential operator in the variable $x$:
$$L[u] = a(x)\, u_{xx} + b(x)\, u_x + c(x)\, u.$$
The associated eigenvalue problem for the spatial variable $x$ is:
$$\begin{cases} L[\varphi(x)] = -\lambda\varphi(x)\\ \alpha_1\varphi(x_0) + \beta_1\varphi'(x_0) = 0\\ \alpha_2\varphi(x_1) + \beta_2\varphi'(x_1) = 0 \end{cases}$$
According to the Sturm-Liouville theorem, there exists an infinite set of eigenvalues $\lambda_1 < \lambda_2 < \cdots$ with corresponding eigenfunctions $\varphi_1(x), \varphi_2(x), \dots$, which are orthogonal with respect to the weight function $\sigma(x)$. Furthermore, the set $\{\varphi_n(x)\}$ forms a basis for piecewise continuously differentiable functions defined on the interval $(x_0, x_1)$. Therefore, the desired solution $u(x,t)$ can be expressed as a series in terms of this set:
$$u(x,t) = \sum_{n=1}^{\infty} U_n(t)\,\varphi_n(x),$$
where the $U_n(t)$ are coefficient functions that need to be determined appropriately for the series to be a valid solution to the problem. Substituting this series into the equation yields:
$$\sum_{n=1}^{\infty} U_n'(t)\,\varphi_n(x) = \sum_{n=1}^{\infty} U_n(t)\, L[\varphi_n(x)] = \sum_{n=1}^{\infty} -\lambda_n U_n(t)\,\varphi_n(x).$$
This equality leads to the following ordinary differential equation for $U_n(t)$:
$$U_n' + \lambda_n U_n = 0,$$
which can be solved to obtain the function
$$U_n(t) = C_n e^{-\lambda_n t}.$$
Finally, we arrive at the solution:
$$u(x,t) = \sum_{n=1}^{\infty} C_n e^{-\lambda_n t}\,\varphi_n(x).$$
To ensure that this series satisfies the initial condition at $t = 0$, we require:
$$\lim_{t\to 0} u(x,t) = u(x,0) = \sum_{n=1}^{\infty} C_n\varphi_n(x).$$
By the given initial condition, we obtain the following equation:
$$f(x) = \sum_{n=1}^{\infty} C_n\varphi_n(x),$$
where the orthogonality of the eigenfunctions $\varphi_n(x)$ determines $C_n$ as:
$$C_n = \frac{\langle f, \varphi_n\rangle_\sigma}{\|\varphi_n\|_\sigma^2} = \frac{\int_{x_0}^{x_1} f(x)\,\varphi_n(x)\,\sigma(x)\, dx}{\int_{x_0}^{x_1} \varphi_n^2(x)\,\sigma(x)\, dx}.$$
Finally, the solution of the given problem is:
$$u(x,t) = \sum_{n=1}^{\infty} \frac{\langle f, \varphi_n\rangle_\sigma}{\|\varphi_n\|_\sigma^2}\, e^{-\lambda_n t}\,\varphi_n(x).$$
This series solution, known as the eigenfunction series solution, converges to a smooth solution for $t > 0$ even if the initial condition is not continuous. This method can be employed to solve a wide range of 1D second-order PDEs in $t$ and $x$ for a two-variable function $u(x,t)$.
Example. We will solve the standard heat problem with the given conditions:
$$\begin{cases} u_t = k u_{xx} & x\in(0,1)\\ u(0,t) = 0\\ u(1,t) = 0\\ u(x,0) = f(x) \end{cases}$$
The associated eigenvalue problem is
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = \varphi(1) = 0 \end{cases}$$
Solving the eigenvalue problem, we obtain the eigenpairs $\lambda_n = n^2\pi^2$ and $\varphi_n = \sin(n\pi x)$. Using these eigenfunctions, the superposition solution can be expressed as:
$$u(x,t) = \sum_{n=1}^{\infty} c_n e^{-kn^2\pi^2 t}\sin(n\pi x).$$
The coefficients $c_n$ can be determined by solving the equation:
$$f(x) = \sum_{n=1}^{\infty} c_n\sin(n\pi x).$$
The orthogonality of the eigenfunctions $\varphi_n$ with respect to the weight function $\sigma = 1$ leads to the coefficients:
$$c_n = 2\int_0^1 f(x)\sin(n\pi x)\, dx.$$
Let's consider the specific initial condition given as:
$$f(x) = \begin{cases} 1 & \frac{1}{4} \leq x \leq \frac{3}{4}\\ 0 & \text{otherwise} \end{cases}$$
The coefficients $c_n$ can be calculated as:
$$c_n = \frac{2}{n\pi}\left(\cos\frac{n\pi}{4} - \cos\frac{3n\pi}{4}\right).$$
The figures below illustrate the initial condition and the solution at different instances of time for $k = 0.1$.

[Figure: the initial condition $f(x)$ and the series solution $u(x,t)$ at several times, $k = 0.1$.]
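The snapshots above can be reproduced with a short script; the sketch below (an illustration added here) sums the first 100 terms of the series at a chosen time $t$.

% Truncated series solution of u_t = k u_xx with the indicator initial condition
k = 0.1;  N = 100;
x = 0:0.005:1;
t = 0.05;                                    % pick a time instance
u = zeros(size(x));
for n = 1:N
    cn = 2/(n*pi)*(cos(n*pi/4) - cos(3*n*pi/4));
    u  = u + cn*exp(-k*n^2*pi^2*t)*sin(n*pi*x);
end
plot(x, u), xlabel('x'), title('u(x,t) at t = 0.05')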
Example 4.7. Let us solve the following parabolic equation
$$\begin{cases} \frac{1}{k}u_t = u_{xx} + 2u_x\\ u(0,t) = 0\\ u(1,t) = 0\\ u(x,0) = f(x) \end{cases}$$
for a constant $k > 0$. The associated eigenvalue problem for the given PDE is
$$\begin{cases} \varphi'' + 2\varphi' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi(1) = 0 \end{cases}$$
The characteristic equation of the ODE is
$$r^2 + 2r + \lambda = 0,$$
which has roots $r_{1,2} = -1 \pm \sqrt{1-\lambda}$. It is observed that the eigenvalue problem is solvable only for $\lambda > 1$. In this case, the solution $\varphi(x)$ is given by:
$$\varphi(x) = c_1 e^{-x}\cos(\sqrt{\lambda-1}\, x) + c_2 e^{-x}\sin(\sqrt{\lambda-1}\, x).$$
Applying the given boundary conditions, we find $c_1 = 0$ and $\lambda_n = 1 + n^2\pi^2$, with associated eigenfunctions $\varphi_n(x) = e^{-x}\sin(n\pi x)$, which are orthogonal with respect to the weight function $\sigma(x) = e^{2x}$. The superposition solution can be expressed as:
$$u(x,t) = \sum_{n=1}^{\infty} c_n e^{-(1+n^2\pi^2)kt}\, e^{-x}\sin(n\pi x).$$
To determine the coefficients $c_n$, we need to satisfy the initial condition $f(x)$. Thus, the series solution should satisfy the equation:
$$f(x) = \sum_{n=1}^{\infty} c_n e^{-x}\sin(n\pi x),$$
and the coefficients can be computed as:
$$c_n = 2\int_0^1 f(x)\, e^{-x}\sin(n\pi x)\, e^{2x}\, dx,$$
where the factor $e^{2x}$ represents the weight function $\sigma(x)$. The figures below depict the graph of the initial heat profile $f(x) = xe^{-x}$ and the solution $u(x,t)$ at different time values for $k = 0.1$.

[Figure: the initial profile $f(x)=xe^{-x}$ and the solution $u(x,t)$ at several times, $k = 0.1$.]
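For the particular profile $f(x) = xe^{-x}$, the weight cancels the exponentials (a quick check added here, using the integral computed in Example 4.4):
$$c_n = 2\int_0^1 xe^{-x}\cdot e^{-x}\sin(n\pi x)\, e^{2x}\, dx = 2\int_0^1 x\sin(n\pi x)\, dx = \frac{-2\cos(n\pi)}{n\pi} = \frac{2(-1)^{n+1}}{n\pi}.$$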
Example 4.8. Let's solve the damped wave equation
$$\begin{cases} u_{tt} + 2\xi u_t = u_{xx}\\ u(0,t) = 0\\ u_x(\pi,t) = 0 \end{cases}$$
where $\xi > 0$ is the damping factor. This equation represents an elastic string fixed at $x = 0$ and free at $x = \pi$.

To solve the problem, we first form the associated eigenvalue problem:
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi'(\pi) = 0 \end{cases}$$
The solutions of the eigenvalue problem are given by the pairs $\lambda_n = \frac{(2n-1)^2}{4}$ and $\varphi_n = \sin\!\left(\frac{2n-1}{2}x\right)$ for $n = 1, 2, \dots$. Since the set $\{\varphi_n(x)\}$ forms a basis for functions defined on $x\in[0,\pi]$, we can write the solution of the given PDE as:
$$u(x,t) = \sum_{n=1}^{\infty} U_n(t)\,\varphi_n(x),$$
where the $U_n(t)$ are undetermined coefficients. This series is known as the eigenfunction expansion of the unknown solution $u(x,t)$. To determine $U_n(t)$, we substitute the eigenfunction series into the PDE:
$$\sum_{n=1}^{\infty}\left[U_n''(t) + 2\xi U_n'(t)\right]\varphi_n(x) = \sum_{n=1}^{\infty} U_n(t)\,\varphi_n''(x),$$
and using the relation $\varphi_n'' = -\lambda_n\varphi_n$, we obtain the following ODE for $U_n(t)$:
$$U_n''(t) + 2\xi U_n'(t) + \lambda_n U_n(t) = 0.$$
Assuming $\xi < \sqrt{\lambda_1}$, the solution of the above ODE is given by:
$$U_n(t) = e^{-\xi t}\left[A_n\cos(\omega_n t) + B_n\sin(\omega_n t)\right],$$
where $\omega_n = \sqrt{\lambda_n - \xi^2}$. The superposition solution of the given PDE is:
$$u(x,t) = \sum_{n=1}^{\infty} e^{-\xi t}\left[A_n\cos(\omega_n t) + B_n\sin(\omega_n t)\right]\varphi_n(x).$$
The coefficients $A_n$ and $B_n$ are determined using the initial conditions $u(x,0)$ and $u_t(x,0)$ for the equation. The figures below show the wave for $u(x,0) = 0$, $u_t(x,0) = \sin\!\left(\frac{3}{2}x\right)$, and $\xi = 0.1$.

[Figure: snapshots of the damped wave $u(x,t)$ at several times.]
Exercise 4.17. Find the series solution of the following parabolic type PDE on the domain $[1,e]$
$$\begin{cases} u_t = x^2 u_{xx} + x u_x\\ u(1,t) = u(e,t) = 0\\ u(x,0) = \ln(x) \end{cases}$$
Exercise 4.18. We aim to solve the following wave equation
$$\begin{cases} u_{tt} = c^2 u_{xx}\\ u(0,t) = 0\\ u(1,t) + u_x(1,t) = 0\\ u(x,0) = \sin(\pi x)\\ u_t(x,0) = 0 \end{cases}$$
a) Determine the first 6 eigenvalues and eigenfunctions of the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi\\ \varphi(0) = 0\\ \varphi(1) + \varphi'(1) = 0 \end{cases}$$
b) Use the derived eigenfunctions to find a truncated series solution of the wave equation.
Exercise 4.19. Find the eigenvalues and eigenfunctions of the eigenvalue problem:
$$\begin{cases} \varphi'' + 2\varphi' = -\lambda\varphi & x\in(0,1)\\ \varphi(0) = 0\\ \varphi'(1) = 0 \end{cases}$$
Use the result and write the solution of the following heat equation
$$\begin{cases} u_t = u_{xx} + 2u_x & x\in(0,1)\\ u(0,t) = 0\\ u_x(1,t) = 0\\ u(x,0) = xe^{-x} \end{cases}$$
4.3.2 Non-homogeneous equations
In this section, we apply the eigenfunction expansion method to solve non-homogeneous linear second-order partial differential equations. To illustrate the method, let us consider the following heat problem:
$$\begin{cases} u_t = L[u] + h(x,t)\\ u(0,t) = u(1,t) = 0\\ u(x,0) = 0 \end{cases}$$
The associated eigenvalue problem for the given equation is
$$\begin{cases} L[\varphi] = -\lambda\varphi\\ \varphi(0) = \varphi(1) = 0 \end{cases}$$
Since the set of eigenfunctions $\{\varphi_n(x)\}$ forms a basis for functions defined on the interval $[0,1]$, we express the desired solution as a series:
$$u(x,t) = \sum_{n=1}^{\infty} U_n(t)\,\varphi_n(x).$$
Substituting this series into the heat equation results in:
$$\sum_{n=1}^{\infty}\left[U_n'(t) + \lambda_n U_n\right]\varphi_n(x) = h(x,t). \tag{4.4}$$
To proceed, we can represent the source term $h$ as a series in terms of $\{\varphi_n(x)\}$:
$$h(x,t) = \sum_{n=1}^{\infty} H_n(t)\,\varphi_n(x),$$
where $H_n(t) = \frac{\langle h, \varphi_n\rangle}{\|\varphi_n\|^2}$. Substituting the series representation of $h$ into the series equation (4.4), we obtain the following ordinary differential equation for $U_n(t)$:
$$U_n'(t) + \lambda_n U_n = H_n(t).$$
For example, let's consider $h(x,t) = e^{-t}\varphi_1(x)$. Expanding $h$ in terms of $\{\varphi_n\}$ yields $H_n(t) = 0$ for $n \neq 1$ and $H_1(t) = e^{-t}$. Therefore, we obtain the following initial value problems:
$$\begin{cases} U_1' + \lambda_1 U_1 = e^{-t}\\ U_1(0) = 0 \end{cases} \qquad \begin{cases} U_n' + \lambda_n U_n = 0\\ U_n(0) = 0 \end{cases} \quad n = 2, 3, \dots$$
The initial conditions for $U_n(t)$ are chosen to satisfy the zero initial condition $u(x,0) = 0$. Solving this system of equations, we find $U_n(t) = 0$ for $n = 2, 3, \dots$ and $U_1(t) = \frac{1}{\lambda_1-1}\left(e^{-t} - e^{-\lambda_1 t}\right)$, and finally the solution is given by:
$$u(x,t) = \frac{1}{\lambda_1-1}\left(e^{-t} - e^{-\lambda_1 t}\right)\varphi_1(x).$$
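For completeness, here is the integrating-factor step behind $U_1(t)$ (a short derivation added here, assuming $\lambda_1 \neq 1$):
$$\frac{d}{dt}\left(e^{\lambda_1 t}U_1\right) = e^{(\lambda_1-1)t} \;\Longrightarrow\; e^{\lambda_1 t}U_1(t) = \frac{e^{(\lambda_1-1)t} - 1}{\lambda_1-1} \;\Longrightarrow\; U_1(t) = \frac{e^{-t} - e^{-\lambda_1 t}}{\lambda_1-1},$$
where the constant of integration is fixed by $U_1(0) = 0$.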
Remark 4.4. (System interpretation) Let's examine the above example from a system perspective. We can view the heat equation as a dynamic system that responds to different input sources. In this case, we have a specific input source represented by $h = H_1(t)\sin(\pi x)$. The heat system can be redefined as:
$$u = (\partial_t - L)^{-1}\left[H_1(t)\,\varphi_1(x)\right].$$
Since $\varphi_1$ is an eigenfunction of the operator $L$, the response of the heat system to this input is of the general form $u = U_1(t)\varphi_1(x)$. This can be represented using a block diagram:

[Diagram: input $H_1(t)\varphi_1(x)$ fed into the heat system $(\partial_t - L)^{-1}$, producing the output $u = U_1(t)\varphi_1(x)$.]
Now, let's consider a more general source term $h(x,t) = h_i(t)\varphi_i(x) + h_j(t)\varphi_j(x)$. By applying the superposition principle, we can express the desired solution as:
$$u = U_i(t)\,\varphi_i(x) + U_j(t)\,\varphi_j(x).$$
This is depicted schematically below:

[Diagram: inputs $h_i(t)\varphi_i(x)$ and $h_j(t)\varphi_j(x)$ summed and fed into the heat system $(\partial_t - L)^{-1}$, producing $u = U_i(t)\varphi_i(x) + U_j(t)\varphi_j(x)$.]
This argument extends to arbitrary summations $h = \sum_j H_j(t)\,\varphi_j(x)$:

[Diagram: input $\sum_j H_j(t)\varphi_j(x)$ fed into the heat system $(\partial_t - L)^{-1}$, producing $u = \sum_j U_j(t)\varphi_j(x)$.]

Here each coefficient function $U_j(t)$ is determined by solving the associated ordinary differential equation:
$$U_j' + \lambda_j U_j = H_j(t).$$
Remark 4.5. (Geometric interpretation) Another perspective to consider is the geometric interpretation of the given partial differential equation. Each eigenfunction $\varphi_n(x)$ can be seen as a direction in an infinite-dimensional vector space spanned by the functions in the set $\{\varphi_n(x)\}$. Along each direction, the given partial differential equation reduces to an ordinary differential equation of the form:
$$U_n' + \lambda_n U_n = H_n(t).$$
In this geometric interpretation, we can view the partial differential equation as an infinite system of ordinary differential equations, where each equation is defined along the direction of $\varphi_j$. Drawing a parallel to our approach for solving first-order partial differential equations, we can observe similarities in the method. In both cases, we identify characteristic curves along which the given partial differential equation transforms into an ordinary differential equation. This observation highlights the underlying similarity between the two methods and reinforces the understanding of the eigenfunction expansion method.
Exercise 4.20. Find the series solution of the following heat problem
$$\begin{cases} u_t = u_{xx} + \sin(2\pi x)\\ u(0,t) = u(1,t) = 0\\ u(x,0) = 0 \end{cases}$$
Exercise 4.21. Find the solution of the following wave equation
$$\begin{cases} u_{tt} = u_{xx} + \sin(t)\sin\!\left(\frac{\pi}{2}x\right)\\ u(0,t) = u_x(1,t) = 0\\ u(x,0) = 0\\ u_t(x,0) = 0 \end{cases}$$
Exercise 4.22. Consider the following heat problem
$$\begin{cases} u_t = u_{xx} + e^{-t}f(x)\\ u(0,t) = u(1,t) = 0\\ u(x,0) = 0 \end{cases}$$
where $f(x)$ is the following function
$$f(x) = \sum_{n=1}^{5} \frac{-2(-1)^n}{n\pi}\sin(n\pi x).$$
a) Write the solution to the problem.

b) Suppose $f(x) = x$. Write down the solution to the equation. Hint: write down $f(x)$ as
$$f(x) = \sum_{n=1}^{\infty} \frac{-2(-1)^n}{n\pi}\sin(n\pi x),$$
and employ the same method you used for part a).
Exercise 4.23. Consider the following heat equation
$$\begin{cases} u_t = u_{xx}\\ u(0,t) = u(1,t) = 0\\ u(x,0) = f(x) \end{cases}$$
where $f(x) = \varphi_1(x)$, and $\varphi_n(x) = \sin(n\pi x)$ for $n = 1, 2, \dots$.

a) Since the heat system is triggered only by the function $\varphi_1(x)$, write the solution $u(x,t)$ as
$$u(x,t) = U(t)\,\varphi_1(x).$$
Determine $U(t)$.

b) Now, let $f(x)$ be the following function:
$$f(x) = \sum_{n=1}^{5} \frac{-2(-1)^n}{n\pi}\varphi_n(x).$$
Find the solution of the equation.

c) Now, let $f(x) = x + \sin(\pi x)$. Determine the solution to the equation.
4.3.3 Non-homogeneous boundary conditions
So far, our discussion has focused on problems with homogeneous boundary conditions. Now, let's consider a case where the boundary conditions are non-homogeneous. The following example will illustrate this scenario.
Example 4.9. Let's continue with the given equation:
$$\begin{cases} u_t = u_{xx} + 2u_x - e^{-2x}\\ u(0,t) = 1\\ u(1,t) = 1 - \frac{1}{2}e^{-2} \end{cases}$$
We consider the solution as $u(x,t) = V(x) + w(x,t)$, where $V(x)$ reflects the contribution of the time-independent sources and $w(x,t)$ satisfies the homogeneous equation. Substituting this solution into the heat equation gives:
$$w_t = V'' + 2V' + w_{xx} + 2w_x - e^{-2x}.$$
To keep $w$ as a homogeneous equation, we take $V(x)$ to satisfy the equation:
$$V'' + 2V' = e^{-2x}.$$
We transfer the non-zero boundary conditions to $V(x)$, resulting in the following boundary value problem for $V$:
$$\begin{cases} V'' + 2V' = e^{-2x}\\ V(0) = 1\\ V(1) = 1 - \frac{1}{2}e^{-2} \end{cases}$$
Solving this equation gives us the specific solution for $V(x)$:
$$V(x) = 1 - \frac{1}{2}xe^{-2x}.$$
For $w(x,t)$, we have the following homogeneous equation and boundary conditions:
$$\begin{cases} w_t = w_{xx} + 2w_x\\ w(0,t) = 0\\ w(1,t) = 0 \end{cases}$$
The general series solution for $w$ is given by:
$$w(x,t) = \sum_{n=1}^{\infty} C_n e^{-\lambda_n t}\,\varphi_n(x),$$
where $\varphi_n = e^{-x}\sin(n\pi x)$ and $\lambda_n = 1 + n^2\pi^2$. Finally, the general solution for the given problem is obtained as:
$$u(x,t) = 1 - \frac{1}{2}xe^{-2x} + \sum_{n=1}^{\infty} C_n e^{-\lambda_n t}\,\varphi_n(x).$$
The parameters $C_n$ can be determined by the initial condition of the problem. For example, if $u(x,0) = 0$, then we have:
$$\frac{1}{2}xe^{-2x} - 1 = \sum_{n=1}^{\infty} C_n\varphi_n(x).$$
Using the orthogonality of the eigenfunctions with respect to the weight function $\sigma = e^{2x}$, the coefficients $C_n$ are determined as:
$$C_n = -\frac{\langle V, \varphi_n\rangle_{e^{2x}}}{\|\varphi_n\|_{e^{2x}}^2}.$$
Finally, we obtain the solution as:
$$u(x,t) = 1 - \frac{1}{2}xe^{-2x} - \sum_{n=1}^{\infty} \frac{\langle V, \varphi_n\rangle_{e^{2x}}}{\|\varphi_n\|_{e^{2x}}^2}\, e^{-\lambda_n t}\,\varphi_n(x).$$
In general, when the equation or its boundary conditions are functions independent of time, it can be beneficial to split the solution into two terms: 1) a pure function of $x$ denoted as $V(x)$, and 2) a function of both $x$ and $t$ denoted as $w(x,t)$. The desired solution can then be written as the sum of these two terms: $u(x,t) = V(x) + w(x,t)$.

The function $V(x)$ incorporates all the non-homogeneous terms that are independent of time, while $w(x,t)$ satisfies the zero boundary conditions. This decomposition allows us to separate the time-independent part from the time-dependent part of the solution, making the problem more manageable.
Example 4.10. Let's solve the following equation
$$\begin{cases} u_t = u_{xx} - 6x + e^{-t}\sin(\pi x)\\ u(0,t) = 0,\quad u(1,t) = 1\\ u(x,0) = x^3 \end{cases}$$
The boundary condition is non-homogeneous. We take $u(x,t)$ as the sum of two terms:
$$u(x,t) = V(x) + w(x,t).$$
By substituting $u$ into the differential equation, we obtain:
$$w_t = V'' + w_{xx} - 6x + e^{-t}\sin(\pi x),$$
and thus the equation for $V(x)$ is:
$$\begin{cases} V'' = 6x\\ V(0) = 0,\quad V(1) = 1 \end{cases}$$
The differential equation for $V(x)$ is solved and we find $V(x) = x^3$. The equation for $w(x,t)$ becomes:
$$\begin{cases} w_t = w_{xx} + e^{-t}\sin(\pi x)\\ w(0,t) = w(1,t) = 0\\ w(x,0) = 0 \end{cases}$$
We observe that $w(x,t)$ is triggered only by the term $h(x,t) = e^{-t}\varphi_1(x)$, where $\varphi_n(x) = \sin(n\pi x)$. Therefore, we can write the solution as:
$$w(x,t) = W(t)\sin(\pi x),$$
where $W(t)$ is an undetermined function. Substituting this into the equation for $w(x,t)$, we obtain the following first-order ODE for $W(t)$:
$$W' + \pi^2 W = e^{-t}.$$
The solution to this ODE is:
$$W(t) = Ce^{-\pi^2 t} + \frac{1}{\pi^2-1}e^{-t}.$$
The initial condition $w(x,0) = 0$ implies $W(0) = 0$, which determines $C = -\frac{1}{\pi^2-1}$. Therefore, the solution $u(x,t)$ is given by:
$$u(x,t) = x^3 + \frac{1}{\pi^2-1}\left(-e^{-\pi^2 t} + e^{-t}\right)\sin(\pi x).$$
It is observed that the derived solution satisfies the partial differential equation, as well as the given boundary and initial conditions. Additionally, we note that:
$$\lim_{t\to\infty} u(x,t) = x^3 = V(x),$$
and thus the function $V(x)$ is also referred to as the steady-state solution in this case. The function $w(x,t)$ vanishes with time and is called the transient solution.
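The approach to the steady state can be visualized directly from the closed form; the sketch below (an illustration added here) plots $u(x,t)$ at a few times together with $V(x) = x^3$.

% Closed-form solution of Example 4.10 at several times, versus the steady state x^3
x = 0:0.01:1;
u = @(x,t) x.^3 + (exp(-t) - exp(-pi^2*t))/(pi^2 - 1).*sin(pi*x);
plot(x, u(x,0), x, u(x,0.2), x, u(x,1), x, x.^3, '--')
legend('t = 0', 't = 0.2', 't = 1', 'steady state x^3')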
Exercise 4.24. Solve the following heat problem
$$\begin{cases} u_t = u_{xx} - x\\ u(0,t) = -1,\quad u_x(\pi,t) = 0\\ u(x,0) = \frac{1}{6}x^3 \end{cases}$$
Exercise 4.25. Solve the following wave problem
$$\begin{cases} u_{tt} = u_{xx}\\ u(0,t) = 1\\ u_x(\pi,t) = 0\\ u(x,0) = 0\\ u_t(x,0) = 0 \end{cases}$$
4.3.4 Problems
Problem 4.23. Let $\Omega$ be the domain $\Omega := (0,\pi)$. We aim to solve the following problem:
$$\begin{cases} u_{tt} = u_{xx} + 2u_x + \sin(t)\, e^{-x}\sin\!\left(\frac{x}{2}\right) & \text{on } \Omega\\ u(0,t) = 0\\ u_x(\pi,t) + u(\pi,t) = 0 \end{cases}$$
a) Find the eigenvalues $\lambda$ and their associated eigenfunctions $\varphi$ of the following eigenvalue problem:
$$\begin{cases} \varphi'' + 2\varphi' = -\lambda\varphi\\ \varphi(0) = 0,\quad \varphi'(\pi) + \varphi(\pi) = 0 \end{cases}$$
You can assume that $\lambda > 1$. Determine the weight function for the orthogonality of the eigenfunctions too.

b) Use the results from part a) to find the solution of the given wave problem when the initial conditions are given by $u(x,0) = 0$, $u_t(x,0) = 0$.
Problem 4.24. Find the eigenvalues and eigenfunctions of the following eigenvalue problem
$$\begin{cases} \varphi'' = -\lambda\varphi & x\in(-1,1)\\ \varphi(-1) = 0\\ \varphi(1) = 0 \end{cases}$$
Use the result and find the series solution of the following heat problem
$$\begin{cases} u_t = u_{xx}\\ u(-1,t) = 0\\ u(1,t) = 0\\ u(x,0) = x^2 \end{cases}$$
Problem 4.25. Consider the following heat equation
$$\begin{cases} u_t = \partial_x\!\left[(1+x)^2 u_x\right]\\ u(0,t) = 0\\ u_x(1,t) = 0 \end{cases}$$
Find the form of the eigenfunctions and use a numerical method to determine the first five eigenvalues and associated eigenfunctions. Use these values and write down the series solution of the heat equation. Find the first five coefficients of the series solution if $u(x,0)$ is given as
$$u(x,0) = \frac{100}{1+x}.$$
Problem 4.26. Consider the following heat equation
$$\begin{cases} u_t = x^2 u_{xx} + 3xu_x + u\\ u(1,t) = 0,\quad u(e,t) + u_x(e,t) = 0\\ u(x,0) = x \end{cases}$$
a) Use a numerical method to find the first 3 eigenvalues of the associated eigenvalue problem.

b) Use these eigenfunctions and write down an approximate series solution to the heat equation.
Problem 4.27. Consider the wave equation:
$$\begin{cases} u_{tt} = c^2 u_{xx}\\ u(0,t) = u(1,t) = 0\\ u(x,0) = f(x)\\ u_t(x,0) = 0 \end{cases}$$
a) Write the series solution of the equation.

b) Use the trigonometric identity for the terms $\cos(cn\pi t)\sin(n\pi x)$ and conclude that the solution can be written as
$$u(x,t) = \frac{f_{\mathrm{odd}}(x - ct) + f_{\mathrm{odd}}(x + ct)}{2}.$$
Problem 4.28. (Fourier series) Consider the following eigenvalue problem
$$\begin{cases} \varphi''(\theta) = -\lambda\varphi(\theta) & \theta\in(-\pi,\pi)\\ \varphi(-\pi) = \varphi(\pi)\\ \varphi'(-\pi) = \varphi'(\pi) \end{cases}$$
The physics of the problem is a circle, and $\varphi(\theta)$ is a function defined on this circle. The boundary condition of the problem is not a Robin condition, so we are not allowed to use the result of the Sturm-Liouville theorem.

a) Show that the eigenvalues of the problem are $\lambda_n = n^2$, $n = 0, 1, 2, \dots$, and the eigenfunctions are $\{\cos(n\theta)\}_{n=0}^{\infty}$ and $\{\sin(n\theta)\}_{n=1}^{\infty}$. These functions are known as the Fourier eigenfunctions.

b) It is known that the eigenfunctions form an orthogonal basis for piecewise continuously differentiable functions defined on $\theta\in[-\pi,\pi]$. Use the orthogonality condition to find an expansion for the function
$$f(\theta) = \begin{cases} 1 & -\pi < \theta < 0\\ 0 & 0 < \theta < \pi \end{cases}$$
You can use the following code in Matlab to see the graph of $f(\theta)$ and its approximation by a truncated series.

%length
L=pi;
%function
f=@(x) 1*(x>-L & x<0)+0*(x>0 & x<L);
%number of terms
n=10;
%the coefficients of the Fourier series
a0=integral(@(x) f(x),-L,L)/(2*L);
a=integral(@(x) f(x).*cos((1:n)'.*x*pi/L),-L,L,'ArrayValued',true)/L;
b=integral(@(x) f(x).*sin((1:n)'.*x*pi/L),-L,L,'ArrayValued',true)/L;
%making fhat
x=-L:0.01:L;
fhat=a0+sum(a.*cos((1:n)'*x*pi/L))+sum(b.*sin((1:n)'*x*pi/L));
%plotting functions
plot(x,f(x),x,fhat);

c) Use the above result and solve the following wave equation for $u(\theta,t)$ on the unit circle
$$\begin{cases} u_{tt} = u_{\theta\theta} & -\pi < \theta < \pi\\ u(-\pi,t) = u(\pi,t)\\ u_\theta(-\pi,t) = u_\theta(\pi,t) \end{cases}$$
The figures below depict the graph of the solution at some instances of time for the initial conditions
$$u(\theta,0) = \begin{cases} \left(\frac{\pi}{12} - \theta\right)\left(\theta + \frac{\pi}{12}\right) & -\frac{\pi}{12} < \theta < \frac{\pi}{12}\\ 0 & \text{otherwise} \end{cases}$$
and $u_t(\theta,0) = 0$.
Problem 4.29. Solve the following wave problem
$$\begin{cases} u_{tt} = u_{xx} + 2u_x + u + tx\\ u(0,t) = u(\pi,t) = 0\\ u(x,0) = x,\quad u_t(x,0) = 0 \end{cases}$$
Problem 4.30. We know that the functions φ_n(x) = x sin(nπx) for n = 1; 2; ··· solve the following eigenvalue problem:
    x² φ'' − 2x φ' + 2φ = −λ x² φ
    φ(1) = φ(2) = 0
a) Determine the associated eigenvalues λ_n.
b) Write down the series solution of the following problem
    u_t = u_xx − (2/x) u_x + (2/x²) u
    u(1; t) = u(2; t) = 0
    u(x; 0) = sin(πx)
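A brief numerical sketch of the expansion in part b) is given below. Converting the eigenvalue problem of part a) to Sturm-Liouville form gives the weight σ(x) = 1/x², so that the weighted inner product of the eigenfunctions reduces to the usual sine orthogonality on (1; 2); the eigenvalues λ_n = (nπ)² are what part a) asks you to verify. Both of these are assumptions of the sketch, to be checked by the reader.
% Sketch: expansion of u(x,0) = sin(pi*x) in phi_n(x) = x*sin(n*pi*x) on (1,2),
% assuming the weight sigma(x) = 1/x^2 and eigenvalues lambda_n = (n*pi)^2.
n = (1:10)';
c = 2*integral(@(x) sin(pi*x).*sin(n*pi*x)./x, 1, 2, 'ArrayValued', true);
u = @(x,t) sum(c .* exp(-(n*pi).^2*t) .* (x .* sin(n*pi*x)), 1);
x = linspace(1, 2, 200);
plot(x, u(x,0), x, sin(pi*x), '--')   % at t = 0 the series should recover sin(pi*x)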
Problem 4.31. Consider the following problem
    u_t = u_xx
    u(0; t) = 0,   u(π; t) = t
The boundary condition at x = π is a function of time t.
a) Take u as: u(x; t) = V (x; t) + w(x; t). Substituting this into the equation results in:
    w_t + V_t = V_xx + w_xx.
Assume that V satisfies the following boundary value problem
    V_xx = 0
    V(0; t) = 0
    V(π; t) = t
Determine the function V(x; t).
b) Substituting this function into the equation for w yields:
    w_t + x/π = w_xx
    w(0; t) = 0
    w(π; t) = 0
Solve this equation for w and find the solution to the problem if u(x; 0) = 0.
Problem 4.32. Consider the following wave equation
    u_tt = u_xx + (1/4) u
    u(0; t) = 0,   u(π; t) = sin(t)
a) Take u as: u(x; t) = V (x; t) + w(x; t). Substituting this into the equation results in:
    w_tt + V_tt = V_xx + (1/4) V + w_xx + (1/4) w.
Assume that V satisfies the following boundary value problem
    V_xx + (1/4) V = 0
    V(0; t) = 0
    V(π; t) = sin(t)
Determine the function V (x; t).
b) Substituting this function into the equation for w yields:
    w_tt − sin(t) sin(x/2) = w_xx + (1/4) w
    w(0; t) = 0
    w(π; t) = 0
Solve this equation for w and find the solution to the problem if u(x; 0) = 0.
Problem 4.33. Solve the following heat problem
    u_t = x² u_xx + 4u
    u(1; t) = −1,   u(e; t) = 1
    u(x; 0) = x
Problem 4.34. Solve the following heat problem:
    u_t = u_xx + 2u_x
    u(0; t) = 0,   u(1; t) = 1
    u(x; 0) = x
Problem 4.35. Solve the following equation
    u_t = ∂_x[(1 + x)² u_x] − 2
    u(0; t) = 0,   u_x(1; t) = 1
    u(x; 0) = 102 ln(1 + x)
Problem 4.36. Solve the following equation
    u_t = ∂_x[(1 + x)² u_x] + t/√(1 + x)
    u(0; t) = 0,   u_x(1; t) = 1
    u(x; 0) = 4x/(1 + x)
Problem 4.37. Solve the following equation
    u_t = u_xx + 2u_x + e^{−t} e^{−x} sin(πx)
    u(0; t) = u(1; t) = 0
    u(x; 0) = 0
Problem 4.38. Consider the following problem
    u_t = u_xx
    u(0; t) = u(1; t) = 0
    u(x; 0) = f(x)
Show that the solution to the problem is equal to the solution to the following problem
    u_t = u_xx + δ(t) f(x)
    u(0; t) = u(1; t) = 0
    u(x; 0) = 0,
where δ(t) is the Dirac delta function.
Problem 4.39. Consider the following problem
    u_t = u_xx + h(t) f(x)
    u(0; t) = u(1; t) = 0
    u(x; 0) = 0,
where h(t) = 0 for t ≤ 0. Show that the series solution of the problem is
    u(x; t) = Σ_{n=1}^∞ F_n (h(t) ∗ e^{−λ_n t}) φ_n(x),
where φ_n = sin(nπx) are the eigenfunctions of the associated eigenvalue problem, λ_n are the associated
eigenvalues, F_n are the coefficients of the expansion of f(x) in terms of the functions in {φ_n}, and
h(t) ∗ e^{−λ_n t} is the convolution of e^{−λ_n t} and h(t), defined as
    h(t) ∗ e^{−λ_n t} = ∫_0^t h(τ) e^{−λ_n (t−τ)} dτ.
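A small numerical check of this formula is sketched below for the illustrative choices h(t) = 1 for t > 0 and f(x) = x(1 − x); these functions are not part of the problem statement. With h ≡ 1 the convolution can be evaluated in closed form, h ∗ e^{−λ_n t} = (1 − e^{−λ_n t})/λ_n.
% Numerical sketch of the convolution series with h(t) = 1 and f(x) = x(1-x).
N   = 20;  n = (1:N)';  lam = (n*pi).^2;
Fn  = 2*integral(@(x) x.*(1-x).*sin(n*pi*x), 0, 1, 'ArrayValued', true);
t   = 0.1;
x   = linspace(0, 1, 200);
cvn = (1 - exp(-lam*t))./lam;            % h * exp(-lam*t) for h = 1
u   = sum(Fn .* cvn .* sin(n*pi*x), 1);  % truncated series solution at time t
plot(x, u)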
Problem 4.40. Consider the following problem
    u_tt = u_xx
    u(0; t) = u(1; t) = 0
    u(x; 0) = 0,   u_t(x; 0) = g(x)
Show that the solution to the problem is equal to the solution to the following problem
    u_tt = u_xx + δ(t) g(x)
    u(0; t) = u(1; t) = 0
    u(x; 0) = 0,   u_t(x; 0) = 0,
where δ(t) is the Dirac delta function.
Problem 4.41. Solve the following damped wave equation
    u_tt + 2u_t = u_xx + sin(x)
    u(0; t) = u(1; t) = 0
    u(x; 0) = 0,   u_t(x; 0) = 1
Problem 4.42. Consider the following equation
    u_t = u_xx + 2u_x
    u(0; t) = 0
    u_x(1; t) = 0
a) The associated eigenvalue problem is
    φ'' + 2φ' = −λφ
    φ(0) = 0,   φ'(1) = 0
Use a numerical method to find the first five eigenvalues and the associated eigenfunctions.
b) Use this set to approximate the solution of the heat problem if u(x; 0) = 1.
Problem 4.43. Consider the following problem
    u_t = u_xx + u
    u(0; t) = 0
    u(π; t) = 1
Let us take u(x; t) = V(x) + w(x; t) and derive the following equation for V:
    V'' + V = 0
    V(0) = 0,   V(π) = 1
Show that this equation is not solvable. What is your suggestion for solving the given heat problem?
4.4 Theoretical aspects of the method
The method we introduced above is known as the eigenfunction expansion method, which
is based on expanding the desired solution in terms of the eigenfunctions of an eigenvalue
problem.
4.4.1 Function spaces and convergence
In linear algebra, we learn that a set of vectors can form a vector space when equipped with
the operations of vector addition (+) and scalar multiplication. If the set is closed under these
operations, it creates a vector space that can serve as the domain or range of linear mappings.
Similarly, in the context of differential operators, the domain and range are vector func-
tion spaces. Consider two functions, f and g, defined on the same domain (x_0; x_1). The
vector addition of f and g is defined as follows:
    (f + g)(x) = f(x) + g(x),
for all x ∈ (x_0; x_1). Similarly, the scalar multiplication λf is defined as:
    (λf)(x) = λ f(x).
These operations maintain closure within the function space.
One important class of function spaces is the C^r spaces. Recall that a function f(x)
is continuously differentiable in the interval (x_0; x_1) if its derivative f'(x) is a continuous
function in the same interval. We denote this as f ∈ C¹(x_0; x_1). A function f is said to be
continuously differentiable of order r in the interval (x_0; x_1), denoted as C^r(x_0; x_1), if its k-th
derivative f^(k)(x) is in the vector space C(x_0; x_1) of continuous functions, for all k up to r.
Exercise 4.26. Consider the function
    f(x) = x sin(1/x) for x ≠ 0, and f(0) = 0.
Show that the function is not differentiable at x = 0. The function
    f(x) = x² sin(1/x) for x ≠ 0, and f(0) = 0
is differentiable at x = 0 but not in C¹(−1; 1).
Indeed, function spaces, such as the space spanned by {sin(nπx)} for n = 1; 2; ..., are not
of finite dimension. They contain infinitely many linearly independent functions (which are orthogonal in
this case), making it tricky to define convergence in such spaces. The meaning of a series like
    f(x) = Σ_{n=1}^∞ c_n φ_n(x)
therefore depends on the notion of convergence that we adopt.
One approach is to define the norm of a function f ∈ C^r(x_0; x_1) as:
    ‖f‖ = ( Σ_{k=0}^r ∫_{x_0}^{x_1} |f^(k)(x)|² dx )^{1/2},
so that the norm measures the square root of the sum of the squared derivatives. Convergence of
the series in this norm would mean:
    lim_{N→∞} ‖ f(x) − Σ_{n=1}^N c_n φ_n(x) ‖ = 0.
Another notion of norm for a function f ∈ C^r(x_0; x_1), denoted by ‖f‖_∞, can be defined as:
    ‖f‖_∞ = Σ_{k=0}^r sup_{x∈(x_0; x_1)} |f^(k)(x)|,
which measures the supremum (or maximum) values of the derivatives. Consequently, the
notion of convergence changes based on this new definition.
In this book, the focus is mainly on pointwise convergence, which means that for any
ξ ∈ (x_0; x_1) and any ε > 0, there exists an N = N(ξ, ε) such that:
    | f(ξ) − Σ_{n=1}^N c_n φ_n(ξ) | < ε.
This means that the series converges to the function f(x) at each point ξ within the interval
(x_0; x_1) with an arbitrarily small error ε.
Exercise 4.27. Show that the sequence of functions f_n(x) = x^n for x ∈ [0; 1] and n = 1; 2; ··· converges
pointwise to the function
    f(x) = 1 for x = 1, and 0 for 0 ≤ x < 1.
However, if we define the norm
    ‖u‖_∞ = sup_{x∈[0;1]} |u(x)|,
the sequence f_n(x) does not converge to f(x) in this norm. Show that if we change the norm to
    ‖u‖ = ( ∫_0^1 |u(x)|² dx )^{1/2},
the sequence f_n(x) converges to f(x) in this norm.
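The two notions of convergence in this exercise can also be seen numerically; the short sketch below (an illustration, not part of the exercise) computes both norms of the difference f_n − f for a few values of n.
% f_n(x) = x^n on [0,1]: sup-norm vs. L^2-norm of the difference from the limit.
n = [1 5 20 100];
supnorm = ones(size(n));                 % sup over 0 <= x < 1 of x^n is always 1
L2norm  = arrayfun(@(k) sqrt(integral(@(x) x.^(2*k), 0, 1)), n);   % = 1/sqrt(2k+1)
disp([n' supnorm' L2norm'])              % the L^2 column tends to 0, the sup column does not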
Exercise 4.28. Consider the sequence of functions
    f_n(x) = √n for 0 < x < 1/n, and 0 for 1/n ≤ x < 1.
Show that f_n(x) converges pointwise to the zero function f(x) = 0 for x ∈ (0; 1). However, f_n(x) does
not converge to the zero function in the norm compatible with the inner product.
Exercise 4.29. Assume φ_n(x) are mutually orthogonal functions and the sequence of functions
    f_n(x) = Σ_{k=1}^n (⟨f; φ_k⟩ / ‖φ_k‖²) φ_k
converges in norm to a function f(x), where the norm is compatible with the inner product ⟨·, ·⟩. Show
the equality
    ‖f‖² = lim_{n→∞} Σ_{k=1}^n ⟨f; φ_k⟩² / ‖φ_k‖².
Exercise 4.30. Consider the sequence of functions f_n(x) = (1/√n) sin(nx) on 0 ≤ x ≤ π for n = 1; 2; ···.
i. Show that f_n(x) → 0 in the norm compatible with the inner product:
    ⟨f; g⟩ = ∫_0^π f(x) g(x) dx.
ii. If the inner product changes to the following one, show that the sequence f_n(x) diverges:
    ⟨f; g⟩ = ∫_0^π f(x) g(x) dx + ∫_0^π f'(x) g'(x) dx.
Exercise 4.31. Consider the sequence of functions
    f_n(x) = Σ_{k=0}^n (1/k!) x^k,
for n = 1; 2; ··· in the interval [0; 1]. Show that the sequence converges pointwise and in norm to the
function f(x) = e^x.
In this book, we also consider the function spaces of piecewise continuous functions
PC(x_0; x_1) and piecewise continuously differentiable functions PC¹(x_0; x_1).
Definition 4.2. A function f(x), defined on the interval (x_0; x_1), is called a piecewise
continuous function if:
a) f is continuous everywhere in the interval, except possibly at finitely many points.
b) If z is a point of discontinuity of f, then the left and right limits of f at z exist.
c) The right limit of f exists at x_0, and the left limit exists at x_1.
Definition 4.3. A function f(x) belongs to the space PC¹(x_0; x_1) if f'(x) belongs to the
space PC(x_0; x_1), except possibly at finitely many points. In other words, f(x) is piecewise
continuously differentiable, and its derivative f'(x) is a piecewise continuous function.
These function spaces allow us to work with functions that may have discontinuities at
certain points within the interval (x_0; x_1). Piecewise continuous functions are characterized
by their continuity properties and the existence of left and right limits at potential discon-
tinuities. Piecewise continuously differentiable functions, on the other hand, additionally
possess piecewise continuous derivatives, except at a finite number of points.
Example 4.11. Let's consider the function:
    f(x) = 1 for x ∈ Q, and 0 for x ∈ Q^c,
where Q is the set of rational numbers. This function is not piecewise continuous in any
interval (a; b) because it has infinitely many discontinuities within the interval.
It is known as the Dirichlet function and serves as a classic example of a function that is
discontinuous everywhere.
Example 4.12. Now, let's examine the function:
    f(x) = 1/x for x ≠ 0, and 0 for x = 0.
This function is not in the space PC(−1; 1) because it exhibits an infinite jump at x = 0.
Similarly, consider the function:
    f(x) = sin(π/x) for x ≠ 0, and 0 for x = 0.
This function is also not in the space PC(−1; 1) since it oscillates infinitely often near x = 0,
so the one-sided limits at x = 0 do not exist.
Example 4.13. Let's examine the function f(x) = x^{2/3}. This function belongs to the space
PC(−1; 1) since it is continuous in the interval (−1; 1). However, it does not belong to the
space PC¹(−1; 1) because its derivative, f'(x), blows up near x = 0 and is therefore not piecewise
continuous within the interval. Similarly, the function
    f(x) = x² sin(π/x) for x ≠ 0, and 0 for x = 0
is continuous and differentiable in the interval (−1; 1). However, its derivative, f'(x), is not
piecewise continuous at x = 0, as it exhibits infinite oscillation near this point.
Exercise 4.32. Which of the following functions belong to PC¹ or PC?
i. f(x) = x^{2/3}, x ∈ [0; 1]
ii. f(x) = x|x|, x ∈ [−1; 1]
iii. f(x) = √|x|, x ∈ [−1; 1]
iv. f(x) = 1 for 0 < x < 1, and 0 otherwise.
4.4.2 Proof of the orthogonality of eigenfunctions
The complete proof of the Sturm-Liouville theorem requires advanced tools that are beyond
the scope of this book. However, we can establish two key aspects of the theorem through
an elementary approach. The crucial step in achieving the proof is to reframe the problem
in a symmetric form. To accomplish this, we need to introduce the concept of symmetry for
a differential operator.
The notion of inner product leads to another important concept, that of symmetric
operators, which parallels the notion of symmetric matrices in linear algebra.
In linear algebra, a square matrix A = [a_ij] is considered symmetric if a_ij = a_ji. It is known
that symmetric matrices have real eigenvalues and orthogonal eigenvectors.
We can extend this concept to linear differential operators using the inner product.
Definition 4.4. A linear differential operator L is said to be symmetric with respect to the
inner product ⟨·, ·⟩ if the following equality holds for all functions f and g in its domain:
    ⟨L[f]; g⟩ = ⟨f; L[g]⟩.
This definition establishes a symmetry relationship between the operator L and the inner
product, similar to the symmetry observed in symmetric matrices.
Example 4.14. Consider the differential operator L := d²/dx² defined on the set of functions
φ(x) in C²(0; 1) that satisfy the boundary conditions φ(0) = φ(1) = 0. Let f and g be any
two functions in this set. We have
    ⟨L[f]; g⟩ = ∫_0^1 f''(x) g(x) dx,
and through integration by parts:
    ∫_0^1 f''(x) g(x) dx = f'(x) g(x)|_0^1 − ∫_0^1 f'(x) g'(x) dx = −∫_0^1 f'(x) g'(x) dx.
The last equality is obtained by using the boundary conditions for g(x). Another integration
by parts, together with the boundary conditions for f(x), yields
    −∫_0^1 f'(x) g'(x) dx = ∫_0^1 f(x) g''(x) dx = ⟨f; L[g]⟩,
and thus L is symmetric on the defined set of functions.
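The symmetry relation in this example can also be checked numerically; the sketch below uses two arbitrarily chosen functions that vanish at the endpoints (the choice is illustrative only).
% Numerical sanity check of <L[f],g> = <f,L[g]> for L = d^2/dx^2 on (0,1)
% with Dirichlet boundary conditions, using two sample functions.
f   = @(x) sin(pi*x);        fpp = @(x) -pi^2*sin(pi*x);
g   = @(x) x.*(1-x);         gpp = @(x) -2*ones(size(x));
lhs = integral(@(x) fpp(x).*g(x), 0, 1);    % <L[f], g>
rhs = integral(@(x) f(x).*gpp(x), 0, 1);    % <f, L[g]>
fprintf('%.6f  %.6f\n', lhs, rhs)           % the two values agree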
Consider the differential equation:
    a(x) φ'' + b(x) φ' + c(x) φ = −λφ,
where x ∈ (x_0; x_1), a(x) > 0 in [x_0; x_1], and a(x), b(x), and c(x) are continuous functions. By
multiplying the equation by the factor
    σ(x) = (1/a(x)) e^{∫ b(x)/a(x) dx},
the equation can be rewritten as
    d/dx [ e^{∫ b(x)/a(x) dx} φ' ] + (c(x)/a(x)) e^{∫ b(x)/a(x) dx} φ = −λ (1/a(x)) e^{∫ b(x)/a(x) dx} φ.
Taking p(x) = e^{∫ b(x)/a(x) dx} and q(x) = (c(x)/a(x)) e^{∫ b(x)/a(x) dx}, the equation can be expressed as:
    d/dx [p(x) φ'] + q(x) φ = −λ σ(x) φ.
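As a quick worked instance of this transformation (chosen for illustration; the same equation appears in Exercise 4.33), take a(x) = 1, b(x) = 2 and c(x) = 0:
    φ'' + 2φ' = −λφ   ⟹   p(x) = e^{∫ 2 dx} = e^{2x},  q(x) = 0,  σ(x) = e^{2x},
so that the symmetric form reads
    d/dx [e^{2x} φ'] = −λ e^{2x} φ.
Here the weight σ coincides with p because a ≡ 1.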
Theorem 4.2. The operator
    L[φ] := d/dx (p(x) φ') + q(x) φ,
acting on functions in the vector space C²(x_0; x_1) that satisfy the boundary conditions:
    α_1 φ(x_0) + β_1 φ'(x_0) = 0
    α_2 φ(x_1) + β_2 φ'(x_1) = 0,
is symmetric with respect to the inner product ⟨·, ·⟩.
Proof. For any functions φ and ψ in the defined set of functions, we have
    ⟨L[φ]; ψ⟩ = ∫_{x_0}^{x_1} d/dx(p(x) φ') ψ + ∫_{x_0}^{x_1} q(x) φ ψ.
Applying integration by parts (twice) to the first integral yields:
    ∫_{x_0}^{x_1} d/dx(p(x) φ') ψ = p(x) φ'(x) ψ(x)|_{x_0}^{x_1} − p(x) φ(x) ψ'(x)|_{x_0}^{x_1} + ∫_{x_0}^{x_1} d/dx(p(x) ψ') φ.
We need to show that the boundary terms are zero. The boundary term at x = x_1 is:
    p(x_1)[φ'(x_1) ψ(x_1) − φ(x_1) ψ'(x_1)] = p(x_1) W(φ; ψ)(x_1),
where W(φ; ψ) denotes the Wronskian of the functions φ and ψ. Due to the boundary
conditions:
    α_2 φ(x_1) + β_2 φ'(x_1) = 0
    α_2 ψ(x_1) + β_2 ψ'(x_1) = 0,
which form a linear system admitting the nontrivial solution (α_2; β_2), we must have
W(φ; ψ)(x_1) = 0; otherwise the only solution would be α_2 = β_2 = 0, which is impossible. The same argument
applies to the boundary term at x = x_0. Therefore, we can conclude that:
    ∫_{x_0}^{x_1} d/dx(p(x) φ') ψ = ∫_{x_0}^{x_1} d/dx(p(x) ψ') φ,
which leads to:
    ⟨L[φ]; ψ⟩ = ⟨φ; L[ψ]⟩,
completing the proof.
Theorem 4.3. The eigenvalues of L in the defined vector space of functions are real.
Proof. Assume L[φ] = −λφ. If λ is a complex eigenvalue, then by taking the complex conjugate
of the equation we obtain L[φ̄] = −λ̄ φ̄, where λ̄ and φ̄ represent the complex conjugates of λ and φ,
respectively. Now, by the symmetry of L, we have
    −λ ∫_{x_0}^{x_1} |φ(x)|² dx = ⟨L[φ]; φ̄⟩ = ⟨φ; L[φ̄]⟩ = −λ̄ ∫_{x_0}^{x_1} |φ(x)|² dx,
and thus λ = λ̄, which implies that λ is real.
Theorem 4.4. Suppose the eigenfunctions φ_n and φ_m are associated with distinct eigenvalues
λ_n and λ_m, respectively, in the eigenvalue problem:
    L[φ] = −λσφ.
Then φ_n and φ_m are orthogonal with respect to the weight function σ.
Proof. By assumption, we have:
    L[φ_n] = −λ_n σ(x) φ_n,   L[φ_m] = −λ_m σ(x) φ_m,
where λ_n ≠ λ_m. Using the symmetry of L, we can write:
    −λ_n ⟨σφ_n; φ_m⟩ = ⟨L[φ_n]; φ_m⟩ = ⟨φ_n; L[φ_m]⟩ = −λ_m ⟨φ_n; σφ_m⟩.
From this, we have (λ_n − λ_m) ⟨φ_n; φ_m⟩_σ = 0, which implies that ⟨φ_n; φ_m⟩_σ = 0, the
orthogonality condition.
Exercise 4.33. Write down the following eigenvalue problems in the symmetric Sturm-Liouville
form. Use the energy method in each case to show that their eigenvalues are strictly positive.
a)  φ'' + 2φ' = −λφ,   φ(0) = 0, φ'(1) = 0
b)  (1 + x)φ'' + φ' = −λφ,   φ(0) = φ'(0), φ'(0) = φ'(1)
c)  φ'' + 2φ' = −λ e^{−2x} φ,   φ(0) = 0, φ'(1) = 0
Exercise 4.34. Consider the operator
    L[φ] = d/dx (p(x) φ') + q(x) φ,
with the domain of functions φ in the vector space
    U = {φ ∈ C²(x_0; x_1): φ(x_0) = φ(x_1), φ'(x_0) = φ'(x_1)}.
Prove that if p(x_0) = p(x_1), then L is symmetric with respect to the inner product ⟨·, ·⟩, and conclude
that the eigenvalues of the following problem are real and its eigenfunctions are orthogonal with respect
to the weight function σ > 0:
    d/dx (p(x) φ') + q(x) φ = −λσφ
    φ(x_0) = φ(x_1)
    φ'(x_0) = φ'(x_1)
Exercise 4.35. It can be shown that there exists only one eigenfunction (up to multiplication by a
constant) associated with each eigenvalue of the Sturm-Liouville problem. The scheme of the proof is as
follows: let φ_1 and φ_2 be eigenfunctions of the Sturm-Liouville eigenvalue problem associated with the
same eigenvalue λ. Prove that the Wronskian W(φ_1; φ_2) = 0 and conclude that φ_1 and φ_2 are linearly
dependent.
Exercise 4.36. The Sturm-Liouville theorem states that the eigenvalues of the problem form an
increasing sequence λ_n → ∞. This exercise sets a lower bound for the sequence. Show that λ satisfies
the inequality
    λ ≥ − ∫_{x_0}^{x_1} q(x)|φ|² dx / ∫_{x_0}^{x_1} σ |φ|² dx,
and conclude that if q < 0 then λ > 0.
Exercise 4.37. Show that the linear operator L[φ] = φ'' + φ' is not symmetric on the following vector space
    U = {φ ∈ C²(0; 1): φ(0) = φ(1) = 0}.
Exercise 4.38. Show that the linear operator L[φ] = e^x φ'' + e^x φ' is symmetric on the following space
    U = {φ ∈ C²(0; 1): φ(0) = φ(1) = 0}.
Exercise 4.39. Show that the linear operator L[φ] = φ'' + e^x φ is symmetric on the following space
    U = {φ ∈ C²(0; 1): φ(0) = φ(1), φ'(0) = φ'(1)}.
Exercise 4.40. Show that the linear operator L[φ] = φ' is anti-symmetric on the following space
    U = {φ ∈ C¹(0; 1): φ(0) = φ'(0), φ(1) = φ'(1)}.
An operator L is anti-symmetric if ⟨L[φ]; ψ⟩ = −⟨φ; L[ψ]⟩.
Exercise 4.41. Suppose G(x) is a continuous function on [0; 1] and G(−x) = G(x). Show that the
operator
    L[φ](x) = ∫_0^1 G(x − t) φ(t) dt
is symmetric.
Exercise 4.42. Consider the following eigenvalue problem:
    x² φ'' + x φ' + 2φ = −λφ
    φ'(1) = φ'(e) = 0
a) Find the eigenvalues and eigenfunctions of the problem.
b) Find an expansion of the function f(x) = 1, x ∈ [1; e], in terms of the first four eigenfunctions of
the problem.
Exercise 4.43. Consider the linear operator L[φ] = d/dx (e^{−x} φ') with the domain of functions φ in the
vector space
    U = {φ ∈ C²(0; ln 2): φ(0) = φ(ln 2) = 0}.
a) Show that L is symmetric.
b) Find λ_n and σ(x) such that the eigenfunctions of the eigenvalue problem
    L[φ_n] = −λ_n σ(x) φ_n
are φ_n(x) = sin(nπ e^x).
c) The set {φ_n}_{n=1}^∞ is a basis for PC¹(0; ln 2). Find the best approximation of the function f(x) = 1
in the space span{φ_n}_{n=1}^5. That is, find f_1, ..., f_5 such that
    ‖1 − f_1 φ_1 − ··· − f_5 φ_5‖ ≤ ‖f − g‖,
for any g ∈ span{φ_n}_{n=1}^5. Draw this approximation and compare it with the function f. A numerical
sketch is given after this exercise.
d) Consider the following heat problem
    u_t = L[u]
    u(0; t) = u(ln 2; t) = 0
    u(x; 0) = 1
Since {φ_n}_{n=1}^∞ is a basis, we can approximate the solution u as
    u(x; t) ≈ Σ_{n=1}^5 U_n(t) φ_n(x).
Find the ordinary differential equation for U_n(t) along φ_n(x). This differential equation describes
the change of the temperature along each φ_n(x) for n = 1; ···, 5.
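Below is a minimal numerical sketch for part c). It assumes the weight σ(x) = e^x (the function that part b) leads to; verify it yourself before relying on it) and computes the best-approximation coefficients as weighted projections.
% Sketch for Exercise 4.43(c): best approximation of f(x) = 1 in
% span{phi_1,...,phi_5}, phi_n(x) = sin(n*pi*exp(x)) on (0, ln 2),
% assuming the weight sigma(x) = exp(x) from part (b).
sigma = @(x) exp(x);
phi   = @(n,x) sin(n*pi*exp(x));
L2 = log(2);  N = 5;  fcoef = zeros(N,1);
for n = 1:N
    num      = integral(@(x) phi(n,x).*sigma(x), 0, L2);      % <1, phi_n>_sigma
    den      = integral(@(x) phi(n,x).^2.*sigma(x), 0, L2);   % ||phi_n||^2_sigma
    fcoef(n) = num/den;
end
x = linspace(0, L2, 200);
approx = zeros(size(x));
for n = 1:N, approx = approx + fcoef(n)*phi(n,x); end
plot(x, approx, x, ones(size(x)), '--')   % compare with f(x) = 1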
4.4.3 Operations on infinite series
The type of convergence of infinite series imposes certain limitations when working with
them. For instance, consider the function f(x) = 1, x ∈ (0; 1), which belongs to the set
PC¹(0; 1) and can be represented in terms of the basis {sin(nπx)} for n = 1; 2; ··· as
    1 = Σ_{n=1}^∞ [(2 − 2cos(nπ))/(nπ)] sin(nπx).
Taking the derivative of this equality term by term yields:
    0 = Σ_{n=1}^∞ [2 − 2cos(nπ)] cos(nπx).
However, this result is clearly incorrect, since it would represent the function g(x) = 0 by a non-
trivial series in terms of the orthogonal set {cos(nπx)}. The figure below shows the function
f(x) in x ∈ (0; 1) and its series representation on a wider domain.
[Figure: the truncated sine series of f(x) = 1 on (0; 1) and on the wider interval (−1; 2) (left), and its term-by-term derivative series, which oscillates with large amplitude (right).]
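The comparison in the figure can be reproduced with the short Matlab sketch below (the truncation N = 50 is an arbitrary illustrative choice).
% Sine series of f(x) = 1 on (0,1) and its term-by-term derivative,
% truncated at N terms and plotted on the wider interval (-1,2).
N  = 50;  n = (1:N)';
bn = (2 - 2*cos(n*pi))./(n*pi);                  % sine coefficients of f = 1
x  = -1:0.005:2;
S  = sum(bn .* sin(n*pi*x), 1);                  % truncated series of f
dS = sum((2 - 2*cos(n*pi)) .* cos(n*pi*x), 1);   % term-by-term derivative
subplot(1,2,1), plot(x, S)
subplot(1,2,2), plot(x, dS)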
Where does this error come from? One possible explanation for this discrepancy lies in
the behavior of the series representation at the endpoints of the interval (0; 1). Although
the function f (x) is continuous on (0; 1), its derivative is not continuous at x = 0 and x = 1.
This leads to a discontinuity in the derivative series representation, which can result in
inconsistencies in the equality above.
Furthermore, when extending the series representation of the function f (x) to a wider
interval beyond (0; 1), the series converges to the odd periodic extension of f(x). This means that jumps are
formed at the endpoints x = 0 and x = 1, due to the discontinuity of this extension.
Consequently, differentiating across these jumps generates a Dirac delta function at these points.
It is important to note that the Dirac delta function is a distribution that is concentrated
at a specific point and has the property of being zero everywhere except at that point,
where it behaves like an impulse. The presence of the Dirac delta function-like behavior in
the derivative seri es further contributes to the discrepancies observed in the equality, as it
introduces additional irregularities near the endpoints.
Now, let's consider the function f(x) = x(1 − x). The series representation of f(x) in
terms of the basis {sin(nπx)} is given by:
    x(1 − x) = Σ_{n=1}^∞ [(4 − 4cos(nπ))/(n³π³)] sin(nπx).
Taking the derivative of this equation term by term yields the equality:
    1 − 2x = Σ_{n=1}^∞ [(4 − 4cos(nπ))/(n²π²)] cos(nπx).
It can be verified that the latter series is a true series representation of the function f'(x) = 1 −
2x in terms of the orthogonal basis {cos(nπx)}. The figure below shows the original series
of the function f(x) and its derivative, both truncated to N = 50 terms.
It is important to note that since the odd extension of the function f(x) is continuous
at the endpoints, the derivative of the series of f(x) converges pointwise to the derivative
f'(x) in the interval (0; 1). This behavior ensures that the series accurately represents the
derivative of the function within the given interval.
[Figure: the truncated sine series of f(x) = x(1 − x) (left) and its term-by-term derivative, which matches f'(x) = 1 − 2x on (0; 1) (right).]
Theorem 4.5. Let f be a function belonging to C²(x_0; x_1), and let {φ_n(x)} for n = 1; 2; ··· be
an orthogonal basis generated by a Sturm-Liouville eigenvalue problem. Suppose the series
representation of f(x) in the interval (x_0; x_1) is given by:
    f(x) = Σ_{n=1}^∞ c_n φ_n(x).
If f(x_0) = f(x_1), then the derivative of f(x) can be represented as:
    f'(x) = Σ_{n=1}^∞ c_n φ_n'(x).
There are no restrictions on other operations such as integration, addition, subtraction,
or multiplication when working with series representations. These operations can be applied
to series in a straightforward manner.
Exercise 4.44. Let f(x) and g(x) be functions with series representations as follows:
    f(x) = Σ_{n=1}^∞ a_n φ_n(x),   g(x) = Σ_{n=1}^∞ b_n φ_n(x),
where {φ_n(x)} is an orthogonal basis generated by a Sturm-Liouville eigenvalue problem.
a) Integration: the integral of the sum of two series can be obtained by integrating each term of the
series individually:
    ∫_{x_0}^{x} [f(t) + g(t)] dt = Σ_{n=1}^∞ [a_n + b_n] ∫_{x_0}^{x} φ_n(t) dt,
for any x ∈ (x_0; x_1).
b) Addition and subtraction: the sum or difference of two series can be obtained by adding or
subtracting the corresponding terms of the series:
    f(x) ± g(x) = Σ_{n=1}^∞ [a_n ± b_n] φ_n(x).
c) Multiplication: the product of two series can be obtained by multiplying each term of one series
with each term of the other series:
    f(x) g(x) = Σ_{n=1}^∞ Σ_{m=1}^∞ a_n b_m φ_n(x) φ_m(x).
In each case, the resulting series is still a representation of the corresponding operation applied to the
functions f(x) and g(x).
Appendix A
Fourier series
A.1 Trigonometric and complex Fourier series
Consider the eigenvalue problem defined on the unit circle −π ≤ θ ≤ π:
    φ''(θ) = −λφ(θ)
    φ(−π) = φ(π)
    φ'(−π) = φ'(π)     (A.1)
The periodic boundary conditions given in the problem are natural, as φ(θ) should be periodic
with period 2π. Notice that φ(θ), the eigenfunction of this problem, is determined only up
to multiplication by a constant; that is, if φ(θ) solves the equation, so does cφ(θ) for any
c ∈ R. Although this problem is not in the standard form of the Sturm-Liouville problem,
its eigenfunctions and eigenvalues share resemblances with the properties of the standard
Sturm-Liouville problem.
Utilizing the energy method, one can show that the eigenvalues are non-negative:
    ∫_{−π}^{π} φ''(θ) φ(θ) dθ = −λ ∫_{−π}^{π} |φ(θ)|² dθ.
Integration by parts yields:
    ∫_{−π}^{π} |φ'(θ)|² dθ = λ ∫_{−π}^{π} |φ(θ)|² dθ.
Thus, we have λ ≥ 0. Furthermore, for λ = 0 the eigenfunction is a constant function, and
we can assume, without loss of generality, that φ_0(θ) = 1. For λ > 0, the eigenfunctions are
given by:
    φ_n(θ) ∈ {cos(nθ); sin(nθ)},   n ∈ N.
Furthermore, the set of eigenfunctions {1; cos(nθ); sin(nθ)}, n ∈ N, forms a basis for piecewise
continuously differentiable functions defined on the unit circle. That is, for any f(θ), −π ≤
θ ≤ π, there are constants a_n and b_n such that:
    f(θ) = Σ_{n=0}^∞ a_n cos(nθ) + Σ_{n=1}^∞ b_n sin(nθ).
Since the set {1; cos(nθ); sin(nθ)} is an orthogonal basis with respect to the inner product
⟨·, ·⟩, that is,
    ∫_{−π}^{π} φ_n(θ) φ_m(θ) dθ = 0 for distinct basis functions φ_n and φ_m,
the parameters a_n and b_n are determined by the inner products of f with cos(nθ) and sin(nθ),
respectively.
The eigenvalue problem (A.1) can be modified to represent functions on any symmetric
domain (−L; L) as follows:
    φ'' = −λφ
    φ(−L) = φ(L)
    φ'(−L) = φ'(L)
The set of solutions to this modified eigenvalue problem, with the normalizing factor c = 1,
is given by:
    φ_n(x) ∈ {1; cos(nπx/L); sin(nπx/L)},   n ∈ N.
The associated eigenvalues for this problem are λ_n = n²π²/L².
Remark A.1. The set {1; cos(nπx/L); sin(nπx/L)} is commonly known as the Fourier trigono-
metric basis. By using the Euler formula e^{iθ} = cos(θ) + i sin(θ), we can express this basis in the
complex form {e^{inπx/L}}, n ∈ Z. Therefore, a piecewise continuously differentiable function
defined on [−L; L] can be represented as:
    f(x) = Σ_{n=−∞}^∞ F_n e^{inωx},
where ω = π/L, and F_n are constants that can be determined using the complex form of the
inner product ⟨⟨·, ·⟩⟩ defined as:
    ⟨⟨f; g⟩⟩ := ∫_{−L}^{L} f(x) ḡ(x) dx,
where ḡ represents the complex conjugate of g. Utilizing this definition for the orthogonal
eigenfunctions φ_n(x) = e^{inωx}, the parameters F_n are determined as:
    F_n = (1/2L) ∫_{−L}^{L} f(x) e^{−inωx} dx.
While the trigonometric basis {1; cos(nπx/L); sin(nπx/L)} and the complex basis {e^{inπx/L}} both
represent the same set of functions and are mathematically equivalent, the complex repre-
sentation is often preferred in theoretical work due to its simplicity and elegance. Thus, for
theoretical analysis and calculations, the complex basis offers significant advantages over the
trigonometric basis.
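The coefficients F_n are easy to compute numerically; the sketch below does so for the illustrative choice f(x) = x on (−1; 1) and plots the resulting partial sum (it is only a numerical companion to Example A.1 below).
% Numerical complex Fourier coefficients F_n on (-L,L) and the partial sum,
% for the illustrative choice f(x) = x.
L = 1;  w = pi/L;  N = 10;  n = (-N:N)';
f  = @(x) x;
Fn = integral(@(x) f(x).*exp(-1i*n*w*x), -L, L, 'ArrayValued', true)/(2*L);
x  = linspace(-L, L, 400);
SN = real(sum(Fn .* exp(1i*n*w*x), 1));   % partial sum S_N(x); imaginary part ~ 0
plot(x, SN, x, f(x), '--')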
Example A.1. Let's find the complex Fourier series of the function f(x) = x in the interval
(−1; 1). Using the complex Fourier series representation, we have:
    x = Σ_{n=−∞, n≠0}^∞ [i cos(nπ)/(nπ)] e^{inπx}.     (A.2)
Now, let's determine the complex Fourier series of the function f(x) = x² in the interval
(−1; 1). Using the complex Fourier series representation, we have:
    x² = 1/3 + Σ_{n=−∞, n≠0}^∞ [2 cos(nπ)/(n²π²)] e^{inπx}.     (A.3)
Problem A.1. If f(x) is an odd function, show that the F_n are purely imaginary and F_{−n} = −F_n (and thus
F_0 = 0). If f(x) is an even function, show that the F_n are real and F_{−n} = F_n.
Exercise A.1. Show that the set of functions {sin(nπx/L); cos((2n − 1)πx/(2L))} is a basis for the piecewise
continuously differentiable functions defined on (−L; L). Hint: find a Sturm-Liouville problem that
generates the set.
Exercise A.2. Use either the Fourier sine or cosine series of the function f(x) = x on (0; 1) to prove
the following identity:
    π²/8 = 1/1² + 1/3² + 1/5² + 1/7² + ···
Exercise A.3. Use either the Fourier sine or cosine series of the function f(x) = x² on x ∈ (0; 1) to prove
the following identity:
    π²/12 = 1 − 1/4 + 1/9 − 1/16 + 1/25 − 1/36 + ···
Exercise A.4. Use either the Fourier sine or cosine series of the function f(x) = x on x ∈ (0; 2) to prove
the following identity:
    π/4 = 1 − 1/3 + 1/5 − 1/7 + ···
Exercise A.5. Use either the Fourier sine or cosine series of the function f(x) = x³ on x ∈ (0; 2) to prove
the following identity:
    π³/32 = 1 − 1/3³ + 1/5³ − 1/7³ + ···
Exercise A.6. Let f(x) = e^{−x} on (−1; 1).
a) Write down the Fourier series of f and draw S_5(x) and S_10(x).
b) Write down the Fourier sine and cosine series of the function f(x) = e^{−x} defined on x ∈ (0; 1).
Draw the partial sums S_5 and S_10 for each series.
You can use the following code in Matlab to generate the plot of the partial sum S_n of a function f(x)
defined on x ∈ (−L; L):
%The length L
L=1;
%Determine the function
f=@(x) exp(-x);
% the number of terms
n=10;
%the coefficients of the Fourier series
a0=integral(@(x) f(x),-L,L)/(2*L);
a=integral(@(x) f(x).*cos((1:n)'*x*pi/L),-L,L,'arrayvalued',true)/L;
b=integral(@(x) f(x).*sin((1:n)'*x*pi/L),-L,L,'arrayvalued',true)/L;
x=-L:0.01:L;
fhat=a0+a'*cos((1:n)'*x*pi/L)+b'*sin((1:n)'*x*pi/L);
plot(x,fhat,x,f(x));
For the Fourier sine series, you can use the following code:
%The length L
L=1;
%Determine the function
f=@(x) exp(-x);
% the number of terms
n=10;
%the coefficients of the Fourier sine series
b=2*integral(@(x) f(x).*sin((1:n)'*x*pi/L),0,L,'arrayvalued',true)/L;
x=0:0.01:L;
fhat=b'*sin((1:n)'*x*pi/L);
plot(x,fhat,x,f(x));
For the Fourier cosine series, you can use the following code:
%The length L
L=1;
%Determine the function
f=@(x) exp(-x);
% the number of terms
n=10;
%the coefficients of the Fourier cosine series
a0=integral(@(x) f(x),0,L)/L;
a=2*integral(@(x) f(x).*cos((1:n)'*x*pi/L),0,L,'arrayvalued',true)/L;
x=0:0.01:L;
fhat=a0+a'*cos((1:n)'*x*pi/L);
plot(x,fhat,x,f(x));
Exercise A.7. Consider the function f(x) = cos(x).
a) Write down the Fourier series of f(x) on x ∈ (−π/2; π/2).
b) Write down the Fourier sine and cosine series on x ∈ (0; π/2).
Exercise A.8. In this exercise we use the Fourier series to find the series solution of a boundary
value ODE. Consider the following problem:
    y'' + y = 1
    y(0) = y(1) = 0
a) Assume that the solution of the equation is written as
    y(x) = Σ_{n=1}^∞ Y_n sin(nπx).
Substitute y(x) into the equation and find the coefficients Y_n.
b) Show that the obtained series is absolutely convergent.
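A minimal numerical companion to part a) is sketched below. Substituting the sine series into the equation and matching coefficients against the sine series of the constant 1 gives the relation used in the code, Y_n(1 − n²π²) = b_n, where b_n are the sine coefficients of 1; the closed-form solution used for comparison is an independent check, not part of the exercise.
% Sketch for Exercise A.8(a): sine-series solution of y'' + y = 1, y(0)=y(1)=0.
N  = 50;  n = (1:N)';
bn = (2 - 2*cos(n*pi))./(n*pi);          % sine coefficients of the constant 1
Yn = bn ./ (1 - (n*pi).^2);              % from Y_n*(1 - n^2*pi^2) = b_n
x  = linspace(0, 1, 200);
y  = sum(Yn .* sin(n*pi*x), 1);
yex = 1 - cos(x) - (1 - cos(1))/sin(1)*sin(x);   % exact solution, for comparison
plot(x, y, x, yex, '--')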
Theorem A.1. (Parseval's Theorem) Let f(x) be a piecewise continuously differentiable
function on (−L; L). The Fourier series of f(x) converges to f(x) in the norm sense, which
can be expressed as:
    lim_{N→∞} ‖f − S_N‖² = lim_{N→∞} ∫_{−L}^{L} |f(x) − S_N(x)|² dx = 0,
where S_N(x) is the partial sum of its Fourier series, given by:
    S_N(x) = Σ_{n=−N}^{N} F_n e^{inωx},
for ω = π/L.
Problem A.2. Use the above theorem and prove the following equality
    Σ_{n=−∞}^∞ |F_n|² = (1/2L) ∫_{−L}^{L} |f(x)|² dx,
for piecewise continuously differentiable functions defined on (−L; L).
Hint: Expand the square norm
    ‖ f − Σ_{n=−N}^{N} F_n e^{inωx} ‖² = ⟨⟨ f − Σ_{n=−N}^{N} F_n e^{inωx} ; f − Σ_{n=−N}^{N} F_n e^{inωx} ⟩⟩,
and conclude the following inequality, known as the Bessel inequality:
    Σ_{n=−N}^{N} |F_n|² ≤ (1/2L) ∫_{−L}^{L} |f(x)|² dx.
Then use the Parseval theorem to conclude the equality.
A.2 Operations on Fourier series
Proposition A.1. If f and g are piecewise continuously differentiable functions, then
    f(x) + g(x) = Σ_{n=−∞}^∞ (F_n + G_n) e^{inωx}.     (A.4)
Proposition A.2. If f and g are piecewise continuously differentiable functions, then
    f(x) g(x) = Σ_{n=−∞}^∞ ( Σ_{k=−∞}^∞ F_k G_{n−k} ) e^{inωx}.     (A.5)
Proof. Multiply f(x) by g(x) and write:
    f(x) g(x) = Σ_{k=−∞}^∞ F_k g(x) e^{ikωx}.     (A.6)
Insert the series of g(x) into the above series and obtain
    Σ_{k=−∞}^∞ F_k ( Σ_{l=−∞}^∞ G_l e^{ilωx} ) e^{ikωx} = Σ_{k=−∞}^∞ Σ_{l=−∞}^∞ F_k G_l e^{i(k+l)ωx}.     (A.7)
If we take k + l = n, then
    f(x) g(x) = Σ_{n=−∞}^∞ ( Σ_{k=−∞}^∞ F_k G_{n−k} ) e^{inωx},     (A.8)
and this completes the proof.
Proposition A.3. Let f be a twice continuously differentiable function on (−L; L) with
f(−L) = f(L). Then
    f'(x) = Σ_{n=−∞}^∞ (inω) F_n e^{inωx}.     (A.9)
Proof. We can write
    f'(x) = Σ_{n=−∞}^∞ G_n e^{inωx},     (A.10)
where, by the continuity of f, we can write
    G_n = (1/2L) ∫_{−L}^{L} f'(x) e^{−inωx} dx = (1/2L) f(x) e^{−inωx}|_{−L}^{L} + (inω/2L) ⟨⟨f; e^{inωx}⟩⟩.     (A.11)
The condition f(−L) = f(L) implies f(x) e^{−inωx}|_{−L}^{L} = 0, and then
    G_n = (inω/2L) ⟨⟨f; e^{inωx}⟩⟩ = inω F_n,
and this completes the proof.
The continuity of f and the condition f(−L) = f(L) are crucial for the proof and cannot
be relaxed. For example, the function f(x) = x defined on [−1; 1] has the series in the basis
{1; cos(nπx); sin(nπx)}
    x = Σ_{n=1}^∞ [−2(−1)^n/(nπ)] sin(nπx),
while f'(x) = 1, whose Fourier series is simply the constant term 1; this does not agree with the
term-by-term derivative of the series above.
Proposition A.4. Let f be a piecewise continuously differentiable function on (−L; L). Then
    ∫_{−L}^{x} f(t) dt = Σ_{n=−∞}^∞ F_n ∫_{−L}^{x} e^{inωt} dt.     (A.12)
Proof. Let F(x) be the anti-derivative of f given by
    F(x) = ∫_{−L}^{x} f(t) dt.
First we prove the proposition when F(L) = 0, that is, when
    F(L) = ∫_{−L}^{L} f(t) dt = 0.
Obviously F is a continuously differentiable function, and thus
    F(x) = Σ_{n=−∞}^∞ F̂_n e^{inωx},     (A.13)
where F̂_n = (1/2L) ⟨⟨F; e^{inωx}⟩⟩. On the other hand, the condition F(L) = 0 implies F(−L) = F(L),
and thus,
    f(x) = F'(x) = Σ_{n=−∞}^∞ (inω) F̂_n e^{inωx}.     (A.14)
This implies F_n = (inω) F̂_n, and for n ≠ 0 we can write F̂_n = F_n/(inω). Therefore,
    F(x) = F̂_0 + Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] e^{inωx}.     (A.15)
Since F is continuous and F(−L) = 0, then
    F̂_0 = − Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] e^{−inπ}.     (A.16)
On the other hand, we have (the n = 0 term vanishes because F_0 = F(L)/(2L) = 0)
    Σ_{n=−∞}^∞ F_n ∫_{−L}^{x} e^{inωt} dt = Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] (e^{inωx} − e^{−inπ})
        = − Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] e^{−inπ} + Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] e^{inωx}
        = F̂_0 + Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] e^{inωx} = F(x).
Hence, we have
    F(x) = Σ_{n=−∞}^∞ F_n ∫_{−L}^{x} e^{inωt} dt.
Now we relax the condition F(L) = 0. Define the function g as g(x) = f(x) − F_0. Since
    G_0 = (1/2L) ∫_{−L}^{L} g(x) dx = (1/2L) ∫_{−L}^{L} f(x) dx − F_0 = 0,     (A.17)
we can write G(x) := ∫_{−L}^{x} g(t) dt, by the first part of the proof, as the series
    G(x) = Ĝ_0 + Σ_{n=−∞, n≠0}^∞ [G_n/(inω)] e^{inωx},
where the G_n are:
    G_n = (1/2L) ⟨⟨g; e^{inωx}⟩⟩ = (1/2L) ⟨⟨f; e^{inωx}⟩⟩ − (1/2L) ⟨⟨F_0; e^{inωx}⟩⟩ = F_n − (1/2L) ⟨⟨F_0; e^{inωx}⟩⟩.
For n ≠ 0, we have ⟨⟨F_0; e^{inωx}⟩⟩ = 0, and hence G_n = F_n for n ≠ 0. On the other hand, we have:
    Ĝ_0 = − Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] e^{−inπ} = F̂_0,
and thus
    G(x) = F̂_0 + Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] e^{inωx}.
Note that
    G(x) = ∫_{−L}^{x} g(t) dt = ∫_{−L}^{x} f(t) dt − F_0 (x + L),
which implies
    ∫_{−L}^{x} f(t) dt = F_0 (x + L) + F̂_0 + Σ_{n=−∞, n≠0}^∞ [F_n/(inω)] e^{inωx} = Σ_{n=−∞}^∞ F_n ∫_{−L}^{x} e^{inωt} dt,
and this completes the proof.
A.3 A proof of the Fourier and Parseval theorem
We present a proof of the Fourier theorem for continuously differentiable functions defined
on the domain (−π; π).
Theorem A.2. (Fourier) Assume that f ∈ C¹(−π; π) and f is continuous on [−π; π]. Then
the partial sums
    S_n(x) = Σ_{k=−n}^{n} F_k e^{ikx}
converge pointwise to f(x), where F_k = (1/2π) ⟨⟨f; e^{ikx}⟩⟩.
Substituting F_k into the partial sum gives
    S_n(x) = (1/2π) Σ_{k=−n}^{n} ∫_{−π}^{π} f(ξ) e^{−ikξ} e^{ikx} dξ = (1/2π) ∫_{−π}^{π} f(ξ) Σ_{k=−n}^{n} e^{ik(x−ξ)} dξ.     (A.18)
Exercise A.9. Show the following identity
    Σ_{k=−n}^{n} e^{ikx} = sin((n + 1/2)x) / sin(x/2),     (A.19)
and conclude
    ∫_{−π}^{π} [sin((n + 1/2)x) / sin(x/2)] dx = 2π.     (A.20)
The 2π-periodic function
    D_n(x) = sin((n + 1/2)x) / sin(x/2)     (A.21)
is called the Dirichlet kernel. The figure below depicts this kernel for the values n = 1; 3; 9.
[Figure: the Dirichlet kernel D_n(x) on (−π; π) for n = 1, 3, 9.]
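The kernels in the figure can be reproduced with the sketch below; it uses the equivalent sum form of D_n from (A.19), which avoids the removable singularity of the closed form at x = 0.
% Plot of the Dirichlet kernel D_n(x) = 1 + 2*sum_{k=1}^n cos(k*x) for n = 1, 3, 9.
x = linspace(-pi, pi, 2000);
hold on
for n = [1 3 9]
    plot(x, 1 + 2*sum(cos((1:n)'.*x), 1));
end
hold off
legend('n = 1', 'n = 3', 'n = 9')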
Therefore, we can rewrite S_n(x) as the following integral:
    S_n(x) = (1/2π) ∫_{−π}^{π} f(ξ) D_n(x − ξ) dξ.     (A.22)
The kernel D_n(x) is a Dirac delta sequence as n → ∞.
Proposition A.5. Let f ∈ C¹(−π; π). Then for all x ∈ (−π; π), we have:
    lim_{n→∞} (1/2π) ∫_{−π}^{π} f(ξ) D_n(x − ξ) dξ = f(x).     (A.23)
In order to prove the proposition, we need the following lemma.
Lemma A.1. Assume that f(x) is a continuous function on [−π; π]. Then
    lim_{n→∞} ∫_{−π}^{π} f(x) sin(nx) dx = 0.     (A.24)
Now, using the above lemma, we have
    S_n(x) = f(x) + (1/2π) ∫_{−π}^{π} (f(ξ) − f(x)) D_n(x − ξ) dξ
           = f(x) + (1/2π) ∫_{−π}^{π} [(f(ξ) − f(x))/(ξ − x)] · [(ξ − x)/sin((x − ξ)/2)] sin((n + 1/2)(x − ξ)) dξ.
Since f is a C¹ function, the function (f(ξ) − f(x))/(ξ − x) is continuous. Similarly, the function
(ξ − x)/sin((x − ξ)/2) is continuous as well, and thus
    lim_{n→∞} ∫_{−π}^{π} [(f(ξ) − f(x))/(ξ − x)] · [(ξ − x)/sin((x − ξ)/2)] sin((n + 1/2)(x − ξ)) dξ = 0.
Finally, we obtain
    lim_{n→∞} S_n(x) = f(x).
Theorem A.3. (Parseval) If f ∈ PC¹(−π; π), then the partial sums S_n(x) converge to f(x) in
norm, that is,
    lim_{n→∞} ‖f − S_n‖ = 0.
The proof is based on a fact from analysis: if f ∈ PC¹(T), then for every ε > 0 there
is a function g ∈ C¹(T) such that
    ‖f − g‖ < ε.
By this fact, we can write:
    ‖f − S_n‖ ≤ ‖f − g‖ + ‖g − S_n‖ < ε + ‖g − S_n‖.
On the other hand, we have:
    ‖g − S_n‖² = 2π Σ_{k=−n}^{n} F_k (F_k − G_k) ≤ 2π ( Σ_{k=−n}^{n} |F_k|² )^{1/2} ( Σ_{k=−n}^{n} |F_k − G_k|² )^{1/2}.
The last inequality is the Cauchy-Schwarz inequality. On the other hand, using the Bessel
inequality, we have
    2π Σ_{k=−n}^{n} |F_k − G_k|² ≤ ∫_{−π}^{π} |f(x) − g(x)|² dx < ε².
Similarly, we have
    Σ_{k=−n}^{n} |F_k|² ≤ (1/2π) ∫_{−π}^{π} |f(x)|² dx ≤ M²,
for some M > 0. This implies that ‖g − S_n‖² is bounded by a constant multiple of εM, and finally
    ‖f − S_n‖ < ε + C √(εM)
for some constant C. Since ε is arbitrary, we let ε → 0 and thus
    lim_{n→∞} ‖f − S_n‖ = 0.
A.3.1 Gibbs phenomenon
In this chapter, we have observed the overshoot of the partial sums near discontinuity
points, which is commonly known as the Gibbs phenomenon, named after the American
physicist J. W. Gibbs. Now, we aim to find an estimate for the maximum value of this jump.
For simplicity, we assume that the function f is defined on the interval [−π; π] and has a
discontinuity at x = 0. From the formula of the Dirichlet kernel D_n(x), we know that D_n(x)
first vanishes at z = ±2π/(2n + 1). The figure below illustrates D_4(x) and its shift D_4(x − 2π/9):
[Figure: the Dirichlet kernel D_4(x) and its shifted copy D_4(x − 2π/9).]
We observe that the maximum of D_4(x − 2π/9) occurs at the first zero of D_4(x), which
is z = 2π/9. We therefore expect the overshoot to occur at the zeros of D_n. For this, we need to
calculate S_n(2π/(2n + 1)):
    S_n(2π/(2n+1)) = (1/2π) ∫_{−π}^{π} f(ξ) D_n(2π/(2n+1) − ξ) dξ
                   = (1/2π) ∫_{−π}^{0} f(ξ) D_n(2π/(2n+1) − ξ) dξ + (1/2π) ∫_{0}^{π} f(ξ) D_n(2π/(2n+1) − ξ) dξ.
According to the shape of the Dirichlet kernel, for sufficiently large n we have:
    ∫_{−π}^{0} f(ξ) D_n(2π/(2n+1) − ξ) dξ ≈ f(0⁻) ∫_{−π}^{0} D_n(2π/(2n+1) − ξ) dξ.     (A.25)
Similarly, we have
    ∫_{0}^{π} f(ξ) D_n(2π/(2n+1) − ξ) dξ ≈ f(0⁺) ∫_{0}^{π} D_n(2π/(2n+1) − ξ) dξ.     (A.26)
Direct calculation yields the following values for large n:
    (1/2π) ∫_{−π}^{0} D_n(2π/(2n+1) − ξ) dξ ≈ −0.0897,     (A.27)
and
    (1/2π) ∫_{0}^{π} D_n(2π/(2n+1) − ξ) dξ ≈ 1.0897.     (A.28)
Consequently, we obtain the following estimate for the value of the partial sum just to the right of the jump
point x = 0:
    S_n(2π/(2n+1)) ≈ 1.09 f(0⁺) − 0.09 f(0⁻).     (A.29)
A similar estimate holds just to the left of x = 0, that is,
    S_n(−2π/(2n+1)) ≈ 1.09 f(0⁻) − 0.09 f(0⁺).     (A.30)
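The constants in (A.27)-(A.28) are easy to verify numerically; the sketch below evaluates the two integrals for one moderately large n (the value n = 100 is an arbitrary illustrative choice), using the sum form of D_n to avoid the removable singularity of the closed form.
% Numerical check of the Dirichlet-kernel integrals (A.27)-(A.28).
n  = 100;
Dn = @(x) 1 + 2*sum(cos((1:n)'.*x), 1);          % D_n(x), sum form
I1 = integral(@(xi) Dn(2*pi/(2*n+1) - xi), -pi, 0)/(2*pi)   % approx -0.0897
I2 = integral(@(xi) Dn(2*pi/(2*n+1) - xi),  0, pi)/(2*pi)   % approx  1.0897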