Chapter 6
Systems of Differential Equations
6.1 Introduction
6.1.1 Modeling with systems of equations
We start with a motivational example. Let $x(t), y(t)$ be respectively the populations of some kind of prey and predator at time $t$. We would like to model the dynamics of the populations in terms of differential equations. Our model is based on some assumptions. For example, in the absence of any predator, we assume that the population of prey increases according to the formula
\[ \frac{dx}{dt} = r_1 x, \]
where $r_1 > 0$ is the offspring rate of the prey. The hunting rate, which we assume is of the form $k_1 xy$, causes a decrease in $x(t)$, and thus we can write
\[ \frac{dx}{dt} = r_1 x - k_1 xy, \]
for some constant $k_1 > 0$. Similarly, in the absence of any prey, the population of predators decreases according to the formula
\[ \frac{dy}{dt} = -r_2 y, \]
for some death rate $r_2 > 0$. The hunting rate $k_2 xy$ contributes to the increase of $y(t)$, and then
\[ \frac{dy}{dt} = -r_2 y + k_2 xy, \]
for some $k_2 > 0$. Finally, we can write the mathematical model of prey and predator as follows:
\[ \begin{cases} \dfrac{dx}{dt} = r_1 x - k_1 xy \\[1ex] \dfrac{dy}{dt} = -r_2 y + k_2 xy \end{cases} \tag{6.1} \]
Note that the above equations constitute a system in the following sense: to solve the first equation for $x$, one needs $y(t)$, which is obtained by solving the second equation, while solving the second equation for $y(t)$ needs $x(t)$, which requires solving the first equation. Therefore, the above system must be solved simultaneously for $x(t), y(t)$. The system cannot be solved by the standard methods we have learned in previous sections for scalar equations. However, numerical methods can be employed to solve it. The following figure shows the solution for $r_1 = 4, k_1 = 0.4$ and $r_2 = 2, k_2 = 0.2$ on $0 \le t \le 10$ with the initial condition $(x(0), y(0)) = (4, 3)$. Observe how the population of predators follows the population of prey with a specific time delay.
[Figure: solution curves $x(t)$ (prey) and $y(t)$ (predator) for $0 \le t \le 10$.]
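As a concrete illustration (an addition, not part of the original text), the minimal sketch below reproduces this experiment numerically, assuming SciPy and Matplotlib are available; `solve_ivp` uses an adaptive Runge-Kutta method by default.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Parameter values quoted in the text.
r1, k1, r2, k2 = 4.0, 0.4, 2.0, 0.2

def prey_predator(t, u):
    """Right-hand side of system (6.1)."""
    x, y = u
    return [r1 * x - k1 * x * y, -r2 * y + k2 * x * y]

# Integrate on 0 <= t <= 10 with (x(0), y(0)) = (4, 3).
sol = solve_ivp(prey_predator, (0, 10), [4, 3], dense_output=True)
t = np.linspace(0, 10, 400)
x, y = sol.sol(t)

plt.plot(t, x, label="x(t) (prey)")
plt.plot(t, y, label="y(t) (predator)")
plt.xlabel("t"); plt.legend(); plt.show()
```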
It may seem that if the death rate $r_2$ increases then $y(t)$ will decrease. The following figure shows the solution for $r_2 = 2.5$ instead of $r_2 = 2$ in the above figure. While the maximum of $y(t)$ in the previous case was about 27, the increase of the death rate $r_2$ raises the maximum of $y(t)$ to more than 29. The reason is clear: the decrease of $y(t)$ increases $x(t)$, which means more food and thus an increase in $y(t)$. This mutual influence is a characteristic of systems.
[Figure: solution curves $x(t)$ and $y(t)$ for $r_2 = 2.5$.]
6.1.2 Phase plane
In the prey-predator example, the plane $(x, y)$ is called the phase plane. Note that in the final analysis, the solution of the system is $(x(t), y(t))$, which can be considered as a parametric curve $\gamma(t) := (x(t), y(t))$ in the phase plane. This curve is called a trajectory of the system. On the other hand, the right hand side of the system (6.1) can be considered as a vector field in the phase plane, that is,
\[ F(x, y) = \begin{pmatrix} r_1 x - k_1 xy \\ -r_2 y + k_2 xy \end{pmatrix}. \]
The following figure shows the vector field for the parameters $r_1 = 4, k_1 = 0.4, r_2 = 2, k_2 = 0.2$, together with a few trajectories.

[Figure: the vector field and several trajectories in the phase plane $(x, y)$.]
It is observed that the trajectories are tangent to the vector field in the phase plane. This fact is evident from the system (6.1) as well. If $\gamma(t)$ is a solution of the system, then
\[ \frac{d}{dt}\gamma(t) = F(\gamma(t)). \]
Remember that $\frac{d}{dt}\gamma(t)$ is the tangent vector to $\gamma(t)$ at any time $t$, and thus $F(\gamma(t))$ is tangent to the trajectory $\gamma(t)$.

Observe also that the trajectories are closed curves. A closed trajectory is called an orbit of the system and shows that the solutions of the system are periodic with respect to time $t$. In other words, if $\gamma(t)$ is closed, then there is some $\omega > 0$ such that $\gamma(t + \omega) = \gamma(t)$, and thus $x(t), y(t)$ are periodic functions with period $\omega$. In addition, the trajectories seem to orbit around a specific point. This point is called an equilibrium of the system. Let us find the equilibria of (6.1). Since the time derivative at an equilibrium is zero, we have
\[ \begin{pmatrix} r_1 x - k_1 xy \\ -r_2 y + k_2 xy \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \]
which gives $p_1 = (0, 0)$ and $p_2 = \left(\frac{r_2}{k_2}, \frac{r_1}{k_1}\right)$. For the chosen parameters of the figure, the nontrivial equilibrium is $(10, 10)$, which is clear from the figure as well.
Problem 6.1. The prey-predator system in the phase plane reduces to a scalar differential equation as
\[ \frac{dy}{dx} = \frac{-y(r_2 - k_2 x)}{x(r_1 - k_1 y)}. \]
Solve the above equation for $y = y(x)$.
6.1.3 Linearization of nonlinear systems
As we saw above, the prey-predator model is a nonlinear system due to the multiplicative term $xy$. Unlike linear systems, nonlinear ones sometimes show very complicated behaviors, and thus the linearized version of a nonlinear system is sometimes used instead of the original one. Using the linearization formula for single-variable functions,
\[ f(x) \approx f(x_0) + f'(x_0)(x - x_0), \]
we can write the linearization of a smooth function $\binom{f}{g}: D \subset \mathbb{R}^2 \to \mathbb{R}^2$ at a given point $(x_0, y_0)$ as follows:
\[ \begin{pmatrix} f(x, y) \\ g(x, y) \end{pmatrix} \approx \begin{pmatrix} f(x_0, y_0) \\ g(x_0, y_0) \end{pmatrix} + J_f(x_0, y_0)\begin{pmatrix} x - x_0 \\ y - y_0 \end{pmatrix}, \]
where $J_f$ is the Jacobi matrix
\[ J_f = \begin{bmatrix} \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} \\[1ex] \dfrac{\partial g}{\partial x} & \dfrac{\partial g}{\partial y} \end{bmatrix}. \]
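The Jacobi matrix of the prey-predator field can also be computed symbolically. The following sketch (an illustrative addition, assuming SymPy is available) evaluates it at the nontrivial equilibrium and recovers the matrix used in the linearization below.

```python
import sympy as sp

x, y, r1, k1, r2, k2 = sp.symbols('x y r1 k1 r2 k2', positive=True)
f = r1 * x - k1 * x * y
g = -r2 * y + k2 * x * y

J = sp.Matrix([f, g]).jacobian([x, y])
# Evaluate at the nontrivial equilibrium (r2/k2, r1/k1).
J_eq = sp.simplify(J.subs({x: r2 / k2, y: r1 / k1}))
print(J_eq)   # Matrix([[0, -k1*r2/k2], [k2*r1/k1, 0]])
```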
The most common point of a dynamical system at which to linearize is an equilibrium point. For the prey-predator system (6.1) with the equilibrium $\left(\frac{r_2}{k_2}, \frac{r_1}{k_1}\right)$, we have
\[ \frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} \approx F\!\left(\frac{r_2}{k_2}, \frac{r_1}{k_1}\right) + \begin{bmatrix} 0 & -\dfrac{k_1 r_2}{k_2} \\[1ex] \dfrac{k_2 r_1}{k_1} & 0 \end{bmatrix}\begin{pmatrix} x - \dfrac{r_2}{k_2} \\[1ex] y - \dfrac{r_1}{k_1} \end{pmatrix}, \]
and since $F\!\left(\frac{r_2}{k_2}, \frac{r_1}{k_1}\right) = 0$, we derive the linearized system as follows:
\[ \frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} \approx \begin{bmatrix} 0 & -\dfrac{k_1 r_2}{k_2} \\[1ex] \dfrac{k_2 r_1}{k_1} & 0 \end{bmatrix}\begin{pmatrix} x - \dfrac{r_2}{k_2} \\[1ex] y - \dfrac{r_1}{k_1} \end{pmatrix}. \]
Furthermore, if we take $X = x - \frac{r_2}{k_2}$, $Y = y - \frac{r_1}{k_1}$, we obtain the following simple linear system
\[ \frac{d}{dt}\begin{pmatrix} X \\ Y \end{pmatrix} \approx \begin{bmatrix} 0 & -\dfrac{k_1 r_2}{k_2} \\[1ex] \dfrac{k_2 r_1}{k_1} & 0 \end{bmatrix}\begin{pmatrix} X \\ Y \end{pmatrix}, \]
or equivalently
\[ \begin{cases} \dfrac{dX}{dt} = -\dfrac{k_1 r_2}{k_2}\, Y \\[1ex] \dfrac{dY}{dt} = \dfrac{k_2 r_1}{k_1}\, X \end{cases}. \]
Note that the above system yields the following equation for $X(t)$, obtained by differentiating the first equation and substituting $\frac{dY}{dt}$ from the second equation:
\[ \frac{d^2 X}{dt^2} + r_1 r_2 X = 0, \]
which is solved by
\[ X(t) = C_1 \cos(\sqrt{r_1 r_2}\, t) + C_2 \sin(\sqrt{r_1 r_2}\, t). \]
A similar solution is derived easily for $Y(t)$. The following figure shows $y(t)$ for the original system and for the linearized system with the same parameters $r_1, k_1, r_2, k_2$ as before and the initial condition $x_0 = 12, y_0 = 6$. The population of prey shows similar behavior. Note that $(x_0, y_0)$ is not very far from the equilibrium point $(10, 10)$.
[Figure: $y(t)$ for the original and the linearized systems.]
The trajectories in the phase plane show the differences and similarities better:

[Figure: trajectories of the original and the linearized systems in the phase plane.]
Remark 6.1. Nonlinear systems sometimes show extremely complicated or even chaotic dynamics. Therefore, the linearization of such a system does not completely capture its intrinsic dynamics. One of the most well-known such systems is the Lorenz system, studied by E. Lorenz in 1963 in his work on atmospheric convection, of the following form
\[ \begin{cases} x' = \alpha(y - x) \\ y' = rx - y - xz \\ z' = xy - \beta z \end{cases}, \]
for some constants $\alpha, r, \beta$. The following figure shows the solution in the phase space $(x, y, z)$ for the specific values $\alpha = 10, r = 28, \beta = \frac{8}{3}$.
[Figure: a trajectory of the Lorenz system in the phase space $(x, y, z)$.]
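A minimal sketch (an added illustration, assuming SciPy and Matplotlib; the initial point $(1, 1, 1)$ is an arbitrary choice, not from the text) that integrates the Lorenz system with the quoted parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

alpha, r, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, u):
    x, y, z = u
    return [alpha * (y - x), r * x - y - x * z, x * y - beta * z]

# Arbitrary initial condition; a small max_step keeps the curve smooth.
sol = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], max_step=0.01)

ax = plt.figure().add_subplot(projection="3d")
ax.plot(sol.y[0], sol.y[1], sol.y[2], lw=0.5)
plt.show()
```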
It is simply seen that the system has a nontrivial equilibrium at $(\sqrt{72}, \sqrt{72}, 27)$, and thus the linearized system around this equilibrium is
\[ \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{bmatrix} -10 & 10 & 0 \\ 1 & -1 & -\sqrt{72} \\ \sqrt{72} & \sqrt{72} & -\frac{8}{3} \end{bmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}. \]
The solution of the above linear system is a spiral and is never chaotic.

[Figure: a spiral trajectory of the linearized Lorenz system.]
6.2 General first-order systems
6.2.1 Linear and nonlinear systems
The general form of a first-order system is
\[ \frac{d}{dt}y = f(t, y), \tag{6.2} \]
where $y = \begin{pmatrix} y_1(t) \\ \vdots \\ y_n(t) \end{pmatrix}$ and $f = \begin{pmatrix} f_1 \\ \vdots \\ f_n \end{pmatrix}$. If $f$ is independent of $t$, the system is called autonomous. The general form of a first-order autonomous system is
\[ \frac{d}{dt}y = f(y). \tag{6.3} \]
A value $\bar{y}$ is called an equilibrium of the above autonomous system if $f(\bar{y}) = 0$. In this case, $y(t)$ is constant and $y(t) = \bar{y}$ for all $t$. If every $f_i$ is linear with respect to $y_1, \ldots, y_n$, then the system is linear. The general form of a linear system is
\[ \frac{d}{dt}y = [a_{ij}(t)]\, y + b(t), \]
where $b = \begin{pmatrix} b_1(t) \\ \vdots \\ b_n(t) \end{pmatrix}$. If $b$ is identically zero, then the linear system is called homogeneous. If the coefficient matrix $A = [a_{ij}(t)]$ is independent of $t$, the system is called a linear system with constant coefficients.
It is interesting to note that every second- and higher-order scalar differential equation can be rewritten as a system of first-order equations. The standard method to convert a higher-order equation to a system is as follows. For example, consider the following second-order equation
\[ y'' = f(t, y, y'). \]
Let us rename $y$ as $y_1$ and write
\[ \begin{cases} y_1' = y_2 \\ y_2' = f(t, y_1, y_2) \end{cases}. \]
The process for higher-order equations is completely similar. For example, the equation
\[ y''' = f(t, y, y', y''), \]
can be written as
\[ \begin{cases} y_1' = y_2 \\ y_2' = y_3 \\ y_3' = f(t, y_1, y_2, y_3) \end{cases}. \]
Example 6.1. Let us rewrite the following equation as a system:
\[ y''' + y'y'' + yy' = 1 + x. \]
We take $y_1 = y$, and write
\[ \begin{cases} y_1' = y_2 \\ y_2' = y_3 \\ y_3' = -y_2 y_3 - y_1 y_2 + 1 + x \end{cases}. \]
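The same conversion is exactly what a numerical solver expects. The sketch below (an illustrative addition, assuming SciPy; the initial data $y(0) = 1$, $y'(0) = 0$, $y''(0) = 0$ are hypothetical) integrates the first-order form of the equation of Example 6.1.

```python
from scipy.integrate import solve_ivp

def rhs(x, y):
    """First-order form of y''' + y'y'' + yy' = 1 + x, with y = (y1, y2, y3)."""
    y1, y2, y3 = y
    return [y2, y3, -y2 * y3 - y1 * y2 + 1 + x]

# Hypothetical initial data: y(0) = 1, y'(0) = 0, y''(0) = 0.
sol = solve_ivp(rhs, (0, 5), [1.0, 0.0, 0.0])
print(sol.y[0, -1])   # approximate value of y at x = 5
```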
The following linear equation
\[ y'' + 3y' + 2y = e^t \]
can be written as the following first-order system:
\[ \frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} 0 \\ e^t \end{pmatrix}. \]
Definition 6.1. A vector function $y = \varphi(t) \in \mathbb{R}^n$ is called a solution to the system (6.2) if there is an open interval $I$ such that
\[ \varphi'(t) = f(t, \varphi(t)), \quad \forall t \in I. \]
Theorem 6.1. (existence & uniqueness) Consider the following initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = f(t, y) \\ y(t_0) = y_0 \end{cases}. \]
If there is a cube $D$ centered at $(t_0, y_0) \in \mathbb{R} \times \mathbb{R}^n$ such that $f$ is continuous on $D$, then the above initial value problem has at least one solution. In addition, if $\frac{\partial f_i}{\partial y_j}$ is continuous on $D$ for all $f_i \in f$ and $y_j \in y$, then the problem has a unique solution.
Corollary 6.1. A linear homogeneous system with a constant coefficient matrix has exactly $n$ linearly independent solution vectors.

Proof. Consider the following system
\[ \frac{d}{dt}y = [a_{ij}]\, y, \]
and the associated initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = [a_{ij}]\, y \\ y(0) = \hat{e}_i \end{cases}, \]
where $\hat{e}_i$ is the $i$-th unit vector in the direction of $y_i$. By the existence and uniqueness theorem, there is a unique solution $\varphi_i(t)$ for each $i = 1, \ldots, n$. In fact, $f(t, y) = [a_{ij}]\, y$ is continuous, and $\frac{\partial f_i}{\partial y_j} = a_{ij}$ are continuous as well. The set $\{\varphi_i(t)\}_{i=1}^n$ is a set of linearly independent vectors because
\[ \det[\varphi_1(0) \,|\, \varphi_2(0) \,|\, \cdots \,|\, \varphi_n(0)] = \det[\hat{e}_1 \,|\, \hat{e}_2 \,|\, \cdots \,|\, \hat{e}_n] = 1 \neq 0. \]
We leave the last part of the proof to the reader, that is, to prove that $\{\varphi_i\}_{i=1}^n$ spans the solution set of the system; see the problem set.
6.2.2 Vector fields and the geometry of trajectories
Consider a particle moving in the plane $(y_1, y_2)$ following the rule
\[ \begin{cases} y_1' = f(y_1, y_2) \\ y_2' = g(y_1, y_2) \end{cases}. \tag{6.4} \]
The location of the particle in the plane at time $t$ is $\gamma(t) = (y_1(t), y_2(t))$. The path or trace of the particle in this plane is called the trajectory of the particle; see Fig. 6.1. From the physics point of view, $\gamma'(t)$ is the velocity vector $\vec{v}$ of the trajectory, which is the tangent vector to the trajectory at the point $\gamma(t)$:
\[ \gamma'(t) = F(\gamma(t)), \]
for $F = \binom{f}{g}$.
[Figure 6.1: a trajectory $\gamma$ in the phase plane $(y_1, y_2)$, with velocity vectors $\vec{v}$ tangent to the curve.]
From the geometry point of view, $F = \binom{f}{g}$ defines a vector field on the phase plane $(y_1, y_2)$. The following figure shows how the particle moves in a velocity vector field. The geometry of this vector field provides us with some information about the trajectories of the particle in the plane.

[Figure: a slope field and a trajectory in the $(y_1, y_2)$ plane.]
For example, let us consider the following velocity vector field
\[ F = \begin{pmatrix} y_1^2 + y_2^2 + 2y_1 \\ y_1^2 + y_2^2 - 2y_2 \end{pmatrix}. \]
Obviously the points $(0, 0)$ and $(-1, 1)$ are critical points, or equilibria, of the field. The trajectory of the particle that moves in this field depends on the initial position. The following figure shows a few trajectories of the system based on the initial location of the particle.
[Figure: several trajectories $\gamma(t)$ starting from different initial points $\gamma(0)$ in the $(y_1, y_2)$ plane.]
Example 6.2. Consider the following system
\[ \begin{cases} y_1' = -y_2 \\ y_2' = y_1 \end{cases}. \]
The vector field is rotational, as is seen from the curl of the field: $\nabla \times \begin{pmatrix} -y_2 \\ y_1 \\ 0 \end{pmatrix} = 2\hat{k}$. Two trajectories are shown in the following figure.
[Figure: two circular trajectories centered at the origin.]
Let $V = |y_1(t)|^2 + |y_2(t)|^2$ be the squared magnitude of the solution. Then it is simply seen that
\[ \frac{d}{dt}V = 2y_1(t)\frac{dy_1}{dt} + 2y_2(t)\frac{dy_2}{dt} = -2y_1(t)\,y_2(t) + 2y_2(t)\,y_1(t) = 0, \]
and thus $V(y_1, y_2) = \text{const}$, that is, the trajectories are circles. Now consider the following system
\[ \begin{cases} y_1' = -y_2 - \varepsilon y_1 \\ y_2' = y_1 \end{cases}, \]
for $\varepsilon > 0$, and let $V(y_1, y_2)$ be the same function as before. We have
\[ \frac{d}{dt}V = -2\varepsilon y_1^2(t), \]
and thus $V(y_1(t), y_2(t))$ decreases with respect to time. It turns out that
\[ \lim_{t \to \infty} V(y_1(t), y_2(t)) = 0, \]
and $y_1(t), y_2(t)$ approach zero in the long term, as the following figure shows.
[Figure: an inward spiral trajectory approaching the origin.]
It is simply seen that the trajectory of the following system is an outward spiral:
\[ \begin{cases} y_1' = -y_2 + \varepsilon y_1 \\ y_2' = y_1 \end{cases}, \]
for $\varepsilon > 0$.
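The decay of $V$ along trajectories of the damped system can be checked numerically. The sketch below is an added illustration (assuming SciPy; the value $\varepsilon = 0.2$ and the starting point are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.2  # arbitrary positive damping

def damped(t, u):
    y1, y2 = u
    return [-y2 - eps * y1, y1]

sol = solve_ivp(damped, (0, 30), [2.0, 0.0], dense_output=True)
t = np.linspace(0, 30, 7)
y1, y2 = sol.sol(t)
V = y1**2 + y2**2
print(V)   # values shrink toward 0, since dV/dt = -2*eps*y1^2 <= 0
```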
Problems
Problem 6.2. We would like to model water pollution diffusion in two connected ponds. Consider two ponds $P_1$ and $P_2$ connected by two channels as shown in the following figure:

[Figure: ponds $P_1$ and $P_2$; a stream $Q_1, c_0$ enters $P_1$ and a stream $Q_1, c_1$ leaves it; channel streams $Q, c_1$ and $Q, c_2$ run between the ponds.]

We make the following assumptions: i) the channels carry constant streams of $Q\ \frac{\text{m}^3}{\text{s}}$ between the two ponds, ii) the pond $P_1$ contains $V_1\ \text{m}^3$ and the pond $P_2$ contains $V_2\ \text{m}^3$ of pure water at time $t = 0$, iii) a pollutant stream of $Q_1\ \frac{\text{m}^3}{\text{s}}$ with concentration $c_0\ \frac{\text{gr}}{\text{m}^3}$ runs into $P_1$, and simultaneously the same amount is discharged out of the pond. The ponds are gradually contaminated by the polluted streams. We would like to determine the functions $c_1(t)\ \frac{\text{gr}}{\text{m}^3}$ and $c_2(t)\ \frac{\text{gr}}{\text{m}^3}$, the pollution concentrations of the ponds $P_1, P_2$ respectively.

a) Write down the conservation law for the quantities $c_1 V_1$ and $c_2 V_2$. Justify that the relations are as follows:
\[ \begin{cases} \dfrac{d(c_1 V_1)}{dt} = c_0 Q_1 + c_2 Q - c_1(Q_1 + Q) \\[1ex] \dfrac{d(c_2 V_2)}{dt} = c_1 Q - c_2 Q \end{cases}. \]

b) Verify that $V_1, V_2$ are constants, and write the system of differential equations for $c_1, c_2$ with given initial conditions.
Problem 6.3. Consider the connected ponds shown below.

[Figure: ponds $P_1$ and $P_2$; streams $Q_1, c_0$ in and $Q_1, c_1$ out of $P_1$; channel streams $Q_2, c_1$ and $Q_2, c_2$ between the ponds; streams $Q_3, 0$ in and $Q_3, c_2$ out of $P_2$.]

A constant flow of $Q_1\ \text{m}^3/\text{s}$ of polluted water with concentration $c_0\ \text{gr}/\text{m}^3$ runs into the pond $P_1$, and a constant flow $Q_3$ of pure water runs into $P_2$. Write down a system of differential equations describing $c_1(t)$, $c_2(t)$, the pollution concentrations of $P_1$ and $P_2$. Suppose $P_1$ and $P_2$ contain respectively $V_1\ \text{m}^3$ and $V_2\ \text{m}^3$ of water initially.
Problem 6.4. Consider the connected ponds shown below.

A constant flow of $Q_1\ \text{m}^3/\text{s}$ of polluted water with concentration $c_0\ \text{gr}/\text{m}^3$ runs into the pond $P_1$. Write down a system of differential equations describing $c_1(t)$, $c_2(t)$, the pollution concentrations of $P_1$ and $P_2$. Suppose $P_1$ and $P_2$ contain respectively $V_1\ \text{m}^3$ and $V_2\ \text{m}^3$ of water initially.
Problem 6.5. Consider the circuit shown below.

We would like to obtain equations for $V_c(t)$, the voltage across the capacitor $C$, and $i(t)$, the electrical current through the inductor $L$.

a) Let $i_1(t)$ and $i_2(t)$ denote respectively the electrical currents in the resistors $R_1$ and $R_2$. Use Kirchhoff's laws and conclude
\[ L\frac{di}{dt} = -R_1 i_1, \tag{6.5} \]
\[ R_1 i_1 + R_2 i_2 + V_c = 0. \tag{6.6} \]

b) Substitute $i_1 = i_2 + i$ into (6.6) and conclude
\[ i_2 = -\frac{1}{R_1 + R_2}V_c - \frac{R_1}{R_1 + R_2}\,i. \tag{6.7} \]

c) Use the relation $i_2 = C\frac{dV_c}{dt}$, and rewrite (6.7) as
\[ \frac{dV_c}{dt} = -\frac{1}{(R_1 + R_2)C}V_c - \frac{R_1}{(R_1 + R_2)C}\,i. \tag{6.8} \]

d) Use the relation $L\frac{di}{dt} = -R_1 i_2 - R_1 i$, substitute $i_2$ from (6.7), and conclude
\[ \frac{di}{dt} = \frac{R_1}{(R_1 + R_2)L}V_c - \frac{R_1 R_2}{(R_1 + R_2)L}\,i. \tag{6.9} \]

e) Rewrite the system for the vector $\binom{V_c}{i}$. It should have the form
\[ \begin{pmatrix} V_c \\ i \end{pmatrix}' = \begin{pmatrix} -\dfrac{1}{(R_1 + R_2)C} & -\dfrac{R_1}{(R_1 + R_2)C} \\[1.5ex] \dfrac{R_1}{(R_1 + R_2)L} & -\dfrac{R_1 R_2}{(R_1 + R_2)L} \end{pmatrix}\begin{pmatrix} V_c \\ i \end{pmatrix}. \]
Problem 6.6. For the circuit shown below, write down a system of first-order equations for $i(t)$, the electric current in the inductor, and $V_c$, the voltage across the capacitor.
Problem 6.7. Verify that the vector functions $\vec{\varphi}_1(x) = \begin{pmatrix} \cos x \\ -\sin x \end{pmatrix}$ and $\vec{\varphi}_2(x) = \begin{pmatrix} \sin x \\ \cos x \end{pmatrix}$ are two independent solutions to the system
\[ \begin{cases} y_1' = y_2 \\ y_2' = -y_1 \end{cases}. \]
Find the fundamental matrix of the system and then find the solution to the following problem:
\[ \begin{cases} y_1' = y_2 \\ y_2' = -y_1 \\ y_1(0) = 1,\ y_2(0) = -1 \end{cases}. \]
Problem 6.8. Verify that the vector functions $\vec{\varphi}_1(x) = e^x\begin{pmatrix} \cos x \\ \sin x \end{pmatrix}$ and $\vec{\varphi}_2(x) = e^x\begin{pmatrix} -\sin x \\ \cos x \end{pmatrix}$ are two independent solutions to the system
\[ \begin{cases} y_1' = y_1 - y_2 \\ y_2' = y_1 + y_2 \end{cases}. \]
Find the fundamental matrix of the system and then find the solution to the following problem:
\[ \begin{cases} y_1' = y_1 - y_2 \\ y_2' = y_1 + y_2 \\ y_1(0) = 1,\ y_2(0) = 0 \end{cases}. \]
Problem 6.9. Write down the following higher-order equations in the form of a system of first-order equations. If an equation is linear, write it in matrix form.

i. $y'' + yy' + y = e^{-x}$

ii. $y'' + \omega^2 y = \sin(\omega x)$

iii. $y''' + xy'' + 2y = 0$

iv. $y''' + \sin(y) = y' - 1$
Problem 6.10. Consider the following system
\[ \begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \]
Find an equivalent second-order equation for the above system and show that its characteristic equation is $\det(A - \lambda I) = 0$, where $A$ is the coefficient matrix of the system.
Problem 6.11. Systems which are not fully coupled are sometimes easy to solve. For the following semi-coupled systems, try to find the solution.

i. $\begin{cases} y_1' = 2y_1 + y_2 \\ y_2' = -y_2 \end{cases}$

ii. $\begin{cases} y_1' = 3y_1 \\ y_2' = y_1(1 + y_2^2) \end{cases}$

iii. $\begin{cases} y_1' = -y_1 + x + 1 \\ y_2' = y_2/y_1 + y_2^2/y_1^2 \\ y_1(1) = 1,\ y_2(1) = -1 \end{cases}$
Problem 6.12. For each of the following vector fields, use computer software to draw the trajectory passing through the given point. Use the method described in this section to construct an approximate trajectory passing through the point (you can take the step size $h = 0.1$).

i. $V = (-2y, x)$, $p_0 = (1, 0)$

ii. $V = (-y + x, x)$, $p_0 = (1, 0)$

iii. $V = (y + x, -3y)$, $p_0 = (1, 0)$
Problem 6.13. Verify that for each of the following vector fields, the given parametric curve is a trajectory.

i. $V = (-y, x)$, $\gamma(t) = (\cos t, \sin t)$

ii. $V = (-y, 2x - 2y)$, $\gamma(t) = e^{-t}(\cos t, \cos t + \sin t)$

iii. $V = (y, 2x + y)$, $\gamma(t) = (e^{2t} - e^{-t}, 2e^{2t} + e^{-t})$

iv. $V = (-2y, 13x - 2y)$, $\gamma(t) = e^{-t}(5\cos(5t) + \sin(5t), 13\sin(5t))$
Problem 6.14. Assume that $\gamma_1(t)$ and $\gamma_2(t)$ are trajectories of the following direction fields respectively:
\[ W_1 = (\cos y,\ \sin x - \sin y), \qquad W_2 = (\sin y - \sin x,\ \cos y). \]
Prove that if $\gamma_1(t)$ and $\gamma_2(t)$ intersect at a point, then they intersect orthogonally.
Problem 6.15. For the matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, assume $|A| \neq 0$. Show that for arbitrary $\lambda \neq 0$, the integral curves of the two systems $\vec{y}_1{}' = A\vec{y}_1$ and $\vec{y}_2{}' = \lambda A\vec{y}_2$ are parallel.
Problem 6.16. Let $R_\theta$ be the rotation matrix
\[ R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}. \]
Prove that the integral curves of the systems $\vec{y}_1{}' = R_\theta\,\vec{y}_1$ and $\vec{y}_2{}' = R_{(\theta + \pi/2)}\,\vec{y}_2$ are orthogonal at their intersections.
Problem 6.17. Consider the following coupled mass-spring system.

[Figure: masses $m_1$ and $m_2$ with positions $x_1$, $x_2$, connected by springs $k_1$ and $k_2$.]

The goal is to write equations describing $x_1(t)$, $x_2(t)$, the positions of the masses $m_1$ and $m_2$ respectively.

a) Use Newton's second law for the mass $m_1$ and conclude
\[ m_1 x_1'' = -(k_1 + k_2)x_1 + k_2 x_2. \tag{6.10} \]

b) Use Newton's second law for the mass $m_2$ and conclude
\[ m_2 x_2'' = k_2 x_1 - k_2 x_2. \tag{6.11} \]

c) By taking $v_1 = x_1'$, $v_2 = x_2'$, write down a first-order system for the given mass-spring system. The answer should have the form
\[ \vec{s}\,' = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -\dfrac{k_1 + k_2}{m_1} & 0 & \dfrac{k_2}{m_1} & 0 \\ 0 & 0 & 0 & 1 \\ \dfrac{k_2}{m_2} & 0 & -\dfrac{k_2}{m_2} & 0 \end{pmatrix}\vec{s}, \tag{6.12} \]
for the state vector $\vec{s} = (x_1, v_1, x_2, v_2)$.
Problem 6.18. Write a system of differential equations describing the system shown below.

[Figure: masses $m_1$, $m_2$ with positions $x_1$, $x_2$, connected by springs $k_1$, $k_2$, $k_3$.]
Problem 6.19. For the coupled damped mass-spring system shown below, write down the equation of motion as a linear system of first-order equations.

[Figure: masses $m_1$, $m_2$ with positions $x_1$, $x_2$, springs $k_1$, $k_2$, and dampers $d_1$, $d_2$.]
Problem 6.20. Write down the following higher-order equations in the form of a system of first-order equations. If an equation is linear, write it in matrix form.

i. $y''' + xy'' + 2y = 0$

ii. $y''' + \sin(y) = y' - 1$
Problem 6.21. Assume that $F(y)$ is a smooth vector field for $y \in \mathbb{R}^n$. Prove that it is impossible for two trajectories of the system $y' = F(y)$ to intersect each other.
Problem 6.22. We observed that vector fields are tangent to the associated trajectories. From the physics point of view, the vector field of a first-order system is the velocity field of a particle moving in the phase plane. This property can be used to construct trajectories numerically as well. Assume a particle is located at $(x_0, y_0)$ in the phase plane at time $t = 0$ and follows the equation
\[ \frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = F(x, y). \]
The velocity vector at $(x_0, y_0)$ is $\vec{v} = F(x_0, y_0)$. Therefore, one can use the linear approximation formula at $t = h \ll 1$ and write
\[ \begin{pmatrix} x(h) \\ y(h) \end{pmatrix} \approx \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} + hF(x_0, y_0). \]
Accordingly, we obtain the following recursive formula
\[ \begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} \approx \begin{pmatrix} x_n \\ y_n \end{pmatrix} + hF(x_n, y_n), \]
where $x_n := x(nh)$, $y_n := y(nh)$ for a small step size $h$; a short code sketch of this scheme is given after this problem.

a) Follow the above algorithm and draw the solution $x(t), y(t)$ of the system (6.1) for $t \in [0, 2]$ using the step size $h = 0.1$ and the initial condition $x_0 = 8, y_0 = 5$. Take the parameters as $r_1 = 4, k_1 = 0.4, r_2 = 2, k_2 = 0.2$.

b) Draw the obtained solution in the phase plane $(x, y)$.
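A sketch of this Euler scheme for part (a), as an added illustration in plain NumPy:

```python
import numpy as np

r1, k1, r2, k2 = 4.0, 0.4, 2.0, 0.2
h, n_steps = 0.1, 20           # t in [0, 2]

def F(x, y):
    """Right-hand side of system (6.1)."""
    return np.array([r1 * x - k1 * x * y, -r2 * y + k2 * x * y])

u = np.array([8.0, 5.0])       # (x0, y0)
traj = [u.copy()]
for _ in range(n_steps):
    u = u + h * F(*u)          # Euler step: u_{n+1} = u_n + h F(u_n)
    traj.append(u.copy())

traj = np.array(traj)
print(traj[-1])                # approximation of (x(2), y(2))
```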
6.3 Linear homogeneous systems
We present a method to solve 2D systems with constant coefficient matrices, $y' = Ay$, $y \in \mathbb{R}^2$. There is no general method to solve linear systems with variable coefficients.
6.3.1 Outline of the method
Assume that a particle is moving in the phase plane $(y_1, y_2)$ subject to the law $y' = Ay$, and assume that $\vec{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$. Consider the following initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = Ay \\ y(0) = \vec{v} \end{cases}. \]
Note that
\[ \frac{dy}{dt}(0) = Ay(0) = A\vec{v} = \lambda\vec{v}, \]
and thus the velocity vector of the particle at $t = 0$ lies along the eigenvector $\vec{v}$. This implies that the particle remains along $\vec{v}$ for all $t$, and therefore we can write the system as a scalar one along $\vec{v}$, that is,
\[ \frac{dz}{dt} = \lambda z, \]
where $z$ is a variable along $\vec{v}$. The solution of the above scalar equation is $z = e^{\lambda t}$, or equivalently
\[ y(t) = e^{\lambda t}\vec{v}. \]
The following figure shows the above argument schematically.

[Figure: a trajectory along an eigenvector; the velocity is $\lambda y(0)$ at $y(0)$ and $\lambda y(t)$ at $y(t)$.]
Proposition 6.1. Consider the following initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = Ay \\ y(0) = c\vec{v} \end{cases}, \]
where $\vec{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$. Then the unique solution of the problem is
\[ \varphi(t) = ce^{\lambda t}\vec{v}. \]

Proof. Existence and uniqueness follow from the existence and uniqueness theorem for the given initial value problem. We need only verify that the given vector function solves the given problem. It satisfies the given initial condition, and furthermore we have
\[ \frac{d}{dt}\varphi(t) = c\lambda e^{\lambda t}\vec{v}, \]
and
\[ A\varphi(t) = ce^{\lambda t}A\vec{v} = c\lambda e^{\lambda t}\vec{v}, \]
and thus $y = \varphi(t)$ satisfies the given problem.
6.3.2 Two real distinct eigenvalues
Consider the following initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = Ay \\ y(0) = y_0 \end{cases}, \]
and assume $A$ has two real distinct eigenvalues $\lambda_1, \lambda_2$. It is simply seen that their associated eigenvectors $\vec{v}_1, \vec{v}_2$ are linearly independent. Therefore, we can write the initial condition $y_0$ as the linear combination
\[ y_0 = c_1\vec{v}_1 + c_2\vec{v}_2, \]
and thus the given initial value problem reads
\[ \begin{cases} \dfrac{d}{dt}y = Ay \\ y(0) = c_1\vec{v}_1 + c_2\vec{v}_2 \end{cases}. \]
The superposition principle allows us to write the above problem as the sum of the following two sub-problems:
\[ (1)\ \begin{cases} \dfrac{d}{dt}y = Ay \\ y(0) = c_1\vec{v}_1 \end{cases}, \qquad (2)\ \begin{cases} \dfrac{d}{dt}y = Ay \\ y(0) = c_2\vec{v}_2 \end{cases}. \]
Obviously, the first problem is solved by $\varphi_1 = c_1 e^{\lambda_1 t}\vec{v}_1$, and the second one by $\varphi_2 = c_2 e^{\lambda_2 t}\vec{v}_2$.
Example 6.3. Consider the following initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}y \\ y(0) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} \end{cases}. \]
The coefficient matrix has eigenvalues $\lambda_1 = 1, \lambda_2 = 2$ with associated eigenvectors $\vec{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $\vec{v}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Note that we can write $y_0$ as
\[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 0 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \]
and thus
\[ \varphi(t) = 2e^t\begin{pmatrix} 1 \\ 0 \end{pmatrix} - e^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 2e^t - e^{2t} \\ -e^{2t} \end{pmatrix}. \]
Example 6.4. Consider the following system
\[ \begin{cases} y_1' = 3y_1 - 4y_2 \\ y_2' = y_1 - 2y_2 \\ y_1(0) = 1,\ y_2(0) = -1 \end{cases}. \tag{6.13} \]
The coefficient matrix $A$ is
\[ A = \begin{pmatrix} 3 & -4 \\ 1 & -2 \end{pmatrix}, \tag{6.14} \]
with eigenvalues $\lambda_1 = -1$ and $\lambda_2 = 2$. The associated eigenvectors are $\vec{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $\vec{v}_2 = \begin{pmatrix} 4 \\ 1 \end{pmatrix}$. Since
\[ y(0) = -\frac{5}{3}\vec{v}_1 + \frac{2}{3}\vec{v}_2, \tag{6.15} \]
we obtain
\[ y(t) = -\frac{5}{3}e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + \frac{2}{3}e^{2t}\begin{pmatrix} 4 \\ 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 8e^{2t} - 5e^{-t} \\ 2e^{2t} - 5e^{-t} \end{pmatrix}. \tag{6.16} \]
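The computation of Example 6.4 can be checked with NumPy. This is an added illustration; note that `numpy.linalg.eig` returns unit-norm eigenvectors, so the coefficients differ from the hand computation by a scaling of each eigenvector.

```python
import numpy as np

A = np.array([[3.0, -4.0], [1.0, -2.0]])
y0 = np.array([1.0, -1.0])

lam, V = np.linalg.eig(A)      # columns of V are eigenvectors
c = np.linalg.solve(V, y0)     # coefficients of y0 in the eigenbasis

def y(t):
    return V @ (c * np.exp(lam * t))

print(y(0.0))   # [ 1. -1.]
print(y(1.0))   # matches (8e^2 - 5e^-1, 2e^2 - 5e^-1)/3
```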
6.3.3 Geometry of the trajectories
The following figure shows schematically the phase plane of a matrix $A$ with two eigenvectors $\vec{v}_1, \vec{v}_2$ associated to eigenvalues $\lambda_1 < 0$ and $\lambda_2 > 0$. Consider a particle $p$ moving in accordance with the system
\[ \frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = A\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \]
If the particle lies initially in the direction of $\vec{v}_1$, it gradually approaches the origin, as shown in the figure. On the other hand, if it is located on $\vec{v}_2$, it moves along the same direction away from the origin. Now, assume that it is located at $y_0$. The vector $y_0$ can be uniquely decomposed into a component $p_1$ in the direction of $\vec{v}_1$ and a component $p_2$ in the direction of $\vec{v}_2$. After a time $\delta t$, the point $p_1$ reaches $p_1(\delta t)$, and $p_2$ reaches $p_2(\delta t)$. As shown in the figure, the initial point $y_0$ moves to $y(\delta t)$ at $t = \delta t$.

[Figure 6.2: decomposition of $y_0$ along $\vec{v}_1$ ($\lambda_1 < 0$) and $\vec{v}_2$ ($\lambda_2 > 0$), and the resulting point $y(\delta t)$.]
There are three possible cases for a matrix with two distinct eigenvalues $\lambda_1, \lambda_2$.

$\lambda_2 < \lambda_1 < 0$. In this case, both terms $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$ go to zero as $t \to \infty$, and then
\[ \lim_{t\to\infty} y(t) = 0. \]
The equilibrium point in this case is stable and is called a nodal sink. For example, consider the following system
\[ \frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \]
The eigenvalues are $\lambda_1 = -1, \lambda_2 = -3$ with eigenvectors $\vec{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, $\vec{v}_2 = \begin{pmatrix} -1 \\ 1 \end{pmatrix}$. The following figure shows a few trajectories in the phase plane. Note that $\lambda_2 < \lambda_1$, and thus $e^{\lambda_2 t}$ in the direction of $\vec{v}_2$ vanishes sooner than $e^{\lambda_1 t}$ in the direction of $\vec{v}_1$. Therefore, trajectories approach the origin tangent to $\vec{v}_1$.
[Figure: trajectories of a nodal sink, approaching the origin tangent to $\vec{v}_1$.]
$\lambda_2 < 0 < \lambda_1$. In this case, the direction of $\vec{v}_1$ is unstable and the direction of $\vec{v}_2$ is stable. For example, consider the following system
\[ \frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 2 & 4 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \]
The coefficient matrix has eigenvalues $\lambda_1 = 3, \lambda_2 = -2$ and eigenvectors $\vec{v}_1 = \begin{pmatrix} 4 \\ 1 \end{pmatrix}$, $\vec{v}_2 = \begin{pmatrix} -1 \\ 1 \end{pmatrix}$. The following figure shows some trajectories in the phase plane. The origin in this case is called a saddle point.

[Figure: trajectories near a saddle point.]
$0 < \lambda_1 < \lambda_2$. For example, consider the following system
\[ \frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \]
The eigenvalues are $\lambda_1 = 1, \lambda_2 = 3$ with eigenvectors $\vec{v}_1 = \begin{pmatrix} -1 \\ 1 \end{pmatrix}$, $\vec{v}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. The following figure shows a few trajectories in the phase plane. In this case, the origin is called a nodal source. Note how the trajectories tend toward the dominant direction $\vec{v}_2$ with the bigger eigenvalue.

[Figure: trajectories of a nodal source.]
Example 6.5. Let us draw the phase portrait of a system with a zero eigenvalue. Consider the following system
\[ \frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \]
The coefficient matrix has eigenvalue-eigenvector pairs $\lambda_1 = 0$, $\vec{v}_1 = \begin{pmatrix} -2 \\ 1 \end{pmatrix}$ and $\lambda_2 = 5$, $\vec{v}_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. Since points on $\vec{v}_1$ do not move with respect to time, the trajectories are straight lines perpendicular to the direction of $\vec{v}_1$, as shown in the following figure.

[Figure: straight-line trajectories parallel to $\vec{v}_2$; points on $\vec{v}_1$ are fixed.]
6.3.4 Repeated eigenvalues
If a matrix $A_{2\times 2}$ has a repeated eigenvalue $\lambda$, then there are two possibilities for the set of eigenvectors:

1. any vector of $\mathbb{R}^2$ is an eigenvector, or

2. there is only one eigenvector.

In fact, if $A$ has two linearly independent eigenvectors $\vec{v}_1, \vec{v}_2$ with the same eigenvalue $\lambda$, then for an arbitrary vector $\vec{w}$, there are constants $c_1, c_2$ such that
\[ \vec{w} = c_1\vec{v}_1 + c_2\vec{v}_2, \]
and thus
\[ A\vec{w} = c_1 A\vec{v}_1 + c_2 A\vec{v}_2 = \lambda(c_1\vec{v}_1 + c_2\vec{v}_2) = \lambda\vec{w}. \]
If so, two linearly independent solutions are
\[ \varphi_1(t) = e^{\lambda t}\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \varphi_2(t) = e^{\lambda t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}, \]
and thus the initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = Ay \\ y(0) = y_0 \end{cases} \]
has the solution
\[ \varphi(t) = e^{\lambda t}y_0. \]
The difficult part is the case when $A$ has only one eigenvector. Assume that $\lambda, \vec{v}$ is the eigenvalue-eigenvector pair of a matrix $A$. Then the system
\[ \frac{d}{dt}y = Ay \]
has one solution
\[ \varphi_1(t) = e^{\lambda t}\vec{v}. \]
For the second solution we use a fact from linear algebra. Remember that if a matrix $A_{2\times 2}$ has only one eigenvector $\vec{v}$, then there is a generalized eigenvector $\vec{w}$ such that
\[ (A - \lambda I)\vec{w} = \vec{v}. \tag{6.17} \]
We claim that the second solution is as follows:
\[ \varphi_2(t) = e^{\lambda t}(\vec{w} + t\vec{v}). \]
The verification is straightforward, as given below. We have
\[ \frac{d}{dt}\varphi_2(t) = \lambda e^{\lambda t}(\vec{w} + t\vec{v}) + e^{\lambda t}\vec{v}. \]
On the other hand, we have
\[ A\varphi_2(t) = e^{\lambda t}A(\vec{w} + t\vec{v}), \]
and by the relation $A\vec{w} = \lambda\vec{w} + \vec{v}$, we obtain
\[ A\varphi_2(t) = e^{\lambda t}(\lambda\vec{w} + \vec{v} + t\lambda\vec{v}), \]
and hence
\[ \frac{d}{dt}\varphi_2(t) = A\varphi_2(t). \]
Example 6.6. Consider the system
\[ \begin{cases} y' = \begin{pmatrix} 3 & -1 \\ 1 & 1 \end{pmatrix}y \\ y(0) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} \end{cases}. \tag{6.18} \]
The coefficient matrix has the eigenvalue-eigenvector pair $\lambda = 2$, $\vec{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and the generalized eigenvector is $\vec{w} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Therefore, two linearly independent solutions are
\[ \varphi_1(t) = e^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \varphi_2(t) = e^{2t}\begin{pmatrix} t + 1 \\ t \end{pmatrix}, \]
and thus the general solution is
\[ y(t) = c_1 e^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{2t}\begin{pmatrix} t + 1 \\ t \end{pmatrix}. \]
Applying the initial condition determines $c_1 = -1$, $c_2 = 2$, and hence the solution to the given initial value problem is
\[ \varphi(t) = e^{2t}\begin{pmatrix} 2t + 1 \\ 2t - 1 \end{pmatrix}. \]
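The generalized eigenvector of Example 6.6 can be found mechanically. A minimal sketch (an added illustration, assuming SymPy) solves $(A - \lambda I)\vec{w} = \vec{v}$; the solution is unique only up to multiples of $\vec{v}$, so a free parameter appears and we fix it to zero.

```python
import sympy as sp

A = sp.Matrix([[3, -1], [1, 1]])
lam = 2
v = sp.Matrix([1, 1])                       # eigenvector for lambda = 2

# Solve the singular system (A - lam*I) w = v.
w_param, free = (A - lam * sp.eye(2)).gauss_jordan_solve(v)
w = w_param.subs({s: 0 for s in free})      # fix the free parameter
print(w)                                     # Matrix([[1], [0]])
```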
6.3.5 Trajectories in the phase plane
If $A$ has only one eigenvector, the origin is called an improper or defective point.

$\lambda < 0$. In this case, all trajectories approach the origin in the long term:
\[ \lim_{t\to\infty} y(t) = 0. \]
Furthermore, trajectories are tangent to the unique eigenvector, as shown in the following figure for the matrix $A = \begin{pmatrix} -3 & -1 \\ 1 & -1 \end{pmatrix}$. In this case, the origin is called a defective sink.

[Figure: trajectories of a defective sink.]

If we replace the matrix $A$ with the matrix $B = \begin{pmatrix} -1 & 1 \\ -1 & -3 \end{pmatrix}$, the eigenvalue and eigenvector are the same; however, the trajectories take the following form:

[Figure: trajectories of the defective sink for $B$, rotating in the opposite sense.]
In order to determine the direction of rotation of the trajectories, we can proceed as follows. For the first matrix, let $y(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, and thus
\[ y'(0) = \begin{pmatrix} -3 & -1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -3 \\ 1 \end{pmatrix}, \]
as shown below.

[Figure: the velocity vector $(-3, 1)$ at the point $(1, 0)$.]

For the matrix $B$ we have
\[ y'(0) = \begin{pmatrix} -1 & 1 \\ -1 & -3 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1 \\ -1 \end{pmatrix}, \]
as shown below.

[Figure: the velocity vector $(-1, -1)$ at the point $(1, 0)$.]
$\lambda > 0$. In this case all trajectories go unbounded as $t \to \infty$, as shown in the following figure. The origin is called a defective source.

[Figure: trajectories of a defective source.]
6.3.6 Complex eigenvectors
Complex eigenvalues are characteristic of rotational matrices. If a matrix $A_{2\times 2}$ has a complex eigenpair $(\lambda, \vec{v})$, then it possesses the complex conjugate pair $(\bar{\lambda}, \bar{\vec{v}})$. Consider the following system
\[ \frac{d}{dt}y = Ay, \]
where $A$ has complex eigenpair $(\lambda, \vec{v})$. Then the system has the two solutions
\[ \varphi_1(t) = \mathrm{Re}\{e^{\lambda t}\vec{v}\}, \qquad \varphi_2(t) = \mathrm{Im}\{e^{\lambda t}\vec{v}\}. \]
Example 6.7. Consider the system
\[ \begin{cases} \dfrac{d}{dt}y = \begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix}y \\ y(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \end{cases}. \tag{6.19} \]
An eigenvalue of the coefficient matrix is $\lambda = 1 + 2i$, with the complex eigenvector $\vec{v} = \begin{pmatrix} 1 \\ 1 - i \end{pmatrix}$. Therefore, two independent solutions are
\[ \varphi_1(t) = e^t\,\mathrm{Re}\left\{e^{i2t}\begin{pmatrix} 1 \\ 1 - i \end{pmatrix}\right\} = e^t\begin{pmatrix} \cos(2t) \\ \cos(2t) + \sin(2t) \end{pmatrix}, \]
\[ \varphi_2(t) = e^t\,\mathrm{Im}\left\{e^{i2t}\begin{pmatrix} 1 \\ 1 - i \end{pmatrix}\right\} = e^t\begin{pmatrix} \sin(2t) \\ \sin(2t) - \cos(2t) \end{pmatrix}. \]
Therefore, the general solution is
\[ y(t) = c_1 e^t\begin{pmatrix} \cos(2t) \\ \cos(2t) + \sin(2t) \end{pmatrix} + c_2 e^t\begin{pmatrix} \sin(2t) \\ \sin(2t) - \cos(2t) \end{pmatrix}, \]
and applying the given initial condition yields $c_1 = 0$, $c_2 = -1$, and finally
\[ \varphi(t) = e^t\begin{pmatrix} -\sin(2t) \\ \cos(2t) - \sin(2t) \end{pmatrix}. \]
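Numerically, the real and imaginary parts of $e^{\lambda t}\vec{v}$ can be formed directly. The sketch below is an added illustration using NumPy; the eigenvector returned by `eig` is normalized, so these solutions agree with the formulas above only up to a constant complex factor.

```python
import numpy as np

A = np.array([[3.0, -2.0], [4.0, -1.0]])
lam, V = np.linalg.eig(A)          # complex conjugate pair 1 ± 2i
l, v = lam[0], V[:, 0]             # pick one eigenpair

def phi1(t):                       # Re{e^{lam t} v}
    return (np.exp(l * t) * v).real

def phi2(t):                       # Im{e^{lam t} v}
    return (np.exp(l * t) * v).imag

print(phi1(0.5), phi2(0.5))        # two real solutions at t = 0.5
```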
6.3.7 Trajectories in the phase plane
In this case, the trajectories form closed curves or spirals around the origin, depending on the sign of $\sigma$, the real part of the eigenvalue $\lambda = \sigma + i\omega$.

$\sigma = 0$. When the real part of the eigenvalue is zero, the exponential function $e^{\lambda t} = e^{i\omega t}$ is just a trigonometric function, and the trajectories form ellipses around the origin. The origin in this case is called a center. The following figure shows a few trajectories of a system with coefficient matrix $A = \begin{pmatrix} 1 & -3 \\ 1 & -1 \end{pmatrix}$.

[Figure: elliptical trajectories around a center.]

$\sigma < 0$. In this case, the rotation is multiplied by the factor $e^{\sigma t}$, and thus the trajectories go to the origin as $t \to \infty$. The origin is called a spiral sink in this case. The following figure shows trajectories of a system with the coefficient matrix $A = \begin{pmatrix} -0.5 & -3 \\ 1 & -0.5 \end{pmatrix}$. Note that in this case the value of $\sigma$ is $-\frac{1}{2}$.

[Figure: trajectories of a spiral sink.]
$\sigma > 0$. Similar behavior to $\sigma < 0$, but the spirals are outward. The origin is called a spiral source in this case.

Remark 6.2. The direction of the rotation is simply determined with the aid of the coefficient matrix. For example, for a system with the coefficient matrix $\begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix}$, we have
\[ y'(0) = \begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -2 \\ -1 \end{pmatrix}, \]
and thus the trajectories form a spiral source rotating in the counter-clockwise direction. This argument implies that if $c > 0$ in the coefficient matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then the rotation is counter-clockwise, and if $c < 0$, the rotation is clockwise.

[Figure: the velocity vector $(-2, -1)$ at the point $(0, 1)$, indicating counter-clockwise rotation.]
Problems
Problem 6.23. Consider the following system
\[ \frac{d}{dt}y = A(t)y, \]
where $A(t) = [a_{ij}(t)]_{n\times n}$ and all $a_{ij}(t)$ are continuous. Prove that the system has exactly $n$ linearly independent solution vectors.
Problem 6.24. Rewrite each of the following initial value problems as a scalar equation along the appropriate eigenvector and then write the solution.

i. $\begin{cases} y_1' = 2y_1 + y_2 \\ y_2' = 3y_1 + 4y_2 \\ y_1(0) = 1,\ y_2(0) = 3 \end{cases}$

ii. $\begin{cases} y_1' = y_1 - y_2 \\ y_2' = -4y_1 + y_2 \\ y_1(0) = 2,\ y_2(0) = -4 \end{cases}$

iii. $\begin{cases} y_1' = y_1 + 2y_2 \\ y_2' = 2y_1 + 4y_2 \\ y_1(0) = -2,\ y_2(0) = 1 \end{cases}$

iv. $\begin{cases} y_1' = -y_1 + 8y_2 \\ y_2' = y_1 + y_2 \\ y_1(0) = 1,\ y_2(0) = -1 \end{cases}$
Problem 6.25. Find two linearly independent solution vectors for each of the following systems.

i. $\begin{cases} y_1' = 2y_1 + y_2 \\ y_2' = 3y_1 + 4y_2 \end{cases}$

ii. $\begin{cases} y_1' = -3y_1 + 2y_2 \\ y_2' = -2y_1 + y_2 \end{cases}$

iii. $\begin{cases} y_1' = 2y_1 - 5y_2 \\ y_2' = 4y_1 - 2y_2 \end{cases}$

iv. $\begin{cases} y_1' = y_1 + y_2 \\ y_2' = -2y_1 - 2y_2 \end{cases}$
Problem 6.26. For the following 2-dimensional systems, find the equilibrium point(s), determine the type of the points, and draw the phase portrait of the system.

a) $\begin{cases} y_1' = 2y_1 + y_2 \\ y_2' = 3y_1 + 4y_2 \end{cases}$

b) $\begin{cases} y_1' = y_1 - y_2 \\ y_2' = -4y_1 + y_2 \end{cases}$

c) $\begin{cases} y_1' = -y_1 + 8y_2 \\ y_2' = y_1 + y_2 \end{cases}$

d) $\begin{cases} y_1' = -y_1 - 5y_2 \\ y_2' = y_1 + y_2 \end{cases}$

e) $\begin{cases} y_1' = -3y_1 + 2y_2 \\ y_2' = -2y_1 + y_2 \end{cases}$

f) $\begin{cases} y_1' = 5y_1 + 3y_2 \\ y_2' = -3y_1 - y_2 \end{cases}$

g) $\begin{cases} y_1' = -6y_1 \\ y_2' = 2y_1 + y_2 \end{cases}$

h) $\begin{cases} y_1' = -y_1 - 2y_2 \\ y_2' = 2y_1 + 3y_2 \end{cases}$

i) $\begin{cases} y_1' = 2y_1 + 2y_2 \\ y_2' = -5y_1 - 4y_2 \end{cases}$

j) $\begin{cases} y_1' = 2y_1 - 5y_2 \\ y_2' = 4y_1 - 2y_2 \end{cases}$

k) $\begin{cases} y_1' = 4y_1 - y_2 \\ y_2' = 6y_1 - 2y_2 \end{cases}$
Problem 6.27. Consider the system
\[ \frac{d}{dt}y = \begin{pmatrix} 2 & -1 \\ 4 & -2 \end{pmatrix}y. \]

a) Show that the system has two linearly independent solution vectors $\vec{\varphi}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\vec{\varphi}_2 = \begin{pmatrix} t + \frac{1}{2} \\ 2t \end{pmatrix}$.

b) Deduce that the trajectories in the phase plane satisfy the equation $y_2 = 2y_1 + c$ for some constant $c$.
Problem 6.28. Let $A_{2\times 2}$ have a complex eigenvalue $\lambda = \sigma + i\omega$ with eigenvector $\vec{v}$.

i. For the matrix $Q = \frac{1}{2}[\mathrm{Re}(\vec{v}) \,|\, \mathrm{Im}(\vec{v})]$, show that $Q^{-1}AQ = \begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix}$.

ii. Conclude that $A = QBQ^{-1}$ for the matrix $B = |\lambda|\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ for some suitable angle $\theta$.
6.4 Non-homogeneous systems
Consider the following system
\[ \frac{d}{dt}y = Ay + r(t). \tag{6.20} \]
As we expect, the general solution has the following form
\[ y(t) = y_h(t) + y_p(t), \tag{6.21} \]
where $y_h$ is the homogeneous solution (the general solution when $r$ is identically zero), and $y_p$ is a particular solution of the system. We present three different methods to find $y_p$.
6.4.1 Eigenvector decomposition method
Consider the following system
\[ \begin{cases} \dfrac{d}{dt}y = A_{2\times 2}\,y + r(t) \\ y(0) = y_0 \end{cases}. \]
If $A$ has two linearly independent eigenvectors $\vec{v}_1, \vec{v}_2$, then we can decompose $r(t)$ in a unique way as
\[ r(t) = r_1(t)\vec{v}_1 + r_2(t)\vec{v}_2, \]
and thus the given non-homogeneous system can be solved separately as
\[ \begin{cases} \dfrac{d}{dt}y = Ay + r_1(t)\vec{v}_1 \\ y(0) = c_1\vec{v}_1 \end{cases}, \qquad \begin{cases} \dfrac{d}{dt}y = Ay + r_2(t)\vec{v}_2 \\ y(0) = c_2\vec{v}_2 \end{cases}, \]
where $y_0 = c_1\vec{v}_1 + c_2\vec{v}_2$ for some $c_1, c_2$. The first system reduces to a scalar equation along $\vec{v}_1$, that is,
\[ \begin{cases} \dfrac{dz}{dt} = \lambda_1 z + r_1(t) \\ z(0) = c_1 \end{cases}, \]
and the second system reads
\[ \begin{cases} \dfrac{dz}{dt} = \lambda_2 z + r_2(t) \\ z(0) = c_2 \end{cases}. \]
Note that the solution of the first equation gives a vector solution along $\vec{v}_1$, and that of the second one a solution along $\vec{v}_2$, that is,
\[ \varphi_1(t) = \left( c_1 e^{\lambda_1 t} + e^{\lambda_1 t}\int_0^t e^{-\lambda_1\tau} r_1(\tau)\, d\tau \right)\vec{v}_1, \]
\[ \varphi_2(t) = \left( c_2 e^{\lambda_2 t} + e^{\lambda_2 t}\int_0^t e^{-\lambda_2\tau} r_2(\tau)\, d\tau \right)\vec{v}_2. \]
Example 6.8. Let $A = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix}$, $y(0) = 0$, and $r = \begin{pmatrix} e^t \\ t \end{pmatrix}$. Since $\vec{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, $\vec{v}_2 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$, we have
\[ r = \left( -\frac{1}{3}e^t + \frac{2}{3}t \right)\vec{v}_1 + \left( \frac{2}{3}e^t - \frac{1}{3}t \right)\vec{v}_2. \tag{6.22} \]
Since $\lambda_1 = -1$ and $\lambda_2 = 2$, the associated sub-systems along their associated eigenvectors are
\[ \begin{cases} z' = -z - \frac{1}{3}e^t + \frac{2}{3}t \\ z(0) = 0 \end{cases}, \qquad \begin{cases} z' = 2z + \frac{2}{3}e^t - \frac{1}{3}t \\ z(0) = 0 \end{cases}. \]
These two scalar equations are simply solved for
\[ \varphi_1(t) = \left( \frac{5}{6}e^{-t} - \frac{1}{6}e^{t} + \frac{2}{3}t - \frac{2}{3} \right)\vec{v}_1, \qquad \varphi_2(t) = \left( \frac{7}{12}e^{2t} - \frac{2}{3}e^{t} + \frac{1}{6}t + \frac{1}{12} \right)\vec{v}_2, \]
and thus the final solution is $\varphi(t) = \varphi_1(t) + \varphi_2(t)$.
As can be seen, this method works very well if $A_{2\times 2}$ has two real distinct eigenvalues. However, its application is limited if $A$ has a repeated or complex eigenvalue.
6.4.2 Variation of parameters
Let $\varphi_1(t), \varphi_2(t)$ be two linearly independent solutions of the homogeneous system
\[ \frac{d}{dt}y = Ay. \]
Then we write the particular solution of the non-homogeneous system
\[ \frac{d}{dt}y = Ay + r(t) \]
as follows, for some undetermined functions $c_1(t), c_2(t)$:
\[ y_p(t) = c_1(t)\varphi_1 + c_2(t)\varphi_2. \]
Substituting $y_p(t)$ into the system gives
\[ c_1'(t)\varphi_1 + c_1(t)\varphi_1' + c_2'(t)\varphi_2 + c_2(t)\varphi_2' = c_1(t)A\varphi_1 + c_2(t)A\varphi_2 + r(t). \]
Note that $\varphi_1' = A\varphi_1$ and $\varphi_2' = A\varphi_2$, and thus
\[ c_1'(t)\varphi_1 + c_2'(t)\varphi_2 = r(t). \tag{6.23} \]
In matrix form, we can write the above system as follows:
\[ \Phi(t)\begin{pmatrix} c_1'(t) \\ c_2'(t) \end{pmatrix} = r(t), \]
where $\Phi(t) = [\varphi_1 \,|\, \varphi_2]$. Therefore,
\[ \begin{pmatrix} c_1(t) \\ c_2(t) \end{pmatrix} = \int \Phi^{-1}(t)\, r(t)\, dt. \]
Example 6.9. Let us find a particular solution to the system $y' = Ay + \begin{pmatrix} 0 \\ t \end{pmatrix}$, where $A = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix}$. It is simply seen that the homogeneous system has the two vector solutions $\varphi_1(t) = e^{-t}\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\varphi_2(t) = e^{2t}\begin{pmatrix} 2 \\ 1 \end{pmatrix}$. We write the particular solution as
\[ y_p(t) = c_1(t)e^{-t}\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2(t)e^{2t}\begin{pmatrix} 2 \\ 1 \end{pmatrix}, \]
where $c_1, c_2$ satisfy the following equation
\[ c_1'(t)e^{-t}\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2'(t)e^{2t}\begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ t \end{pmatrix}, \]
and thus
\[ \begin{cases} e^{-t}c_1'(t) + 2e^{2t}c_2'(t) = 0 \\ 2e^{-t}c_1'(t) + e^{2t}c_2'(t) = t \end{cases}. \]
By eliminating $c_2'$, we obtain $3e^{-t}c_1' = 2t$, and hence
\[ c_1(t) = \frac{2}{3}\int te^t\, dt = \frac{2}{3}(t - 1)e^t. \]
Similarly, we obtain
\[ c_2(t) = -\frac{1}{3}\int te^{-2t}\, dt = -\frac{1}{3}\left( -\frac{1}{2}te^{-2t} - \frac{1}{4}e^{-2t} \right) = e^{-2t}\left( \frac{1}{6}t + \frac{1}{12} \right), \]
and finally
\[ y_p(t) = \begin{pmatrix} t - \frac{1}{2} \\[0.5ex] \frac{3}{2}t - \frac{5}{4} \end{pmatrix}. \]
Theorem 6.2. Let $\Phi = [\varphi_1(t) \,|\, \varphi_2(t)]$ be a solution matrix of the homogeneous system
\[ \frac{d}{dt}y = Ay. \]
Then the solution of the non-homogeneous system
\[ \begin{cases} \dfrac{d}{dt}y = Ay + r(t) \\ y(0) = y_0 \end{cases} \tag{6.24} \]
is
\[ y(t) = \Phi(t)\Phi^{-1}(0)\,y_0 + \Phi(t)\int_0^t \Phi^{-1}(\tau)\, r(\tau)\, d\tau. \tag{6.25} \]

Proof. The verification is straightforward, as follows. First, the given solution satisfies the initial condition at $t = 0$. Furthermore, we have
\[ \frac{d}{dt}y(t) = \Phi'(t)\Phi^{-1}(0)\,y_0 + \Phi'(t)\int_0^t \Phi^{-1}(\tau)\, r(\tau)\, d\tau + r(t). \]
For the last term, we used the fundamental theorem of calculus. On the other hand, $\Phi(t)$ is the matrix solution of the homogeneous system, that is,
\[ \frac{d}{dt}\Phi(t) = A\Phi(t), \]
and thus
\[ \frac{d}{dt}y(t) = A\left( \Phi(t)\Phi^{-1}(0)\,y_0 + \Phi(t)\int_0^t \Phi^{-1}(\tau)\, r(\tau)\, d\tau \right) + r(t) = Ay(t) + r(t), \]
and this completes the proof.
6.4.3 Undetermined coefficients method
This method is applied if the forcing term is of one of the following forms: polynomials, exponentials, or trigonometric sine and cosine functions.

Polynomials. If $r(t)$ has the form $\vec{a}_n t^n + \cdots + \vec{a}_0$ and $\lambda = 0$ is not an eigenvalue of the coefficient matrix $A$, then the particular solution is $y_p = \vec{c}_n t^n + \cdots + \vec{c}_0$ for some undetermined vectors $\vec{c}_k$. If $\lambda = 0$ is a simple eigenvalue of $A$, then $y_p = \vec{c}_{n+1}t^{n+1} + \vec{c}_n t^n + \cdots + \vec{c}_0$.

Exponential. If $r(t)$ is an exponential function $r(t) = \vec{a}e^{bt}$ and $\lambda = b$ is not an eigenvalue of $A$, then $y_p = \vec{c}e^{bt}$ for an undetermined vector $\vec{c}$. If $\lambda = b$ is a simple eigenvalue of $A$, then $y_p = \vec{c}_1 te^{bt} + \vec{c}_0 e^{bt}$. If $\lambda = b$ is a repeated eigenvalue, then
\[ y_p = \vec{c}_2 t^2 e^{bt} + \vec{c}_1 te^{bt} + \vec{c}_0 e^{bt}. \]

Trigonometric. If $r(t)$ has the form $\vec{a}\sin(\omega t)$ or $\vec{a}\cos(\omega t)$ and $\lambda = i\omega$ is not an eigenvalue of $A$, then $y_p = \vec{c}_1\sin(\omega t) + \vec{c}_2\cos(\omega t)$. If $\lambda = i\omega$ is an eigenvalue of $A$, then
\[ y_p = (\vec{c}_1 t + \vec{d}_1)\sin(\omega t) + (\vec{c}_2 t + \vec{d}_2)\cos(\omega t). \]
Example 6.10. Let us solve the system given in the previous example, that is, $A = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix}$ and $r = \begin{pmatrix} e^t \\ t \end{pmatrix}$. It is better to rewrite $r$ as $r = \begin{pmatrix} 1 \\ 0 \end{pmatrix}e^t + t\begin{pmatrix} 0 \\ 1 \end{pmatrix}$. The particular solution associated with the first term is $y_p = e^t\vec{c}_1$, where $\vec{c}_1$ satisfies the following relation
\[ e^t\vec{c}_1 = e^t A\vec{c}_1 + e^t\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \]
and thus $(A - I)\vec{c}_1 = -\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. This gives
\[ \vec{c}_1 = -(A - I)^{-1}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -\frac{3}{2} \\ -1 \end{pmatrix}. \]
A particular solution associated with the second term is $y_p = \vec{c}_2 t + \vec{c}_3$, where $\vec{c}_2, \vec{c}_3$ satisfy the relation $\vec{c}_2 = tA\vec{c}_2 + A\vec{c}_3 + t\begin{pmatrix} 0 \\ 1 \end{pmatrix}$. Thus, we obtain $A\vec{c}_2 = -\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ and $A\vec{c}_3 = \vec{c}_2$. These relations determine $\vec{c}_2 = \begin{pmatrix} 1 \\ \frac{3}{2} \end{pmatrix}$ and $\vec{c}_3 = \begin{pmatrix} -\frac{1}{2} \\ -\frac{5}{4} \end{pmatrix}$. Hence, the particular solution is
\[ y_p = e^t\begin{pmatrix} -\frac{3}{2} \\ -1 \end{pmatrix} + t\begin{pmatrix} 1 \\ \frac{3}{2} \end{pmatrix} - \begin{pmatrix} \frac{1}{2} \\ \frac{5}{4} \end{pmatrix}. \]
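The linear algebra of Example 6.10 reduces to a few matrix solves. An added sketch with NumPy:

```python
import numpy as np

A = np.array([[3.0, -2.0], [2.0, -2.0]])
I = np.eye(2)

# Exponential part: (A - I) c1 = -(1, 0)
c1 = np.linalg.solve(A - I, [-1.0, 0.0])
print(c1)            # [-1.5 -1. ]

# Polynomial part: A c2 = -(0, 1), then A c3 = c2
c2 = np.linalg.solve(A, [0.0, -1.0])
c3 = np.linalg.solve(A, c2)
print(c2, c3)        # [1.  1.5] [-0.5  -1.25]
```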
Problems
Problem 6.29. Find the general solution of the following systems by the eigenvector decomposition method.

a) $y' = \begin{pmatrix} 3 & -4 \\ 1 & -2 \end{pmatrix}y + \begin{pmatrix} 1 \\ -3 \end{pmatrix}$

b) $y' = \begin{pmatrix} -1 & 3 \\ 1 & 1 \end{pmatrix}y + \begin{pmatrix} t + 1 \\ e^{-t} \end{pmatrix}$

c) $y' = \begin{pmatrix} 4 & -3 \\ 2 & -1 \end{pmatrix}y + \begin{pmatrix} e^{-t} \\ e^{t} \end{pmatrix}$

d) $y' = \begin{pmatrix} 2 & 5 \\ 1 & -2 \end{pmatrix}y + \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$

e) $y' = \begin{pmatrix} 2 & -3 \\ 1 & -2 \end{pmatrix}y + \begin{pmatrix} \dfrac{e^t}{1 + e^{2t}} \\[1.5ex] \dfrac{e^t}{1 + e^{2t}} \end{pmatrix}$
Problem 6.30. Find the general solution of the following systems by the variation of parameters method.

a) $y' = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}y + \begin{pmatrix} e^t \\ e^{-t} \end{pmatrix}$

b) $y' = \begin{pmatrix} 0 & 1 \\ 2 & -1 \end{pmatrix}y + \begin{pmatrix} \dfrac{e^{2t}}{1 + e^{t}} \\[1.5ex] \dfrac{e^{2t}}{1 + e^{t}} \end{pmatrix}$

c) $y' = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}y + \begin{pmatrix} e^t\cos t \\ e^t\sin t \end{pmatrix}$

d) $y' = \begin{pmatrix} 2 & -5 \\ 1 & -2 \end{pmatrix}y + \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$

e) $y' = \begin{pmatrix} 2 & -3 \\ 1 & -2 \end{pmatrix}y + \begin{pmatrix} \dfrac{e^t}{1 + e^{2t}} \\[1.5ex] \dfrac{e^t}{1 + e^{2t}} \end{pmatrix}$
Problem 6.31. Find the general solution of the following systems by the eigenvector decomposition method.

a) $y' = \begin{pmatrix} 3 & -4 \\ 1 & -1 \end{pmatrix}y + \begin{pmatrix} 1 \\ -3 \end{pmatrix}$

b) $y' = \begin{pmatrix} -1 & 3 \\ 1 & 1 \end{pmatrix}y + \begin{pmatrix} t + 1 \\ e^{-t} \end{pmatrix}$

c) $y' = \begin{pmatrix} 4 & -3 \\ 2 & -1 \end{pmatrix}y + \begin{pmatrix} e^{-t} \\ e^{t} \end{pmatrix}$

d) $y' = \begin{pmatrix} 2 & 5 \\ 1 & -2 \end{pmatrix}y + \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$

e) $y' = \begin{pmatrix} 2 & -3 \\ 1 & -2 \end{pmatrix}y + \begin{pmatrix} t \\ t + e^t \end{pmatrix}$

f) $y' = \begin{pmatrix} 3 & -4 \\ 1 & -2 \end{pmatrix}y + \begin{pmatrix} -1 \\ t \end{pmatrix}$

g) $y' = \begin{pmatrix} 3 & -6 \\ 3 & -3 \end{pmatrix}y + \begin{pmatrix} \sin t \\ 0 \end{pmatrix}$
Problem 6.32. Solve the following initial value problems.

i. $\begin{cases} y_1' = 2y_1 + 3y_2 \\ y_2' = y_1 + 2y_2 + 1 \\ y_1(0) = y_2(0) = 0 \end{cases}$

ii. $\begin{cases} y_1' = -2y_1 + 3y_2 + 1 \\ y_2' = -y_1 + 2y_2 \\ y_1(0) = y_2(0) = 0 \end{cases}$

iii. $\begin{cases} y_1' = 2y_2 + t \\ y_2' = -y_1 - 2y_2 \\ y_1(0) = 0,\ y_2(0) = 1 \end{cases}$

iv. $\begin{cases} y_1' = y_2 + 1 \\ y_2' = -y_1 \\ y_1(0) = y_2(0) = 0 \end{cases}$

v. $\begin{cases} y_1' = 5y_1 + 2y_2 + 1 \\ y_2' = -2y_1 + y_2 + 1 \\ y_1(0) = y_2(0) = 0 \end{cases}$

vi. $\begin{cases} y_1' = 6y_1 + 4y_2 + 3 \\ y_2' = -y_1 + 2y_2 - 2 \\ y_1(0) = 1,\ y_2(0) = 0 \end{cases}$

vii. $\begin{cases} y_1' = -3y_1 + 2y_2 + t \\ y_2' = -y_1 - y_2 \\ y_1(0) = y_2(0) = 0 \end{cases}$
Problem 6.33. Consider the system $y' = Ay + r(t)$, where $A = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}$. Note that $\lambda = 0$ is an eigenvalue of $A$.

i. Find two linearly independent solution vectors of the homogeneous system.

ii. Use the undetermined coefficients method to find a particular solution to the given system if $r(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$.

iii. Repeat the problem when $r(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$.

iv. Now, an arbitrary constant vector $r = \begin{pmatrix} r_1 \\ r_2 \end{pmatrix}$ can be uniquely decomposed along the directions $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$, and thus we expect the particular solution to be of the form $\vec{c}_1 t + \vec{c}_0$. Find a particular solution when $r = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$.
Problem 6.34. Consider the system $y' = Ay + r(t)$, where $A = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix}$.

i. Find two linearly independent solutions of the homogeneous system.

ii. Use the undetermined coefficients method to find a particular solution to the given system if $r = \begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{-t}$.

iii. Repeat the problem if $r = \begin{pmatrix} 2 \\ 1 \end{pmatrix}e^{-t}$.

iv. Now find a particular solution if $r = \begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-t}$.
6.5 Nonlinear systems
Unlike linear systems, there is no general method to solve nonlinear ones. In this section, we discuss the properties of nonlinear systems without attempting to solve them.
6.5.1 Linearization
In the first section of this chapter, we studied the method of linearization for nonlinear systems around an equilibrium point. By linearization, we gain some information about the behavior of the nonlinear system. Consider the following system
\[ \begin{cases} x' = y \\ y' = -2y - x^2 + 4 \end{cases}. \]
The system has two equilibrium points, $p_1 = (2, 0)$ and $p_2 = (-2, 0)$. The linearized version of the system around $p_1$ is
\[ \frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -4 & -2 \end{pmatrix}\begin{pmatrix} x - 2 \\ y \end{pmatrix}. \]
The eigenvalues of the coefficient matrix are $\lambda = -1 \pm i\sqrt{3}$, and thus $p_1$ is a spiral sink for the linearized system. The linearized system around $p_2$ is
\[ \frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} x + 2 \\ y \end{pmatrix}, \]
with eigenvalues $\lambda_{1,2} = -1 \pm \sqrt{5}$, and thus $p_2$ is a saddle point for the linear system. Therefore, the behavior of the original system around the equilibrium points is very similar to that of the linearized versions. Note that the behavior may be different at points far from the equilibria. The following figure shows the trajectories of the nonlinear system in the phase plane.

[Figure: trajectories of the nonlinear system near the spiral sink $p_1 = (2, 0)$ and the saddle point $p_2 = (-2, 0)$.]
6.5.2 Integrable systems
Consider the following autonomous system
\[ \begin{cases} \dfrac{dx}{dt} = f(x, y) \\[1ex] \dfrac{dy}{dt} = g(x, y) \end{cases}. \tag{6.26} \]
If there is a scalar function $H(x, y)$ such that $f = -\frac{\partial H}{\partial y}$ and $g = \frac{\partial H}{\partial x}$, then the above system is called integrable. Consider the following system
\[ \begin{cases} \dfrac{dx}{dt} = -\dfrac{\partial H}{\partial y} \\[1.5ex] \dfrac{dy}{dt} = \dfrac{\partial H}{\partial x} \end{cases}. \]
The equation in the phase plane $(x, y)$ is
\[ \frac{dy}{dx} = -\frac{\partial H/\partial x}{\partial H/\partial y}, \]
or equivalently
\[ \frac{\partial H}{\partial x}dx + \frac{\partial H}{\partial y}dy = 0, \]
which is solved as $H(x, y) = \text{const}$. For the system (6.26) to be integrable in an open domain $D \subset \mathbb{R}^2$, the following condition must be satisfied:
\[ \frac{\partial f}{\partial x} = -\frac{\partial g}{\partial y}, \]
in addition to the continuity of $f, g$, and $\frac{\partial f}{\partial x}, \frac{\partial g}{\partial y}$ on $D$.
Example 6.11. Consider the equation of a pendulum
\[ \theta'' + \frac{g}{l}\sin\theta = 0, \tag{6.27} \]
and let us rewrite the equation in the following form:
\[ \begin{cases} \theta_1' = \theta_2 \\ \theta_2' = -\dfrac{g}{l}\sin\theta_1 \end{cases}. \tag{6.28} \]
The above system is integrable, as it satisfies the continuity condition and the relation
\[ \frac{\partial}{\partial\theta_1}\theta_2 = -\frac{\partial}{\partial\theta_2}\left( -\frac{g}{l}\sin\theta_1 \right). \]
It is simply verified that the scalar function $H$ is as follows:
\[ H(\theta_1, \theta_2) = \frac{1}{2}\theta_2^2 - \frac{g}{l}\cos\theta_1, \]
and thus $\frac{1}{2}\theta_2^2 - \frac{g}{l}\cos\theta_1 = C$ is the solution of the system in the phase plane $(\theta_1, \theta_2)$.
[Figure: phase portrait of the pendulum in the $(\theta_1, \theta_2)$ plane.]
Observe that some trajectories are closed curves around the point $(0, 0)$. For these trajectories, the solutions $\theta_1 = \theta_1(t)$ and $\theta_2 = \theta_2(t)$ in the $(t, \theta)$-plane are periodic.
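The phase portrait is just the collection of level sets of $H$. The sketch below (an added illustration; the value $g/l = 1$ is chosen arbitrarily for simplicity) draws them with Matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

g_over_l = 1.0   # assumed value, for illustration only
th1, th2 = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
H = 0.5 * th2**2 - g_over_l * np.cos(th1)

plt.contour(th1, th2, H, levels=15)   # trajectories are level sets of H
plt.xlabel(r"$\theta_1$"); plt.ylabel(r"$\theta_2$")
plt.show()
```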
6.5.3 Stability
Definition 6.2. A point $\bar{p} = (\bar{x}, \bar{y})$ is called an equilibrium point of the system (6.26) if $f(\bar{p}) = g(\bar{p}) = 0$. An equilibrium point $\bar{p}$ is called an isolated equilibrium if there is an open neighborhood of $\bar{p}$ containing no equilibrium other than $\bar{p}$.

Usually it is desired to gain some information about the behavior of a system near one of its equilibrium points. For example, the system (6.28) has equilibrium points of the form $(k\pi, 0)$, $k \in \mathbb{Z}$, in the $(\theta_1, \theta_2)$ plane. The trajectories near the equilibrium $(0, 0)$ are periodic, while those near $(\pi, 0)$ or $(-\pi, 0)$ are unstable.
Definition 6.3. An isolated equilibrium $\bar{p}$ is called stable if there is an open disk $D$ centered at $\bar{p}$ such that all trajectories starting in $D$ remain inside $D$ or in a bigger disk $D_1$. An equilibrium $\bar{p}$ is called asymptotically stable if there is an open disk $D$ centered at $\bar{p}$ such that all trajectories starting inside $D$ approach $\bar{p}$ as $t \to \infty$.
An important result about asymptotically stable equilibria is given in the following theorem.

Theorem 6.3. Let $\bar{p} = (\bar{x}, \bar{y})$ be an isolated equilibrium of the system (6.26), and assume that the eigenvalues of the Jacobi matrix $J_{(f,g)}(\bar{p})$ have negative real parts. Then $\bar{p}$ is an asymptotically stable equilibrium of the system.
Example 6.12. Consider the following system
\[ \begin{cases} \dfrac{dx}{dt} = -y - \dfrac{1}{2}x(1 - y^2) \\[1.5ex] \dfrac{dy}{dt} = x - \dfrac{1}{2}y(1 - y^2) \end{cases}. \]
The Jacobi matrix at $\bar{p} = (0, 0)$ is
\[ J = \begin{pmatrix} -\frac{1}{2} & -1 \\ 1 & -\frac{1}{2} \end{pmatrix}, \]
with eigenvalues
\[ \lambda = -\frac{1}{2} \pm i. \]
Therefore, $\bar{p}$ is asymptotically stable.
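A quick numerical confirmation of the eigenvalue computation (an added sketch):

```python
import numpy as np

J = np.array([[-0.5, -1.0], [1.0, -0.5]])
print(np.linalg.eigvals(J))   # [-0.5+1.j -0.5-1.j]: negative real parts
```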
Although the above theorem provides us with a powerful tool to decide whether an equilibrium is asymptotically stable or not, it does not provide any estimate of the domain of stability. For the above example, consider the following scalar function
\[ V(x(t), y(t)) = |x(t)|^2 + |y(t)|^2, \]
which is the squared magnitude of the solution of the system. We have
\[ \frac{dV}{dt} = 2x(t)x'(t) + 2y(t)y'(t) = -(x^2 + y^2)(1 - y^2). \]
Obviously, $\frac{dV}{dt} < 0$ for $-1 < y < 1$, and thus we expect that trajectories starting in, or entering, the disk $x^2 + y^2 < 1$ approach the origin as $t \to \infty$.
[Figure: trajectories spiraling into the origin inside the unit disk.]
Example 6.13. Consider the following damped pendulum equation
\[ \theta'' + \varepsilon\theta' + \frac{g}{l}\sin\theta = 0. \tag{6.29} \]
As we know, the system dissipates its energy and approaches the equilibrium $(0, 0)$. In fact, if we multiply the equation by $\theta'$, we obtain
\[ \frac{d}{dt}\left( \frac{1}{2}|\theta'|^2 + \frac{g}{l}(1 - \cos\theta) \right) = -\varepsilon|\theta'|^2. \]
If we take
\[ V(\theta, \theta') = \frac{1}{2}|\theta'|^2 + \frac{g}{l}(1 - \cos\theta), \]
then
\[ V(\theta, \theta') = V_0 - \varepsilon\int_0^t |\theta'(s)|^2\, ds, \]
where $V_0 = V(\theta(0), \theta'(0))$ is the initial energy of the pendulum. Obviously, $|\theta'(t)| \to 0$ as $t \to \infty$, and thus $V(\theta(t), \theta'(t)) \to 0$ as $t \to \infty$ (why?). This implies that the pendulum approaches the equilibrium in the long term. Let us write the equation in the following form:
\[ \begin{cases} \theta_1' = \theta_2 \\ \theta_2' = -\varepsilon\theta_2 - \dfrac{g}{l}\sin\theta_1 \end{cases}. \]
The linearized version of the system is
\[ \begin{pmatrix} \theta_1' \\ \theta_2' \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\frac{g}{l} & -\varepsilon \end{pmatrix}\begin{pmatrix} \theta_1 \\ \theta_2 \end{pmatrix}. \]
The eigenvalues of the coefficient matrix are
\[ \lambda_{1,2} = \frac{-\varepsilon \pm \sqrt{\varepsilon^2 - 4\frac{g}{l}}}{2}. \tag{6.30} \]
If $\varepsilon$ is sufficiently small, then $\lambda_{1,2}$ are complex with real part $-\frac{\varepsilon}{2}$. This implies that the point $(0, 0)$ is a spiral sink, as we expected from the given nonlinear equation.
If the real part of an eigenvalue of the Jacobi matrix is zero, we cannot say whether the equilibrium point is a center, asymptotically stable, or unstable. The following example explains this case.
Example 6.14. Consider the following damped pendulum equation
\[ \theta'' + \varepsilon(\theta')^3 + \frac{g}{l}\sin\theta = 0. \tag{6.31} \]
Here the drag force is assumed to be of order 3 in the velocity. It is simply seen that this nonlinear system dissipates energy in time and approaches the origin for small $\varepsilon > 0$. Let us write the equation in system form as follows:
\[ \begin{cases} \theta_1' = \theta_2 \\ \theta_2' = -\varepsilon\theta_2^3 - \dfrac{g}{l}\sin\theta_1 \end{cases}. \]
The linearized version of the system at $(0, 0)$ is
\[ \begin{cases} \theta_1' = \theta_2 \\ \theta_2' = -\dfrac{g}{l}\theta_1 \end{cases}, \]
and thus $(0, 0)$ is a center for the linearized system, while it is a sink for the original nonlinear system.
Problems
Problem 6.35. For the following linear systems, find the general solution in the phase plane and compare the results with the solutions obtained by the direct method of solving linear systems.

a) $\begin{cases} \dfrac{dy_1}{dt} = 3y_1 - 5y_2 \\[1ex] \dfrac{dy_2}{dt} = y_1 - 3y_2 \end{cases}$

b) $\begin{cases} \dfrac{dy_1}{dt} = 3y_1 - y_2 \\[1ex] \dfrac{dy_2}{dt} = y_1 + y_2 \end{cases}$

c) $\begin{cases} \dfrac{dy_1}{dt} = y_1 - y_2 \\[1ex] \dfrac{dy_2}{dt} = y_1 + y_2 \end{cases}$
Problem 6.36. Verify that the following systems are integrable, then derive the potential $H$ and write the solution for the given initial condition.

a) $\begin{cases} \dfrac{dx}{dt} = -y \\[1ex] \dfrac{dy}{dt} = x \end{cases}$, $\quad (x(0), y(0)) = (1, -1)$

b) $\begin{cases} \dfrac{dx}{dt} = x - y^2 + 1 \\[1ex] \dfrac{dy}{dt} = x^2 - y - 1 \end{cases}$, $\quad (x(0), y(0)) = (0, 1)$

c) $\begin{cases} \dfrac{dx}{dt} = -\dfrac{y}{x^2 + y^2} \\[1.5ex] \dfrac{dy}{dt} = \dfrac{x}{x^2 + y^2} \end{cases}$, $\quad (x(0), y(0)) = (1, 1)$
Problem 6.37. For each of the following systems, find the equilibrium points and determine the type of each equilibrium of the corresponding linearized system.

i. $\begin{cases} y_1' = y_1 y_2 \\ y_2' = y_1^2 + y_2^2 - 1 \end{cases}$

ii. $\begin{cases} y_1' = y_1 y_2 \\ y_2' = y_1^2 - 3y_1 + 2 \end{cases}$

iii. $\begin{cases} y_1' = y_1\sin(y_2) \\ y_2' = -y_2\sin(y_1) \end{cases}$

iv. $\begin{cases} y_1' = y_1(1 - y_2^2) \\ y_2' = y_2(1 + y_1) \end{cases}$

v. $\begin{cases} y_1' = y_1^2 + 4y_2 \\ y_2' = y_1^2 - y_2^2 \end{cases}$
8
<
:
dx
dt
= ¡xe
y
+ y
dy
dt
= ¡x ¡ y
Problem 6.39. Linearize the following system at its equilibrium point(s) and determine the type of the linearized system:
\[ \begin{cases} x' = y(x - 1) \\ y' = x(y - 1) \end{cases}. \]
6.6 Higher dimensional systems: fundamental matrix
A matrix $\Phi(t)$ is called the fundamental matrix of a matrix $A_{n\times n}$ if

i. $\Phi_{n\times n}(0) = I$,

ii. $\Phi'(t) = A_{n\times n}\Phi(t)$ for all $t \in (-\infty, \infty)$.

Consider the following initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = Ay + r(t) \\ y(0) = y_0 \end{cases}. \]
If $\Phi(t)$ is the fundamental matrix of $A$, then
\[ y(t) = \Phi(t)y_0 + \Phi(t)\int_0^t \Phi(-\tau)r(\tau)\, d\tau. \]
For a $2\times 2$ matrix $A$, the fundamental matrix $\Phi$ is just
\[ \Phi = [\varphi_1(t) \,|\, \varphi_2(t)]\,[\varphi_1(0) \,|\, \varphi_2(0)]^{-1}, \]
where $\varphi_1, \varphi_2$ are two linearly independent solutions of the system $y' = Ay$. Here we introduce a method to derive the fundamental matrix for general matrices.
6.6.1 Exponential formula
If $A$ is a matrix, we define the exponential matrix $e^A$ through the following series:
\[ e^A = I + A + \frac{1}{2}A^2 + \frac{1}{3!}A^3 + \cdots. \tag{6.32} \]
Apparently, we need to justify that the infinite series on the right hand side of the above equality converges. In the appendix, we prove that the series is convergent.
Theorem 6.4. Let $A$ be an $n\times n$ constant matrix. The unique fundamental matrix of $A$ is $\Phi(t) = e^{At}$.

Proof. According to the definition of a fundamental matrix and of the matrix $e^{At}$, it immediately follows that $\Phi(0) = I$. To justify the second property of a fundamental matrix, we need the following fact from linear algebra: if $A, B$ are two $n\times n$ matrices that satisfy the condition $AB = BA$, then $e^A e^B = e^{A+B}$. Using that fact, we justify the second property as follows:
\[ \frac{d}{dt}e^{At} = \lim_{h\to 0}\frac{e^{A(t+h)} - e^{At}}{h} = \lim_{h\to 0}\frac{e^{Ah}e^{At} - e^{At}}{h} = \lim_{h\to 0}\frac{e^{Ah} - I}{h}\,e^{At}. \]
According to the definition of $e^{Ah}$, we can write
\[ \frac{e^{Ah} - I}{h} = A + h\left( \frac{1}{2}A^2 + \cdots + \frac{1}{n!}A^n h^{n-2} + \cdots \right). \]
The expression in the bracket converges to an $n\times n$ matrix, say $B$, and then
\[ \lim_{h\to 0}\frac{e^{Ah} - I}{h} = A + \lim_{h\to 0}hB = A. \]
This implies that $\frac{d}{dt}e^{At} = Ae^{At}$, and thus $e^{At}$ is a fundamental matrix of the matrix $A$. We now show that this matrix is unique. If $\Phi_1, \Phi_2$ are two fundamental matrices, then for any arbitrary $\vec{c} \in \mathbb{R}^n$, the vectors $y_1 = \Phi_1(t)\vec{c}$ and $y_2 = \Phi_2(t)\vec{c}$ are solutions to the initial value problem
\[ \begin{cases} \dfrac{d}{dt}y = Ay \\ y(0) = \vec{c} \end{cases}. \]
According to the uniqueness theorem, we have $\Phi_1(t)\vec{c} = \Phi_2(t)\vec{c}$ for arbitrary $\vec{c}$, and thus $\Phi_1 = \Phi_2$.
Below, we present an algorithm to calculate the sum of this infinite series.
6.6.2 Calculation of the fundamental matrix
In order to derive the fundamental matrix in a closed form, we use the Jordan normal form of a matrix; see the appendix of this book. First, let us fix our notation. If $\vec{v}_1, \ldots, \vec{v}_n$ are the columns of a matrix $Q$, we represent $Q$ by $Q = [\vec{v}_1 \,|\, \vec{v}_2 \,|\, \cdots \,|\, \vec{v}_n]$, and with this notation, the matrix multiplication $AQ$ (in the case of proper dimensionality) is defined by the relation
\[ AQ = [A\vec{v}_1 \,|\, A\vec{v}_2 \,|\, \cdots \,|\, A\vec{v}_n]. \tag{6.33} \]
We need the following fact in our subsequent discussion.
Proposition 6.2. If $C$ is an invertible matrix, then for an arbitrary matrix $A$ we have
\[ e^{CAC^{-1}} = Ce^A C^{-1}. \tag{6.34} \]

Proof. Direct calculation gives the result. In fact, we have
\[ e^{CAC^{-1}} = I + CAC^{-1} + \cdots + \frac{1}{k!}(CAC^{-1})^k + \cdots. \tag{6.35} \]
Since
\[ (CAC^{-1})^k = CAC^{-1}\,CAC^{-1}\cdots CAC^{-1} = CA^k C^{-1}, \tag{6.36} \]
we obtain
\[ e^{CAC^{-1}} = I + \cdots + \frac{1}{k!}CA^k C^{-1} + \cdots = C\left( I + \cdots + \frac{1}{k!}A^k + \cdots \right)C^{-1} = Ce^A C^{-1}, \tag{6.37} \]
and this completes the proof.
Matrix A with real distinct eigenvalues.

Assume that $A_{n\times n}$ has $n$ real distinct eigenvalues $\lambda_1, \ldots, \lambda_n$. In this case $A$ has $n$ linearly independent eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$. Moreover, for the eigenvector matrix $Q = [\vec{v}_1 \,|\, \vec{v}_2 \,|\, \cdots \,|\, \vec{v}_n]$, we have
\[ Q^{-1}AQ = \mathrm{diag}(\lambda_1, \ldots, \lambda_n). \]
The proof is straightforward, and we leave it as an exercise to the reader.
Consider the following system
\[ \frac{d}{dt}y = Ay, \]
and let $Q$ be the eigenvector matrix $Q = [\vec{v}_1 \,|\, \cdots \,|\, \vec{v}_n]$. By the change of variables $Y = Q^{-1}y$, the above system is transformed to the following one:
\[ Q\frac{d}{dt}Y = AQY, \]
and therefore
\[ \frac{d}{dt}Y = Q^{-1}AQ\,Y = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)\,Y. \]
The fundamental matrix of the latter system is
\[ \Psi(t) = e^{\mathrm{diag}(\lambda_1 t, \ldots, \lambda_n t)} = \mathrm{diag}(e^{\lambda_1 t}, \ldots, e^{\lambda_n t}). \]
The above equality follows directly from the definition of the exponential matrix $e^A$. On the other hand, since $A = Q\Lambda Q^{-1}$, we obtain
\[ \Phi(t) = e^{Q\,\mathrm{diag}(\lambda_1 t, \ldots, \lambda_n t)\,Q^{-1}} = Q\,\mathrm{diag}(e^{\lambda_1 t}, \ldots, e^{\lambda_n t})\,Q^{-1}. \]
The last equality is obtained by Proposition 6.2.
Example 6.15. Consider the following system
\[
\frac{d}{dt} y = \begin{pmatrix} 1 & 0 & 4 \\ 3 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} y. \tag{6.38}
\]
The eigenvalues of the coefficient matrix are $\lambda_1 = 1$, $\lambda_2 = 2$, and $\lambda_3 = 3$. The associated eigenvectors are
\[
\vec{v}_1 = \begin{pmatrix} 1 \\ -3 \\ 0 \end{pmatrix}, \quad
\vec{v}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad
\vec{v}_3 = \begin{pmatrix} 1 \\ 3 \\ \tfrac{1}{2} \end{pmatrix}. \tag{6.39}
\]
Therefore, $\Phi$ is obtained as follows:
\[
\Phi = Q \begin{pmatrix} e^t & 0 & 0 \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{3t} \end{pmatrix} Q^{-1}
= \begin{pmatrix}
e^t & 0 & 2e^{3t} - 2e^t \\
3e^{2t} - 3e^t & e^{2t} & 6e^{3t} - 12e^{2t} + 6e^t \\
0 & 0 & e^{3t}
\end{pmatrix}.
\]
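The closed form above is easy to check numerically; the following sketch (assuming NumPy and SciPy are available; the evaluation time $t = 0.7$ is arbitrary) compares it with $e^{tA}$.

```python
# Numerical check of the fundamental matrix found in Example 6.15.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 0.0, 4.0],
              [3.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
t = 0.7
e1, e2, e3 = np.exp(t), np.exp(2*t), np.exp(3*t)
Phi = np.array([[e1,          0.0, 2*e3 - 2*e1],
                [3*e2 - 3*e1, e2,  6*e3 - 12*e2 + 6*e1],
                [0.0,         0.0, e3]])
print(np.allclose(Phi, expm(t * A)))   # True
```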
Matrix A with repeated eigenvalues.

One eigenvector.

Assume that $A$ has only one eigenvector $\vec{v}_1$ (up to scalar multiples). This implies that the characteristic polynomial $p(\lambda)$ of $A$ has only one root, say $\lambda_1$, that is, $p(\lambda) = (\lambda - \lambda_1)^n$. It is known (see the appendix of the book) that there are exactly $n - 1$ generalized eigenvectors $\vec{v}_2, \ldots, \vec{v}_n$ for $A$ satisfying the relation
\[
(A - \lambda I)\, \vec{v}_k = \vec{v}_{k-1}, \qquad k = 2, \ldots, n.
\]
Moreover, we have the following fact.

Proposition 6.3. Assume that $A_{n \times n}$ has only one eigenvector $\vec{v}_1$ and one eigenvalue $\lambda$. Let $\vec{v}_2, \ldots, \vec{v}_n$ be the generalized eigenvectors of $A_{n \times n}$. Then $Q^{-1} A Q = \lambda I + [\delta_{i, j-1}]$, where
\[
\delta_{i,j} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}
\]
and $Q = [\vec{v}_1 | \cdots | \vec{v}_n]$.
Proof. We have
\[
AQ = [A\vec{v}_1 \,|\, A\vec{v}_2 \,|\, \cdots \,|\, A\vec{v}_n]
= [\lambda \vec{v}_1 \,|\, \vec{v}_1 + \lambda \vec{v}_2 \,|\, \cdots \,|\, \vec{v}_{n-1} + \lambda \vec{v}_n].
\]
On the other hand, we have
\[
[\lambda \vec{v}_1 \,|\, \vec{v}_1 + \lambda \vec{v}_2 \,|\, \cdots \,|\, \vec{v}_{n-1} + \lambda \vec{v}_n]
= [\lambda \vec{v}_1 | \lambda \vec{v}_2 | \cdots | \lambda \vec{v}_n] + [0 | \vec{v}_1 | \cdots | \vec{v}_{n-1}]
= \lambda Q + [0 | \vec{v}_1 | \cdots | \vec{v}_{n-1}].
\]
It is simply seen that
\[
[0 | \vec{v}_1 | \cdots | \vec{v}_{n-1}] = Q\, [0 | \vec{e}_1 | \vec{e}_2 | \cdots | \vec{e}_{n-1}].
\]
Thus $Q^{-1} A Q = \lambda I + [0 | \vec{e}_1 | \vec{e}_2 | \cdots | \vec{e}_{n-1}]$, and this is the Jordan form of a matrix with only one eigenvector.
We use the notation $S = [0 | \vec{e}_1 | \vec{e}_2 | \cdots | \vec{e}_{n-1}]$ in our subsequent calculation. It is simply seen that $S^2 = [0 | 0 | \vec{e}_1 | \cdots | \vec{e}_{n-2}]$ and $S^3 = [0 | 0 | 0 | \vec{e}_1 | \cdots | \vec{e}_{n-3}]$. This implies that $S^n = 0_{n \times n}$. Now, we have
\[
Q^{-1} e^{At} Q = e^{Q^{-1} A t Q} = e^{\lambda t} e^{St}
= e^{\lambda t} \left[ I + tS + \frac{1}{2!} t^2 S^2 + \cdots + \frac{1}{(n-1)!} t^{n-1} S^{n-1} \right]. \tag{6.40}
\]
This implies
\[
\Phi(t) = e^{\lambda t}\, Q \left[ I + tS + \frac{1}{2!} t^2 S^2 + \cdots + \frac{1}{(n-1)!} t^{n-1} S^{n-1} \right] Q^{-1}. \tag{6.41}
\]
Example 6.16. Let us find the fundamental matrix of
\[
A = \begin{pmatrix} 2 & 3 & -1 \\ 0 & 2 & 0 \\ 0 & 1 & 2 \end{pmatrix}.
\]
The matrix has the repeated eigenvalue $\lambda = 2$ with the eigenvector $\vec{v}_1 = (1, 0, 0)^T$. The generalized eigenvector $\vec{v}_2$ is determined by solving the equation $(A - 2I)\vec{v}_2 = \vec{v}_1$. It is simply verified that $\vec{v}_2 = (0, 0, -1)^T$. Similarly, for the generalized eigenvector $\vec{v}_3$, we solve the equation $(A - 2I)\vec{v}_3 = \vec{v}_2$ and obtain $\vec{v}_3 = (0, -1, -3)^T$. For
\[
Q = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & -3 \end{pmatrix},
\]
the fundamental matrix is
\[
\Phi = e^{2t}\, Q \begin{pmatrix} 1 & t & \frac{1}{2!} t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} Q^{-1}
= e^{2t} \begin{pmatrix} 1 & 3t - \frac{1}{2} t^2 & -t \\ 0 & 1 & 0 \\ 0 & t & 1 \end{pmatrix}. \tag{6.42}
\]
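As a sanity check, the following sketch (assuming NumPy and SciPy; the value of $t$ is arbitrary) evaluates (6.42) and compares it with $e^{tA}$.

```python
# Numerical check of (6.42): Phi = e^{2t} Q (I + tS + t^2 S^2 / 2) Q^{-1}.
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 3.0, -1.0],
              [0.0, 2.0,  0.0],
              [0.0, 1.0,  2.0]])
Q = np.array([[1.0,  0.0,  0.0],
              [0.0,  0.0, -1.0],
              [0.0, -1.0, -3.0]])
t = 0.5
J = np.array([[1.0,   t, t**2/2],
              [0.0, 1.0,      t],
              [0.0, 0.0,    1.0]])
Phi = np.exp(2*t) * (Q @ J @ np.linalg.inv(Q))
print(np.allclose(Phi, expm(t * A)))   # True
```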
Multiple eigenvectors.

With only one eigenvalue, the matrix $A_{n \times n}$ can have one, two, or even $n$ independent eigenvectors, depending on the geometric multiplicity of the eigenvalue.
Definition 6.4. Let $A_{n \times n}$ be a matrix. The algebraic multiplicity of an eigenvalue $\lambda^*$ of $A$ is the value $m$ such that $p(\lambda)$ is of the form
\[
p(\lambda) = (\lambda - \lambda^*)^m\, q(\lambda), \tag{6.43}
\]
where $q(\lambda)$ does not have any factor of $\lambda - \lambda^*$. The geometric multiplicity $r$ of $\lambda^*$ is the dimension of the null space of the map $A - \lambda^* I$, i.e., $r = \dim \ker(A - \lambda^* I)$.

Remark 6.3. Note that $r$ is always less than or equal to $m$. If $r < m$, then there exist $m - r$ generalized eigenvectors $\vec{w}_1, \ldots, \vec{w}_{m-r}$ such that $(A - \lambda^* I)\vec{w}_1$ is an eigenvector and
\[
(A - \lambda^* I)\vec{w}_k = \vec{w}_{k-1} \quad \text{for } k = 2, \ldots, m - r. \tag{6.44}
\]
Example 6.17. Consider the following matrix
\[
A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{6.45}
\]
This matrix has the repeated eigenvalue $\lambda = 1$ with algebraic multiplicity $m = 3$. It is simply verified that the vectors $\vec{v}_1 = (1, 0, 0)^T$ and $\vec{v}_2 = (0, 1, -1)^T$ are eigenvectors associated to $\lambda = 1$, and thus $\lambda = 1$ has geometric multiplicity $r = 2$. Note that for any vector $\vec{v}$ in $\operatorname{span}\{(1, 0, 0)^T, (0, 1, -1)^T\}$, we have $(A - I)\vec{v} = 0$. Moreover, we have $(A - I)\vec{w} = \vec{v}_1$ for $\vec{w} = (0, 1, 0)^T$, and thus $\vec{w}$ is a generalized eigenvector of $A$.
Example 6.18. Consider the system
\[
\frac{d}{dt} y = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} y. \tag{6.46}
\]
As we saw above, the matrix has two eigenvectors $\vec{v}_1 = (1, 0, 0)^T$ and $\vec{v}_2 = (0, 1, -1)^T$ with eigenvalue $\lambda = 1$. Since $(A - I)\vec{w} = \vec{v}_1$ for $\vec{w} = (0, 1, 0)^T$, the matrix $Q$ is $Q = [\vec{v}_1 | \vec{w} | \vec{v}_2]$, and then
\[
Q^{-1} A Q = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{6.47}
\]
Observe how a Jordan block is formed in the upper sub-matrix of $Q^{-1} A Q$. Accordingly, we have
\[
e^{At} = Q \begin{pmatrix} e^t & t e^t & 0 \\ 0 & e^t & 0 \\ 0 & 0 & e^t \end{pmatrix} Q^{-1}
= e^t \begin{pmatrix} 1 & t & t \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{6.48}
\]
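Again, the closed form (6.48) can be verified numerically (a sketch assuming NumPy and SciPy; the value of $t$ is arbitrary):

```python
# Numerical check of (6.48) for the matrix of Example 6.18.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
t = 0.8
closed = np.exp(t) * np.array([[1.0,   t,   t],
                               [0.0, 1.0, 0.0],
                               [0.0, 0.0, 1.0]])
print(np.allclose(closed, expm(t * A)))   # True
```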
Matrix A with complex eigenvalues.

We describe the method first for a $2 \times 2$ matrix and then generalize the result to higher dimensional systems. First, note that if $A_{2 \times 2}$ has complex eigenvalues $\lambda_{1,2} = \sigma \pm i\omega$, then its associated eigenvectors $\vec{v}_1, \vec{v}_2$ are in conjugate form, i.e., $\vec{v}_2 = \bar{\vec{v}}_1$. For simplicity, we denote the eigenvalue by $\lambda = \sigma + i\omega$ and the complex eigenvector by $\vec{v}$. In this case, the matrix $[\vec{v} \,|\, \bar{\vec{v}}]$ is complex. We transform it to a real matrix by a simple trick. Define $Q$ as
\[
Q = \frac{1}{2} [\vec{v} \,|\, \bar{\vec{v}}] \begin{pmatrix} -i & 1 \\ i & 1 \end{pmatrix}
= \frac{1}{2} \left[ -i(\vec{v} - \bar{\vec{v}}) \,\middle|\, \vec{v} + \bar{\vec{v}} \right]
= [\operatorname{Im}(\vec{v}) \,|\, \operatorname{Re}(\vec{v})]. \tag{6.49}
\]
Proposition 6.4. For $Q$ defined in (6.49), we have
\[
Q^{-1} A Q = \begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix}. \tag{6.50}
\]

Proof. Direct calculation proves the claim. Dropping the common factor $\frac{1}{2}$ (it cancels on both sides), we have
\[
AQ = A [\vec{v} \,|\, \bar{\vec{v}}] \begin{pmatrix} -i & 1 \\ i & 1 \end{pmatrix}
= [\lambda \vec{v} \,|\, \bar{\lambda} \bar{\vec{v}}] \begin{pmatrix} -i & 1 \\ i & 1 \end{pmatrix}
= [\vec{v} \,|\, \bar{\vec{v}}] \begin{pmatrix} \sigma + i\omega & 0 \\ 0 & \sigma - i\omega \end{pmatrix} \begin{pmatrix} -i & 1 \\ i & 1 \end{pmatrix}
= [\vec{v} \,|\, \bar{\vec{v}}] \begin{pmatrix} \omega - i\sigma & \sigma + i\omega \\ \omega + i\sigma & \sigma - i\omega \end{pmatrix}.
\]
On the other hand, we have
\[
Q \begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix}
= [\vec{v} \,|\, \bar{\vec{v}}] \begin{pmatrix} -i & 1 \\ i & 1 \end{pmatrix} \begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix}
= [\vec{v} \,|\, \bar{\vec{v}}] \begin{pmatrix} \omega - i\sigma & \sigma + i\omega \\ \omega + i\sigma & \sigma - i\omega \end{pmatrix}.
\]
Therefore $AQ = Q \begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix}$, and this completes the proof.
Accordingly, we have the equality
\[
A = Q \begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix} Q^{-1}, \tag{6.51}
\]
and then
\[
\Phi = Q\, e^{\begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix} t}\, Q^{-1}. \tag{6.52}
\]
Now, we have the following proposition.

Proposition 6.5. The following relation holds:
\[
e^{\begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix} t}
= e^{\sigma t} \begin{pmatrix} \cos(\omega t) & -\sin(\omega t) \\ \sin(\omega t) & \cos(\omega t) \end{pmatrix}. \tag{6.53}
\]
Proof. Consider the following decomposition:
\[
\begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix} = \sigma I_{2 \times 2} + \omega D, \tag{6.54}
\]
where $D = [\vec{e}_2 \,|\, -\vec{e}_1]$. Then we can write
\[
e^{\begin{pmatrix} \sigma & -\omega \\ \omega & \sigma \end{pmatrix} t} = e^{\sigma t}\, e^{\omega t D}. \tag{6.55}
\]
But we have
\[
e^{\omega t D} = I_{2 \times 2} + \omega t D + \frac{1}{2!} \omega^2 t^2 D^2 + \cdots. \tag{6.56}
\]
A simple calculation shows $D^2 = -I$, and then
\[
D^3 = -D, \quad D^4 = I_{2 \times 2}, \quad D^5 = D, \quad \cdots. \tag{6.57}
\]
Let $S$ denote the matrix $e^{\omega t D}$. We then have
\[
S_{1,1} = 1 - \frac{1}{2!} \omega^2 t^2 + \frac{1}{4!} \omega^4 t^4 - \cdots = \cos(\omega t), \tag{6.58}
\]
and
\[
S_{1,2} = -\omega t + \frac{1}{3!} \omega^3 t^3 - \frac{1}{5!} \omega^5 t^5 + \cdots = -\sin(\omega t). \tag{6.59}
\]
Similarly, we obtain $S_{2,1} = \sin(\omega t)$ and $S_{2,2} = \cos(\omega t)$.
By the above proposition, the fundamental matrix $\Phi$ is
\[
\Phi = e^{\sigma t}\, Q \begin{pmatrix} \cos(\omega t) & -\sin(\omega t) \\ \sin(\omega t) & \cos(\omega t) \end{pmatrix} Q^{-1}. \tag{6.60}
\]
Example 6.19. Let us find the fundamental matrix of $A = \begin{pmatrix} 1 & -5 \\ 1 & -3 \end{pmatrix}$. This matrix has the eigenvalue $\lambda = -1 + i$, and thus $\sigma = -1$ and $\omega = 1$. The associated eigenvector is $\vec{v} = \begin{pmatrix} 5 \\ 2 - i \end{pmatrix}$. For the matrix $Q$ we have
\[
Q = [\operatorname{Im}(\vec{v}) \,|\, \operatorname{Re}(\vec{v})] = \begin{pmatrix} 0 & 5 \\ -1 & 2 \end{pmatrix}. \tag{6.61}
\]
Therefore $\Phi$ is
\[
\Phi = e^{-t} \begin{pmatrix} 0 & 5 \\ -1 & 2 \end{pmatrix}
\begin{pmatrix} \cos(t) & -\sin(t) \\ \sin(t) & \cos(t) \end{pmatrix}
\begin{pmatrix} 0 & 5 \\ -1 & 2 \end{pmatrix}^{-1}
= e^{-t} \begin{pmatrix} \cos(t) + 2\sin(t) & -5\sin(t) \\ \sin(t) & \cos(t) - 2\sin(t) \end{pmatrix}.
\]
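The following sketch (assuming NumPy and SciPy; the time $t$ is arbitrary) verifies this result against $e^{tA}$:

```python
# Numerical check of Example 6.19: Phi = e^{sigma t} Q R(omega t) Q^{-1}.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, -5.0],
              [1.0, -3.0]])
Q = np.array([[ 0.0, 5.0],
              [-1.0, 2.0]])      # [Im(v) | Re(v)]
sigma, omega = -1.0, 1.0
t = 1.3
R = np.array([[np.cos(omega*t), -np.sin(omega*t)],
              [np.sin(omega*t),  np.cos(omega*t)]])
Phi = np.exp(sigma*t) * (Q @ R @ np.linalg.inv(Q))
print(np.allclose(Phi, expm(t * A)))   # True
```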
Now, we follow the same method to find the fundamental matrix of a $4 \times 4$ matrix.
Example 6.20. We find the fundamental matrix associated to the matrix
\[
A = \begin{pmatrix} 1 & 1 & 0 & 1 \\ -1 & 1 & 1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & 1 \end{pmatrix}. \tag{6.62}
\]
The eigenvalue of the matrix is $\lambda = 1 + i$ with algebraic multiplicity $m = 2$. The eigenvector of the matrix is $\vec{v} = (1, i, 0, 0)^T$, so we have the two eigenvectors $\vec{v}$ and $\bar{\vec{v}}$. We find the generalized eigenvector $\vec{w}$ such that
\[
(A - \lambda I_{4 \times 4})\, \vec{w} = \vec{v}. \tag{6.63}
\]
A simple calculation gives $\vec{w} = (1, i, i, 1)^T$. The matrix $Q$ is
\[
Q = [\operatorname{Im}(\vec{v}) \,|\, \operatorname{Re}(\vec{v}) \,|\, \operatorname{Im}(\vec{w}) \,|\, \operatorname{Re}(\vec{w})]
= \begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
It is seen that
\[
\Lambda = Q^{-1} A Q = \begin{pmatrix} 1 & -1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & 1 \end{pmatrix}. \tag{6.64}
\]
Note the identity block in the upper right corner of the matrix $\Lambda$. We have
\[
e^{\Lambda t} = e^t \begin{pmatrix}
\cos(t) & -\sin(t) & t\cos(t) & -t\sin(t) \\
\sin(t) & \cos(t) & t\sin(t) & t\cos(t) \\
0 & 0 & \cos(t) & -\sin(t) \\
0 & 0 & \sin(t) & \cos(t)
\end{pmatrix},
\]
and finally the fundamental matrix is
\[
\Phi = Q e^{\Lambda t} Q^{-1}
= e^t\, Q \begin{pmatrix}
\cos(t) & -\sin(t) & t\cos(t) & -t\sin(t) \\
\sin(t) & \cos(t) & t\sin(t) & t\cos(t) \\
0 & 0 & \cos(t) & -\sin(t) \\
0 & 0 & \sin(t) & \cos(t)
\end{pmatrix} Q^{-1}. \tag{6.65}
\]
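One can check the real Jordan form (6.64) directly (a sketch assuming NumPy is available):

```python
# Check of (6.64): Q^{-1} A Q equals the real Jordan form Lambda.
import numpy as np

A = np.array([[ 1, 1, 0,  1],
              [-1, 1, 1,  0],
              [ 0, 0, 1, -1],
              [ 0, 0, 1,  1]], dtype=float)
Q = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Lam = np.array([[1, -1, 1,  0],
                [1,  1, 0,  1],
                [0,  0, 1, -1],
                [0,  0, 1,  1]], dtype=float)
print(np.allclose(np.linalg.inv(Q) @ A @ Q, Lam))   # True
```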
6.6.3 General matrices
Let us show the method for a general matrix by solving an example. Consider the system $y' = Ay$, where $A$ is
\[
A = \begin{pmatrix}
1 & 2 & 3 & 4 & 5 \\
0 & 1 & -1 & 3 & 4 \\
0 & 1 & 3 & 2 & 4 \\
0 & 0 & 0 & 1 & -5 \\
0 & 0 & 0 & 1 & -3
\end{pmatrix}. \tag{6.66}
\]
The eigenvalues of the matrix are
\[
\lambda_1 = 1, \quad \lambda_{2,3} = 2, \quad \lambda_{4,5} = -1 \pm i, \tag{6.67}
\]
with the associated eigenvectors
\[
\vec{v}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad
\vec{v}_2 = \begin{pmatrix} 1 \\ -1 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad
\vec{v}_{4,5} = \begin{pmatrix} -1.36 \pm 0.58\,i \\ -3.84 \mp 3.38\,i \\ -1.06 \pm 0.08\,i \\ 2 \pm i \\ 1 \end{pmatrix}. \tag{6.68}
\]
The generalized eigenvector $\vec{v}_3$ is obtained by solving the equation
\[
(A - 2I)\, \vec{v}_3 = \vec{v}_2, \tag{6.69}
\]
which gives $\vec{v}_3 = (1, 1, 0, 0, 0)^T$. Thus, the matrix $Q$ is
\[
Q = [\vec{v}_1 | \vec{v}_2 | \vec{v}_3 | \operatorname{Im}(\vec{v}_4) | \operatorname{Re}(\vec{v}_4)]
= \begin{pmatrix}
1 & 1 & 1 & 0.58 & -1.36 \\
0 & -1 & 1 & -3.38 & -3.84 \\
0 & 1 & 0 & 0.08 & -1.06 \\
0 & 0 & 0 & 1 & 2 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]
It is simply verified that
\[
Q^{-1} A Q = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 2 & 1 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & -1 & -1 \\
0 & 0 & 0 & 1 & -1
\end{pmatrix}. \tag{6.70}
\]
So $\Lambda = Q^{-1} A Q$ is a block-diagonal matrix consisting of three Jordan blocks. Therefore we have
\[
e^{\Lambda t} = \begin{pmatrix}
e^t & 0 & 0 & 0 & 0 \\
0 & e^{2t} & t e^{2t} & 0 & 0 \\
0 & 0 & e^{2t} & 0 & 0 \\
0 & 0 & 0 & e^{-t}\cos(t) & -e^{-t}\sin(t) \\
0 & 0 & 0 & e^{-t}\sin(t) & e^{-t}\cos(t)
\end{pmatrix}. \tag{6.71}
\]
The matrix $\Phi$ is obtained by the formula $\Phi = Q e^{\Lambda t} Q^{-1}$.
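In practice, for a general matrix one often skips the Jordan computation altogether and evaluates $\Phi(t) = e^{At}$ numerically; a sketch assuming SciPy is available:

```python
# Numerical fundamental matrix of the 5x5 system (6.66) via expm.
import numpy as np
from scipy.linalg import expm

A = np.array([[1, 2,  3, 4,  5],
              [0, 1, -1, 3,  4],
              [0, 1,  3, 2,  4],
              [0, 0,  0, 1, -5],
              [0, 0,  0, 1, -3]], dtype=float)

Phi = lambda t: expm(t * A)                  # Phi(t) = e^{At}
print(np.allclose(Phi(0.0), np.eye(5)))      # Phi(0) = I
print(np.round(np.linalg.eigvals(A), 6))     # approx. 1, 2, 2, -1+i, -1-i
```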
Problems
Problem 6.40. Show that the fundamental matrix of a system is unique. That is, if $\Phi_1, \Phi_2$ are two fundamental matrices of a matrix $A$, then $\Phi_1 = \Phi_2$.
Problem 6.41. If $\Phi$ is the fundamental matrix of a matrix $A$, show that the solution to the problem
\[
\begin{cases}
y' = Ay + r(t) \\
y(0) = y_0
\end{cases}
\]
is obtained by the relation
\[
y(t) = \Phi(t)\, y_0 + \Phi(t) \int_0^t \Phi(-\tau)\, r(\tau)\, d\tau. \tag{6.72}
\]
Problem 6.42. Show that if $\phi = [\phi_1(t) | \phi_2(t)]$ is a solution matrix of the system $y' = Ay$, then $\Phi = \phi(t)\, \phi^{-1}(0)$ is the fundamental matrix of $A$.
Problem 6.43. Consider the problem
\[
\begin{cases}
y' = Ay \\
y(0) = y_0
\end{cases}
\]
and assume $A$ has two real distinct eigenvalues $\lambda_1, \lambda_2$ with associated eigenvectors $\vec{v}_1, \vec{v}_2$.

i. Show that the solution can be written as follows:
\[
y(t) = e^{\lambda_1 t} y_0 + (e^{\lambda_2 t} - e^{\lambda_1 t})\, c\, \vec{v}_2,
\]
for some suitable constant $c$.

ii. Show that
\[
c\, \vec{v}_2 = \frac{1}{\lambda_2 - \lambda_1} (A - \lambda_1 I)\, y_0,
\]
and conclude that
\[
y(t) = \left[ e^{\lambda_1 t} I + \frac{e^{\lambda_2 t} - e^{\lambda_1 t}}{\lambda_2 - \lambda_1} (A - \lambda_1 I) \right] y_0. \tag{6.73}
\]

iii. Show that
\[
y(t) = \left[ \frac{e^{\lambda_1 t}}{\lambda_1 - \lambda_2} (A - \lambda_2 I) + \frac{e^{\lambda_2 t}}{\lambda_2 - \lambda_1} (A - \lambda_1 I) \right] y_0. \tag{6.74}
\]

iv. Assume that $A_{2 \times 2}$ has only one eigenvector $\vec{v}$ with eigenvalue $\lambda$. Use formula (6.73) to show that the fundamental matrix $\Phi(t)$ of $A$ is
\[
\Phi(t) = e^{\lambda t}\, [I + t(A - \lambda I)]. \tag{6.75}
\]
Hint: For $\lambda_1 = \lambda$, let $\lambda_2 \to \lambda$ and calculate the limit.

v. Assume that $A$ has a complex eigenvalue $\lambda = \sigma + i\omega$ and a complex eigenvector $\vec{v}$. Use formula (6.74) to show that the fundamental matrix $\Phi(t)$ of $A$ is
\[
\Phi(t) = e^{\sigma t} \left[ \cos(\omega t)\, I + \frac{1}{\omega} \sin(\omega t)\, (A - \sigma I) \right].
\]
Problem 6.44. Use the formula $\Phi = \phi(t)\, \phi^{-1}(0)$ to determine the fundamental matrix of the following systems. Then use the exponential form of the fundamental matrix and verify that it equals the one obtained.

a) $y' = \begin{pmatrix} 2 & 3 \\ 3 & 2 \end{pmatrix} y$

b) $y' = \begin{pmatrix} -2 & 3 \\ -1 & 2 \end{pmatrix} y$
c) $y' = \begin{pmatrix} 0 & 2 \\ -1 & -2 \end{pmatrix} y$

d) $y' = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} y$

e) $y' = \begin{pmatrix} 6 & 3 \\ -2 & 1 \end{pmatrix} y$

f) $y' = \begin{pmatrix} 5 & 2 \\ -2 & 1 \end{pmatrix} y$

g) $y' = \begin{pmatrix} 6 & 4 \\ -1 & 2 \end{pmatrix} y$

h) $y' = \begin{pmatrix} -3 & 2 \\ -1 & -1 \end{pmatrix} y$

i) $y' = \begin{pmatrix} 1 & 3 \\ -3 & 1 \end{pmatrix} y$
Problem 6.45. Write the following equations in system form, find the fundamental matrix of the system, and then find the solution of the system satisfying the initial conditions.

i. $y'' + 3y' + 2y = \sin(x)$, $y(0) = 0$, $y'(0) = 0$.

ii. $y'' - y = e^x$, $y(0) = 0$, $y'(0) = 1$.

iii. $y'' - 2y' - 3y = x$, $y(0) = 1$, $y'(0) = -1$.
Problem 6.46. Consider the system
\[
\begin{cases}
\frac{d}{dt} y = Ay \\
y(0) = y_0
\end{cases}.
\]
Divide the segment $[0, t]$ into $n$ sub-intervals of length $h = \frac{t}{n}$ and use the approximation
\[
y((k+1)h) = y(kh) + y'(kh)\, h
\]
to conclude
\[
y(t) = (I + hA)^{t/h}\, y_0.
\]
Let $n \to \infty$ and conclude $y(t) = e^{At} y_0$.
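The limit in this problem is easy to observe numerically (a sketch assuming NumPy and SciPy; the matrix is an arbitrary choice):

```python
# Illustration of Problem 6.46: (I + hA)^{t/h} -> e^{At} as h -> 0.
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])
t = 1.0
for n in (10, 100, 1000):
    h = t / n
    approx = np.linalg.matrix_power(np.eye(2) + h * A, n)
    print(n, np.max(np.abs(approx - expm(t * A))))
# The error decreases roughly like 1/n.
```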
Problem 6.47. Consider the system in the previous problem. Rewrite the system in the integral form
\[
y(t) = y_0 + A \int_0^t y(\tau)\, d\tau.
\]
Use the Picard approximation and conclude
\[
y_n(t) = \left[ I + At + \frac{1}{2} (At)^2 + \cdots + \frac{1}{n!} (At)^n \right] y_0.
\]
Let $n \to \infty$ and conclude $y(t) = e^{At} y_0$.
Problem 6.48. Assume that $A_{n \times n}$ has only one eigenvalue $\lambda^*$. This implies $p(\lambda) = (\lambda - \lambda^*)^n$, and by the Cayley–Hamilton theorem we know $(A - \lambda^* I)^n = 0$. Use this identity to show
\[
\Phi(t) = e^{\lambda^* t} \sum_{k=0}^{n-1} \frac{t^k}{k!}\, (A - \lambda^* I)^k. \tag{6.76}
\]
Problem 6.49. Obtain the fundamental matrix of the following system:
\[
y' = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} y.
\]
Problem 6.50. Obtain the fundamental matrix of the following system and solve it:
\[
y' = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} y + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \qquad
y(0) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.
\]
Problem 6.51. Find the fundamental matrix of the following system
\[
\begin{cases}
y_1' = y_1 + 5y_3 + t \\
y_2' = 2y_2 + 6y_3 \\
y_3' = 3y_1 + 3y_3 + \sin(t)
\end{cases}
\]
and find $y_1(t)$ if the initial condition is $(y_1(0), y_2(0), y_3(0)) = (0, 0, 0)$.
Problem 6.52. Find the fundamental matrix of the following system
\[
\begin{cases}
y_1' = 2y_1 + 3y_3 + t \\
y_2' = 2y_2 + y_3 - e^{-t} \\
y_3' = t - 4y_2 + 4y_3
\end{cases}
\]
and find $y_3(t)$ if the initial condition is $(y_1(0), y_2(0), y_3(0)) = (0, 0, 0)$.
Problem 6.53. For the system shown in the figure below, write down the equations of motion. They form a system of second-order equations. If $b_1 = b_2 = 2$, make a substitution that reduces the system to a decoupled system, and analyze the obtained system in terms of $k_1$ and $k_2$.

[Figure: two masses $m_1, m_2$ with displacements $x_1, x_2$, coupled through springs $k_1, k_2$ and dampers $b_1, b_2$.]
Appendix A

Repeated eigenvalue: A justification

In the case of repeated roots of the characteristic polynomial $p(\lambda)$, the matrix $A_{2 \times 2}$ may have only one eigenvector $\vec{v}$. In this case, $p(\lambda)$ has the form
\[
p(\lambda) = (\lambda - \lambda^*)^2. \tag{A.1}
\]
Proposition A.1. Assume $\lambda^*$ is a repeated eigenvalue of the matrix $A_{2 \times 2}$ with only one eigenvector $\vec{v}$. Then there exists $\vec{w}$ such that
\[
(A - \lambda^* I)\, \vec{w} = \vec{v}. \tag{A.2}
\]

Proof. Recall the Cayley–Hamilton theorem, which states that every matrix satisfies its characteristic polynomial, i.e.,
\[
(A - \lambda^* I)^2 = 0. \tag{A.3}
\]
If $V$ is the one-dimensional space spanned by $\vec{v}$, choose an arbitrary vector $\vec{w} \notin V$. For the vector $\vec{u} = (A - \lambda^* I)\vec{w}$, we have $(A - \lambda^* I)\vec{u} = 0$ by (A.3). This implies that $\vec{u} = c\, \vec{v}$ for some constant $c \neq 0$ (note that if $c = 0$ then $\vec{u} = 0$, or equivalently $\vec{w} \in V$, a contradiction). Now rescale $\vec{w}$ so that $c = 1$; therefore $(A - \lambda^* I)\vec{w} = \vec{v}$.
Let $V$ and $W$ be the one-dimensional spans of $\vec{v}$ and $\vec{w}$, respectively. If $y(0) = \vec{v}$, then the relation $y' = Ay = \lambda^* y$ gives $y(t) = e^{\lambda^* t}\, \vec{v}$. If $y(0) = \vec{w}$, then we have
\[
y'(0) = \lambda^* \vec{w} + \vec{v}. \tag{A.4}
\]
In order to obtain $y(t)$ when $y(0) = \vec{w} \in W$, we proceed as follows. Divide the segment $[0, t]$ into $n$ sub-intervals of length $\delta t = \frac{t}{n}$. The approximate value of $y(\delta t)$ is
\[
y(\delta t) \approx y(0) + y'(0)\, \delta t = (1 + \lambda^* \delta t)\, \vec{w} + \delta t\, \vec{v}. \tag{A.5}
\]
At $2\delta t$, the approximation is
\[
y(2\delta t) \approx y(\delta t) + y'(\delta t)\, \delta t = (1 + \lambda^* \delta t)^2\, \vec{w} + 2\delta t\, (1 + \lambda^* \delta t)\, \vec{v}. \tag{A.6}
\]
It is seen that for $t = n\, \delta t$,
\[
y(t) \approx (1 + \lambda^* \delta t)^n\, \vec{w} + n\, \delta t\, (1 + \lambda^* \delta t)^{n-1}\, \vec{v}. \tag{A.7}
\]
Now let $n \to \infty$; then according to the formulas
\[
\lim_{n \to \infty} \left( 1 + \lambda^* \frac{t}{n} \right)^n = e^{\lambda^* t} \tag{A.8}
\]
and
\[
\lim_{n \to \infty} n\, \delta t\, (1 + \lambda^* \delta t)^{n-1} = t\, e^{\lambda^* t}, \tag{A.9}
\]
we obtain the solution
\[
y(t) = e^{\lambda^* t}\, \vec{w} + t e^{\lambda^* t}\, \vec{v}. \tag{A.10}
\]
In general, if $y(0) = a_1 \vec{v} + a_2 \vec{w}$, we obtain $y(t)$ as
\[
y(t) = e^{\lambda^* t} a_1 \vec{v} + e^{\lambda^* t} (a_2 t\, \vec{v} + a_2 \vec{w}). \tag{A.11}
\]
We can write (A.11) as
\[
y(t) = e^{\lambda^* t} y_0 + t e^{\lambda^* t} a_2 \vec{v}. \tag{A.12}
\]
On the other hand, according to the relation (A.2) we can write
\[
a_2 \vec{v} = (A - \lambda^* I)\, y_0, \tag{A.13}
\]
and this justifies the formula we obtained for a repeated eigenvalue.
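A quick numerical check of the resulting formula $\Phi(t) = e^{\lambda^* t}[I + t(A - \lambda^* I)]$ (a sketch assuming NumPy and SciPy; the defective matrix below is an arbitrary example with double eigenvalue $\lambda^* = 2$):

```python
# Check of the repeated-eigenvalue formula against expm.
import numpy as np
from scipy.linalg import expm

A   = np.array([[2.0, 1.0],
                [0.0, 2.0]])     # double eigenvalue 2, one eigenvector
lam = 2.0
t   = 0.9
Phi = np.exp(lam * t) * (np.eye(2) + t * (A - lam * np.eye(2)))
print(np.allclose(Phi, expm(t * A)))   # True
```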
A.1 Convergence of matrix series
Theorem A.1. If $A_{n \times n}$ is a constant matrix, the matrix series
\[
e^A = I + A + \frac{1}{2!} A^2 + \cdots + \frac{1}{n!} A^n + \cdots
\]
converges to an $n \times n$ matrix.

We define the norm of a matrix $A = [a_{ij}]_{n \times n}$ as
\[
\|A\| = \sum_{i,j} |a_{ij}|. \tag{A.14}
\]
Lemma A.1. If $A = [a_{ij}]_{n \times n}$ and $B = [b_{ij}]_{n \times n}$ are two matrices, then

i. $\|A + B\| \leq \|A\| + \|B\|$;

ii. $\|AB\| \leq \|A\|\, \|B\|$.

Proof. The proof of part (i) is straightforward and is left as an exercise to the reader. For part (ii), first note that if
\[
a = (a_1, \ldots, a_n) \quad \text{and} \quad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix},
\]
then
\[
|ab| \leq (|a_1| + \cdots + |a_n|)(|b_1| + \cdots + |b_n|). \tag{A.15}
\]
If $\vec{A}_i$ denotes the $i$-th row of the matrix $A$ and $B_j$ denotes the $j$-th column of the matrix $B$, then $AB = [\vec{A}_i B_j]_{n \times n}$ and
\[
\|AB\| = \sum_{i,j} |\vec{A}_i B_j| \leq \sum_{i,j} |\vec{A}_i|\, |B_j|
= (|\vec{A}_1| + \cdots + |\vec{A}_n|)(|B_1| + \cdots + |B_n|), \tag{A.16}
\]
where $|\vec{A}_i| = \sum_j |a_{ij}|$ and $|B_j| = \sum_i |b_{ij}|$. Since
\[
\|A\| = \sum_i |\vec{A}_i| \quad \text{and} \quad \|B\| = \sum_j |B_j|, \tag{A.17}
\]
we obtain
\[
\|AB\| \leq \|A\|\, \|B\|, \tag{A.18}
\]
and this completes the proof.
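The submultiplicativity in part (ii) is easily tested numerically (a sketch assuming NumPy; the random matrices are arbitrary test data):

```python
# Check of Lemma A.1 (ii) for the entrywise-sum norm (A.14).
import numpy as np

norm = lambda M: np.abs(M).sum()
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
print(norm(A @ B) <= norm(A) * norm(B))   # True
```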
Define the matrix $S_N$ as
\[
S_N = I + A + \cdots + \frac{1}{N!} A^N. \tag{A.19}
\]
Using the above lemma, we have
\[
\|S_N\| \leq \|I\| + \|A\| + \cdots + \frac{1}{N!} \|A^N\| \leq n + \|A\| + \cdots + \frac{1}{N!} \|A\|^N. \tag{A.20}
\]
But we have
\[
e^{\|A\|} = 1 + \|A\| + \cdots + \frac{1}{N!} \|A\|^N + \cdots, \tag{A.21}
\]
and then
\[
\|S_N\| \leq (n - 1) + e^{\|A\|}. \tag{A.22}
\]
Since this bound is uniform in $N$, each entry of $S_N$ is a partial sum of an absolutely convergent series. Letting $N \to \infty$, the limit matrix $S = [s_{ij}]_{n \times n}$ satisfies $\|S\| \leq (n - 1) + e^{\|A\|}$, in particular $|s_{ij}| < \infty$, and $S_N$ converges to the $n \times n$ matrix $S$.