
Contents

Epigraph
Preface
Acknowledgement
Dedication

1 Introduction and Generalities
1.1 Basic concepts from linear functional analysis
1.1.1 Linear operators
1.2 Matrix calculus and basic Operator theory
1.2.1 Matrices, eigenvalues and eigenvectors
1.2.2 Limits of sequences of operators
1.3 Review of calculus
1.3.1 The mean value theorem
1.3.2 Integration in Banach space

2 Basic notions of ordinary differential equations
2.1 Generalities
2.2 Existence and Uniqueness of Solutions to a System
2.2.1 General theory
2.2.2 Linear systems of ODE
2.3 Stability theory
2.3.1 Phase space
2.3.2 General definition of stability
2.3.3 Stability of linear systems

3 Floquet theory: Presentation and stability of periodic solutions
3.1 Linear systems with periodic coefficients: Floquet theory
3.1.1 Nonhomogeneous linear systems
3.2 Stability of periodic solutions
3.2.1 Autonomous systems
3.2.2 Nonautonomous systems
3.3 Application

Bibliography

CHAPTER ONE

Introduction and Generalities
Introduction
In the physical sciences (e.g., elasticity, astronomy) and the natural sciences (e.g., ecology), among others, taking periodic environmental factors into account in the dynamics of multi-phenomena interactions (or multi-species interactions) leads to the study of differential systems with periodic data. It is therefore worth investigating the fundamental questions inherent in systems of periodic (ordinary) differential equations, such as existence, uniqueness and stability of solutions.
This work is devoted to one of the celebrated tools crucial in the analysis and control of time-periodic systems, namely Floquet theory. This basic theory, a systematic study of linear systems of ordinary differential equations with periodic coefficients, has had a rich history since the pioneering work of Floquet (1883), followed by the contribution of Lyapunov (1892). Furthermore, it gives a change of variables that transforms a linear differential system with periodic coefficients into a system with constant coefficients, and also provides a representation formula for the solutions of such systems.
The aim of this project is to present Floquet theory and to use it to assess the stability of periodic solutions of periodic differential systems (linear or nonlinear).
The work is organized as follows:
First, we review some basic notions from analysis and linear algebra that will be used in the subsequent chapters.
In Chapter 2, we present the basic notions and theorems of the general theory of Ordinary Differential Equations (ODE).
Finally, in Chapter 3, we state Floquet's theorem, prove it, and use it to address the stability of periodic solutions of linear as well as nonlinear periodic systems. Furthermore, we illustrate our method by studying Hill's equation, which is a generalization of the Mathieu equation.
Generalities
The purpose of this chapter is to present some concepts and theorems of linear algebra and mathematical analysis that will be of importance in the subsequent chapters.
1.1 Basic concepts from linear functional analysis
Let us recall some definitions and results from linear functional analysis.
Definition 1.1.1 Let $X$ be a linear space over a field $\mathbb{K}$, where $\mathbb{K}$ stands for either $\mathbb{R}$ or $\mathbb{C}$. A mapping $\|\cdot\| : X \to \mathbb{R}$ is called a norm provided that the following conditions hold:
i) $\|x\| \ge 0$ for all $x \in X$, and $\|x\| = 0 \iff x = 0$;
ii) $\|\lambda x\| = |\lambda|\,\|x\|$ for all $\lambda \in \mathbb{K}$, $x \in X$;
iii) $\|x + y\| \le \|x\| + \|y\|$ for arbitrary $x, y \in X$.
If $X$ is a linear space and $\|\cdot\|$ is a norm on $X$, then the pair $(X, \|\cdot\|)$ is called a normed linear space over $\mathbb{K}$.
Should no ambiguity arise about the norm, we simply abbreviate this pair by saying that $X$ is a normed linear space over $\mathbb{K}$.
Example 1. Each of the following expressions defines a norm on the vector space $\mathbb{R}^n$ which is in common use.
i) The absolute norm: $\|x\|_1 = \sum_{i=1}^{n} |x_i|$, for every $x = (x_1, \dots, x_n) \in \mathbb{R}^n$.
ii) The Euclidean norm: $\|x\|_2 = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}$, for every $x = (x_1, \dots, x_n) \in \mathbb{R}^n$.
iii) The maximum norm: $\|x\|_\infty = \max_{1 \le i \le n} |x_i|$, for every $x = (x_1, \dots, x_n) \in \mathbb{R}^n$.
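As a quick numerical illustration, the three norms above can be computed directly from their formulas; this is a minimal pure-Python sketch (the function names are ours, chosen for clarity):

```python
import math

def norm_1(x):
    # absolute norm: sum of the absolute values of the components
    return sum(abs(xi) for xi in x)

def norm_2(x):
    # Euclidean norm: square root of the sum of squared components
    return math.sqrt(sum(xi * xi for xi in x))

def norm_inf(x):
    # maximum norm: largest absolute component
    return max(abs(xi) for xi in x)

x = [3.0, -4.0]
print(norm_1(x), norm_2(x), norm_inf(x))  # 7.0 5.0 4.0
```

For any vector one can check $\|x\|_\infty \le \|x\|_2 \le \|x\|_1$, a concrete instance of the equivalence of norms on $\mathbb{R}^n$.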
Example 2. Let $X = C([0,1])$ be the space of all real-valued continuous functions on $[0,1]$. Each of the following expressions defines a norm on the vector space $C([0,1])$ which is in common use.
i) $\|f\|_1 = \int_0^1 |f(t)|\,dt$ for every $f \in C([0,1])$.
ii) $\|f\|_2 = \left( \int_0^1 |f(t)|^2\,dt \right)^{1/2}$ for every $f \in C([0,1])$.
iii) $\|f\|_\infty = \max\{\, |f(t)| : t \in [0,1] \,\}$.
Definition 1.1.2 (Equivalent norms)
Two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ defined on a linear space $X$ are said to be equivalent if there exist constants $\alpha > 0$ and $\beta > 0$ such that
$$\alpha \|x\|_1 \le \|x\|_2 \le \beta \|x\|_1 \quad \forall x \in X.$$
Theorem 1.1.3 In a finite-dimensional normed linear space, all norms are equivalent.
Definition 1.1.4 Every normed linear space $E$ is canonically endowed with a metric $d$ defined on $E \times E$ by
$$d(x, y) = \|x - y\| \quad \forall\, x, y \in E.$$
Definition 1.1.5 (Cauchy sequence)
A sequence $(x_n)_{n \ge 1}$ of elements of a normed vector space $X$ is a Cauchy sequence if
$$\lim_{n,m \to \infty} \|x_n - x_m\| = 0.$$
That is, for any $\varepsilon > 0$ there is an integer $N = N(\varepsilon)$ such that $\|x_n - x_m\| < \varepsilon$ whenever $n \ge N$ and $m \ge N$.
Remark. In a normed linear space, every Cauchy sequence $(x_n)_{n \ge 1}$ is bounded; i.e., there exists a constant $M \ge 0$ such that $\|x_n\| \le M$ for all $n \ge 1$. (See also Definition 1.1.13 below.)
Definition 1.1.6 (Convergent sequence)
A sequence $(x_n)_{n \ge 1}$ of elements of a normed vector space $X$ converges to an element $x \in X$ if
$$\lim_{n \to \infty} \|x_n - x\| = 0.$$
In such a case, we say that $(x_n)_{n \ge 1}$ is a convergent sequence.
Remark. In a normed linear space, every convergent sequence is a Cauchy sequence.
Definition 1.1.7 A normed linear space $X$ is complete if every Cauchy sequence in $X$ has its limit in $X$. A complete normed linear space is called a Banach space.
Remark. The notion of completeness is also defined for metric spaces, which need not have any linear structure.
Example (Banach space). The normed linear space $\left( C([0,1]),\ \|\cdot\|_\infty \right)$ is a Banach space.
Definition 1.1.8 (Open sets and closed sets)
Let $X$ be a normed linear space. We define the open (respectively closed) ball with center at a point $x \in X$ and radius $r > 0$ by
$$B_r(x) = \{ y \in X : \|y - x\| < r \} \quad (\text{respectively } \bar{B}_r(x) = \{ y \in X : \|y - x\| \le r \}).$$
A nonempty subset $A$ of a normed linear space $X$ is said to be open if for all $x \in A$ there exists $r > 0$ such that $B_r(x) \subseteq A$; and $A$ is said to be closed if $X \setminus A$ is open.
Proposition 1.1.9 A subset $A$ of a normed linear space is closed if and only if every convergent sequence $(a_n)_{n \ge 1}$ of elements of $A$ has its limit in $A$.
Definition 1.1.10 (Closure and interior) Let $A$ be a subset of a normed linear space $X$. The interior of $A$, denoted by $\mathrm{int}\,A$, is defined as the union of all open sets contained in $A$, and the closure of $A$, denoted by $\mathrm{cl}(A)$ or $\bar{A}$, is defined as the intersection of all closed sets containing $A$.
Theorem 1.1.11 Let $A$ be a subset of a normed linear space $X$ and $x \in X$. Then:
a) $x \in \mathrm{int}\,A$ if and only if $\exists\, r > 0 : B(x, r) \subseteq A$;
b) $x \in \mathrm{cl}\,A$ if and only if $\forall\, r > 0,\ B(x, r) \cap A \neq \emptyset$.
Remark. Given a subset $A$ of a normed linear space $X$, we have:
$$x \in \bar{A} \iff \exists\, (a_n)_n \subseteq A \text{ such that } \lim_{n \to +\infty} a_n = x.$$
Definition 1.1.12 Let $X$ be a normed linear space, $x \in X$, and let $V$ be a subset of $X$ containing $x$. We say that $V$ is a neighbourhood of $x$ if there exists an open set $U$ of $X$ containing $x$ and contained in $V$. We denote by $N(x)$ the collection of all neighbourhoods of $x$.
Definition 1.1.13 A subset of a normed linear space $X$ is said to be bounded if it can be included in some ball.
Theorem 1.1.14 (Riesz / Heine-Borel) A normed linear space is finite dimensional if and only if its closed unit ball is compact, i.e., every sequence in the closed unit ball has a convergent subsequence.
1.1.1 Linear operators
In this section $X$ and $Y$ are normed linear spaces over $\mathbb{K}$.
Definition 1.1.15 A $\mathbb{K}$-linear operator $T$ from $X$ into $Y$ is a map $T : X \to Y$ such that
$$T(\alpha x + \beta y) = \alpha Tx + \beta Ty$$
for all $\alpha, \beta \in \mathbb{K}$ and all $x, y \in X$.
When $Y = \mathbb{K}$, such a map is called a linear functional or a linear form.
Proposition 1.1.16 The set of $\mathbb{K}$-linear operators from $X$ into $Y$ has a natural structure of linear space over $\mathbb{K}$ and is denoted by $L(X, Y)$. Note that $L(X, X)$ is simply denoted by $L(X)$.
Proposition 1.1.17 If $Z$ is also a linear space, then
$$f \in L(X, Y) \text{ and } g \in L(Y, Z) \implies g \circ f \in L(X, Z).$$
Theorem 1.1.18 Let $T \in L(X, Y)$. Then the following are equivalent:
i) $T$ is continuous at the origin (in the sense that if $\{x_n\}_n$ is a sequence in $X$ such that $x_n \to 0$ as $n \to \infty$, then $T(x_n) \to 0$ in $Y$ as $n \to \infty$).
ii) $T$ is Lipschitz, i.e., there exists a constant $K \ge 0$ such that for every $x \in X$,
$$\|T(x)\| \le K \|x\|.$$
iii) The image of the closed unit ball, $T\left( \bar{B}_1(0) \right)$, is bounded.
Definition 1.1.19 A linear operator $T : X \to Y$ is said to be bounded if there exists some $k \ge 0$ such that
$$\|T(x)\| \le k \|x\| \quad \text{for all } x \in X.$$
If $T$ is bounded, then the norm of $T$ is defined by
$$\|T\| = \inf\{\, k : \|T(x)\| \le k \|x\|,\ x \in X \,\}.$$
The set of bounded linear operators from $X$ into $Y$ is denoted $B(X, Y)$. If $X = Y$, one simply writes $B(X)$.
Proposition 1.1.20 Suppose $X \neq \{0\}$ and $T \in B(X)$. Then we have the following:
$$\|T\| = \sup_{\|x\| \le 1} \|T(x)\| = \sup_{\|x\| = 1} \|T(x)\| = \sup_{x \neq 0} \frac{\|T(x)\|}{\|x\|}.$$
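For matrices acting on $\mathbb{R}^n$, the supremum in Proposition 1.1.20 admits closed forms for some choices of vector norm: the operator norm induced by $\|\cdot\|_\infty$ is the maximum absolute row sum, and the one induced by $\|\cdot\|_1$ is the maximum absolute column sum (standard facts, stated here without proof). A small sketch:

```python
def op_norm_inf(A):
    # operator norm induced by ||.||_inf on R^n: maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

def op_norm_1(A):
    # operator norm induced by ||.||_1 on R^n: maximum absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

A = [[1.0, -2.0],
     [3.0,  4.0]]
print(op_norm_inf(A), op_norm_1(A))  # 7.0 6.0
```

The norm induced by $\|\cdot\|_2$ has no such elementary formula (it is the largest singular value), which is why the row-sum and column-sum norms are often used for quick estimates.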
1.2 Matrix calculus and basic Operator theory
1.2.1 Matrices, eigenvalues and eigenvectors
Definition 1.2.1 An $m \times n$ matrix $A$ is a rectangular array of numbers, real or complex, with $m$ rows and $n$ columns. We shall write $a_{ij}$ for the number that appears in the $i$th row and the $j$th column of $A$; this is called the $(i,j)$ entry of $A$. We can either write $A$ in the extended form
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$
or in the more compact form $(a_{ij})_{m,n}$. We will denote $A = (a_{ij})_{m,n}$ with $1 \le i \le m$, $1 \le j \le n$.
Associated with each matrix $A$ is a matrix $A^t$, known as the transpose of $A$, obtained from $A$ by interchanging the rows and the columns of $A$. Thus, if $A = (a_{ij})_{m,n}$ then $A^t = (a_{ji})_{n,m}$. The trace of $A$, denoted by $\mathrm{tr}(A)$, is the sum of the diagonal elements of $A$. Two $m \times n$ matrices $A$ and $B$ are said to be equal if all corresponding elements are equal, that is, if $a_{ij} = b_{ij}$ for each $i$ and $j$.
Let $D = (d_{ij})_{m,n}$, $A = (a_{ij})_{m,n}$ and $B = (b_{ij})_{n,p}$ be matrices.
1) The sum of two $m \times n$ matrices $A$ and $D$ is defined as the matrix obtained by adding corresponding elements:
$$D + A = (d_{ij} + a_{ij})_{m,n}.$$
Similarly, the difference is
$$D - A = (d_{ij} - a_{ij})_{m,n}.$$
2) The multiplication of a matrix $A$ by a scalar $\lambda$ is defined as follows: $\lambda A = (\lambda a_{ij})_{m,n}$.
3) The product of an $m \times n$ matrix $A$ and an $n \times p$ matrix $B$ is
$$AB = \left( \sum_{k=1}^{n} a_{ik} b_{kj} \right)_{m,p}.$$
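The product formula in 3) translates directly into code; here is a minimal sketch (a naive triple loop, for illustration only):

```python
def mat_mul(A, B):
    # (AB)_{ij} = sum over k of a_{ik} * b_{kj}
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A)  # inner dimensions must agree
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```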
Special matrices
i) A row vector, or $1 \times n$ matrix: $A = (a_1, a_2, \dots, a_n)$, where the $a_i$'s are scalars.
ii) A column vector, or $n \times 1$ matrix: $A = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$, where the $a_i$'s are scalars.
iii) A zero matrix is a matrix all of whose entries are zero. The zero $m \times n$ matrix is denoted by $0_{m,n}$ or simply $0$.
iv) A square matrix is a matrix with the same number of rows and columns.
v) The identity matrix of order $n$ has ones on the principal diagonal, that is, from top left to bottom right, and zeros elsewhere; it is denoted by $I_n = (\delta_{ij})$, where $\delta_{ij}$ is the Kronecker symbol. From the definition of matrix multiplication we have
$$AI = IA = A$$
for any square matrix $A$.
vi) A square matrix $A$ is regular or nonsingular if its column vectors are linearly independent or, equivalently, if its determinant $\det(A)$ is nonzero. Otherwise, $A$ is said to be singular or degenerate, i.e., when $\det(A) = 0$. (See details/recalls below.)
vii) A square matrix in which all the nonzero elements lie on the principal diagonal is called a diagonal matrix.
viii) An $n \times n$ matrix $N$ is said to be nilpotent if there is a positive integer $k$ such that $N^k = 0$.
Definition 1.2.2 Let $A$ and $B$ be two $n \times n$ matrices. We say that $A$ and $B$ are similar, written $A \sim B$, if and only if there exists a nonsingular matrix $T$ such that $T^{-1} A T = B$.
Eigenvalues and Eigenvectors
Definition 1.2.3 Let $A$ be an $n \times n$ square matrix. A scalar $\lambda$ is called an eigenvalue of $A$ if there exists a nonzero vector $v \in \mathbb{R}^n$ such that $Av = \lambda v$. In this case, $v$ is called an eigenvector associated with $\lambda$.
The eigenvalues of $A$ are also the roots of the characteristic polynomial $p(\lambda) = \det(A - \lambda I)$, with $p(\lambda)$ of degree $n$.
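For a $2 \times 2$ matrix the characteristic polynomial reduces to $p(\lambda) = \lambda^2 - \mathrm{tr}(A)\lambda + \det(A)$, so the eigenvalues follow from the quadratic formula. A hedged sketch (real-eigenvalue case only; the function name is ours):

```python
import math

def eig_2x2(A):
    # roots of p(lambda) = lambda^2 - tr(A)*lambda + det(A)
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("complex eigenvalues: use cmath.sqrt instead")
    r = math.sqrt(disc)
    return ((tr + r) / 2, (tr - r) / 2)

print(eig_2x2([[2.0, 1.0], [1.0, 2.0]]))  # (3.0, 1.0)
```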
Theorem 1.2.4 Let $A$ be a square matrix. If the eigenvalues of $A$ are all distinct, then $A$ is similar to a diagonal matrix (whose diagonal entries are the eigenvalues of $A$).
If the matrix $A$ has repeated eigenvalues, it may not be possible to diagonalize it. In this case, we introduce the concept of generalized eigenvector.
Definition 1.2.5 A nonzero vector $v$ is called a generalized eigenvector of rank $k$ of $A$, associated with an eigenvalue $\lambda$, if and only if
$$(A - \lambda I)^k v = 0 \quad \text{and} \quad (A - \lambda I)^{k-1} v \neq 0.$$
Lemma 1.2.6 If $v$ is a generalized eigenvector of rank $k$, then the vectors $v, (A - \lambda I)v, \dots, (A - \lambda I)^{k-1} v$ are linearly independent.
Recall that a set of vectors $v_1, v_2, \dots, v_k$ is linearly dependent if there exist scalars $c_1, c_2, \dots, c_k$, not all zero, such that
$$c_1 v_1 + c_2 v_2 + \dots + c_k v_k = 0.$$
A set of vectors $v_1, v_2, \dots, v_k$ is linearly independent if it is not linearly dependent.
Jordan form
From the lemma above, we construct a new basis for $\mathbb{C}^n$ such that the matrix representation of $A$ with respect to this new basis is what we call the Jordan canonical form, denoted by $J$.
Theorem 1.2.7 For every $n \times n$ complex matrix $A$ with eigenvalues $\lambda_1, \dots, \lambda_s$ (not necessarily distinct) of multiplicities $n_1, \dots, n_s$ respectively, there exists a nonsingular $n \times n$ matrix $P$ such that
$$P^{-1} A P = J = \mathrm{diag}(J_1, \dots, J_s),$$
where each of the block matrices $J_1, \dots, J_s$ is of the form
$$J_k = \begin{pmatrix} \lambda_k & 1 & 0 & \cdots & 0 \\ 0 & \lambda_k & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & & \lambda_k & 1 \\ 0 & \cdots & & 0 & \lambda_k \end{pmatrix}, \quad k = 1, \dots, s,$$
and $\sum_{k=1}^{s} n_k = n$.
For the proof of this theorem, refer to Coddington and Levinson [C/L] or Hirsch and Smale [H/S].
The block matrices $J_1, \dots, J_s$ are called Jordan blocks, and $J$ is called the Jordan canonical form of $A$. Note that any Jordan block $J_k$ can be written as $J_k = \lambda_k I + N_k$, where $N_k$ is nilpotent of order $n_k$, and that $A$ is similar to $J$.
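The decomposition $J_k = \lambda_k I + N_k$ is easy to verify numerically: the superdiagonal part $N_k$ of a $k \times k$ Jordan block satisfies $N_k^k = 0$. A small pure-Python sketch (the helper names are ours):

```python
def jordan_block(lam, k):
    # k x k Jordan block: lam on the diagonal, ones on the superdiagonal
    return [[lam if i == j else (1.0 if j == i + 1 else 0.0) for j in range(k)]
            for i in range(k)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

k, lam = 3, 2.0
J = jordan_block(lam, k)
# N = J - lam*I keeps only the superdiagonal; it should satisfy N^k = 0
N = [[J[i][j] - (lam if i == j else 0.0) for j in range(k)] for i in range(k)]
P = N
for _ in range(k - 1):
    P = mat_mul(P, N)  # after the loop, P = N^k
print(all(x == 0.0 for row in P for x in row))  # True
```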
1.2.2 Limits of sequences of operators
We introduce the concepts of limit in the norm and of strong limit of a sequence of operators.
We shall then introduce the derivative and integral of operators depending on a parameter
and shall discuss series of operators.
Let $X$ be a Banach space and $(A_n)$ a sequence of operators in $L(X)$.
– We say that $(A_n)$ converges in norm to the operator $A \in L(X)$ if
$$\lim_{n \to \infty} \|A_n - A\| = 0. \tag{1.2.1}$$
– If, for each element $x \in X$,
$$\lim_{n \to \infty} \|A_n x - A x\| = 0, \tag{1.2.2}$$
we shall say that $A_n$ converges strongly to the operator $A \in L(X)$.
It is immediate that (1.2.1) implies (1.2.2). The converse is not in general true. However, in the case of finite-dimensional Banach spaces it is true: if we put $x = e^{(i)}$, the $i$th basis vector, we have from (1.2.2)
$$\lim_{n \to \infty} \left\| \left( a^{(n)}_{1i} - a_{1i}, \dots, a^{(n)}_{ki} - a_{ki} \right) \right\| = 0 \quad \text{for } i = 1, \dots, k.$$
This ensures that all the components of $A_n - A$ tend to $0$, and since there are only a finite number of them, the limit is uniform.
It is now possible to define the meaning of the series
$$\sum_{s=1}^{\infty} A_s:$$
the series $\sum_{s=1}^{\infty} A_s$ is said to be convergent if the sequence made by the partial sums
$$\sum_{s=1}^{N} A_s$$
converges in $L(X)$.
In the case of matrices, $\sum_{s=1}^{N} A_s$ corresponds to the matrix whose elements are the partial sums $\sum_{s=1}^{N} a^{(s)}_{ij}$.
– We say that the series of operators $\sum_{s=1}^{\infty} A_s$ converges absolutely if the series
$$\sum_{s=1}^{\infty} \|A_s\|$$
converges.
In the case of matrices, this happens if and only if, for every component, the series
$$\sum_{s=1}^{\infty} |a^{(s)}_{ij}|$$
converges, where $A_s = (a^{(s)}_{ij})$.
Proposition 1.2.8 Let $X$ be a normed linear space. If $A \in L(X)$, then the series $\sum_{n=0}^{\infty} \frac{A^n}{n!}$ is absolutely convergent.
Proof. It suffices to show that the sequence of partial sums $S_N = \sum_{n=0}^{N} \frac{A^n}{n!}$ is a Cauchy sequence in $L(X)$. Note that $\|A^n\| \le \|A\|^n$, and that the partial sums of the convergent series of real numbers $\sum_{n=0}^{\infty} \frac{\|A\|^n}{n!} = e^{\|A\|}$ form a Cauchy sequence. Using this fact, it follows that $(S_N)$ is a Cauchy sequence in $L(X)$.
Define the exponential map $\exp : L(X) \to L(X)$ by $\exp(A) = e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}$.
The main properties of the exponential map are summarized in the following proposition.
Proposition 1.2.9 Suppose that $A, B \in L(\mathbb{R}^n)$.
i) If $A \in L(\mathbb{R}^n)$, then $e^A \in L(\mathbb{R}^n)$.
ii) If $B$ is nonsingular, then $B^{-1} e^A B = e^{B^{-1} A B}$.
iii) $e^{-A} = (e^A)^{-1}$.
iv) $\|e^A\| \le e^{\|A\|}$.
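The defining series can be checked numerically by truncating $\sum_n A^n / n!$; this is a pure-Python sketch under the assumption that enough terms are kept for the matrix at hand (the helper names are ours):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    # truncated exponential series: sum over n of A^n / n!
    n = len(A)
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 / 0! = I
    P = [row[:] for row in S]
    for s in range(1, terms):
        P = mat_mul(P, A)  # P = A^s
        S = [[S[i][j] + P[i][j] / math.factorial(s) for j in range(n)]
             for i in range(n)]
    return S

# sanity check on a diagonal matrix: exp(diag(1, 2)) = diag(e, e^2)
E = expm([[1.0, 0.0], [0.0, 2.0]])
print(abs(E[0][0] - math.e) < 1e-9, abs(E[1][1] - math.e ** 2) < 1e-9)  # True True
```

For a diagonal matrix the series acts entrywise on the diagonal, which makes the check against $e$ and $e^2$ immediate.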
– Let $A$ be an operator depending on a real parameter $t$ with $a \le t \le b$. Let $t_0 \in [a, b]$; if $h$ is sufficiently small, the operator
$$\frac{A(t_0 + h) - A(t_0)}{h}$$
can be defined. If its limit as $h \to 0$ exists, we say that the operator $A$ is differentiable at $t_0$ with respect to $t$. The limiting operator is denoted by $\frac{d}{dt} A(t)$. We thus have
$$\lim_{h \to 0} \left\| \frac{A(t_0 + h) - A(t_0)}{h} - \frac{dA}{dt}(t_0) \right\| = 0.$$
In the case of matrices, the limit exists if and only if each component $a_{ij}$ is differentiable, and we have
$$\frac{d}{dt} A(t) = \left( \frac{d}{dt} a_{ij}(t) \right).$$
In general, if $A$ and $B$ are two differentiable operators in $L(X)$, the following product rule is valid:
$$\frac{d}{dt}(AB) = \frac{dA}{dt} B + A \frac{dB}{dt}.$$
We say that a series of operators depending on a parameter converges uniformly if, for every $\varepsilon > 0$, there exists $\nu > 0$ such that for every $t \in [a, b]$ and every $N > \nu$,
$$\left\| \sum_{s=N}^{\infty} A_s(t) \right\| < \varepsilon.$$
If the operators $A_s(t)$ are differentiable and the series
$$\sum_{s=0}^{\infty} \frac{d}{dt} A_s(t)$$
converges uniformly, then the operator $A = \sum_{s=0}^{\infty} A_s(t)$ is also differentiable, and we have
$$\frac{dA}{dt} = \sum_{s=0}^{\infty} \frac{d}{dt} A_s.$$
We illustrate this notion with the following result: if $A \in L(X)$, then $tA \in L(X)$ for each $t \in \mathbb{R}$, so that the function $t \mapsto e^{tA}$ is differentiable and
$$\frac{d}{dt}\left( e^{tA} \right) = A e^{tA}.$$
If $A$ is a nonsingular matrix, then the logarithm of $A$, denoted by $\ln(A)$, is a well-defined matrix. This important result is stated in the following theorem.
Theorem 1.2.10 Let $A$ be a nonsingular $n \times n$ matrix. Then there exists an $n \times n$ matrix $B$ (called a logarithm of $A$) such that $A = e^B$.
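Theorem 1.2.10 can be illustrated numerically for matrices close to the identity, where the classical series $\ln(I + X) = \sum_{s \ge 1} (-1)^{s+1} X^s / s$ converges; we then verify $e^{\ln A} \approx A$ by truncating the exponential series as well. A sketch (the near-identity restriction and the helper names are our assumptions, not part of the theorem):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    # truncated exponential series: sum over s of A^s / s!
    n = len(A)
    S = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [row[:] for row in S]
    for s in range(1, terms):
        P = mat_mul(P, A)
        S = [[S[i][j] + P[i][j] / math.factorial(s) for j in range(n)]
             for i in range(n)]
    return S

def logm_near_identity(A, terms=60):
    # Mercator series ln(I + X) = sum_{s>=1} (-1)^(s+1) X^s / s,
    # usable only when X = A - I is small (assumption of this sketch)
    n = len(A)
    X = [[A[i][j] - float(i == j) for j in range(n)] for i in range(n)]
    S = [[0.0] * n for _ in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # X^0 = I
    for s in range(1, terms):
        P = mat_mul(P, X)  # P = X^s
        sign = 1.0 if s % 2 == 1 else -1.0
        S = [[S[i][j] + sign * P[i][j] / s for j in range(n)] for i in range(n)]
    return S

A = [[1.1, 0.05],
     [0.0, 0.9]]
B = logm_near_identity(A)
R = expm(B)  # e^{ln A} should recover A
err = max(abs(R[i][j] - A[i][j]) for i in range(2) for j in range(2))
print(err < 1e-8)  # True
```

For matrices far from the identity one would instead compute the logarithm blockwise from the Jordan form, as the theorem's proof suggests.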
Let $A(t)$ be a matrix depending on a parameter $t$; we suppose that the components $a_{ij}$ are all integrable functions over the interval $[t_0, t]$. We shall say that the matrix
$$\int_{t_0}^{t} A(\tau)\, d\tau$$
is the integral of the matrix $A(t)$ between $t_0$ and $t$.
According to the definition
$$\frac{d}{dt}(a_{ij})_{m,n} = \left( \frac{d a_{ij}}{dt} \right)_{m,n},$$
we have that if the functions $a_{ij}$ are continuous, then
$$\frac{d}{dt} \int_{t_0}^{t} A(\tau)\, d\tau = A(t).$$
If, finally, $A$ is an $m \times n$ matrix and $L$ is a number greater than or equal to the absolute values of the $a_{ij}$, then
$$\left\| \int_{t_0}^{t} A(\tau)\, d\tau \right\| \le \sqrt{m \cdot n}\, L\, |t - t_0|.$$
1.3 Review of calculus
We now give the general definition of differentiability. Let $U$ be a nonempty open subset of a Banach space $X$, let $Y$ denote a Banach space, and let the symbol $\|\cdot\|$ denote the norm in both Banach spaces.
Definition 1.3.1 A function $f : U \to Y$ is differentiable at $x \in U$ if there is a map $A \in L(X, Y)$ such that
$$\lim_{h \to 0} \frac{\|f(x + h) - f(x) - Ah\|}{\|h\|} = 0.$$
Remark. If such a linear map exists, then it is unique and we write it as $A = f'(x)$, called the derivative of $f$ at $x$.
Other common notations for the derivative are $Df$ and $f_x$.
The following is a list of standard facts about the derivative. For the statements in the list, the symbols $X$, $Y$, $X_i$ and $Y_i$ denote Banach spaces.
i) If $f : X \to Y$ is differentiable at $a \in X$, then $f$ is continuous at $a$.
ii) If $f : X \to Y_1 \times \dots \times Y_n$ is given by $f(x) = (f_1(x), \dots, f_n(x))$, and if each $f_i$ is differentiable, then so is $f$, and
$$Df(x) = (Df_1(x), \dots, Df_n(x)).$$
iii) If the function $f : X_1 \times X_2 \times \dots \times X_n \to Y$ is given by $(x_1, x_2, \dots, x_n) \mapsto f(x_1, x_2, \dots, x_n)$, then the $i$th partial derivative of $f$ at $(a_1, a_2, \dots, a_n) \in X_1 \times X_2 \times \dots \times X_n$ is the derivative of the function $g : X_i \to Y$ defined by $g(x_i) = f(a_1, \dots, a_{i-1}, x_i, a_{i+1}, \dots, a_n)$. This derivative is denoted by $D_i f(a)$. If $f$ is differentiable, then all its partial derivatives exist and, if we define $h = (h_1, h_2, \dots, h_n)$, we have
$$Df(x) h = \sum_{i=1}^{n} D_i f(x) h_i.$$
The converse is not true in general, but if all the partial derivatives of $f$ exist and are continuous in an open set $U \subseteq X_1 \times X_2 \times \dots \times X_n$, then $f$ is continuously differentiable in $U$.
1.3.1 The mean value theorem
Theorem 1.3.2 Suppose that $[a, b]$ is a closed interval and $f : [a, b] \to Y$ is a continuous function. If $f$ is differentiable on the open interval $(a, b)$ and there is some number $M > 0$ such that $\|f'(t)\| \le M$ for all $t \in (a, b)$, then
$$\|f(b) - f(a)\| \le M (b - a).$$
Theorem 1.3.3 (Mean value theorem) Suppose that $f : X \to Y$ is differentiable on an open set $U \subseteq X$ with $a, b \in U$ and $a + t(b - a) \in U$ for $0 \le t \le 1$. If there is some $M > 0$ such that
$$\sup_{0 \le t \le 1} \|Df(a + t(b - a))\| \le M,$$
then
$$\|f(b) - f(a)\| \le M \|b - a\|.$$
1.3.2 Integration in Banach space
Let $X$ be a Banach space and $I = [a, b] \subseteq \mathbb{R}$ where $a < b$.
Definition 1.3.4 If $f : I \to X$ is continuous on $I$, we define its integral in the sense of Riemann by the following formula:
$$\int_a^b f(t)\, dt = \lim_{n \to \infty} \frac{b - a}{n} \sum_{k=0}^{n-1} f\!\left( a + k\, \frac{b - a}{n} \right).$$
It is easily seen that
$$\left\| \int_a^b f(t)\, dt \right\| \le \int_a^b \|f(t)\|\, dt.$$
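Definition 1.3.4 is directly computable; here is a minimal sketch with a scalar-valued integrand (for a matrix-valued $f$ the same sum is applied entrywise; the function name is ours):

```python
def riemann_integral(f, a, b, n=100000):
    # left Riemann sum from the definition: (b-a)/n * sum of f(a + k(b-a)/n)
    h = (b - a) / n
    return h * sum(f(a + k * h) for k in range(n))

# sanity check: the integral of t^2 over [0, 1] equals 1/3
val = riemann_integral(lambda t: t * t, 0.0, 1.0)
print(abs(val - 1.0 / 3.0) < 1e-4)  # True
```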
Theorem 1.3.5 Suppose that $U$ is a nonempty open subset of $X$. If $f : X \to Y$ is a differentiable function, and $x + ty \in U$ for $0 \le t \le 1$, then
$$f(x + y) - f(x) = \int_0^1 Df(x + ty)\, y\, dt.$$