
Epigraph
Preface
Acknowledgement
Dedication
1 PRELIMINARIES
  1.1 Basic notions of Functional Analysis
    1.1.1 Linear operators
    1.1.2 Differentiability in Banach spaces
    1.1.3 Fundamental theorems
    1.1.4 Riemann Integration of functions with values in Banach spaces
    1.1.5 Gronwall Lemma, Differential Inequality
    1.1.6 Function Spaces with Values in a Banach Space
  1.2 Semigroups
    1.2.1 Spectral Theory of Linear Operators
    1.2.2 Semigroups of Linear Operators
    1.2.3 Examples of Semigroups
    1.2.4 Infinitesimal Generator of a C0-semigroup
    1.2.5 Lumer-Phillips Theorem
2 ABSTRACT LINEAR EVOLUTION EQUATIONS
  2.1 Linear Evolution Equations in Finite Dimensional Spaces: Well-Posedness
  2.2 Linear Evolution Equations in Infinite Dimensional Spaces: Abstract Cauchy Problem
3 SEMI-LINEAR EVOLUTION EQUATIONS
  3.1 Introduction
  3.2 Theory for Lipschitz-Type Forcing Terms
    3.2.1 Existence and Uniqueness Results
    3.2.2 Existence of Local Solutions
    3.2.3 Continuous Dependence
    3.2.4 Extendability of Local Solutions
    3.2.5 Global Existence of Solutions
    3.2.6 Long-Term Behavior of Solutions
  3.3 Theory for Non-Lipschitz-Type Forcing Terms
    3.3.1 Existence and Uniqueness Result
    3.3.2 Theory under Compactness Assumption
4 APPLICATIONS
  4.1 The Homogeneous Heat Equation
  4.2 The Classical Wave Equation
  4.3 The Nonlinear Heat Equation
  4.4 Nonlinear Wave Equations
Bibliography

## CHAPTER ONE

PRELIMINARIES
### 1.1 Basic notions of Functional Analysis

In this section we recall some definitions and results from linear functional analysis.

**Definition 1.1.1** Let $X$ be a linear space over a field $\mathbb{K}$, where $\mathbb{K}$ stands for either $\mathbb{R}$ or $\mathbb{C}$. A mapping $\|\cdot\| : X \to \mathbb{R}$ is called a norm provided that the following conditions hold:

i) $\|x\| \ge 0$ for all $x \in X$, and $\|x\| = 0 \iff x = 0$;

ii) $\|\lambda x\| = |\lambda|\,\|x\|$ for all $\lambda \in \mathbb{K}$, $x \in X$;

iii) $\|x + y\| \le \|x\| + \|y\|$ for arbitrary $x, y \in X$.

If $X$ is a linear space and $\|\cdot\|$ is a norm on $X$, then the pair $(X, \|\cdot\|)$ is called a normed linear space over $\mathbb{K}$. Should no ambiguity arise about the norm, we simply say that $X$ is a normed linear space over $\mathbb{K}$.
**Example.** Let $X = C([0, 1])$ be the space of all real-valued continuous functions on $[0, 1]$. Each of the following expressions defines a norm in common use on the vector space $C([0, 1])$:

$$\|f\|_p = \left( \int_0^1 |f(t)|^p \, dt \right)^{1/p}, \quad \text{for every } f \in C([0, 1]) \text{ and any } p \in [1, \infty),$$

$$\|f\|_\infty = \operatorname{ess\,sup} |f| = \inf\{ M \ge 0 : |f(x)| \le M \ \text{a.e.} \}.$$
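As a quick numerical illustration (not part of the original text; numpy is assumed available), the norms above can be approximated on a grid for the sample function $f(t) = t$, for which $\|f\|_1 = 1/2$, $\|f\|_2 = 1/\sqrt{3}$ and $\|f\|_\infty = 1$:

```python
import numpy as np

# Approximate the three norms of f(t) = t on [0, 1] with left Riemann sums.
t = np.linspace(0.0, 1.0, 100_001)
dt = t[1] - t[0]
f = t

norm_1 = np.sum(np.abs(f[:-1])) * dt                   # ~ 1/2
norm_2 = (np.sum(np.abs(f[:-1]) ** 2) * dt) ** 0.5     # ~ 1/sqrt(3)
norm_inf = np.max(np.abs(f))                           # sup norm = ess sup here
print(norm_1, norm_2, norm_inf)
```

For a continuous function the essential supremum coincides with the supremum, which is why the maximum over the grid is a faithful approximation.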
**Definition 1.1.2 (Equivalent norms)** Two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ defined on a linear space $X$ are said to be equivalent if there exist constants $\alpha > 0$ and $\beta > 0$ such that

$$\alpha \|x\|_1 \le \|x\|_2 \le \beta \|x\|_1, \quad \forall x \in X.$$
**Theorem 1.1.1** In a finite dimensional linear space, all norms are equivalent.

**Definition 1.1.3** Every normed linear space $E$ is canonically endowed with a metric $d$ defined on $E \times E$ by

$$d(x, y) = \|x - y\|, \quad \forall x, y \in E.$$
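Theorem 1.1.1 can be observed numerically (a sketch, not part of the original text; numpy assumed): in $\mathbb{R}^n$ the sup-norm and the Euclidean norm are equivalent with the explicit constants $\alpha = 1$, $\beta = \sqrt{n}$, i.e. $\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty$.

```python
import numpy as np

# Check ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf on random vectors.
rng = np.random.default_rng(0)
n = 5
ok = True
for _ in range(1000):
    x = rng.normal(size=n)
    inf_norm = float(np.max(np.abs(x)))
    two_norm = float(np.sqrt(np.sum(x ** 2)))
    ok = ok and inf_norm <= two_norm * (1 + 1e-12)
    ok = ok and two_norm <= np.sqrt(n) * inf_norm * (1 + 1e-12)
print(ok)
```

The tiny relative tolerances only guard against floating-point rounding; the inequalities themselves are exact.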
**Definition 1.1.4 (Cauchy sequence)** A sequence $(x_n)_{n \ge 1}$ of elements of a normed vector space $X$ is a Cauchy sequence if

$$\lim_{n, m \to \infty} \|x_n - x_m\| = 0.$$

That is, for any $\varepsilon > 0$ there is an integer $N = N(\varepsilon)$ such that $\|x_n - x_m\| < \varepsilon$ whenever $n \ge N$ and $m \ge N$.

**Remark.** In a normed linear space, every Cauchy sequence $(x_n)_{n \ge 1}$ is bounded; i.e., there exists a constant $M \ge 0$ such that $\|x_n\| \le M$ for all $n \ge 1$. (See also Definition 1.1.5 below.)
**Definition 1.1.5 (Convergent sequence)** A sequence $(x_n)_{n \ge 1}$ of elements of a normed vector space $X$ converges to an element $x \in X$ if

$$\lim_{n \to \infty} \|x_n - x\| = 0.$$

In such a case, we say that $(x_n)_{n \ge 1}$ is a convergent sequence.

**Remark.** In a normed linear space, every convergent sequence is a Cauchy sequence.

**Definition 1.1.6** A normed linear space $X$ is complete if every Cauchy sequence of $X$ converges in $X$. A complete normed linear space is called a Banach space.

**Remarks.** Every normed linear space has a completion. The notion of completeness is also defined for metric spaces, which need not have any linear structure.
**Example (Banach spaces).** The normed linear space $\big( C([0, 1]), \|\cdot\|_\infty \big)$ is a Banach space. Also, the space of all bounded linear maps from $\mathbb{R}$ to $\mathbb{R}$, denoted by $B(\mathbb{R})$, is a Banach space.

The completion of the normed linear space $\big( C([0, 1]), \|\cdot\|_2 \big)$, where $\|\cdot\|_2$ is defined by

$$\|f\|_2 = \left( \int_0^1 |f(t)|^2 \, dt \right)^{1/2},$$

is $L^2(0, 1)$ (see Definition 1.1.17).
#### 1.1.1 Linear operators

In this section $X$ and $Y$ are normed linear spaces over a field $\mathbb{K}$.

**Definition 1.1.7** A $\mathbb{K}$-linear operator $T$ from $X$ into $Y$ is a map $T : X \to Y$ satisfying

$$T(\alpha x + \beta y) = \alpha T x + \beta T y$$

for all $\alpha, \beta \in \mathbb{K}$ and all $x, y \in X$. When $Y = \mathbb{K}$, such a map is called a linear functional or a linear form.

**Proposition 1.1.1** The set of $\mathbb{K}$-linear maps from $X$ into $Y$ has a natural structure of linear space over $\mathbb{K}$ and is denoted by $L(X, Y)$. Note that $L(X, X)$ is simply denoted by $L(X)$.

**Proposition 1.1.2** The sum of two linear maps is a linear map, and the product of a linear operator by a scalar is also a linear operator. Let $X$, $Y$ and $Z$ be linear spaces. Then

$$f \in L(X, Y) \ \text{and} \ g \in L(Y, Z) \implies g \circ f \in L(X, Z).$$
**Theorem 1.1.2** Let $T \in L(X, Y)$. Then the following are equivalent:

i) $T$ is continuous at the origin (in the sense that if $\{x_n\}_n$ is a sequence in $X$ such that $x_n \to 0$ as $n \to \infty$, then $T(x_n) \to 0$ in $Y$ as $n \to \infty$).

ii) $T$ is Lipschitz, i.e., there exists a constant $K \ge 0$ such that $\|T(x)\| \le K \|x\|$ for every $x \in X$.

iii) The image of the closed unit ball, $T\big( \bar{B}_1(0) \big)$, is bounded.

**Definition 1.1.8** A linear operator $T : X \to Y$ is said to be bounded if there exists some $k \ge 0$ such that

$$\|T(x)\| \le k \|x\| \quad \text{for all } x \in X.$$

If $T$ is a bounded linear map, then the norm of $T$ is defined by

$$\|T\| = \inf \{ k : \|T(x)\| \le k \|x\|, \ \forall x \in X \}.$$

The set of bounded linear operators from $X$ into $Y$ is denoted $B(X, Y)$. If $X = Y$, one simply writes $B(X)$ for $B(X, X)$.
**Proposition 1.1.3** Suppose $X \ne \{0\}$ and $T \in B(X, Y)$. Then

$$\|T\| = \sup_{\|x\|_X \le 1} \|T(x)\|_Y = \sup_{\|x\|_X = 1} \|T(x)\|_Y = \sup_{x \ne 0} \frac{\|T(x)\|_Y}{\|x\|_X}.$$
#### 1.1.2 Differentiability in Banach spaces

Let $X$ and $Y$ be two real Banach spaces and $U$ be a subset of $X$ with nonempty interior.

**Definition 1.1.9 (Directional differentiability)** Let $f : U \to Y$ be a map, let $x_0$ be an interior point of $U$ and let $v \in X \setminus \{0\}$. The function $f$ is said to be differentiable at $x_0$ in the direction of the vector $v$ if the function $t \mapsto f(x_0 + tv)$ is differentiable at $t = 0$; that is, the function

$$t \mapsto \frac{f(x_0 + tv) - f(x_0)}{t}, \quad t \ne 0,$$

has a limit in the normed linear space $Y$ as $t$ tends to $0$. This limit, when it exists, is denoted by $f'(x_0; v)$ or $\frac{\partial f}{\partial v}(x_0)$.

Note that since $x_0$ is an interior point of $U$, if $t$ is sufficiently close to $0$ then $x_0 + tv \in U$, so $f(x_0 + tv)$ is well defined.
**Definition 1.1.10 (Gâteaux differentiability)** Let $f : U \to Y$ be a map and let $x_0$ be an interior point of $U$. The function $f$ is said to be Gâteaux differentiable at $x_0$ if:

1. $f$ is differentiable at $x_0$ in every direction $v \in X \setminus \{0\}$, and

2. there exists a bounded linear map $A \in B(X, Y)$ such that $f'(x_0; v) = A(v)$ for all $v \in X \setminus \{0\}$.

In this case the map $f'(x_0; \cdot)$ is called the Gâteaux differential of $f$ at $x_0$ and is denoted by $D_G f(x_0)$ or $f'_G(x_0)$. In other words, $f$ is Gâteaux differentiable at $x_0$ if there exists a bounded linear map $A \in B(X, Y)$ such that

$$\lim_{t \to 0} \frac{f(x_0 + tv) - f(x_0)}{t} = A(v), \quad \forall v \in X \setminus \{0\}.$$

From Definition 1.1.10 it is obvious that if a function is Gâteaux differentiable at a point, then it has a directional derivative in every direction at that point. The following example shows that the converse is not true.
**Example 1.** Let $f$ be the function defined from $\mathbb{R}^2$ into $\mathbb{R}$ by

$$f(x_1, x_2) = \begin{cases} \dfrac{x_1^3}{x_1^2 + x_2^2} & \text{if } (x_1, x_2) \ne (0, 0) \\[4pt] 0 & \text{otherwise.} \end{cases}$$

(1) The function $f$ has a directional derivative at the point $0 = (0, 0)$ in every direction. Indeed, for any $v = (v_1, v_2) \in \mathbb{R}^2 \setminus \{0\}$ we have $\frac{f(tv) - f(0)}{t} = f(v)$, so

$$f'(0; v) = \lim_{t \to 0} \frac{f(tv) - f(0)}{t} = f(v).$$

(2) The function $f$ is not Gâteaux differentiable at the point $0$, since the function $v \mapsto f'(0; v) = f(v)$ is not linear.
**Remark.** A function $f : U \to Y$ is Gâteaux differentiable at an interior point $x_0$ of $U$ if there exists a bounded linear mapping $A : X \to Y$ such that

$$\lim_{t \to 0^+} \frac{f(x_0 + th) - f(x_0)}{t} = A(h), \quad \forall h \in X.$$
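Example 1 can be checked numerically (a sketch, not from the original text). The difference quotient along any direction $v$ reproduces $f(v)$, yet $f(e_1) + f(e_2) \ne f(e_1 + e_2)$, so $v \mapsto f'(0; v)$ fails additivity:

```python
def f(x1, x2):
    # f has every directional derivative at 0, with f'(0; v) = f(v),
    # but v -> f(v) is not linear, so f is not Gateaux differentiable at 0.
    if (x1, x2) == (0.0, 0.0):
        return 0.0
    return x1 ** 3 / (x1 ** 2 + x2 ** 2)

# f is positively homogeneous of degree 1, so (f(tv) - f(0)) / t == f(v):
t, v = 1e-6, (1.0, 2.0)
deriv = (f(t * v[0], t * v[1]) - f(0.0, 0.0)) / t
print(deriv, f(*v))                              # both equal 1/5

# ...but additivity fails:
print(f(1.0, 0.0) + f(0.0, 1.0), f(1.0, 1.0))    # 1.0 versus 0.5
```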
**Definition 1.1.11 (Fréchet differentiability)** Let $f : U \to Y$ be a map. The function $f$ is said to be Fréchet differentiable at $p \in U$ if there exists a bounded linear mapping $T \in B(X, Y)$ such that

$$\lim_{h \to 0} \frac{\|f(p + h) - f(p) - T(h)\|}{\|h\|} = 0.$$

The bounded linear map $T \in B(X, Y)$ in this definition is uniquely determined; it is called the Fréchet differential of $f$ at $p$ and is denoted by $Df(p)$ or $f'(p)$ (sometimes also by $df_p$).

**Remark.** The function $f$ is said to be Fréchet differentiable on $U$ if it is Fréchet differentiable at every point of $U$. When the domain of $f$ is open and there is no ambiguity about it, we just say that $f$ is differentiable. It is easy to see that if $f$ is Fréchet differentiable at $x \in U$, then it is continuous at $x$.
**Proposition 1.1.4** Let $f : U \to Y$ be a map differentiable at $p \in U$. Then

$$df_p(x) = \lim_{t \to 0} \frac{f(p + tx) - f(p)}{t}, \quad \forall x \in X.$$

**Proof.** Let $x \in X$. If $x = 0$, Proposition 1.1.4 obviously holds, so suppose $x \ne 0$. For $t \ne 0$ sufficiently small, we have

$$\left\| df_p(x) - \frac{f(p + tx) - f(p)}{t} \right\| = \left\| \frac{df_p(tx) - f(p + tx) + f(p)}{t} \right\| = \|x\| \, \frac{\| df_p(tx) - f(p + tx) + f(p) \|}{\|tx\|}.$$

Since $f$ is differentiable at $p$, we have

$$\lim_{t \to 0} \frac{\| df_p(tx) - f(p + tx) + f(p) \|}{\|tx\|} = 0.$$

Therefore

$$df_p(x) = \lim_{t \to 0} \frac{f(p + tx) - f(p)}{t}. \qquad \square$$
**Theorem 1.1.3** If the map $f : U \to Y$ is Fréchet differentiable at $p \in U$, then it is Gâteaux differentiable at $p$ and $df_p = f'_G(p)$.

**Proof.** From Proposition 1.1.4 we have

$$df_p(x) = \lim_{t \to 0} \frac{f(p + tx) - f(p)}{t}, \quad \forall x \in X,$$

which implies $f'(p; x) = df_p(x)$ for all $x \in X$. Since $df_p$ is a bounded linear map, the function $f$ is Gâteaux differentiable at $p$ and $df_p = f'_G(p)$. $\square$
The following counterexample shows that the converse of Theorem 1.1.3 is not true.

**Example.** Let $f$ be the function defined from $\mathbb{R}^2$ into $\mathbb{R}$ by

$$f(x_1, x_2) = \begin{cases} \dfrac{x_1 x_2^4}{x_1^2 + x_2^8} & \text{if } (x_1, x_2) \ne (0, 0) \\[4pt] 0 & \text{otherwise.} \end{cases}$$

The function $f$ is Gâteaux differentiable at $(0, 0)$ but not Fréchet differentiable at $(0, 0)$. Indeed, let $x = (x_1, x_2) \in \mathbb{R}^2 \setminus \{0\}$ and $t \ne 0$; we have

$$\frac{f(tx) - f(0)}{t} = \begin{cases} 0 & \text{if } x_1 = 0 \\[4pt] \dfrac{t^2 x_1 x_2^4}{x_1^2 + t^6 x_2^8} & \text{otherwise.} \end{cases}$$

This implies that

$$\lim_{t \to 0} \frac{f(tx) - f(0)}{t} = 0.$$

Hence $f$ is Gâteaux differentiable at $(0, 0)$ and $f'_G(0) = 0$. However, on the curve $x_1 = x_2^4$ we have $f(x) = \frac{1}{2}$, which does not tend to zero as $x_2 \to 0$. So the function $f$ is not continuous at $(0, 0)$; hence it is not Fréchet differentiable at $(0, 0)$. This example shows that even Gâteaux differentiability does not guarantee continuity.
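The two claims in this example can be verified numerically (an illustrative sketch, not part of the original text): the difference quotient along straight lines vanishes, while along the curve $x_1 = x_2^4$ the function sticks at $1/2$.

```python
def f(x1, x2):
    # Gateaux differentiable at 0 with f'_G(0) = 0, yet not continuous there.
    if (x1, x2) == (0.0, 0.0):
        return 0.0
    return x1 * x2 ** 4 / (x1 ** 2 + x2 ** 8)

# Along the line x = (1, 1): the difference quotient tends to 0.
t = 1e-4
quotient = (f(t * 1.0, t * 1.0) - 0.0) / t
print(quotient)                       # very small

# Along the curve x1 = x2^4: f is identically 1/2, so f is not continuous at 0.
for x2 in (0.5, 0.1, 0.01):
    print(f(x2 ** 4, x2))             # 0.5 each time
```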
**Theorem 1.1.4 (Mean Value Theorem)** Let $a, b$ be two points in a nonempty open set $\Omega \subset X$ such that the segment $[a, b]$ is contained in $\Omega$, and let $f : \Omega \to Y$ be differentiable with $\sup_{z \in [a, b]} \|f'(z)\| < \infty$. Then

$$\|f(a) - f(b)\| \le M \|a - b\|,$$

where $M = \sup_{z \in [a, b]} \|f'(z)\|$.

**Proof.** Let $a, b \in \Omega$ with $[a, b] \subset \Omega$, and let $g \in Y^*$ with $\|g\| \le 1$. Consider the map $\varphi : [0, 1] \to \mathbb{R}$ defined by

$$\varphi(t) = g\big( f(a + t(b - a)) \big), \quad \forall t \in [0, 1].$$

Since $f$ is differentiable, $\varphi$ is also differentiable and

$$\varphi'(t) = g\big( f'(a + t(b - a))(b - a) \big),$$

with

$$\sup_{s \in (0, 1)} |\varphi'(s)| \le \|g\| \sup_{z \in [a, b]} \|f'(z)\| \, \|b - a\| \le M \|b - a\| < \infty.$$

Therefore, by the classical Mean Value Theorem in $\mathbb{R}$,

$$|\varphi(1) - \varphi(0)| \le \sup_{0 \le t \le 1} |\varphi'(t)| \le M \|b - a\|,$$

which yields the desired result by the Hahn-Banach Theorem, since for every $y \in Y$,

$$\|y\| = \sup \{ |T(y)| : T \in Y^*, \ \|T\| \le 1 \}.$$
**Definition 1.1.12** Let $U \subset X$ be a nonempty open set and $f : U \to Y$ a Fréchet differentiable mapping. The differential of $f$ on $U$ is the mapping

$$Df : U \to B(X, Y), \quad x \mapsto Df(x).$$

We say that $f$ is continuously differentiable on $U$ (or of class $C^1$) if $Df$ is a continuous mapping.

The following theorem gives a sufficient condition for a Gâteaux differentiable function to be Fréchet differentiable.

**Theorem 1.1.5** Assume that $f : U \to Y$ is Gâteaux differentiable on $U$ and that $f'_G$ is continuous from $U$ into $B(X, Y)$. Then $f$ is Fréchet differentiable and its Fréchet and Gâteaux differentials coincide.
**Proof.** Let $p \in U$. Then there exists $r > 0$ such that $B(p, r) \subset U$. Let $x \in X$ with $\|x\| < r$. Consider the map $\varphi : [0, 1] \to Y$ defined by

$$\varphi(t) = f(p + tx) - f(p) - t f'_G(p)(x).$$

Since $f$ is Gâteaux differentiable, $\varphi$ is differentiable and

$$\varphi'(t) = f'_G(p + tx)(x) - f'_G(p)(x).$$

Observe that $\varphi'$ is bounded, and applying the Mean Value Theorem to $\varphi$ gives

$$\|\varphi(1) - \varphi(0)\| \le \sup_{0 \le t \le 1} \|\varphi'(t)\|.$$

This is equivalent to

$$\|f(p + x) - f(p) - f'_G(p)(x)\| \le \sup_{0 \le t \le 1} \|f'_G(p + tx)(x) - f'_G(p)(x)\| \le \sup_{0 \le t \le 1} \|f'_G(p + tx) - f'_G(p)\| \, \|x\|.$$

Let $\varepsilon > 0$. By continuity of the mapping $u \mapsto f'_G(u)$, there exists $\delta > 0$ such that

$$\|p - y\| < \delta \implies \|f'_G(y) - f'_G(p)\| < \varepsilon.$$

If $\|x\| < \min\{r, \delta\}$, then for every $t \in [0, 1]$, $p + tx \in B(p, \delta)$. This implies that

$$\|f'_G(p + tx) - f'_G(p)\| < \varepsilon, \quad \forall t \in [0, 1].$$

Hence $\sup_{0 \le t \le 1} \|f'_G(p + tx) - f'_G(p)\| \le \varepsilon$. Since $\varepsilon$ is arbitrarily chosen, we deduce that

$$\lim_{x \to 0} \Big( \sup_{0 \le t \le 1} \|f'_G(p + tx) - f'_G(p)\| \Big) = 0,$$

implying that

$$\lim_{x \to 0} \frac{\|f(p + x) - f(p) - f'_G(p)(x)\|}{\|x\|} = 0.$$

Hence the function $f$ is Fréchet differentiable at $p$. $\square$
#### 1.1.3 Fundamental theorems

In this section we state some fundamental theorems that will be useful in the study of our theory.

**Definition 1.1.13** Let $X$ and $Y$ be Banach spaces.

i) A function $g : [0, T] \times X \to Y$ is globally Lipschitz on $D \subset X$ (uniformly in $t$) if there exists a positive constant $K = K(g)$ (independent of $t$) such that

$$\|g(t, x) - g(t, y)\|_Y \le K \|x - y\|_X, \quad \forall t \in [0, T] \ \text{and} \ x, y \in D.$$

A function $f : X \to Y$ is locally Lipschitz on $X$ if for every $x_0 \in X$ and $r > 0$ there exists $k = k(x_0, r)$ such that

$$\|f(x) - f(y)\| \le k \|x - y\|, \quad \forall x, y \in B_X(x_0, r).$$

ii) A mapping $\Phi : X \to X$ is a contraction if there exists $0 < \alpha < 1$ such that

$$\|\Phi(x) - \Phi(y)\| \le \alpha \|x - y\|, \quad \forall x, y \in X.$$

**Theorem 1.1.6 (Contraction Mapping Theorem)** If $\Phi : X \to X$ is a contraction, then there exists a unique $z \in X$ such that $\Phi(z) = z$ (i.e., $\Phi$ has a unique fixed point).
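A minimal numerical sketch of the Contraction Mapping Theorem (not from the original text): take $\Phi(x) = x/2 + 1$ on $\mathbb{R}$, a contraction with constant $\alpha = 1/2$ whose unique fixed point is $z = 2$; Picard iteration converges to it from any starting point.

```python
def phi(x):
    # A contraction on R with constant 1/2; its unique fixed point is z = 2.
    return 0.5 * x + 1.0

x = 100.0            # arbitrary starting point
for _ in range(60):  # Picard iteration x_{n+1} = phi(x_n)
    x = phi(x)
print(x)             # converges geometrically to the fixed point 2
```

The error contracts by the factor $\alpha$ at every step, which is exactly the mechanism behind the existence proofs for mild solutions later in the text.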
**Theorem 1.1.7 (Uniform Boundedness Principle)** Let $X$ and $Y$ be Banach spaces. Suppose that

i) $\{T_\lambda\}_{\lambda \in \Lambda} \subset B(X, Y)$;

ii) $\forall x \in X$, $\sup_{\lambda \in \Lambda} \|T_\lambda(x)\| < \infty$.

Then $\sup_{\lambda \in \Lambda} \|T_\lambda\| < \infty$.

**Theorem 1.1.8 (Closed Graph Theorem)** Let $X$ and $Y$ be two Banach spaces and let $T$ be a linear map from $X$ into $Y$. Assume that the graph of $T$, $G_T$, is closed in $X \times Y$. Then $T$ is continuous.

**Definition 1.1.14** i) Let $X$ be a Banach space and $D$ a subset of $X$. We say that $D$ is precompact if the closure of $D$ in $X$ is compact.

ii) A set $F \subset C([a, b]; X)$ is equicontinuous at $t_0 \in [a, b]$ if $\forall \varepsilon > 0$, $\exists \delta > 0$ (depending on $t_0$ and $\varepsilon$) such that

$$t \in [a, b] \ \text{with} \ |t - t_0| < \delta \implies \|f(t_0) - f(t)\| < \varepsilon, \quad \forall f \in F.$$

iii) Let $X$ and $Y$ be Banach spaces. An operator $A : X \to Y$ is compact if for every bounded set $D \subset X$, the image $A(D)$ is precompact in $Y$.
**Definition 1.1.15** A sequence of continuous functions $f_n : [a, b] \to \mathbb{R}$ (i.e., $f_n \in C([a, b])$) is called equicontinuous if

$$\forall \varepsilon > 0, \ \exists \delta > 0 : \ |s - t| < \delta \implies |f_n(s) - f_n(t)| < \varepsilon \ \text{for every} \ n \ge 1.$$

**Theorem 1.1.9 (Arzelà-Ascoli Theorem)** Any bounded equicontinuous sequence $\{f_n\}$ in $C([a, b])$ has a uniformly convergent subsequence.

We now state a version of the above theorem in $C([a, b]; X)$ which will be useful in the dissertation.

**Theorem 1.1.10 (Arzelà-Ascoli in $C([a, b]; X)$)** A set $K \subset C([a, b]; X)$ is precompact if and only if for every $t \in [a, b]$ the set $\{z(t) \mid z \in K\} \subset X$ is precompact in $X$ and $K$ is equicontinuous at $t$.

The following theorem will be used to prove the existence of a mild solution of the Cauchy problem when the forcing term does not satisfy a Lipschitz-type condition and $A$ generates a compact semigroup.

**Theorem 1.1.11 (Schaefer's Fixed Point Theorem)** Let $X$ be a Banach space, let $\Phi : X \to X$ be a continuous, compact operator, and let

$$\zeta(\Phi) = \{ x \in X : x = \lambda \Phi(x) \ \text{for some} \ 0 \le \lambda \le 1 \}.$$

If $\zeta(\Phi)$ is bounded, then $\Phi$ has at least one fixed point.
**Exponential Maps**

We define and give some regularity properties of the exponential map with respect to bounded linear operators. Let $E$ be a Banach space throughout this section. It is not hard to see that for every bounded linear operator $A$ on $E$, the series

$$I + \sum_{n=1}^{\infty} \frac{A^n}{n!}$$

converges absolutely in $B(E)$. For simplicity we write $\sum_{n=0}^{\infty} \frac{A^n}{n!}$ for this series, and we have the following definition.

**Definition 1.1.16** The exponential of any bounded linear operator $A$ is defined by

$$\exp(A) = e^A := \sum_{n=0}^{\infty} \frac{A^n}{n!}.$$
**Theorem 1.1.12** We have the following:

i) $e^0 = I$.

ii) $A^m e^A = e^A A^m$ for all $A \in B(E)$ and every natural number $m$.

iii) For all $A, B \in B(E)$ such that $AB = BA$, we have $A e^B = e^B A$ and $e^A e^B = e^B e^A = e^{A+B}$.

**Corollary 1.1.1** Let $A \in B(E)$ and let $t, s \in \mathbb{R}$. Then

$$e^{(t+s)A} = e^{sA} e^{tA}.$$
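Definition 1.1.16 and the identity of Corollary 1.1.1 can be checked numerically with a truncated series (a sketch, not from the original text; `exp_series` is an illustrative helper, numpy assumed):

```python
import numpy as np

def exp_series(A, terms=30):
    # Truncated exponential series sum_{n=0}^{terms} A^n / n!  (Definition 1.1.16).
    E = np.eye(A.shape[0])
    P = np.eye(A.shape[0])
    for n in range(1, terms + 1):
        P = P @ A / n          # P = A^n / n!
        E = E + P
    return E

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent, so e^{tA} = I + tA exactly
t, s = 0.3, 0.7
lhs = exp_series((t + s) * A)
rhs = exp_series(s * A) @ exp_series(t * A)
print(np.allclose(lhs, rhs))             # e^{(t+s)A} = e^{sA} e^{tA}
```

Since $tA$ and $sA$ always commute, the corollary is a direct instance of part (iii) of the theorem.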
**Proof of Theorem 1.1.12.** We prove only (iii), since it implies (ii), while (i) follows immediately from the definition. Assume that $A, B \in B(E)$ satisfy $AB = BA$. Then

$$\begin{aligned}
A e^B &= A \sum_{k=0}^{\infty} \frac{B^k}{k!} = A \Big( \lim_{n \to \infty} \sum_{k=0}^{n} \frac{B^k}{k!} \Big) \\
&= \lim_{n \to \infty} \sum_{k=0}^{n} \frac{A B^k}{k!}, \quad \text{by continuity and linearity of } A \\
&= \lim_{n \to \infty} \sum_{k=0}^{n} \frac{B^k A}{k!}, \quad \text{by commutativity of } A \text{ and } B \\
&= \Big( \sum_{k=0}^{\infty} \frac{B^k}{k!} \Big) A = e^B A.
\end{aligned}$$
Moreover it follows, by mathematical induction, that $A^k e^B = e^B A^k$ for all $k \in \mathbb{N}$. Thus

$$\begin{aligned}
e^A e^B &= \Big( \sum_{k=0}^{\infty} \frac{A^k}{k!} \Big) e^B = \lim_{m \to \infty} \Big( \sum_{k=0}^{m} \frac{A^k}{k!} \Big) e^B = \lim_{m \to \infty} \sum_{k=0}^{m} \frac{A^k}{k!} \, e^B \\
&= \lim_{m \to \infty} \sum_{k=0}^{m} \frac{A^k}{k!} \sum_{p=0}^{\infty} \frac{B^p}{p!} = \lim_{m \to \infty} \sum_{k=0}^{m} \sum_{p=0}^{\infty} \frac{A^k B^p}{k! \, p!}, \quad \text{by continuity and linearity} \\
&= \sum_{k=0}^{\infty} \sum_{p=0}^{\infty} \frac{A^k B^p}{k! \, p!} = \sum_{n=0}^{\infty} \sum_{k+p=n} \frac{A^k B^p}{k! \, p!}, \quad \text{since the series converge absolutely} \\
&= \sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{A^k B^{n-k}}{k! \, (n-k)!} = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{k=0}^{n} \binom{n}{k} A^k B^{n-k} \\
&= \sum_{n=0}^{\infty} \frac{(A + B)^n}{n!}, \quad \text{by the commutativity of } A \text{ and } B \\
&= e^{A+B}. \qquad \square
\end{aligned}$$
**Theorem 1.1.13** Let $A \in B(E)$, let $t$ be a real variable, let $x \in E$ and set $f(t) = e^{tA} x$. Then $f$ is differentiable for every $x \in E$ and $f'(t) = A e^{tA} x$.

**Theorem 1.1.14** The exponential map is continuously differentiable from $B(E)$ into $B(E)$.

**Theorem 1.1.15 (Existence and Uniqueness)** Given any bounded linear operator $A$ on $E$ and any $U_0 \in E$, the homogeneous Cauchy problem

$$\begin{cases} U'(t) = A U(t), & t > 0 \\ U(0) = U_0 \end{cases} \tag{1.1}$$

has a unique solution $U \in C([0, \infty); E) \cap C^1((0, \infty); E)$, given by $U(t) = e^{tA} U_0$.
**Proof.** Let $U_0 \in E$.

*Existence.* We must verify that $U(t) = e^{tA} U_0$ is continuous, differentiable, and satisfies (1.1). We know that for every $x_0 \in E$ the map $g : [0, \infty) \to E$ defined by $g(t) = e^{tA} x_0$ is continuous, so $U \in C([0, \infty); E)$; and since $g \in C^1([0, \infty); E)$, we also have $U \in C^1([0, \infty); E)$. By Theorem 1.1.13, $U$ satisfies the O.D.E. in (1.1), and

$$U(0) = e^0 U_0 = U_0,$$

so the initial condition holds. This establishes existence.

*Uniqueness.* Let $V : [0, \infty) \to E$ be a solution of the initial value problem

$$\begin{cases} V'(t) = A V(t), & t > 0 \\ V(0) = U_0. \end{cases}$$

Fix $t_0 > 0$ and define the function $\varphi : [0, t_0] \to E$ by

$$\varphi(s) = e^{(t_0 - s)A} V(s).$$

Since $t \mapsto e^{tA}$ is differentiable for all $t \ge 0$, so is $\varphi$, and for all $s \in [0, t_0]$

$$\begin{aligned}
\frac{d}{ds} \varphi(s) &= e^{(t_0 - s)A} V'(s) + \Big( \frac{d}{ds} e^{(t_0 - s)A} \Big) V(s) \\
&= e^{(t_0 - s)A} V'(s) - e^{(t_0 - s)A} A V(s) \\
&= e^{(t_0 - s)A} V'(s) - e^{(t_0 - s)A} V'(s) = 0.
\end{aligned}$$

So $\varphi$ is a constant function, and therefore $\varphi(0) = \varphi(t_0)$, i.e. $e^{t_0 A} U_0 = V(t_0)$, that is $U(t_0) = V(t_0)$ for every $t_0 > 0$, showing uniqueness and completing the proof. $\square$

**Remark.** The above results clearly hold when $E = \mathbb{R}^N$ (a finite dimensional space) and $A$ is a square matrix of order $N$.
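In the finite dimensional setting of the remark, Theorem 1.1.15 can be checked numerically (a sketch, not from the original text; `exp_series` is an illustrative truncated-series helper): the residual $U'(t) - AU(t)$ for $U(t) = e^{tA}U_0$ should be negligible.

```python
import numpy as np

def exp_series(A, terms=30):
    # Truncated series for e^A; accurate for small matrices of moderate norm.
    E, P = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms + 1):
        P = P @ A / n
        E = E + P
    return E

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation generator on R^2
U0 = np.array([1.0, 0.0])
U = lambda t: exp_series(t * A) @ U0      # candidate solution U(t) = e^{tA} U0

# Central difference approximation of U'(1) compared against A U(1):
h = 1e-6
deriv = (U(1 + h) - U(1 - h)) / (2 * h)
residual = np.max(np.abs(deriv - A @ U(1.0)))
print(residual)                            # tiny: U solves U' = A U
```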
#### 1.1.4 Riemann Integration of functions with values in Banach spaces

Here we define the Riemann integral for abstract functions of a real variable and discuss some properties which will be used constantly in this project. By an abstract function we mean a function $x$ defined on an interval $[a, b]$ of $\mathbb{R}$ ($a \le b$) with values in a Banach space $X$. Let $x : [a, b] \to X$ be an abstract function.

We say that $x$ is continuous at $t_0 \in [a, b]$ if

$$\|x(t) - x(t_0)\| \to 0 \ \text{as} \ t \to t_0 \ \text{in} \ [a, b].$$

The function $x$ is continuous on $[a, b]$ if it is continuous at each point of $[a, b]$. We denote the partition

$$a = t_0 < t_1 < t_2 < \dots < t_n = b,$$

together with chosen points $\xi_i \in [t_i, t_{i+1}]$, $i = 0, 1, 2, \dots, n - 1$, by $\sigma$, and set $|\sigma| = \max_i |t_{i+1} - t_i|$. We form the Riemann sum associated to $\sigma$:

$$S_\sigma = \sum_{i=0}^{n-1} (t_{i+1} - t_i) \, x(\xi_i).$$

By varying $\sigma$, if $S_\sigma$ has a limit $I$ in $X$ as $|\sigma| \to 0$, then $I$ is called the Riemann integral of the function $x$ and is denoted by

$$I = \int_a^b x(t) \, dt.$$

In such a case we also define

$$\int_b^a x(t) \, dt := - \int_a^b x(t) \, dt.$$

**Theorem 1.1.16** Let $a < b$ in $\mathbb{R}$. If $x \in C([a, b]; X)$, then the Riemann integral $\int_a^b x(t) \, dt$ exists.
**Properties.** Using the definition of the Riemann integral one can easily verify the following properties for a continuous function $x : [a, b] \to X$, where $a < b$ are real numbers.

i) $\int_a^b x(t) \, dt = \int_a^c x(t) \, dt + \int_c^b x(t) \, dt$ for $a < c < b$, provided that one of the integrals on the right exists.

ii) If $x(t) = x_0$ for all $t \in [a, b]$, then $\int_a^b x_0 \, dt = (b - a) x_0$.

iii) If $t = w(\tau)$ is an increasing continuously differentiable function on $[\alpha, \beta]$ with $a = w(\alpha)$ and $b = w(\beta)$, then

$$\int_a^b x(t) \, dt = \int_\alpha^\beta x[w(\tau)] \, w'(\tau) \, d\tau.$$

iv) $\displaystyle \Big\| \int_a^b x(t) \, dt \Big\| \le \int_a^b \|x(t)\| \, dt$.

Indeed, from the definition of the Riemann sum we have

$$\|S_\sigma\| = \Big\| \sum_{i=0}^{n-1} (t_{i+1} - t_i) \, x(\xi_i) \Big\| \le \sum_{i=0}^{n-1} (t_{i+1} - t_i) \|x(\xi_i)\| \to \int_a^b \|x(t)\| \, dt \ \text{as} \ |\sigma| \to 0,$$

since $\|x(t)\|$ is continuous and hence integrable on $[a, b]$.
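The definition can be made concrete numerically (a sketch, not from the original text): for the $X = \mathbb{R}^2$-valued function $x(t) = (t, t^2)$ on $[0, 1]$, the Riemann sums $S_\sigma$ approach the componentwise integral $(1/2, 1/3)$ as $|\sigma| \to 0$.

```python
import numpy as np

def riemann_sum(n):
    # Riemann sum S_sigma for x(t) = (t, t^2) on [0, 1] with a uniform
    # partition a = t_0 < ... < t_n = b and left-endpoint tags xi_i = t_i.
    t = np.linspace(0.0, 1.0, n + 1)
    xi = t[:-1]
    values = np.stack([xi, xi ** 2], axis=1)          # x(xi_i) in R^2
    return ((t[1:] - t[:-1])[:, None] * values).sum(axis=0)

S = riemann_sum(100_000)
print(S)   # close to the integral (1/2, 1/3)
```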
**Theorem 1.1.17 (Dominated convergence theorem)** Let $\{x_n\}_{n \ge 1}$ be a sequence of abstract measurable functions which converges to an abstract function $x$ on the interval $[a, b]$, and suppose that $\|x_n(t)\| \le y(t)$ almost everywhere on $[a, b]$ for all $n$, where $y$ is a nonnegative integrable function on $[a, b]$. Then

$$\lim_{n \to \infty} \int_a^b x_n(t) \, dt = \int_a^b x(t) \, dt.$$

**Proof.** We have

$$\Big\| \int_a^b x_n(t) \, dt - \int_a^b x(t) \, dt \Big\| \le \int_a^b \|x_n(t) - x(t)\| \, dt \le \max_{[a, b]} \|x_n(t) - x(t)\| \, (b - a) \to 0 \ \text{as} \ n \to \infty$$

(note that the last estimate uses uniform convergence of $x_n$ to $x$), which proves the desired result.
**Theorem 1.1.18** If $x \in C([a, b]; X)$, then

$$\frac{d}{dt} \Big( \int_a^t x(s) \, ds \Big) = x(t), \quad a \le t \le b.$$

**Proof.** Set

$$y(t) = \int_a^t x(s) \, ds.$$

Since $x$ is continuous on the closed and bounded interval $[a, b]$, it is uniformly continuous there, and we have

$$\begin{aligned}
\Big\| \frac{y(t + h) - y(t)}{h} - x(t) \Big\| &= \Big\| \frac{1}{h} \Big[ \int_a^{t+h} x(s) \, ds - \int_a^t x(s) \, ds \Big] - x(t) \Big\| \\
&= \Big\| \frac{1}{h} \Big[ \int_a^{t+h} x(s) \, ds + \int_t^a x(s) \, ds - \int_t^{t+h} x(t) \, ds \Big] \Big\|, \quad \text{by properties (i) and (ii)} \\
&= \Big\| \frac{1}{h} \int_t^{t+h} \big( x(s) - x(t) \big) \, ds \Big\| \\
&\le \max_{|s - t| \le |h|} \|x(s) - x(t)\| \to 0 \ \text{as} \ h \to 0,
\end{aligned}$$

and the proof is complete. $\square$
**Theorem 1.1.19 (Fundamental theorem of calculus in Banach spaces)** If the function $x : [a, b] \to X$ is continuously differentiable on $(a, b)$, then for any $\alpha, \beta \in (a, b)$ the following formula holds:

$$\int_\alpha^\beta x'(s) \, ds = x(\beta) - x(\alpha).$$

**Proof.** By Theorem 1.1.18,

$$\frac{d}{dt} \Big[ \int_\alpha^t x'(s) \, ds - x(t) \Big] = 0, \quad \alpha \le t \le \beta.$$

Hence

$$\int_\alpha^t x'(s) \, ds - x(t) = \text{const.} \tag{1.2}$$

For $t = \alpha$ we find the value $\text{const} = -x(\alpha)$, and the result follows by setting $t = \beta$ in the above identity. $\square$
The following theorem, which asserts that integration commutes with closed operators (in particular, with bounded linear operators), will be useful later in the project.

**Theorem 1.1.20** Let $A$ with domain $D(A)$ be a closed operator in the Banach space $X$ and let $x \in C([a, b); X)$ with $b \le \infty$. Suppose that $x(t) \in D(A)$, that $A x(t)$ is continuous on $[a, b)$, and that the (improper) integrals $\int_a^b x(t) \, dt$ and $\int_a^b A x(t) \, dt$ exist. Then

$$\int_a^b x(t) \, dt \in D(A) \quad \text{and} \quad A \int_a^b x(t) \, dt = \int_a^b A x(t) \, dt.$$
#### 1.1.5 Gronwall Lemma, Differential Inequality

We now state a result (where the above Riemann integral is used) of which we will make use in the sequel.

**Lemma 1.1.1 (Gronwall's lemma)** Let $t_0 \in (-\infty, T)$, $x \in C([t_0, T]; \mathbb{R})$, $K \in C([t_0, T]; [0, +\infty))$ and let $M$ be a real constant. Suppose

$$x(t) \le M + \int_{t_0}^t K(s) x(s) \, ds, \quad t_0 \le t \le T.$$

Then

$$x(t) \le M e^{\int_{t_0}^t K(s) \, ds}, \quad t_0 \le t \le T.$$

**Proof.** Define $y : [t_0, T] \to \mathbb{R}$ by

$$y(t) = M + \int_{t_0}^t K(s) x(s) \, ds.$$

Observe that $y \in C^1([t_0, T]; \mathbb{R})$ and satisfies the IVP

$$\begin{cases} y'(t) = K(t) x(t), & t_0 \le t \le T \\ y(t_0) = M. \end{cases} \tag{1.3}$$

By assumption, $x(t) \le y(t)$ for $t_0 \le t \le T$. Multiplying both sides of this inequality by $K(t) \ge 0$ and substituting into the above IVP yields

$$\begin{cases} y'(t) \le K(t) y(t), & t_0 \le t \le T \\ y(t_0) = M. \end{cases} \tag{1.4}$$

Hence

$$y'(t) - K(t) y(t) \le 0, \quad t_0 \le t \le T,$$

so that multiplying both members by $e^{-\int_{t_0}^t K(s) \, ds} > 0$ yields

$$\frac{d}{dt} \Big[ e^{-\int_{t_0}^t K(s) \, ds} \, y(t) \Big] \le 0, \quad t_0 \le t \le T.$$

Consequently,

$$e^{-\int_{t_0}^t K(s) \, ds} \, y(t) \le y(t_0) = M, \quad t_0 \le t \le T,$$

and so $y(t) \le M e^{\int_{t_0}^t K(s) \, ds}$ for $t_0 \le t \le T$. Since $x(t) \le y(t)$ on $[t_0, T]$, the conclusion follows. $\square$
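Gronwall's lemma can be sanity-checked on a grid (an illustrative sketch, not from the original text): with $t_0 = 0$, $M = 1$, $K \equiv 1$ and $x(t) = e^{t/2}$, the hypothesis $x(t) \le 1 + \int_0^t x(s)\,ds$ holds, and the lemma's conclusion $x(t) \le e^t$ is confirmed pointwise.

```python
import numpy as np

# Grid check of Gronwall's lemma for M = 1, K(s) = 1, x(t) = e^{t/2} on [0, 2].
t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
x = np.exp(0.5 * t)

# Hypothesis: x(t) <= M + \int_0^t K(s) x(s) ds  (cumulative left Riemann sum;
# the 1e-6 slack only absorbs quadrature error).
integral = np.concatenate([[0.0], np.cumsum(x[:-1]) * dt])
hyp = bool(np.all(x <= 1.0 + integral + 1e-6))

# Conclusion: x(t) <= M exp(\int_0^t K(s) ds) = e^t.
concl = bool(np.all(x <= np.exp(t)))
print(hyp, concl)
```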
#### 1.1.6 Function Spaces with Values in a Banach Space

Let $X$ be a Banach space with norm $\|\cdot\|$. We introduce various function spaces consisting of functions defined on an interval of $\mathbb{R}$, or on a domain of $\mathbb{C}$, with values in $X$.

**Uniformly Bounded Function Spaces**

Let $[a, b]$ be a bounded closed interval. By $B([a, b]; X)$ we denote the space of uniformly bounded functions on $[a, b]$ (not necessarily smooth or measurable). It is a Banach space with the supremum norm

$$\|F\|_B = \sup_{a \le t \le b} \|F(t)\|.$$

**Continuously Differentiable Function Spaces**

Let $[a, b]$ be a bounded closed interval, and let $m = 0, 1, 2, \dots$ be a nonnegative integer. $C^m([a, b]; X)$ denotes the space of $m$ times continuously differentiable functions on $[a, b]$. When $m = 0$, $C^0([a, b]; X)$ is the space of continuous functions and is simply denoted by $C([a, b]; X)$. The space is equipped with the norm

$$\|F\|_{C^m} = \sum_{i=0}^{m} \max_{a \le t \le b} \|F^{(i)}(t)\|.$$

Let us recall that for any continuous function $F \in C([a, b]; X)$, the Riemann integral of $F$ on the interval $[a, b]$ is defined, is denoted by $\int_a^b F(t) \, dt$, and satisfies

$$\Big\| \int_a^b F(t) \, dt \Big\| \le \int_a^b \|F(t)\| \, dt.$$

**Hölder Continuous Function Spaces**

For $m = 0, 1, 2, \dots$ and an exponent $0 < \theta < 1$, $C^{m+\theta}([a, b]; X)$ denotes the space of $m$-times continuously differentiable functions whose $m$th derivatives are Hölder continuous on $[a, b]$ with exponent $\theta$. The space is equipped with the norm

$$\|F\|_{C^{m+\theta}} = \|F\|_{C^m} + \sup_{a \le s < t \le b} \frac{\|F^{(m)}(t) - F^{(m)}(s)\|}{|t - s|^\theta}.$$
**$L^p$ and Sobolev Spaces**

In the sequel, $\Omega$ is a nonempty open subset of $\mathbb{R}^n$ with Lebesgue measure $dx$.

**Definition 1.1.17** Let $1 \le p < \infty$. We define:

(i) $L^p(\Omega)$ as the set of measurable functions $f : \Omega \to \mathbb{R}$ such that

$$\int_\Omega |f(x)|^p \, dx < +\infty,$$

and

(ii) $L^\infty(\Omega)$ as the set of measurable functions $f : \Omega \to \mathbb{R}$ such that

$$\operatorname{ess\,sup} |f| < +\infty, \quad \text{where} \quad \operatorname{ess\,sup} |f| = \inf \{ K \ge 0 : |f(x)| \le K \ \text{for a.e.} \ x \in \Omega \}.$$

For $f \in L^p(\Omega)$ we define

$$\|f\|_p = \Big( \int_\Omega |f(x)|^p \, dx \Big)^{1/p}, \quad 1 \le p < +\infty, \tag{1.5}$$

and

$$\|f\|_\infty = \operatorname{ess\,sup} |f|. \tag{1.6}$$

**Theorem 1.1.21 (Hölder's Inequality)** Let $1 \le p \le +\infty$ and define $q$ by $1/p + 1/q = 1$. If $f \in L^p(\Omega)$ and $g \in L^q(\Omega)$, then $fg \in L^1(\Omega)$ and

$$\|fg\|_1 \le \|f\|_p \|g\|_q. \tag{1.7}$$
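Hölder's inequality (1.7) can be checked numerically on a discretized domain (a sketch, not from the original text; the specific functions are arbitrary choices):

```python
import numpy as np

# Holder's inequality on Omega = (0, 1) with p = 3, q = 3/2 (1/p + 1/q = 1),
# all integrals approximated by left Riemann sums.
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]
f = np.sin(3 * x) + 1.2       # arbitrary sample functions on (0, 1)
g = x ** 2

p, q = 3.0, 1.5
lhs = np.sum(np.abs(f * g)) * dx                                   # ||f g||_1
rhs = (np.sum(np.abs(f) ** p) * dx) ** (1 / p) \
    * (np.sum(np.abs(g) ** q) * dx) ** (1 / q)                     # ||f||_p ||g||_q
print(lhs <= rhs)
```

Equality would require $|f|^p$ and $|g|^q$ to be proportional, which is not the case here, so the inequality holds with a visible margin despite quadrature error.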
**Definition 1.1.18 (Weak Derivative)** Let $u \in L^1_{loc}(\Omega)$. For a given multi-index $\alpha \in \mathbb{N}^n$, a function $v \in L^1_{loc}(\Omega)$ is called the $\alpha$th weak derivative of $u$ if

$$\int_\Omega u \, D^\alpha \varphi \, dx = (-1)^{|\alpha|} \int_\Omega v \, \varphi \, dx, \quad \forall \varphi \in \mathcal{D}(\Omega), \tag{1.8}$$

where $\mathcal{D}(\Omega)$ is the space of all $C^\infty$ functions with compact support in $\Omega$, and $L^1_{loc}(\Omega)$ is the set of all functions that are integrable on every compact subset of $\Omega$.

The function $v$ is also referred to as the generalized derivative of $u$, and we write $v = D^\alpha u$. Clearly $D^\alpha u$ is uniquely determined up to a set of Lebesgue measure zero.
**Definition 1.1.19** Let $m$ be a nonnegative integer and $1 \le p \le \infty$. The Sobolev space $W^{m,p}(\Omega)$ is defined by

$$W^{m,p}(\Omega) = \{ u \in L^p(\Omega) \mid D^\alpha u \in L^p(\Omega) \ \text{for all} \ \alpha \in \mathbb{N}^n : |\alpha| \le m \}. \tag{1.9}$$

In other words, $W^{m,p}(\Omega)$ is the collection of all functions in $L^p(\Omega)$ for which all weak derivatives up to order $m$ are also in $L^p(\Omega)$. Clearly $W^{m,p}(\Omega)$ is a vector space. (In all that follows we consider functions with values in $\mathbb{R}$ and the corresponding function spaces as vector spaces over $\mathbb{R}$.) We provide it with the norm

$$\|u\|_{W^{m,p}(\Omega)} = \sum_{|\alpha| \le m} \|D^\alpha u\|_{L^p(\Omega)}, \tag{1.10}$$

or, equivalently, for $1 \le p < \infty$,

$$\|u\|_{W^{m,p}(\Omega)} = \Big( \sum_{|\alpha| \le m} \|D^\alpha u\|^p_{L^p(\Omega)} \Big)^{1/p}. \tag{1.11}$$

The case $p = 2$ will play a special role in this project. These spaces are denoted by $H^m(\Omega)$. Thus

$$H^m(\Omega) = W^{m,2}(\Omega), \tag{1.12}$$

and for $u \in H^m(\Omega)$ its norm is denoted by

$$\|u\|_{H^m(\Omega)} = \|u\|_{W^{m,2}(\Omega)}. \tag{1.13}$$

The spaces $H^m(\Omega)$ are Hilbert spaces with the natural inner product defined by

$$(u, v)_{H^m} = \sum_{|\alpha| \le m} \int_\Omega D^\alpha u \, D^\alpha v \, dx, \quad \forall u, v \in H^m(\Omega). \tag{1.14}$$

This inner product yields the norm given by formula (1.11).

Finally, we introduce an important subspace of the space $W^{m,p}(\Omega)$. If $1 \le p < \infty$, we know that $\mathcal{D}(\Omega)$ is dense in $L^p(\Omega)$. Also, if $\varphi \in \mathcal{D}(\Omega)$, then so is every derivative of $\varphi$, and so $\mathcal{D}(\Omega) \subset W^{m,p}(\Omega)$ for any $m$ and $p$. If $1 \le p < \infty$, we define the space $W^{m,p}_0(\Omega)$ as the closure of $\mathcal{D}(\Omega)$ in $W^{m,p}(\Omega)$. Thus $W^{m,p}_0(\Omega)$ is a closed subspace of $W^{m,p}(\Omega)$, and its elements can be approximated in the $W^{m,p}(\Omega)$-norm by $C^\infty$ functions with compact support. In general this is a strict subspace of $W^{m,p}(\Omega)$, except when $\Omega = \mathbb{R}^n$.
**Sobolev Embedding**

The following theorem will be very useful in the study of some particular types of evolution equations.

**Theorem 1.1.22 (Sobolev embedding theorem)** Let $\Omega$ be a bounded open subset of $\mathbb{R}^N$ with $C^1$ boundary. Then:

1) If $1 \le p < N$ and $p \le q \le p^*$ (where $p^* = \frac{Np}{N - p}$), then $W^{1,p}(\Omega) \subset L^q(\Omega)$ with continuous embedding.

2) If $p = N$, then $W^{1,p}(\Omega) \subset L^q(\Omega)$ for all $q \in [p, \infty)$ with continuous embedding.

3) If $p > N$, then $W^{1,p}(\Omega) \subset L^\infty(\Omega)$ with continuous embedding. Moreover, for all $u \in W^{1,p}(\Omega)$ and all $x, y \in \Omega$ we have

$$|u(x) - u(y)| \le C \|u\|_{W^{1,p}(\Omega)} |x - y|^\alpha, \quad \alpha = 1 - \frac{N}{p}.$$
**Compact Embedding**

**Theorem 1.1.23 (Rellich-Kondrachov)** Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$ with $C^1$ boundary. If $1 \le p < +\infty$, then $W^{1,p}(\Omega)$ is compactly embedded into $L^q(\Omega)$ for $1 \le q < p^*$ (where $p^* = \frac{np}{n - p}$). That is:

i) there is a constant $C$ depending only on $p$, $n$, $q$ and $\Omega$ such that $\|u\|_{L^q(\Omega)} \le C \|u\|_{W^{1,p}(\Omega)}$ for every $u \in W^{1,p}(\Omega)$;

ii) every bounded sequence $(u_k)$ in $W^{1,p}(\Omega)$ has a subsequence $(u_{k_j})$ that converges in $L^q(\Omega)$.
### 1.2 Semigroups

In this section we review some basic concepts and results concerning spectral theory and semigroups of linear operators that will be useful in the dissertation. The material is standard and can be found in any textbook on the subject, so we state the most important results, some without proof; the interested reader can find details in [5].
#### 1.2.1 Spectral Theory of Linear Operators

**Definition 1.2.1** Let $X$ be a Banach space and $A : D(A) \subset X \to X$ a linear operator. The resolvent set of $A$, denoted $\rho(A)$, is defined by

$$\rho(A) = \{ \lambda \in \mathbb{C} : \lambda I - A : D(A) \to X \ \text{has a bounded inverse} \}.$$

The spectrum of $A$, denoted $\sigma(A)$, is the complement of $\rho(A)$ in $\mathbb{C}$. For $\lambda \in \rho(A)$, the inverse

$$R(\lambda; A) := (\lambda I - A)^{-1} \tag{1.15}$$

is (by definition) a bounded linear operator on $X$ and is called the resolvent of $A$ at $\lambda$.

**Remark 1.2.1** i) It follows from the definition above that the identity

$$A R(\lambda; A) = \lambda R(\lambda; A) - I \tag{1.16}$$

holds for every $\lambda \in \rho(A)$, and $R(\lambda; A)$ commutes with $A$.

ii) For $\lambda, \mu \in \rho(A)$ one has the resolvent identity

$$R(\lambda; A) - R(\mu; A) = (\mu - \lambda) R(\lambda; A) R(\mu; A) \tag{1.17}$$

and

$$R(\lambda; A) R(\mu; A) = R(\mu; A) R(\lambda; A). \tag{1.18}$$

iii) If $\rho(A) \ne \emptyset$, then $A$ is closed.

**Theorem 1.2.1** Let $A : D(A) \subset X \to X$ be a closed linear operator. Then the following properties hold.

i) The resolvent set $\rho(A)$ is open in $\mathbb{C}$, and for $\lambda \in \rho(A)$ one has

$$R(\mu; A) = \sum_{n=0}^{\infty} (\lambda - \mu)^n R(\lambda; A)^{n+1} \tag{1.19}$$

for every $\mu \in \mathbb{C}$ such that $|\lambda - \mu| < \frac{1}{\|R(\lambda; A)\|}$.

ii) For every $\lambda \in \rho(A)$ one has

$$\|R(\lambda; A)\| \ge \frac{1}{\operatorname{dist}(\lambda, \sigma(A))}. \tag{1.20}$$

iii) Let $(\lambda_n)_{n \in \mathbb{N}} \subset \rho(A)$ with $\lambda_n \to \lambda_0$ as $n \to \infty$. Then $\lambda_0 \in \sigma(A)$ if and only if $\lim_{n \to \infty} \|R(\lambda_n; A)\| = \infty$.
Note that if $A$ is bounded, then

$$R(\lambda; A) = \frac{1}{\lambda} \Big( I - \frac{A}{\lambda} \Big)^{-1} = \sum_{n=0}^{\infty} \frac{A^n}{\lambda^{n+1}}, \quad \text{for all} \ \lambda \ \text{such that} \ |\lambda| > \|A\|. \tag{1.21}$$

It follows that

$$\sigma(A) \subset \{ \lambda \in \mathbb{C} : |\lambda| \le \|A\| \}.$$

**Corollary 1.2.1** For a bounded linear operator $A$ on $X$, the spectrum $\sigma(A)$ is always compact and nonempty; hence its spectral radius, defined by

$$r(A) = \sup \{ |\lambda| : \lambda \in \sigma(A) \},$$

is finite and satisfies $r(A) \le \|A\|$.

**Theorem 1.2.2 (Gelfand Formula)** If $A \in B(X)$, then

$$r(A) = \lim_{n \to \infty} \|A^n\|^{1/n}.$$
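In finite dimensions, Gelfand's formula can be illustrated directly (a sketch, not from the original text; numpy assumed): the spectral radius computed from the eigenvalues agrees with $\|A^n\|^{1/n}$ for moderately large $n$.

```python
import numpy as np

# Gelfand's formula r(A) = lim ||A^n||^{1/n} for a fixed 2x2 matrix.
A = np.array([[0.0, 2.0], [0.5, 0.0]])           # eigenvalues +1 and -1
r_true = float(np.max(np.abs(np.linalg.eigvals(A))))

n = 50
An = np.linalg.matrix_power(A, n)
r_est = float(np.linalg.norm(An, 2) ** (1.0 / n)) # spectral (operator 2-) norm
print(r_true, r_est)
```

Note that $\|A\|_2 = \sqrt{r(A^*A)} > 1$ here while $r(A) = 1$, so the corollary's bound $r(A) \le \|A\|$ is strict for this matrix.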
When A is a closed operator, we have, by the Closed Graph Theorem, the following equivalent
denition of the resolvent
(A) = f 2 C : I 􀀀 A : D(A) ! X is bijective g
and so from the denition of the spectrum of the operator A now characterized by the non-bijectivity
of I 􀀀 A, we can distinguish three subsets of the spectrum set (A) summarized in the following
denition.
Definition 1.2.2 Let A : D(A) \subset X \to X be a closed linear operator.
i) The point spectrum of A, denoted by \sigma_p(A), is defined by
\sigma_p(A) = \{\lambda \in \mathbb{C} : \lambda I - A \text{ is not injective}\}.
Each element \lambda of \sigma_p(A) is called an eigenvalue of A, and each v \in D(A) \setminus \{0\} satisfying (\lambda I - A)v = 0 is an eigenvector of A corresponding to \lambda.
ii) The residual spectrum of A, denoted by \sigma_r(A), is defined by
\sigma_r(A) = \{\lambda \in \mathbb{C} : \lambda I - A \text{ is injective but } R(\lambda I - A) \text{ is not dense in } X\}.
iii) The continuous spectrum of A, denoted by \sigma_c(A), is defined by
\sigma_c(A) = \{\lambda \in \mathbb{C} : \lambda I - A \text{ is injective and } R(\lambda I - A) \text{ is a proper dense subset of } X\}.
iv) Besides, one also defines the approximate point spectrum of A, denoted by \sigma_a(A), by
\sigma_a(A) = \{\lambda \in \mathbb{C} : \exists\, (x_n)_{n\in\mathbb{N}} \subset D(A) \text{ with } \|x_n\| = 1 \text{ and } \lim_{n\to\infty} \|A x_n - \lambda x_n\| = 0\}.
Theorem 1.2.3 For a closed operator A : D(A) \subset X \to X, one has
\sigma_a(A) = \{\lambda \in \mathbb{C} : \lambda I - A \text{ is not injective or } R(\lambda I - A) \text{ is not closed in } X\}.
Theorem 1.2.4 Let A : D(A) \subset X \to X be a densely defined self-adjoint linear operator. Then
\lambda \in \rho(A) \iff \exists\, c > 0 : \|(A - \lambda I)x\| \ge c \|x\| \quad \forall x \in D(A).
1.2.2 Semigroups of Linear Operators
In this section we study semigroups of linear operators on a Banach space X; they will play a central role in this project.
Definitions and properties
Definition 1.2.3 Let X be a Banach space. A family (T(t))_{t\ge 0} of bounded linear operators from X to X is called a strongly continuous semigroup of bounded linear operators if the following three conditions are satisfied:
i) T(0) = I = id_X;
ii) T(t+s) = T(t) T(s) for all t, s \ge 0;
iii) for every x \in X, the map t \mapsto T(t)x from [0, +\infty) into X is right-continuous at 0, i.e. \lim_{t\to 0^+} T(t)x = x.
A strongly continuous semigroup of bounded linear operators on X will be called a C_0-semigroup. If \lim_{t\to 0^+} \|T(t) - I\| = 0, then we say that the semigroup is uniformly continuous.
1.2.3 Examples of Semigroups
Example 1.2.4
Define
T : [0, \infty) \to B(C_0(\mathbb{R})), \quad t \mapsto T(t),
where T(t) assigns to each f \in C_0(\mathbb{R}) the function T(t)f \in C_0(\mathbb{R}) defined by
T(t)f : x \mapsto f(t + x).
Then the family of operators (T(t))_{t\ge 0} is a strongly continuous semigroup.
Justification:
Clearly T is well defined.
(i) For all f \in C_0(\mathbb{R}) and all x \in \mathbb{R}, we have
(T(0)f)(x) = f(0 + x) = f(x).
Thus T(0)f = f, and so T(0) = I.
(ii) Let s, t \in [0, \infty), f \in C_0(\mathbb{R}) and x \in \mathbb{R}. Then we have
\big( (T(s)T(t))f \big)(x) = \big[ T(s)\big(T(t)f\big) \big](x)
 = \big(T(t)f\big)(s + x)
 = f(t + s + x)
 = f(s + t + x)
 = \big(T(s+t)f\big)(x),
which shows that T(s)T(t) = T(s+t).
(iii) We now show the right-continuity at 0 of t \mapsto T(t)f. Let f \in C_0(\mathbb{R}); then
\|T(t)f - f\|_{C_0(\mathbb{R})} = \sup_{x\in\mathbb{R}} \big| \big(T(t)f\big)(x) - f(x) \big| = \sup_{x\in\mathbb{R}} |f(t+x) - f(x)| \longrightarrow 0 \text{ as } t \to 0^+
by uniform continuity of f (every f \in C_0(\mathbb{R}) is uniformly continuous, since it vanishes at infinity). Thus
\lim_{t\to 0^+} \|T(t)f - f\|_{C_0(\mathbb{R})} = 0,
and therefore (T(t))_{t\ge 0} defined above is a strongly continuous semigroup.
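The three semigroup properties just verified can be observed numerically on a sample function (a sketch under the assumption that we evaluate on a finite grid, not part of the text):

```python
import numpy as np

# Numerical sketch (an assumption, not part of the text) of the translation
# semigroup (T(t)f)(x) = f(t + x) on a sample f in C_0(R), evaluated on a grid.
f = lambda x: np.exp(-x**2)          # sample function vanishing at infinity
x = np.linspace(-10, 10, 2001)

def T(t, g):
    """Translation semigroup: returns the function x |-> g(t + x)."""
    return lambda x: g(t + x)

identity_ok = np.allclose(T(0.0, f)(x), f(x))          # T(0) = I
s, t = 0.7, 1.3
semigroup_ok = np.allclose(T(s, T(t, f))(x), T(s + t, f)(x))  # T(s)T(t) = T(s+t)
# strong continuity: sup_x |f(t+x) - f(x)| decreases as t -> 0+
sups = [np.max(np.abs(T(h, f)(x) - f(x))) for h in (0.1, 0.01, 0.001)]
```

The decreasing values in `sups` mirror the sup-norm estimate used above, driven by the uniform continuity of f.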
Example 1.2.5
Consider the heat equation with boundary conditions
  y_t - y_{xx} = 0,        0 < x < L, \ t > 0,
  y(0, t) = y(L, t) = 0,   t > 0,
  y(x, 0) = g(x),          0 < x < L,   (1.22)
where L > 0 and g \in H_0^1(0, L).
Using the Fourier sine expansion of y(\cdot, t) in L^2(0, L), a suitable application of the method of separation of variables leads to the weak solution
y(x, t) = \sum_{k=1}^{\infty} a_k\, e^{-k^2\pi^2 t / L^2} \sin\Big(\frac{k\pi}{L}x\Big),
where
a_k = \frac{2}{L} \int_0^L g(x) \sin\Big(\frac{k\pi}{L}x\Big)\, dx
are the Fourier sine coefficients of the initial value g of the problem.
Observe that for fixed t > 0, y(\cdot, t) \in L^2(0, L) with
\|y(\cdot, t)\|_2^2 \le \sum_{k=1}^{\infty} |a_k|^2 = \|g\|_2^2.
Also, for any \delta > 0, the series
\sum_{k=1}^{\infty} a_k\, e^{-k^2\pi^2 t / L^2} \sin\Big(\frac{k\pi}{L}x\Big)
converges absolutely on [0, L] and uniformly with respect to t \ge \delta, since
|a_k|\, e^{-k^2\pi^2 t / L^2} \le \|g\|_2\, e^{-k^2\pi^2 \delta / L^2} \quad \forall\, k \ge 1.
Note also that the function y(\cdot, \cdot) is infinitely differentiable with respect to the two variables (x, t) for t > 0. Now define \varphi_k : [0, L] \to \mathbb{R} by
\varphi_k(x) = \sqrt{\frac{2}{L}} \sin\Big(\frac{k\pi}{L}x\Big).
Then (\varphi_k)_k forms a complete orthonormal set of L^2(0, L) endowed with the inner product
\langle f, h \rangle = \int_0^L f(x) h(x)\, dx.
So we can write the solution y as
y(x, t) = \sum_{k=1}^{\infty} \langle g, \varphi_k \rangle\, e^{-k^2\pi^2 t / L^2} \varphi_k(x),
and this solution y(\cdot, t) can be regarded as the image of the initial data g under some operator, which gives rise to a semigroup.
Indeed, consider for each t \ge 0 the operator S(t) defined by
S(t)g = \sum_{k=1}^{\infty} \langle g, \varphi_k \rangle\, e^{-k^2\pi^2 t / L^2} \varphi_k.
We prove that (S(t))_{t\ge 0} is a strongly continuous semigroup of bounded operators on L^2(0, L).
Firstly we show that
S(t) \in B(L^2(0, L)) \quad \forall t \ge 0.
Clearly, for each t \ge 0, S(t) is well defined from L^2(0, L) into L^2(0, L), and
\|S(t)g\|_2^2 \le \sum_{k=1}^{\infty} |\langle g, \varphi_k \rangle|^2 = \|g\|_2^2 \quad \forall g \in L^2(0, L).
Moreover, S(t) is linear, because for all f, g \in L^2(0, L) and all \alpha, \beta \in \mathbb{R} we have
S(t)(\alpha f + \beta g) = \sum_{k=1}^{\infty} \langle \alpha f + \beta g, \varphi_k \rangle\, e^{-k^2\pi^2 t / L^2} \varphi_k
 = \alpha \sum_{k=1}^{\infty} \langle f, \varphi_k \rangle\, e^{-k^2\pi^2 t / L^2} \varphi_k + \beta \sum_{k=1}^{\infty} \langle g, \varphi_k \rangle\, e^{-k^2\pi^2 t / L^2} \varphi_k
 = \alpha S(t)f + \beta S(t)g.
It follows that
S(t) \in B(L^2(0, L)) \quad \text{and} \quad \|S(t)\| \le 1 \quad \forall t \ge 0.
Moreover, it is not hard to see that the Fourier sine expansion of g can be obtained from the Fourier series expansion of the odd function defined on [-L, L] by
f(x) = \begin{cases} -g(-x), & x \in (-L, 0) \\ g(x), & x \in (0, L). \end{cases}
It then follows that:
i)
S(0)g = \sum_{k=1}^{\infty} \langle g, \varphi_k \rangle \varphi_k(x)
 = \sum_{k=1}^{\infty} \Big( \frac{2}{L} \int_0^L g(y) \sin\Big(\frac{k\pi}{L}y\Big)\, dy \Big) \sin\Big(\frac{k\pi}{L}x\Big)
 = \sum_{k=1}^{\infty} a_k \sin\Big(\frac{k\pi}{L}x\Big)
 = g.
Thus S(0) = I.
ii)
[S(t)S(s)g](x) = \big[ S(t)\big(S(s)g\big) \big](x)
 = \sum_{k=1}^{\infty} \langle S(s)g, \varphi_k \rangle\, e^{-k^2\pi^2 t / L^2} \varphi_k(x)
 = \sum_{k=1}^{\infty} \Big\langle \sum_{n=1}^{\infty} \langle g, \varphi_n \rangle\, e^{-n^2\pi^2 s / L^2} \varphi_n, \varphi_k \Big\rangle\, e^{-k^2\pi^2 t / L^2} \varphi_k(x)
 = \sum_{k=1}^{\infty} \sum_{n=1}^{\infty} e^{-n^2\pi^2 s / L^2} \langle g, \varphi_n \rangle \langle \varphi_n, \varphi_k \rangle\, e^{-k^2\pi^2 t / L^2} \varphi_k(x)
 = \sum_{k=1}^{\infty} \langle g, \varphi_k \rangle\, e^{-k^2\pi^2 s / L^2} e^{-k^2\pi^2 t / L^2} \varphi_k(x), \quad \text{since the } \varphi_k \text{'s form an orthonormal basis},
 = \sum_{k=1}^{\infty} \langle g, \varphi_k \rangle\, e^{-k^2\pi^2 (t+s) / L^2} \varphi_k(x)
 = [S(t+s)g](x).
Thus S(t+s)g = S(t)S(s)g.
iii) Let g \in L^2(0, L) be arbitrary and set a_k = \langle g, \varphi_k \rangle for every k \ge 1. Then
\|S(t)g - g\|_2^2 = \Big\| \sum_{k=1}^{\infty} a_k\, e^{-k^2\pi^2 t / L^2} \varphi_k - \sum_{k=1}^{\infty} a_k \varphi_k \Big\|_2^2
 = \Big\| \sum_{k=1}^{\infty} a_k \big( e^{-k^2\pi^2 t / L^2} - 1 \big) \varphi_k \Big\|_2^2
 = \sum_{k=1}^{\infty} |a_k|^2 \big( 1 - e^{-k^2\pi^2 t / L^2} \big)^2
 \longrightarrow 0 \text{ as } t \to 0^+
by the dominated convergence theorem (the terms are dominated by the summable sequence (|a_k|^2)). Thus
\lim_{t\to 0^+} S(t)g = g.
Hence (S(t))_{t\ge 0} is a strongly continuous semigroup.
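The semigroup just constructed can be approximated by truncating the sine series. The following sketch (an assumption of this illustration: a uniform grid and N = 200 modes, not part of the text) checks S(0) = I, the semigroup property, and the contraction bound \|S(t)\| \le 1:

```python
import numpy as np

# Numerical sketch (an assumption, not part of the text): the heat semigroup
# S(t)g = sum_k <g, phi_k> e^{-k^2 pi^2 t/L^2} phi_k on (0, L), truncated to N
# modes, with phi_k(x) = sqrt(2/L) sin(k pi x / L) sampled on a uniform grid.
L, N = 1.0, 200
x = np.linspace(0.0, L, 1001)[1:-1]          # interior grid points
dx = x[1] - x[0]
k = np.arange(1, N + 1)
phi = np.sqrt(2.0 / L) * np.sin(np.pi * np.outer(k, x) / L)

def S(t, g_vals):
    """Apply the truncated heat semigroup to samples g_vals of g on the grid."""
    coeffs = phi @ g_vals * dx                    # <g, phi_k> by quadrature
    decay = np.exp(-((k * np.pi / L) ** 2) * t)   # e^{-k^2 pi^2 t / L^2}
    return (coeffs * decay) @ phi

g = x * (L - x)                                   # smooth data, zero at 0 and L
err_identity = np.max(np.abs(S(0.0, g) - g))      # S(0) = I up to truncation
u1 = S(0.02, S(0.01, g))                          # S(t) S(s) g
u2 = S(0.03, g)                                   # S(t + s) g
norm_before = np.sqrt(np.sum(g ** 2) * dx)
norm_after = np.sqrt(np.sum(S(0.05, g) ** 2) * dx)
```

On a uniform interior grid the discrete sine modes are exactly orthogonal, so the semigroup property holds essentially to machine precision, and the exponential decay of the coefficients reproduces the smoothing of the heat flow.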
1.2.4 Infinitesimal Generator of a C_0-semigroup
Definition 1.2.6 The infinitesimal generator of a C_0-semigroup (T(t))_{t\ge 0} of bounded linear operators on a Banach space X is the linear operator A : D(A) \to X with domain
D(A) = \Big\{ x \in X : \lim_{t\to 0^+} \frac{T(t)x - x}{t} \text{ exists} \Big\}
defined by
Ax := \lim_{t\to 0^+} \frac{T(t)x - x}{t} = \frac{d^+ T(t)x}{dt}\Big|_{t=0} \quad \text{for } x \in D(A).
Theorem 1.2.5 Let (T(t))_{t\ge 0} be a C_0-semigroup. Then there exist constants \omega \in \mathbb{R} and M \ge 1 such that
\|T(t)\| \le M e^{\omega t}, \quad \text{for } 0 \le t < +\infty.
Theorem 1.2.6 If (T(t))_{t\ge 0} is a C_0-semigroup, then for every x \in X, the map t \mapsto T(t)x is continuous from \mathbb{R}^+ into X.
We now give some properties of C_0-semigroups in the following theorem.
Theorem 1.2.7 Let (T(t))_{t\ge 0} be a C_0-semigroup and A its infinitesimal generator. Then:
1) For x \in X,
\lim_{h\to 0^+} \frac{1}{h} \int_0^h T(s)x\, ds = x
and
\lim_{h\to 0} \frac{1}{h} \int_t^{t+h} T(s)x\, ds = T(t)x \quad \forall t > 0.
2) For x \in X and t > 0,
\int_0^t T(s)x\, ds \in D(A) \quad \text{and} \quad A\Big( \int_0^t T(s)x\, ds \Big) = T(t)x - x.
3) For every x \in D(A), T(t)x \in D(A) and
\frac{d}{dt} T(t)x = A T(t)x = T(t) A x.
4) For x \in D(A) and s, t \ge 0,
T(t)x - T(s)x = \int_s^t T(\tau) A x\, d\tau = \int_s^t A T(\tau) x\, d\tau.
Corollary 1.2.2 If A is the infinitesimal generator of a C_0-semigroup (T(t))_{t\ge 0}, then the domain D(A) of A is dense in X and A is a closed linear operator.
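Properties 2) and 3) above can be checked numerically in the finite-dimensional setting, where T(t) = e^{tA} for a matrix A (an assumption of this sketch, not part of the text):

```python
import numpy as np

# Finite-dimensional sketch (an assumption, not part of the text): with
# T(t) = e^{tA} for a matrix A, check property 2) of Theorem 1.2.7,
# A (int_0^t T(s)x ds) = T(t)x - x, and property 3), d/dt T(t)x = A T(t)x = T(t)A x.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, -1.0])

def expm(M, terms=60):
    """Matrix exponential via its power series (adequate for small ||M||)."""
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

t = 0.8
s = np.linspace(0.0, t, 2001)
vals = np.array([expm(si * A) @ x0 for si in s])
integral = (vals[:-1] + vals[1:]).sum(axis=0) * (s[1] - s[0]) / 2  # trapezoid rule

h = 1e-6
deriv = (expm((t + h) * A) @ x0 - expm((t - h) * A) @ x0) / (2 * h)
```

The quadrature and the central difference reproduce the two identities up to discretization error.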
In the following theorem we show that a C_0-semigroup is uniquely determined by its infinitesimal generator: two C_0-semigroups with the same generator coincide.
Theorem 1.2.8 Let (T(t))_{t\ge 0} and (S(t))_{t\ge 0} be two C_0-semigroups on X generated respectively by A and B. If A = B, then T(t) = S(t) for all t \ge 0.
Proof:
Assume A = B and let x \in D(A) = D(B). Fix t > 0 and set
\eta(s) = T(t-s) S(s) x, \quad s \in [0, t].
By Theorem 1.2.7(3), \eta is differentiable and
\frac{d}{ds}\eta(s) = -T(t-s) A S(s) x + T(t-s) B S(s) x = 0,
since A = B. So \eta is constant, and therefore \eta(0) = \eta(t), i.e.
T(t)x = S(t)x \quad \forall x \in D(A).
By Corollary 1.2.2, D(A) is dense in X, and T(t), S(t) are bounded (hence continuous); therefore
T(t)x = S(t)x \quad \forall x \in X.
Not all semigroups are uniformly continuous. Therefore, given a linear differential equation
\frac{du}{dt} = Au
in an abstract space, it is important to determine whether the operator A is the infinitesimal generator of some semigroup. A first answer is given in the following theorem, which characterizes the infinitesimal generators of uniformly continuous semigroups.
Theorem 1.2.9 A linear operator A is the infinitesimal generator of a uniformly continuous semigroup if and only if A is a bounded linear operator.
Proof:
a) Let A be a bounded linear operator. It is well known that the series \sum_{n=0}^{+\infty} \frac{(tA)^n}{n!} converges in norm for all t \ge 0 and defines for each such t a bounded linear operator T(t) = e^{tA}. It is not hard to see that
T(0) = I \quad \text{and} \quad T(t+s) = T(t)T(s) \ \text{for all } t, s \ge 0.
Moreover, since n! = n(n-1)!,
e^{tA} = I + \sum_{n=1}^{+\infty} \frac{(tA)^n}{n!}, \quad \text{so} \quad e^{tA} - I = tA \sum_{n=1}^{+\infty} \frac{(tA)^{n-1}}{n(n-1)!}.
Taking norms on both sides, we have
\|e^{tA} - I\| = \Big\| tA \sum_{n=1}^{+\infty} \frac{(tA)^{n-1}}{n(n-1)!} \Big\| \le \|tA\| \sum_{n=1}^{+\infty} \frac{\|tA\|^{n-1}}{(n-1)!} = t\|A\|\, e^{t\|A\|}.
That is, \|T(t) - I\| \le t\|A\| e^{t\|A\|}, which goes to zero as t goes to 0. Now we claim that A is the infinitesimal generator of (T(t))_{t\ge 0}. Indeed, let t > 0. From the identity above,
\frac{e^{tA} - I}{t} = A \sum_{n=1}^{+\infty} \frac{(tA)^{n-1}}{n(n-1)!}, \quad \text{so} \quad \frac{e^{tA} - I}{t} - A = A \Big[ \sum_{n=1}^{+\infty} \frac{(tA)^{n-1}}{n(n-1)!} - I \Big] = A \sum_{n=2}^{+\infty} \frac{(tA)^{n-1}}{n(n-1)!},
since the n = 1 term of the sum is I. Taking norms, we get
\Big\| \frac{e^{tA} - I}{t} - A \Big\| \le \|A\| \sum_{n=2}^{+\infty} \frac{\|tA\|^{n-1}}{(n-1)!} = \|A\| \big( e^{t\|A\|} - 1 \big),
which tends to 0 as t \to 0^+; hence \frac{T(t) - I}{t} \to A. Thus we have established that (T(t))_{t\ge 0} is a uniformly continuous semigroup of bounded linear operators generated by A.
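The two norm estimates of part a) can be verified numerically for a matrix generator (an assumption of this sketch, not part of the text):

```python
import numpy as np

# Sketch (an assumption, not part of the text): for a bounded (matrix) A, build
# T(t) = e^{tA} from the power series and verify the two estimates of the proof:
# ||T(t) - I|| <= t ||A|| e^{t||A||} and ||(T(t) - I)/t - A|| <= ||A|| (e^{t||A||} - 1).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])    # generates the rotation group e^{tA}
I = np.eye(2)
nA = np.linalg.norm(A, 2)

def T(t, terms=60):
    """e^{tA} summed term by term: sum_n (tA)^n / n!."""
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ (t * A) / n
        out = out + term
    return out

bound1_ok = all(np.linalg.norm(T(t) - I, 2) <= t * nA * np.exp(t * nA) + 1e-12
                for t in (0.5, 0.1, 0.01))
bound2_ok = all(np.linalg.norm((T(t) - I) / t - A, 2) <= nA * (np.exp(t * nA) - 1) + 1e-12
                for t in (0.5, 0.1, 0.01))
```

Both bounds hold at every tested t, and both right-hand sides vanish as t decreases, exactly as the proof uses.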
b) Conversely, let (T(t))_{t\ge 0} be a uniformly continuous semigroup of bounded linear operators on X. Fix \rho > 0 small enough that
\Big\| I - \rho^{-1} \int_0^{\rho} T(s)\, ds \Big\| < 1.
This implies that \rho^{-1} \int_0^{\rho} T(s)\, ds is invertible, and so also is \int_0^{\rho} T(s)\, ds.
Now let h > 0; then
h^{-1} (T(h) - I) \int_0^{\rho} T(s)\, ds = h^{-1} \Big( \int_0^{\rho} T(h+s)\, ds - \int_0^{\rho} T(s)\, ds \Big) = h^{-1} \Big( \int_{\rho}^{\rho+h} T(s)\, ds - \int_0^{h} T(s)\, ds \Big).
Therefore
h^{-1} (T(h) - I) = h^{-1} \Big( \int_{\rho}^{\rho+h} T(s)\, ds - \int_0^{h} T(s)\, ds \Big) \Big( \int_0^{\rho} T(s)\, ds \Big)^{-1},
and letting h \to 0^+, we obtain that h^{-1}(T(h) - I) converges in norm to the bounded linear operator
(T(\rho) - I) \Big( \int_0^{\rho} T(s)\, ds \Big)^{-1},
which is therefore the infinitesimal generator of (T(t))_{t\ge 0}.
Definition 1.2.7 (T(t))_{t\ge 0} is a C_0-semigroup of contractions if and only if \|T(t)\| \le 1 for all t \ge 0.
In Theorem 1.2.9 we gave a necessary and sufficient condition for a linear operator A to be the infinitesimal generator of a uniformly continuous semigroup. The following theorem is one of the most important results in the theory of semigroups. It gives a necessary and sufficient condition for an unbounded linear operator to be the infinitesimal generator of a C_0-semigroup. Examples of such operators are encountered in the theory of PDEs, such as the Laplacian.
Theorem 1.2.10 (Hille-Yosida) A linear (possibly unbounded) operator A is the infinitesimal generator of a C_0-semigroup of contractions if and only if:
i) A is closed and \overline{D(A)} = X.
ii) The resolvent set \rho(A) of A contains (0, +\infty), i.e.
\rho(A) \supset (0, +\infty),
and for every \lambda > 0,
\|R(\lambda, A)\| \le \frac{1}{\lambda}, \quad \text{where } R(\lambda, A) = (\lambda I - A)^{-1}.
To prove the above theorem, one needs the following lemmas.
Lemma 1.2.1 Let A be a linear operator satisfying conditions (i) and (ii) of Theorem 1.2.10 and let
R(\lambda, A) = (\lambda I - A)^{-1}.
Then
\lim_{\lambda \to +\infty} \lambda R(\lambda, A) x = x \quad \forall x \in X.
Proof:
Suppose first that x \in D(A). One can write R(\lambda, A)(\lambda I - A)x = x, which implies that \lambda R(\lambda, A)x - x = R(\lambda, A) A x. It follows that
\|\lambda R(\lambda, A)x - x\| = \|R(\lambda, A) A x\| \le \frac{1}{\lambda} \|Ax\|
by (ii) of Theorem 1.2.10. So
\lim_{\lambda \to +\infty} \lambda R(\lambda, A)x = x \quad \forall x \in D(A).
But D(A) is dense in X and \|\lambda R(\lambda, A)\| \le 1 for all \lambda > 0. Thus for each x \in X and every z \in D(A) we have
\|\lambda R(\lambda, A)x - x\| \le \|\lambda R(\lambda, A)(x - z)\| + \|\lambda R(\lambda, A)z - z\| + \|z - x\| \le 2\|z - x\| + \|\lambda R(\lambda, A)z - z\|.
Therefore
\lambda R(\lambda, A)x \to x \text{ as } \lambda \to +\infty
for every x \in X.
We now define, for every \lambda > 0, the Yosida approximation of A by
A_\lambda = \lambda A R(\lambda, A) = \lambda^2 R(\lambda, A) - \lambda I.
Lemma 1.2.2 Let A be a linear operator satisfying conditions (i) and (ii) of Theorem 1.2.10 and denote by A_\lambda the Yosida approximation of A. Then
\lim_{\lambda \to +\infty} A_\lambda x = A x \quad \forall x \in D(A).
Lemma 1.2.3 Let A satisfy conditions (i) and (ii) of Theorem 1.2.10 and denote by A_\lambda the Yosida approximation of A. Then A_\lambda is the infinitesimal generator of a C_0-semigroup of contractions (e^{t A_\lambda})_{t\ge 0}. Furthermore, for every x \in X and \lambda, \mu > 0, we have
\| e^{t A_\lambda} x - e^{t A_\mu} x \| \le t \| A_\lambda x - A_\mu x \|, \quad t \ge 0.
Proof:
From the definition of A_\lambda, we have
A_\lambda = \lambda^2 R(\lambda, A) - \lambda I,
so that A_\lambda is bounded as the sum of two bounded linear operators. Moreover,
\|A_\lambda\| \le \|\lambda^2 R(\lambda, A)\| + \|\lambda I\| \le 2\lambda.
So A_\lambda is a bounded linear operator and is therefore the generator of a C_0-semigroup (e^{t A_\lambda})_{t\ge 0} of bounded linear operators.
Claim: \| e^{t A_\lambda} \| \le 1 for all t \ge 0.
Proof of the claim:
e^{t A_\lambda} = e^{\lambda^2 t R(\lambda, A) - \lambda t I} = e^{\lambda^2 t R(\lambda, A)} \cdot e^{-\lambda t I}.
Taking norms, one gets
\| e^{t A_\lambda} \| = e^{-\lambda t} \| e^{\lambda^2 t R(\lambda, A)} \| \le e^{-\lambda t} e^{\lambda^2 t \|R(\lambda, A)\|} \le e^{-\lambda t} e^{\lambda t} = 1,
since \|R(\lambda, A)\| \le 1/\lambda. Therefore (e^{t A_\lambda})_{t\ge 0} is a C_0-semigroup of contractions. Also, from the definition we see that e^{t A_\lambda}, A_\lambda, e^{t A_\mu} and A_\mu commute pairwise.
Now, for x \in X and t \ge 0 fixed, let
f(s) = e^{t(1-s) A_\lambda} e^{s t A_\mu} x, \quad 0 \le s \le 1,
so that
f(0) = e^{t A_\lambda} x \quad \text{and} \quad f(1) = e^{t A_\mu} x.
Then f is differentiable on [0, 1] and
f'(s) = -t A_\lambda e^{t(1-s) A_\lambda} e^{s t A_\mu} x + t A_\mu e^{t(1-s) A_\lambda} e^{s t A_\mu} x = t\, e^{t(1-s) A_\lambda} e^{s t A_\mu} (A_\mu x - A_\lambda x)
satisfies
\|f'(s)\| \le t \|A_\mu x - A_\lambda x\| \quad \forall s \in [0, 1].
Therefore, by the Mean Value Theorem, we have
\|f(0) - f(1)\| \le \sup_{0 < s < 1} \|f'(s)\|,
that is,
\| e^{t A_\lambda} x - e^{t A_\mu} x \| \le t \| A_\lambda x - A_\mu x \|.
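The Yosida approximation can be illustrated on a concrete matrix satisfying the Hille-Yosida conditions (the choice of the discrete Laplacian below is an assumption of this sketch, not part of the text):

```python
import numpy as np

# Sketch (an assumption, not part of the text): take A as the (negative
# definite) 1-D discrete Dirichlet Laplacian, which satisfies ||R(lam,A)|| <= 1/lam,
# and check that the Yosida approximation A_lam = lam^2 R(lam,A) - lam I
# converges to A on vectors and that e^{t A_lam} is a contraction.
n = 8
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # negative definite
I = np.eye(n)

def yosida(lam):
    """A_lam = lam^2 R(lam, A) - lam I with R(lam, A) = (lam I - A)^{-1}."""
    R = np.linalg.inv(lam * I - A)
    return lam ** 2 * R - lam * I

x = np.ones(n)
errs = [np.linalg.norm(yosida(lam) @ x - A @ x) for lam in (10.0, 100.0, 1000.0)]

def expm(M, terms=80):
    """Matrix exponential via its power series."""
    out, term = np.eye(n), np.eye(n)
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

contraction_norm = np.linalg.norm(expm(0.5 * yosida(10.0)), 2)
```

The errors shrink like O(1/\lambda), matching Lemma 1.2.2, and \|e^{t A_\lambda}\| \le 1 as the claim in the proof asserts.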
1.2.5 Lumer-Phillips Theorem
Let X be a Banach space and X^* its dual. For every x \in X, we define the duality set J(x) \subset X^* by
J(x) = \{ x^* \in X^* : \langle x, x^* \rangle = \|x\|^2 = \|x^*\|^2 \}.
Definition 1.2.8 A linear operator A : D(A) \subset X \to X is said to be dissipative if for every x \in D(A),
\operatorname{Re} \langle A x, x^* \rangle \le 0 \quad \text{for some } x^* \in J(x).
Theorem 1.2.11 A linear operator A : D(A) \subset X \to X is dissipative if and only if
\| (\lambda I - A) x \| \ge \lambda \|x\| \quad \forall x \in D(A), \ \forall \lambda > 0.
Definition 1.2.9 A linear operator A : D(A) \subset X \to X is said to be m-dissipative if A is dissipative and \lambda I - A is surjective for every \lambda > 0.
Remark 1.2.2 [2] Let A : D(A) \subset X \to X be an operator on a Banach space X.
i) A is m-dissipative if A is dissipative and there exists \lambda_0 > 0 such that \lambda_0 I - A is surjective.
ii) If A is m-dissipative, then A is closed.
iii) If X is a Hilbert space and A is m-dissipative, then A is closed and its domain is dense in X.
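The inequality of Theorem 1.2.11 can be tested numerically in a real Hilbert space setting (the matrix below is an assumption of this sketch, not part of the text):

```python
import numpy as np

# Numerical sketch (an assumption, not part of the text): in R^n viewed as a
# real Hilbert space, A = -B^T B is dissipative (<Ax, x> = -||Bx||^2 <= 0), and
# Theorem 1.2.11 then gives ||(lam I - A)x|| >= lam ||x|| for every lam > 0.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = -B.T @ B                            # symmetric negative semidefinite

dissip_ok = True
bound_ok = True
for _ in range(100):
    x = rng.standard_normal(5)
    dissip_ok &= bool(x @ (A @ x) <= 1e-12)
    for lam in (0.5, 1.0, 10.0):
        bound_ok &= bool(np.linalg.norm(lam * x - A @ x) >= lam * np.linalg.norm(x) - 1e-9)
```

Expanding \|\lambda x - Ax\|^2 = \lambda^2\|x\|^2 - 2\lambda \langle Ax, x\rangle + \|Ax\|^2 shows why dissipativity forces the bound.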
Theorem 1.2.12 (Lumer-Phillips) Let A be a densely defined operator on a Banach space X.
a) If A is m-dissipative (i.e., A is dissipative and there exists \lambda_0 > 0 such that \operatorname{Im}(\lambda_0 I - A) = X), then A is the infinitesimal generator of a C_0-semigroup of contractions on X.
b) If A is the infinitesimal generator of a C_0-semigroup (T(t))_{t\ge 0} of contractions on X, then
\operatorname{Im}(\lambda I - A) = X \quad \forall \lambda > 0
and A is dissipative. Moreover,
\forall x \in D(A) \text{ and } x^* \in J(x), \quad \operatorname{Re} \langle A x, x^* \rangle \le 0.
Also, in this case (0, +\infty) \subset \rho(A), and for all \lambda > 0 and all x \in X,
R(\lambda, A)x := (\lambda I - A)^{-1} x = \int_0^{+\infty} e^{-\lambda s} T(s) x\, ds.
Example 1.2.10
Let
X = \{ u \in C([0,1]) : u(0) = 0 \}
be equipped with the sup-norm
\|u\|_\infty = \max_{0 \le x \le 1} |u(x)|,   (1.23)
and consider the linear (differential) operator
A = -\frac{d}{dx}
with domain
D(A) = \{ u \in C^1([0,1]) : u(0) = u'(0) = 0 \}.   (1.24)
Then X is a Banach space, being a closed subspace of C([0,1]), and one easily shows that D(A) is dense in X.
We prove that A is m-dissipative; i.e., for every \lambda > 0, \lambda I - A is bijective and has a bounded inverse satisfying \|\lambda (\lambda I - A)^{-1}\| \le 1. To do this, it suffices to verify that for any f \in X, the problem
\frac{du}{dx} + \lambda u = f, \quad u(0) = 0   (1.25)
has a unique solution u \in D(A) with \lambda \|u\|_\infty \le \|f\|_\infty.
Indeed, (1.25) has a unique solution, which is explicitly expressed as
u(x) = e^{-\lambda x} \int_0^x e^{\lambda \xi} f(\xi)\, d\xi, \quad 0 \le x \le 1.   (1.26)
The function u defined above is continuously differentiable. Moreover, one easily sees that u \in D(A) (note u'(0) = f(0) = 0 since f \in X), and for all x \in [0,1] we have
|u(x)| \le e^{-\lambda x} \|f\|_\infty \int_0^x e^{\lambda \xi}\, d\xi = \frac{1}{\lambda} \big( 1 - e^{-\lambda x} \big) \|f\|_\infty \le \frac{1}{\lambda} \|f\|_\infty,
which yields \lambda \|u\|_\infty \le \|f\|_\infty, and we conclude that A is m-dissipative.
The corresponding evolution equation is
\frac{du}{dt} = A u, \quad u(0) = u_0,   (1.27)
where the unknown u satisfies u(t) \in X for t \ge 0; that is, writing u(t)(x) = u(t, x),
\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = 0, \quad u(t, 0) = 0, \quad u(0, x) = u_0(x).   (1.28)
Since A is m-dissipative, A is the infinitesimal generator of a semigroup of contractions (T(t))_{t\ge 0}. Therefore this evolution problem has a unique mild solution
u : t \mapsto u(t) = T(t) u_0
for every u_0 \in X. This solution is a classical solution when u_0 \in D(A).
One can solve this problem explicitly when u_0 \in D(A) by making the change of variables
(y, s) = (t - x, t + x)
to get
u(x, t) = \begin{cases} u_0(x - t) & \text{if } x \ge t, \\ 0 & \text{if } 0 \le x < t. \end{cases}   (1.29)
Thus, if u_0 is continuous but not differentiable, then for any t \ge 0 the function u defined above is not differentiable and does not belong to D(A).
When u_0 \in X but does not belong to D(A), we can still view u(t) = T(t) u_0 as a solution in a generalized sense, usually called the mild solution in the literature. On the other hand, in some special cases with u_0 \in X, T(t) u_0 is still a classical solution for t > 0. One can also check directly that the map defined by T(t) u_0 := u(t, \cdot) is a semigroup.
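The explicit formula (1.29) makes this direct check easy to carry out numerically (a sketch on a grid, an assumption not part of the text):

```python
import numpy as np

# Sketch (an assumption, not part of the text): the explicit formula (1.29)
# gives (T(t)u0)(x) = u0(x - t) for x >= t and 0 for 0 <= x < t; we check the
# semigroup property and the sup-norm contraction bound on a grid.
u0 = lambda z: np.sin(np.pi * z) * z      # sample initial data with u0(0) = 0
x = np.linspace(0.0, 1.0, 1001)

def T(t, u):
    """Transport semigroup on X = {u in C([0,1]) : u(0) = 0}."""
    return lambda z: np.where(z >= t, u(np.clip(z - t, 0.0, None)), 0.0)

s, t = 0.2, 0.3
semigroup_ok = np.allclose(T(t, T(s, u0))(x), T(t + s, u0)(x))       # T(t)T(s) = T(t+s)
contraction_ok = np.max(np.abs(T(0.4, u0)(x))) <= np.max(np.abs(u0(x)))
```

The profile is simply translated to the right and absorbed at the boundary x = 0, which is why the sup-norm never grows: the semigroup is a contraction semigroup, consistent with m-dissipativity of A.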
We now specialize these definitions to Hilbert spaces.
Definition 1.2.11 (Dissipative operator on Hilbert spaces) Let H be a real Hilbert space. An unbounded linear operator A : D(A) \subset H \to H is said to be dissipative on H if
\langle A v, v \rangle_H \le 0 \quad \forall v \in D(A).
A is m-dissipative if A is dissipative and, moreover,
\forall f \in H, \ \forall \lambda > 0, \ \exists!\, u \in D(A) \text{ such that } \lambda u - A u = f.
Definition 1.2.12 Let H be a Hilbert space. An unbounded linear operator A : D(A) \subset H \to H is said to be monotone (or accretive) if
\langle A v, v \rangle \ge 0 \quad \forall v \in D(A).
A is said to be maximal monotone if A is monotone and, moreover, R(I + A) = H; that is,
\forall f \in H, \ \exists\, u \in D(A) \text{ such that } u + A u = f.
Remark 1.2.3 A is monotone (or accretive) if and only if -A is dissipative.
Note that, in a Hilbert space, an operator A is maximal monotone if and only if -A is m-dissipative.