Eugene Speer - Lecture 5


NOTES: ODE SPRING 1998

V. Stability

Throughout most of this chapter we will consider the system

x′ = f(t, x), (5.1)

where f(t, x) is assumed to be continuous on some domain D ⊂ IR × IRn. Roughly speaking, a solution x(t) of (5.1) satisfying some initial condition x(t0) = x0 is stable if all solutions x̃(t) initiating at some sufficiently nearby point (x̃(t0) = x1, with x1 ≈ x0) approach or remain near to x(t), as t → ∞, in some appropriate sense. Our goal is to develop criteria by which we may verify that particular solutions are or are not stable.

5.1 Linearized stability analysis of constant solutions

We begin with two precise definitions of the idea of stability, due to Lyapunov.

Definition 5.1: Let x(t) be a solution of (5.1) which is defined on some interval (a,∞). Then x(t) is stable if there exists a t0 > a such that, with x0 = x(t0):
(a) There exists a b > 0 such that, if |x1 − x0| < b, then every solution (on a maximal interval) of the IVP

    x̃′ = f(t, x̃),    x̃(t0) = x1,    (5.2)

is defined for all t ≥ t0.
(b) For every ε > 0 there exists a δ, with 0 < δ ≤ b, such that if |x1 − x0| < δ then every solution x̃(t) of the IVP (5.2) satisfies |x̃(t) − x(t)| < ε for all t > t0.
The solution x(t) is asymptotically stable if (a) and (b) hold and if, in addition:
(c) There exists a δ̄, with 0 < δ̄ ≤ b, such that if |x1 − x0| < δ̄ then every solution x̃(t) of the IVP (5.2) satisfies limt→∞ |x̃(t) − x(t)| = 0.
A solution which is not stable is unstable.

Remark 5.1: (a) If we wish to emphasize that these stability conditions refer to t → ∞ we may speak of stability or asymptotic stability at ∞ or on the right. Stability at −∞ or on the left is defined similarly.
(b) It is an exercise to show that, at least if f is such that solutions of initial value problems are unique, then continuity in initial conditions implies that if conditions (a) and (b), or (a), (b), and (c), hold for some t0 > a then they hold for all such t0.

We may immediately restate Theorem 3.10 as a stability result:

Theorem 5.1: Let A be an n × n matrix. Then the solution x(t) ≡ 0 of the system x′ = Ax is asymptotically stable if and only if Re λ < 0 for all eigenvalues λ of A; this solution is stable if and only if Re λ ≤ 0 for all eigenvalues λ, and every Jordan block in the Jordan form of A for which the eigenvalue λ satisfies Re λ = 0 is 1 × 1.

Note that the terminology here is slightly different from that used for two dimensional linear systems in Example 4.1. In our current terminology, a stable node or stable spiral point is asymptotically stable, and a center (or the origin in the case that A is the zero matrix or that A has eigenvalues 0 and λ < 0) is stable but not asymptotically stable. All other critical points classified in Example 4.1 are unstable.
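As a quick computational check, the eigenvalue criterion of Theorem 5.1 is easy to test numerically; the matrix below is a hypothetical example, not one taken from the text.

```python
import numpy as np

# Hypothetical 2x2 example: test the criterion of Theorem 5.1 for x' = Ax.
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])          # eigenvalues -1 and -2

eigs = np.linalg.eigvals(A)
print("eigenvalues:", eigs)

if np.all(eigs.real < 0):
    print("x = 0 is asymptotically stable")
elif np.all(eigs.real <= 0):
    # For stability one must also check that every Jordan block for an
    # eigenvalue with Re(lambda) = 0 is 1x1; for simple eigenvalues this
    # holds automatically.
    print("x = 0 may be stable; check the Jordan blocks with Re(lambda) = 0")
else:
    print("x = 0 is unstable")
```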

We now turn to the general problem (5.1) in the special case in which there exists a constant solution, which by a shift of the x and t variables we may assume to be x(t) ≡ 0 and to be defined on [0,∞). We will consider the case in which the system (5.1) may be approximated by a linear system for points (t, x) with x near 0; specifically, we will always assume that (5.1) may be written in the form

x′ = A(t)x+ h(t, x), (5.3)

where A(t) is an n × n continuous real matrix defined on some interval I ⊃ [0,∞), h(t, x) is an IRn-valued function which is defined and continuous on (at least) Dρ ≡ {(t, x) | t ∈ I and |x| < ρ} for some ρ > 0, and which satisfies h(t, 0) ≡ 0. Our goal is to show that in certain cases the stability of the solution x(t) ≡ 0 for the system (5.3) is the same as for the approximating linear system x′ = A(t)x. We expect this to be true when h is small compared to the linear term A(t)x and will therefore frequently suppose that h satisfies the following condition (Cη) for some η ≥ 0:

CONDITION (Cη): There exists a ρ > 0 such that h satisfies

|h(t, x)| ≤ η|x|, (5.4)

for all (t, x) ∈ Dρ.

We first give a fairly general asymptotic stability theorem for the case of a constant matrix A.

Theorem 5.2: Suppose that the matrix A(t) in (5.3) is an n × n constant matrix A for which all eigenvalues λ1, . . . , λn have negative real part. Then there exists an η0 > 0 (depending on A) such that, if the function h(t, x) satisfies condition (Cη) for some η < η0, then x(t) ≡ 0 is an asymptotically stable solution of the system (5.3).

Proof: Set

    µ̄ ≡ − sup_{1≤i≤n} (Re λi)

and choose µ with 0 < µ < µ̄. Since Re(λi + µ) < 0 for each i, the function tk et(λi+µ) is bounded on [0,∞) for any k ≥ 0; that is, there is a constant cik with tk |etλi| ≤ cik e−tµ for t ≥ 0. Thus if J = P−1AP is the Jordan form of A, then

    ‖etA‖ ≤ ‖P−1‖ ‖P‖ ‖etJ‖ ≤ C e−tµ

for some constant C. We will prove that 0 is asymptotically stable if h satisfies (Cη) for some η < η0 ≡ µ/C.

Suppose then that (5.4) holds in Dρ and that x(t) is any solution of (5.3) with |x(0)| < ρ. We may treat u(t) ≡ h(t, x(t)) in (5.3) as a known function and apply the variation of parameters formulas (3.7) and (3.8) to see that x(t) must satisfy the integral equation

    x(t) = etA x(0) + ∫_0^t e(t−s)A h(s, x(s)) ds.    (5.5)

Hence, as long as (t, x(t)) ∈ Dρ,

    etµ |x(t)| ≤ C|x(0)| + Cη ∫_0^t esµ |x(s)| ds.

Now apply Gronwall’s inequality (taking u(t) = etµ|x(t)| in (2.6)) to find etµ|x(t)| ≤ C|x(0)| etCη, or

    |x(t)| ≤ C|x(0)| e−t(µ−Cη).    (5.6)

We can now verify the conditions of Definition 5.1 for asymptotic stability. Let b = ρ/2C and suppose that x(t) is a solution of (5.3) with |x(0)| < b. Let β be the upper bound of the set of times for which x(t) is defined and lies in Dρ; if β < ∞, then (5.6) implies that for 0 ≤ t < β, (t, x(t)) is contained in the compact subset [0, β] × {|x| ≤ ρ/2} of Dρ, contradicting the extension theorem, Theorem 2.19. Thus β = ∞ and x(t) is defined and satisfies (5.6) for all times t ≥ 0, verifying both (a) and, with δ̄ = b, (c). Finally, given ε, assume without loss of generality that ε < ρ/2 and let δ = ε/C; then (5.6) implies that if |x(0)| < δ then |x(t)| ≤ ε for all t ≥ 0.
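A minimal numerical sketch of the situation in Theorem 5.2 (the system below is a hypothetical example): A has eigenvalues with negative real part, and h is quadratic near 0, so it satisfies (Cη) for every η > 0 on a small enough ball; small solutions decay.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical illustration of Theorem 5.2.
A = np.array([[-1.0,  2.0],
              [ 0.0, -1.0]])          # both eigenvalues equal to -1

def h(t, x):
    # quadratic near 0, so |h(t, x)| <= eta |x| on a small ball for any eta
    return 0.5 * np.array([x[0] * x[1], -x[0] ** 2])

def rhs(t, x):
    return A @ x + h(t, x)

sol = solve_ivp(rhs, (0.0, 20.0), [0.1, -0.05], rtol=1e-9, atol=1e-12)
print("|x(0)|  =", np.linalg.norm([0.1, -0.05]))
print("|x(20)| =", np.linalg.norm(sol.y[:, -1]))   # decays toward 0
```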

We summarize in the following corollary some situations in which Theorem 5.2 immediately implies asymptotic stability of a constant solution. To state one of these conditions we need a piece of standard terminology. If g(x) is an IRm-valued function defined in a neighborhood of 0 and if there exists a nonnegative function η(r) defined for r > 0, with limr→0+ η(r) = 0, such that |g(x)| ≤ η(|x|)|x|α for some α ∈ IR, then we write |g(x)| = o(|x|α). We will be particularly interested in the case in which h(t, x) ∈ IRn is a function defined in some Dρ, and in which

    |h(t, x)| ≤ η(|x|)|x|

in Dρ. Then we write |h| = o(|x|) uniformly in t. The reader should verify: |h| = o(|x|) uniformly in t iff h(t, x) satisfies condition (Cη) for every η > 0.

Corollary 5.3: (a) Suppose that A is an n × n matrix for which all eigenvalues have negative real part. If C(t) is an n × n matrix function which is defined and continuous on an interval containing [0,∞) and for which the norm ‖C(t)‖ is sufficiently small, uniformly in t, then x(t) ≡ 0 is an asymptotically stable solution of

x′ = Ax+ C(t)x.

(b) Suppose that A is as in (a). If h(t, x) is defined in some Dρ and is o(|x|) uniformly in t, then x(t) ≡ 0 is an asymptotically stable solution of

x′ = Ax+ h(t, x).

(c) Suppose that F(x) is defined and continuously differentiable in some neighborhood of x = 0, satisfies F(0) = 0, and is such that all eigenvalues of (DF)0 have negative real part. Then x(t) ≡ 0 is an asymptotically stable solution of

x′ = F (x).
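For case (c), the hypothesis involves only the Jacobian of F at the critical point; the short check below uses a damped pendulum as a hypothetical example.

```python
import numpy as np

# Hypothetical example for Corollary 5.3(c): the damped pendulum
#   x1' = x2,   x2' = -sin(x1) - x2,
# has F(0) = 0; we check the eigenvalues of (DF)_0.
def DF(x):
    # Jacobian of F(x) = (x2, -sin(x1) - x2)
    return np.array([[0.0, 1.0],
                     [-np.cos(x[0]), -1.0]])

eigs = np.linalg.eigvals(DF(np.zeros(2)))
print("eigenvalues of (DF)_0:", eigs)
print("all Re(lambda) < 0:", bool(np.all(eigs.real < 0)))   # True here
```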

We have already indicated in Remark 4.3(a) that we do not expect stability (in the sense of Definition 5.1) of the solution of the linear problem to imply stability for the non-linear problem. But there is a theorem corresponding to Theorem 5.2 for unstable solutions.

Theorem 5.4: Suppose that the matrix A(t) in (5.3) is an n × n constant matrix A with eigenvalues λ1, . . . , λn, and that at least one λi has positive real part. Then if the function h(t, x) satisfies condition (Cη) for some sufficiently small η, x(t) ≡ 0 is an unstable solution of the system (5.3).

The proof is somewhat complicated algebraically and requires reduction of A to an appropriate canonical form; we begin by defining this form and then prove that the reduction is possible.

Definition 5.2: Suppose that γ ∈ IR, γ ≠ 0. An n × n matrix K is in γ-modified real canonical form if

    K = [ K1                ]
        [      K2           ]
        [           ⋱       ]    (5.7a)
        [                Km ] ,

where each block Ki has the form

    Ki = [ Λi   γI                ]
         [       Λi   γI          ]
         [             ⋱    ⋱     ]    (5.7b)
         [                  Λi  γI ]
         [                      Λi ] ,

with all unmarked entries zero. Here either (i) Λi = λi, with λi a real eigenvalue of K, and I = 1 is the 1 × 1 identity, or (ii)

    Λi = [  Re λi   Im λi ]
         [ −Im λi   Re λi ] ,

with λi a complex eigenvalue of K, and I is the 2 × 2 identity matrix.

Lemma 5.5: Suppose that γ ≠ 0. Then every n × n matrix A is similar to a matrix K in γ-modified real canonical form (5.7).

Proof: By our construction of the real canonical form of A we know that there exists a basis of IRn composed of vectors {uij, vij, wij} which satisfy

    Auij = λi uij + ui,j−1,
    Avij = αi vij − βi wij + vi,j−1,
    Awij = αi wij + βi vij + wi,j−1,

where it is understood that the last terms are absent if j = 1. If we define ūij = γj uij, v̄ij = γj vij, and w̄ij = γj wij, then multiplying these equations by γj yields

    Aūij = λi ūij + γ ūi,j−1,
    Av̄ij = αi v̄ij − βi w̄ij + γ v̄i,j−1,
    Aw̄ij = αi w̄ij + βi v̄ij + γ w̄i,j−1,

and A has the form (5.7) in the new basis.

Our next result shows that the change of variables necessary to reduce A to canonical form does not affect the nature of the stability problem. We state it in terms of a more general, time-dependent, coordinate change, since that generality will be needed in the next section.

Lemma 5.6: Suppose that Q(t) is an n × n matrix defined on I ⊃ [0,∞), and that Q is continuously differentiable and has continuous inverse. Then x(t) is a solution of (5.3) iff x(t) = Q(t)y(t) with y(t) a solution of

y′ = B(t)y + g(t, y), (5.8)

where B(t) = Q(t)−1[A(t)Q(t) − Q′(t)] and g(t, y) = Q−1(t)h(t, Q(t)y). Moreover, if also ‖Q(t)‖ ≤ M1 and ‖Q−1(t)‖ ≤ M2 for all t ∈ I, then (i) if h satisfies condition (Cη) then g satisfies condition (CM1M2η), and vice versa; (ii) 0 is a stable, asymptotically stable, or unstable solution of (5.3) iff 0 is a stable, asymptotically stable, or unstable solution, respectively, of (5.8).

Proof: This is a straightforward verification which we leave to the reader.

Proof of Theorem 5.4: Let µ > 0 be the minimum among the positive real parts of eigenvalues of A. Set γ = µ/6 and suppose that A = QKQ−1 with K of the form (5.7). Then Lemma 5.6 immediately implies that it suffices to prove the theorem with A replaced by K, that is, for the special system

x′ = Kx+ h(t, x). (5.9)

Let us label the components of x corresponding to a block Ki as (xij)1≤j≤ni, where xij ∈ IR in case (i) of Definition 5.2 and xij = (xij1, xij2)T ∈ IR2 in case (ii). Then for any solution x(t) of (5.9),

    x′ij = Λi xij + γ xi,j+1 + hij(t, x),

where the term involving γ is missing if j = ni. Suppose that the blocks of K are numbered so that Re λi > 0 if 1 ≤ i ≤ m′ and Re λi ≤ 0 otherwise. If x(t) is a solution of (5.9) we define

    R²(t) = ∑_{i=1}^{m′} ∑_{j=1}^{ni} xTij xij   and   r²(t) = ∑_{i=m′+1}^{m} ∑_{j=1}^{ni} xTij xij.

(Note xTij xij = xij² if xij ∈ IR, xTij xij = xij1² + xij2² if xij ∈ IR2.) The trick of the proof is to find some quantity which must increase exponentially in time; we will show that R(t) − r(t) has this property.

We begin by deriving an estimate for R′(t), using the formula

    (d/dt) R² = 2RR′ = 2 ∑_{i=1}^{m′} ∑_{j=1}^{ni} [ xTij Λi xij + γ xTij xi,j+1 + xTij hij ].    (5.10)

We estimate the three terms in (5.10) in turn. First, it is an easy calculation to see that

    xTij Λi xij = Re λi · xTij xij ≥ µ xTij xij.    (5.11a)

Second, by the Cauchy–Schwarz inequality,

    | ∑_{i=1}^{m′} ∑_{j=1}^{ni−1} xTij xi,j+1 | ≤ ( ∑_{i=1}^{m′} ∑_{j=1}^{ni−1} xTij xij )^{1/2} ( ∑_{i=1}^{m′} ∑_{j=2}^{ni} xTij xij )^{1/2} ≤ R²(t).    (5.11b)

Finally, note that for i ≤ m′ the numbers |xij|, |xij1|, and |xij2| are at most R, and that for any y ∈ IRn the Cauchy–Schwarz inequality yields |y| ≤ √n (∑ yk²)^{1/2}. Hence condition (Cη) on h implies

    | ∑_{i=1}^{m′} ∑_{j=1}^{ni} xTij hij | ≤ R|h| ≤ ηR|x| ≤ η√n R(R² + r²)^{1/2} ≤ η√n R(R + r).    (5.11c)

Inserting (5.11) into (5.10) yields RR′ ≥ µR² − γR² − η√n R(R + r), or

    R′ ≥ (µ − γ)R − η√n (R + r).    (5.12)

A very similar calculation shows that

    r′ ≤ γr + η√n (R + r),    (5.13)

and hence, subtracting (5.13) from (5.12), we have the estimate

    (R − r)′ ≥ (µ − γ − 2η√n)R − (γ + 2η√n)r.    (5.14)

We now verify that 0 is an unstable solution of (5.9) whenever h satisfies condition (Cη) with η ≤ µ/(6√n). If Dρ is the neighborhood specified in (Cη) let ε = ρ/2; we will show that for any δ > 0 there is an x1 with |x1| < δ such that, if x(t) is a solution of (5.9) with x(0) = x1, then |x(t)| > ε for some t > 0. To see this, it suffices to choose x1 so that c ≡ R(0) − r(0) > 0. Now with γ = µ/6 and η = µ/(6√n), (5.14) becomes

    (R − r)′(t) ≥ (µ/2)(R − r)(t);

this equation certainly holds for those t for which |x(s)| ≤ ρ for 0 ≤ s ≤ t. Hence, for such t,

    R(t) − r(t) ≥ (R(0) − r(0)) etµ/2 ≥ c etµ/2.    (5.15)

Now consider t = (2/µ) log(ε/c). (5.15) can be false for this t only if |x(s)| ≥ ρ > ε for some s ≤ t; otherwise, (5.15) is valid and implies that |x(t)| ≥ R(t) ≥ (R − r)(t) ≥ ε. In either case the instability of 0 is verified.

Remark 5.2: There is an immediate corollary of this theorem which is exactly parallel to Corollary 5.3 above; we omit a detailed statement.
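The content of Theorem 5.4 can also be seen numerically. In the hypothetical system sketched below, A has eigenvalues +1 and −1 and h is quadratic, so arbitrarily small initial data (off the contracting direction) are eventually driven a fixed distance from 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical illustration of Theorem 5.4: one eigenvalue of A is positive.
A = np.array([[1.0,  0.0],
              [0.0, -1.0]])

def rhs(t, x):
    return A @ x + 0.1 * np.array([x[0] * x[1], x[0] ** 2])

def escape(t, x):
    return np.linalg.norm(x) - 1.0     # stop once |x(t)| reaches 1
escape.terminal = True

for delta in (1e-2, 1e-4, 1e-6):
    sol = solve_ivp(rhs, (0.0, 100.0), [delta, delta],
                    events=escape, rtol=1e-9, atol=1e-12)
    t_escape = sol.t_events[0][0]
    print(f"|x(0)| ~ {delta:.0e}:  |x(t)| reaches 1 at t ~ {t_escape:.1f}")
```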

5.2 Stability of periodic solutions of non-autonomous systems

In this section we again consider equation (5.1), x′ = f(t, x); here we study linearization near a non-constant solution. Rather than postulating directly (as we did in Section 5.1) that a linearization exists near a specific solution, we will assume that f(t, x) is continuously differentiable in the variables x, that is, that Dxf exists and is continuous in the domain D, so that we may always expand

f(t, x + y) = f(t, x) +Dxf(t,x)y + e(t, x, y), (5.16)

with limy→0 |y|−1 e(t, x, y) = 0 (see Theorem 2.14).

Suppose now that x̄(t) is some solution of (5.1) defined on [0,∞). If we set A(t) = Dxf(t,x̄(t)) and define h(t, y) = e(t, x̄(t), y) in D′ = {(t, y) | (t, y + x̄(t)) ∈ D}, then (5.16) becomes

    f(t, x̄(t) + y) = f(t, x̄(t)) + A(t)y + h(t, y);

we know that |h(t, y)| = o(|y|), although not necessarily uniformly in t. Thus if x(t) is any other solution of (5.1), and

    y(t) = x(t) − x̄(t),

then y satisfies y′(t) = x′(t) − x̄′(t) = f(t, x̄(t) + y(t)) − f(t, x̄(t)), or

y′ = A(t)y + h(t, y). (5.17)

Equation (5.17) is called the variational equation of (5.1), relative to the solution x̄(t); its linearization y′ = A(t)y is called the linear variational equation. It is straightforward to verify

Lemma 5.7: x̄(t) is a stable solution of (5.1) in D iff 0 is a stable solution of (5.17) in D′.

Remark 5.3: The linear variational equation y′ = A(t)y is essentially (2.25b), the linear differential equation we solved in Chapter II to find J(t;T,X) ≡ DX x(t;T,X). In each case, the equation may be thought of as describing the time evolution of an infinitesimal perturbation of the initial value x(0). To see the connection more directly, suppose that x̄(t) and x(t) above correspond to x(t;T,X) and x(t;T,X + εY), respectively. Then

    J(t;T,X)Y = lim_{ε→0} ε−1[x(t;T,X + εY) − x(t;T,X)] = lim_{ε→0} ε−1 yε(t),    (5.18)

and it follows formally from (5.17) and (5.18) that

    J′Y = lim_{ε→0} [ A(t)(ε−1 yε(t)) + ε−1 h(t, yε(t)) ] = AJY,

since |h(t, yε)| = o(|yε|) = o(ε), and this is (2.25b).

We now consider the special case in which f(t, x) is periodic in t and x̄(t) is a periodic solution of (5.1) with rationally related period. Specifically, we assume that, for some τ > 0,

    f(t + τ, x) = f(t, x)   and   x̄(t + τ) = x̄(t),

and (implicitly) that (t, x) ∈ D iff (t + τ, x) ∈ D; τ is not necessarily the minimal period of either x̄ or f. Then A(t) = Dxf(t,x̄(t)) is τ-periodic. The function h(t, y) in (5.17) is also periodic, and is therefore defined in some uniform neighborhood {(t, y) | |y| ≤ ρ} of y = 0 and satisfies condition (Cη) for any η > 0.

Because A(t) is periodic, we may apply the Floquet theory of Section 3.4 to the linear variational equation

y′ = A(t)y. (5.19)

Thus any fundamental matrix for (5.19) has the form P(t)etR, with P τ-periodic. Recall that by definition the characteristic multipliers of A(t) are the eigenvalues of eτR, or equivalently the numbers eτλ with λ an eigenvalue of R.

The fundamental stability result for this type of periodic solution is

Theorem 5.8: Let f(t, x) be τ-periodic and let x̄(t) be a τ-periodic solution of the system x′ = f(t, x). Then:

(a) x̄(t) is asymptotically stable if all characteristic multipliers of A(t) ≡ Dxf(t,x̄(t)) have magnitude less than one, and

(b) x̄(t) is unstable if at least one characteristic multiplier of A(t) has magnitude greater than one.

Proof: By Lemma 5.7 it suffices to analyze (5.17). We adopt the notation from Floquet theory recalled above and introduce a new variable z(t) by y(t) = P(t)z(t); by Lemma 5.6, the original problem is now reduced to the study of

z′ = P−1(t)[A(t)P (t) − P ′(t)]z + g(t, z), (5.20)

where g satisfies (Cη) for any η > 0. But from (P(t)etR)′ = A(t)P(t)etR it follows that P−1[AP − P′] = R, a constant matrix, so that (5.20) is of the type analyzed in Section 5.1. Now the conditions on the characteristic multipliers given in (a) and (b) imply respectively that either (a) all eigenvalues of R have negative real parts, or (b) at least one eigenvalue of R has positive real part. Then Theorem 5.2, in case (a), or Theorem 5.4, in case (b), implies the result.
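Numerically, the characteristic multipliers are obtained by integrating the linear variational equation over one period: if X′ = A(t)X with X(0) = I, then X(τ) = eτR, and its eigenvalues are the multipliers (this is also the content of Lemma 5.12 below). The sketch uses a damped Mathieu-type equation with hypothetical coefficients.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: characteristic multipliers of a tau-periodic A(t) as the
# eigenvalues of the monodromy matrix X(tau), where X' = A(t)X, X(0) = I.
# Hypothetical example: u'' + 0.2 u' + (1.3 + 0.3 cos t) u = 0.
tau = 2.0 * np.pi

def A(t):
    return np.array([[0.0, 1.0],
                     [-(1.3 + 0.3 * np.cos(t)), -0.2]])

def rhs(t, X_flat):
    return (A(t) @ X_flat.reshape(2, 2)).ravel()

X_tau = solve_ivp(rhs, (0.0, tau), np.eye(2).ravel(),
                  rtol=1e-10, atol=1e-12).y[:, -1].reshape(2, 2)
multipliers = np.linalg.eigvals(X_tau)
print("characteristic multipliers:", multipliers)
print("magnitudes:", np.abs(multipliers))
# For this (non-resonant, damped) choice of parameters the magnitudes
# should be less than one, consistent with decay of solutions.
```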

5.3 Stability of periodic solutions of autonomous systems

In this section we consider the autonomous system

x′ = f(x), (5.21)

where, again in order to permit linearization, we will assume that f ∈ C1(D) for some open, connected D ⊂ IRn. The analysis of the preceding section is applicable for discussion of the stability of a critical point x0 of this system; we must simply look at the eigenvalues of the derivative Dfx0. Here we want to investigate criteria for stability of periodic solutions.

Remark 5.4: Our investigation will in fact require new definitions of stability, since some of the concepts and results of the previous sections are not useful in our current context.
(a) A periodic solution of an autonomous system can never be asymptotically stable. For if x̄(t) is periodic (but not constant) and x̄(t0) = x0, then for any δ > 0, x(t) ≡ x̄(t − δ) solves the IVP x′ = f(x), x(t0) = x1 ≡ x̄(t0 − δ); by choosing δ small we may make |x1 − x0| as small as we like, but it is not true that limt→∞ |x(t) − x̄(t)| = 0.
(b) This makes it clear that Theorem 5.8(a) cannot apply in the autonomous case. We may also see directly that if x̄(t) is a periodic, non-constant solution, then A(t) ≡ Dfx̄(t) must have at least one characteristic multiplier equal to one. For differentiating x̄′(t) = f(x̄(t)) with respect to t yields [x̄′(t)]′ = Dfx̄(t) x̄′(t) = A(t)x̄′(t), so that x̄′(t) is a solution of the linear variational equation (5.19). Because P(t)etR is a fundamental matrix for (5.19), x̄′(t) = P(t)etR c for some constant vector c; then since x̄(t) and hence x̄′(t) are periodic,

    x̄′(τ) = P(τ)eτR c = x̄′(0) = P(0)c.

Since P(t) is also periodic this yields eτR c = c, i.e., eτR has 1 as an eigenvalue.
(c) The remarks in (a) and (b) above are closely related, for the multiplier found in (b) describes the behavior of infinitesimal perturbations of the initial data along the orbit itself, and its value of 1 shows that such perturbations neither shrink nor grow. The discussion in (a) describes finite perturbations of the same type, which in fact show the same behavior.

We next introduce a new type of stability which is well suited for the study of autonomous systems. For simplicity we define it only for periodic orbits.

Definition 5.3: Let x̄(t) be a periodic solution of (5.21) and let Cp be its orbit. We say that x̄(t) or Cp is orbitally stable if
(a) There exists a b > 0 such that, if d(x1, Cp) < b, then the solution x(t) of the IVP

x′ = f(x), x(0) = x1, (5.22)

is defined for all t ≥ 0.
(b) For every ε > 0 there exists a δ, with 0 < δ ≤ b, such that if d(x1, Cp) < δ then the solution x(t) of the IVP (5.22) satisfies d(x(t), Cp) < ε for all t > 0.
We say that x̄(t) or Cp is asymptotically orbitally stable if (a) and (b) hold and if, in addition:

(c) There exists a δ, with 0 < δ ≤ b, such that if d(x1, Cp) < δ then the solution x(t) of the IVP (5.22) satisfies limt→∞ d(x(t), Cp) = 0.
Finally, we say that x̄(t) or Cp is asymptotically orbitally stable with asymptotic phase if (a) and (b) hold and if, in addition:
(d) There exists a δ, with 0 < δ ≤ b, such that if d(x1, Cp) < δ then, for some σ ∈ IR, the solution x(t) of the IVP (5.22) satisfies limt→∞ |x(t) − x̄(t + σ)| = 0.

Example 5.1: To understand the force of condition (d), consider the two autonomous systems in D = IR2 \ {0}, written in polar coordinates as

    I.   r′ = 1 − r,          II.   r′ = (1 − r)³,
         θ′ = r;                    θ′ = r.

Both systems have the periodic solution rp(t) = 1, θp(t) = θ0 + t, with the unit circle as periodic orbit. The general solutions, for r(0) = r0 ≠ 1 and θ(0) = θ0, are

    I.   r(t) = 1 − (1 − r0)e−t,
         θ(t) = θ0 + t − (1 − r0)(1 − e−t);

    II.  r(t) = 1 − (1 − r0)/g(t),
         θ(t) = θ0 + t + [1 − g(t)]/(1 − r0),

where g(t) = √(1 + 2(1 − r0)²t). For each solution, r(t) → 1 as t → ∞, which is asymptotic orbital stability of the periodic solution. For system I there is an asymptotic phase: limt→∞ |r(t) − rp(t + σ)| = 0 and limt→∞ |θ(t) − θp(t + σ)| = 0 for σ = −(1 − r0). For system II, on the other hand, for any fixed σ we have θ(t) − θp(t + σ) ∼ √(2t) as t → ∞, so that there is no asymptotic phase.
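A small numerical check of the asymptotic phase in system I (a sketch; the initial data are arbitrary choices): the solution should approach the phase-shifted periodic solution, with σ = −(1 − r0).

```python
import numpy as np
from scipy.integrate import solve_ivp

# System I of Example 5.1 in polar coordinates: r' = 1 - r, theta' = r.
r0, theta0 = 0.4, 0.0
sigma = -(1.0 - r0)

def rhs(t, u):
    r, theta = u
    return [1.0 - r, r]

sol = solve_ivp(rhs, (0.0, 30.0), [r0, theta0], rtol=1e-10, atol=1e-12)
t_end = sol.t[-1]
r_end, theta_end = sol.y[:, -1]
print("r(t) - 1:", r_end - 1.0)                                   # -> 0
print("theta(t) - theta_p(t + sigma):",
      theta_end - (theta0 + t_end + sigma))                       # -> 0
```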

Now we saw in Remark 5.4 that perturbations of the initial data along the periodic orbit itself neither grow nor shrink with time. It is possible, however, that all other infinitesimal perturbations shrink, so that other solutions approach the orbit. In fact, we have

Theorem 5.9: Suppose that x̄(t) is a non-constant solution of (5.21) with period τ and orbit Cp, and that n − 1 of the characteristic multipliers of A(t) ≡ Dfx̄(t) have magnitude less than one. Then x̄(t) is asymptotically orbitally stable with asymptotic phase.

Note that this theorem may be difficult to apply since, in general, we must solve the linear variational equation to obtain the characteristic multipliers of A(t). This is in contrast to the theorems of Section 5.1, which depended only on a calculation of Dxf at points of the solution.

We will give a proof of Theorem 5.9, based on that in Hirsch and Smale, which develops the important concept of the Poincaré map associated with a periodic orbit. We begin by defining this map.

Let Cp be the orbit of a periodic solution x̄(t) of minimal period τ. Let H ⊂ IRn be a hyperplane H = {z ∈ IRn | z · n = C} which contains the point x0 ≡ x̄(0) on Cp and is transverse to Cp at this point, that is, for which the normal vector n is not parallel to f(x0). Suppose that z is a point of H which is very close to x0. Then the solution x(t) with x(0) = z will stay close to x̄(t) for a long time, say at least time 2τ if |z − x0| is small enough, and hence will intersect H again, at some time approximately equal to τ, at a point z̄ which is also close to x0. The mapping which takes z to z̄ is called the Poincaré map for the hyperplane H.

We may construct the Poincaré map more carefully as follows. Let V be a (relatively) open subset of H which contains x0 and which satisfies f(x) · n ≥ a > 0 for all x ∈ V. (V is called a section or local section of the flow Φ.) By mimicking the construction of Lemma 4.7 we may define a flow box φ : W → IRn, with W = (−ε, ε) × V for some ε > 0 and φ(t, x) = Φt(x) for (t, x) ∈ W. In fact, the construction is considerably simplified here by the fact that f and hence φ are C1; the proof that φ is well defined and 1–1 is the same, but the inverse function theorem now implies immediately that φ(W) is open and that φ−1 is also C1. Since x̄(τ) ≡ x(τ; 0, x0) = x0, continuity in initial conditions implies that there exists a (relatively) open subset U ⊂ V with x(τ; 0, z) ∈ φ(W) for z ∈ U.

Definition 5.4: For z ∈ U, define (s(z), g(z)) ∈ W by (s(z), g(z)) ≡ φ−1(x(τ; 0, z)). Then T(z) ≡ τ − s(z) is the time of first return of the point z to the hyperplane H, and g : U → V is called the Poincaré map for the hyperplane H.

It is clear that x0 is a fixed point of g, i.e., that g(x0) = x0, and that T(x0) = τ. Moreover, because x and φ−1 are C1 maps, g and T are also C1. The motivation for the introduction of the Poincaré map is that asymptotic stability of the solution x̄(t) should be reflected in the behavior of iterates of g: specifically, in the fact that gm(z) should approach x0 as m increases, for any z ∈ U sufficiently close to x0. When this is true, we will say that x0 is an attracting fixed point for g.

In fact, we will deduce Theorem 5.9 from the fact that, under the hypotheses of this theorem, x0 is an attracting fixed point for g. To prove the latter, we will first show (Lemma 5.10) that x0 is attracting if all eigenvalues of the derivative of g at x0 are less than 1 in magnitude, then verify (Lemmas 5.12 and 5.13) that the eigenvalue condition in the theorem translates immediately into an eigenvalue condition on the derivative of g. Note that this derivative Dgz, defined as usual for z ∈ H by

    (Dgz)y = lim_{h→0} h−1[g(z + hy) − g(z)],

is naturally regarded as a map from H0 to H0, where H0 = {y | y · n = 0} is the hyperplane parallel to H through the origin.

Lemma 5.10: Suppose that U ⊂ V ⊂ H as above and that g : U → V is C1. Let B = Dgx0, and suppose that all eigenvalues of B have absolute value less than 1. Then there exists a norm ‖y‖ on H0 and numbers ν < 1 and δ > 0 such that if ‖z − x0‖ < δ then z ∈ U and ‖g(z) − x0‖ ≤ ν‖z − x0‖. In particular, for ‖z − x0‖ < δ, ‖gm(z) − x0‖ ≤ νm‖z − x0‖ → 0 as m → ∞.

In the proof we will use

Lemma 5.11: Let C be an n × n complex matrix and let µ = sup |λ|, the supremum taken over all eigenvalues λ of C. Then µ = limn→∞ ‖Cn‖1/n.

The constant µ in Lemma 5.11 is called the spectral radius of the matrix C.

Proof: Since µ = |λ| for some eigenvalue λ we have Cnx = λnx for x a corresponding eigenvector; hence ‖Cn‖ ≥ µn and lim infn→∞ ‖Cn‖1/n ≥ µ. Now consider the matrix function R(z) = (C − zI)−1, which is analytic (e.g., by Cramer’s rule) for z not an eigenvalue of C, in particular, for |z| > µ. For |z| > ‖C‖ we have a convergent expansion R(z) = −∑_{n=0}^{∞} z−(n+1)Cn, from which we find

    Cn = −(1/2πi) ∮_{|z|=r} zn (C − zI)−1 dz    (5.23)

whenever r > ‖C‖. By Cauchy’s formula, then, (5.23) must hold also for any r > µ, from which ‖Cn‖ ≤ Kr rn for all n (with Kr = r sup_{|z|=r} ‖(C − zI)−1‖). Thus lim supn→∞ ‖Cn‖1/n ≤ r for any r > µ.
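A quick numerical illustration of Lemma 5.11 (with a hypothetical matrix): even when ‖C‖ itself is large, ‖Cn‖1/n settles down to the spectral radius.

```python
import numpy as np

# Sketch of Lemma 5.11: ||C^n||^(1/n) -> spectral radius of C.
C = np.array([[0.5, 10.0],
              [0.0,  0.4]])
rho = max(abs(np.linalg.eigvals(C)))          # spectral radius = 0.5
print("||C|| =", np.linalg.norm(C, 2), "   spectral radius =", rho)
for n in (1, 5, 20, 80):
    Cn = np.linalg.matrix_power(C, n)
    print(n, np.linalg.norm(Cn, 2) ** (1.0 / n))
```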

Proof of Lemma 5.10: Let µ be the spectral radius of B and choose γ with µ < γ < 1. We define ‖ ‖ by

    ‖y‖ = ∑_{k=0}^{∞} γ−k |Bky|;

it is easy to verify from Lemma 5.11 that the series converges and, from this, that ‖ ‖ is a norm (note ‖y‖ ≥ |y|). Clearly ‖By‖ ≤ γ‖y‖. Finally,

‖g(z)− x0‖ = ‖B (z − x0) + o(‖z − x0‖)‖ ≤ ν‖z − x0‖,

if γ < ν < 1 and ‖z − x0‖ is sufficiently small.

The next step is to relate the eigenvalues of Dgx0 to the hypotheses on the characteristic multipliers of the matrix A(t) given in the theorem. This is the subject of the next two lemmas.

Lemma 5.12: The characteristic multipliers of A(t) = Dfx̄(t) are the eigenvalues of DΦτ|x0, the derivative with respect to initial condition of the flow for one trip around the periodic orbit.

Proof: Let X(t) = DΦt|x0. According to (2.25), X(t) satisfies the IVP

    X′(t) = Dfx̄(t)X(t) ≡ A(t)X(t);    X(0) = I.

X(t) is thus a fundamental matrix for the linear variational equation (5.19), so that X(t) = P(t)etR, with P having period τ. But X(0) = I implies that P(0) = I and hence X(τ) = eτR. Since the characteristic multipliers of A(t) are the eigenvalues of eτR, the lemma is proved.

Now suppose that the periodic orbit x̄(t) satisfies the hypotheses of Theorem 5.9, and let X(t) be as in the proof of the previous lemma. X(τ) has 1 as a simple eigenvalue, and from Remark 5.4 we see that an eigenvector is x̄′(0) = f(x0). Let X(τ) = QKQ−1 with K in real canonical form and let H0 be the n − 1 dimensional subspace of IRn spanned by the columns of Q (generalized eigenvectors) not proportional to f(x0). Finally, let H be the hyperplane through x0 and parallel to H0. Note that H0 is invariant under X(τ), i.e., X(τ)y ∈ H0 if y ∈ H0, and that H is transverse to x̄(t) at t = 0.

Lemma 5.13: If the Poincaré map g is defined using the hyperplane H described immediately above, then Dgx0 is given by the restriction X(τ)|H0 of X(τ) to H0.

Proof: For z ∈ U, g(z) = x(T(z); 0, z) satisfies x(T(z); 0, z) · n = C. Differentiating this equation with respect to z, applying the derivative to a vector y ∈ H0, and evaluating at z = x0 yields

    [x′(τ; 0, x0) · n](DTx0)y + [X(τ)y] · n = [f(x0) · n](DTx0)y = 0,

where X(τ)y · n = 0, i.e., X(τ)y ∈ H0, because H0 is invariant under X(τ). But f(x0) · n ≠ 0, so

    DTx0|H0 = 0.    (5.24)

Then differentiating g(z) = x(T (z); 0, z) with respect to z ∈ H yields

(Dgx0 )y = f(x0)(DTx0 )y +X(τ )y = X(τ )y

for y ∈ H0.

Proof of Theorem 5.9: If we define the Poincaré map g as in Lemma 5.13, then by that lemma, Lemma 5.12, and the choice of H, all eigenvalues of Dgx0 are less than 1 in absolute value. Let ‖y‖ be the norm on H0 guaranteed by Lemma 5.10, and let δ > 0 be as in that lemma, that is, such that ‖z − x0‖ < δ implies z ∈ U and ‖g(z) − x0‖ ≤ ν‖z − x0‖. We may suppose that δ is so small that |T(z) − τ| < τ for ‖z − x0‖ < δ. For z ∈ H with ‖z − x0‖ < δ we write

    tm(z) = ∑_{k=0}^{m−1} T(gk(z));

tm is the total time for x(t; 0, z) to make m trips around the orbit.

We first verify orbital stability, and consider initially those solutions x(t) = x(t; 0, z) with z ∈ H. Given ε > 0, uniform continuity of x(t; 0, z) for ‖z − x0‖ small and t in a closed interval enables us to find δ1 > 0 such that, if ‖z − x0‖ < δ1 and t ∈ [0, 2τ], then d(x(t), Cp) < ε; we may assume that δ1 ≤ δ. Now suppose that ‖z − x0‖ < δ1; any t ≥ 0 may be written as t = tm(z) + s for some m ≥ 0 and s ∈ [0, 2τ), so that since ‖gm(z) − x0‖ ≤ ‖z − x0‖ < δ1, x(t) = x(s; 0, gm(z)) satisfies d(x(t), Cp) < ε.

Now consider a general solution x(t) = x(t; 0, x1). Choose δ2 so small that when d(x1, Cp) < δ2, then, first, d(x(t), Cp) < ε for t ∈ [0, 2τ], and, second, x(t) must intersect H at a point z = x(t0) with t0 ∈ [0, 2τ) and ‖z − x0‖ < δ1 (to see that this is possible one may use a flow box as in Definition 5.4). Then the argument above shows that d(x(t), Cp) < ε for t ≥ t0, and our choice of δ2 guarantees this for 0 ≤ t ≤ t0. This completes the proof that x̄(t) is orbitally stable.

It remains to show that x(t) has asymptotic phase if d(x1, Cp) is sufficiently small; the argument of the previous paragraph shows that it suffices to consider solutions such that x(0) = z ∈ H. Suppose that ‖z − x0‖ < δ. Now from the definition of tm above we may expect a total phase shift of x(t) relative to x̄(t), as t → ∞, of

    σ ≡ lim_{m→∞} [mτ − tm(z)] = ∑_{k=0}^{∞} [τ − T(gk(z))].    (5.25)

Now

    |T(gk(z)) − τ| = |T(gk(z)) − T(x0)| ≤ M‖gk(z) − x0‖ ≤ M νk ‖z − x0‖,

with M a bound on ‖DT(y)‖ for ‖y − x0‖ < δ, so that σ as defined by (5.25) is finite. Then

    ‖x(tm) − x̄(tm + σ)‖ ≤ ‖x(tm) − x̄(mτ)‖ + ‖x̄(mτ) − x̄(tm + σ)‖
                        = ‖gm(z) − x0‖ + ‖x̄(0) − x̄(tm + σ − mτ)‖
                        ≤ νm‖z − x0‖ + ‖x̄(0) − x̄(tm + σ − mτ)‖,

and since limm→∞ (tm + σ − mτ) = 0 and ν < 1, we have limm→∞ ‖x(tm) − x̄(tm + σ)‖ = 0. From this it follows by continuity that limt→∞ ‖x(t) − x̄(t + σ)‖ = 0.

Example 5.2: The Van der Pol oscillator. Our discussion of the Van der Pol oscillator in Chapter IV shows that the limit cycle there is asymptotically orbitally stable; this is, in fact, a global stability property in IR2 \ {0}, since every solution initiating in that domain tends to the limit cycle. Here we will show that this limit cycle satisfies the hypotheses of Theorem 5.9 and hence has asymptotic phase. Note that the map φ : IR+ → IR+ which we defined in our previous discussion is in fact just the Poincaré map for the limit cycle, constructed using the half space H = {(x, y) | y = 0} and taking z0 = (c0, 0). Thus by Lemmas 5.12 and 5.13 we may verify that one characteristic multiplier of A(t) has magnitude less than 1 by showing that φ′(c0) < 1.

Now φ = ψ ∘ ψ, where ψ is the map corresponding to a trip halfway around the origin, so that

    φ′(c0) = ψ′(ψ(c0)) ψ′(c0) = [ψ′(c0)]².

On the other hand, the function F(c) = c² − ψ(c)² introduced in the earlier discussion satisfies

F ′(c0) = 2c0 − 2ψ(c0)ψ′(c0) = 2c0(1 − ψ′(c0));

thus it suffices to show that F′(c0) > 0 (recall that ψ is increasing so that ψ′(c0) > 0). We have the decomposition F = F1 + F2 + F3 where each of the Fj is non-decreasing; hence it suffices to show that

    F′1(c0) = 2 ∫_0^1 [(y² − y⁴)/(xc0(y) + y − y³)²] (dxc(y)/dc)|_{c=c0} dy    (5.26)

is strictly positive. But we know that xc(y) is non-decreasing in c for 0 ≤ y ≤ 1, and hence the integrand in (5.26) is non-negative. Moreover, xc|_{y=0} = c, so that

    (dxc(y)/dc)|_{c=c0, y=0} = 1,

and the integrand is strictly positive in some neighborhood of y = 0. This completes the verification.

It is instructive to compute the eigenvalues of

    Dzf = [ 0        −1     ]
          [ 1    1 − 3y²    ]

at various points of the plane. The product of the eigenvalues is always 1. For y² < 1 the eigenvalues are complex and lie on the unit circle, with positive real part for y² < 1/3 and negative real part for 1/3 < y² < 1, and for 1 < y² the eigenvalues are real and negative. This means that for y² < 1/3 nearby points are pulled apart by the flow; this is basically unstable behavior. The limit cycle passes through this region; nevertheless, it also passes through the region y² > 1/3, where nearby points are pushed toward each other. The net result of these competing tendencies is shown in the characteristic multipliers of A(t), which give the behavior after one complete trip around the cycle: one multiplier is 1, corresponding to no net contraction or expansion in the direction of the flow, and one is less than 1, corresponding to a contraction perpendicular to the flow.
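The nontrivial multiplier can also be estimated numerically. The sketch below assumes the system x′ = −y, y′ = x + y − y³ (the autonomous system consistent with the Jacobian Dzf displayed above); the tolerances and time spans are arbitrary choices. It integrates onto the limit cycle, estimates the period from successive crossings of y = 0 in a fixed direction, and then integrates the variational equation around one period.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, z):
    x, y = z
    return [-y, x + y - y**3]

def jac(z):
    x, y = z
    return np.array([[0.0, -1.0],
                     [1.0, 1.0 - 3.0 * y**2]])

# run off the transient so that the state is essentially on the limit cycle
z0 = solve_ivp(f, (0.0, 200.0), [2.0, 0.0], rtol=1e-10, atol=1e-12).y[:, -1]

def section(t, z):           # event: y = 0, crossed with y increasing
    return z[1]
section.direction = 1.0

ev = solve_ivp(f, (0.0, 50.0), z0, events=section, rtol=1e-10, atol=1e-12)
t1, t2 = ev.t_events[0][:2]
z1 = ev.y_events[0][0]
tau = t2 - t1                # estimated period of the cycle

def rhs(t, w):               # state together with the variational matrix
    z, Phi = w[:2], w[2:].reshape(2, 2)
    return np.concatenate([f(t, z), (jac(z) @ Phi).ravel()])

w0 = np.concatenate([z1, np.eye(2).ravel()])
M = solve_ivp(rhs, (0.0, tau), w0,
              rtol=1e-10, atol=1e-12).y[2:, -1].reshape(2, 2)
print("period ~", tau)
print("multipliers:", np.linalg.eigvals(M))  # one ~ 1, one of magnitude < 1
```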

5.4 The Second Method of Lyapunov

We begin our discussion of the second, or direct, method of Lyapunov by considering the stability of a critical point, taken by convention to be x = 0, of an autonomous system x′ = f(x), f ∈ C0(D). If W is a continuously differentiable scalar function defined in some neighborhood of 0 in D we define Ẇ(x) in this neighborhood by

    Ẇ(x) = DWx f(x) ≡ ∑_{i=1}^{n} (∂W/∂xi)(x) fi(x),

so that if x(t) is a solution of the system then

    (d/dt) W(x(t)) = Ẇ(x(t)).

We say that a function W defined in a neighborhood of x = 0 is positive semidefinite if it is continuous, nonnegative, and satisfies W(0) = 0; it is positive definite if in addition W(x) = 0 only for x = 0. Negative semidefinite and negative definite functions are defined similarly. Note that if W is a quadratic form, W(x) = xTBx for some symmetric matrix B, then this terminology corresponds with the usual notions of definite and semidefinite matrices.

The basic result of Lyapunov for this system is that:
(a) x = 0 is stable if there is a C1 function W(x), defined in a neighborhood of 0, such that W is positive definite and Ẇ is negative semidefinite.
(b) x = 0 is asymptotically stable if a W as above exists for which Ẇ is negative definite.

We will not give a formal proof of this result, which follows from the theorem for time-dependent systems which we give below, but we point out that simple geometric considerations make the result “obvious.” The contours W(x) = λ must appear as in

[Figure 5.1: nested closed contours W(x) = λ surrounding the origin.]

Figure 5.1 (drawn for n = 2); as λ decreases to 0 these contours shrink down to the origin. Because Ẇ is not positive, W(x(t)) is a non-increasing function of t when x(t) is a solution, so that once the solution is inside some contour W(x) = λ it can never escape; this is stability. If Ẇ is negative definite then W(x(t)) is strictly decreasing (assuming that x(t) is not the zero solution) and thus x(t) must cross all contours and approach the origin as t → ∞; this is asymptotic stability.
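A small symbolic check (with a hypothetical system, chosen so that the linearization at 0 is a center and the eigenvalue criterion of Theorem 5.2 gives no information): W(x, y) = x² + y² is positive definite and Ẇ is negative definite, so 0 is asymptotically stable by the direct method.

```python
import sympy as sp

# Hypothetical system:  x' = -y - x^3,  y' = x - y^3.
x, y = sp.symbols('x y', real=True)
f = sp.Matrix([-y - x**3, x - y**3])
W = x**2 + y**2

# W-dot = (dW/dx) f1 + (dW/dy) f2, the derivative of W along solutions
Wdot = sp.expand((sp.Matrix([W]).jacobian([x, y]) * f)[0])
print("W-dot =", Wdot)       # -2*x**4 - 2*y**4, negative definite
```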

Now we try to generalize these considerations to time-dependent systems. We will consider the equation

x′ = f(t, x), (5.27)

assume that f is defined and continuous in the domain Dρ ≡ {(t, x) | t ∈ I and |x| < ρ} for some ρ > 0 and I ⊃ [0,∞) and that f(t, 0) ≡ 0, and study the stability of the solution x(t) ≡ 0. We must introduce time-dependent Lyapunov functions V(t, x) which play the role of the functions W(x) in the autonomous case, and will always assume that V is defined in Dρ (it would suffice to have V defined in Dρ′ for some ρ′ < ρ, but in this case we simply replace Dρ by Dρ′ as our fundamental domain). If V ∈ C1(Dρ) we define V̇ ∈ C0(Dρ) by

    V̇(t, x) = DxV(t,x) f(t, x) + DtV(t,x) ≡ ∑_{i=1}^{n} (∂V/∂xi)(t, x) fi(t, x) + (∂V/∂t)(t, x),

so that again, if x(t) solves (5.27), then

    (d/dt) V(t, x(t)) = V̇(t, x(t)).

Definition 5.5: A function V(t, x) defined in Dρ is
— positive semidefinite if it is continuous, nonnegative, and satisfies V(t, 0) ≡ 0;
— positive quasidefinite if it is positive semidefinite and if V(t, x) = 0 only for x = 0;
— positive definite if it is positive semidefinite and if V(t, x) ≥ W(x) for some positive definite W(x) defined for {|x| < ρ}.
Negative semidefinite, negative quasidefinite, and negative definite functions are defined similarly.

Remark 5.5: Note that in defining positive definite functions V(t, x) we make use of our earlier definition of positive definiteness for functions of x alone. The terminology “positive quasidefinite” is not standard; it seems the most natural generalization of the earlier definition of positive definiteness for time-independent functions, but in fact is useful primarily for conceptual purposes. A positive definite function may be thought of as uniformly positive quasidefinite.

Theorem 5.14: The solution x(t) ≡ 0 of (5.27) is

(a) stable, if there exists a positive definite function V ∈ C1(Dρ) for which V̇(t, x) is negative semidefinite;

(b) asymptotically stable, if a V as above exists for which V̇ is negative definite and for which, in addition, there exists a positive definite function W1(x) defined for {|x| < ρ} and such that V(t, x) ≤ W1(x) in Dρ.

When the last condition of (b) is satisfied it is said that V(t, x) has an infinitesimal upper bound. We will prove this theorem shortly, but first comment on the nature of the hypotheses. The obvious generalization of the autonomous result sketched above would be that stability would follow from the existence of a positive quasidefinite V(t, x) with V̇(t, x) negative semidefinite, and asymptotic stability from the additional requirement that V̇ be negative quasidefinite. Instead, the theorem makes additional hypotheses of uniformity: for stability, the extra hypothesis is

(U1) V is positive definite, i.e., uniformly positive quasidefinite,

and for asymptotic stability, the hypotheses are (U1) as well as

(U2) V̇ is negative definite, i.e., uniformly negative quasidefinite,

and

(U3) V has an infinitesimal upper bound.

We will show by example that none of these uniformity hypotheses may be omitted (although it is possible that they may be replaced by alternate ones). All our examples will involve homogeneous linear equations in one unknown function (n = 1).

Example 5.3: (a) Consider the equation x′ = λx for x ∈ IR and λ > 0. The general solution of this equation is x(t) = x0 eλt, so that the origin is certainly not stable. On the other hand, if α > 2λ then the function V(t, x) ≡ x²e−αt is positive quasidefinite and V̇(t, x) = (2λ − α)x²e−αt is negative semidefinite. Part (a) of Theorem 5.14 is not contradicted because V does not satisfy (U1).

The next two examples have the following general character: g(t) is a C1 function defined and strictly positive on some open interval I containing [0,∞), and the differential equation considered is

    x′ = (g′(t)/g(t)) x,    (5.28)

with general solution x(t) = x0 g(t).

(b) Let g(t) = (2 + t)/(1 + t), so that 0 is a stable but not asymptotically stable solution of (5.28). Define V(t, x) = x². Then (U1) and (U3) are certainly satisfied, but

    V̇(t, x) = 2x² g′(t)/g(t) = − 2x² / [(1 + t)(2 + t)],

so that (U2) is violated.

(c) Let G(t) be a strictly positive C1 function, defined on I ⊃ [0,∞) and satisfying (i) G(t) ≤ M for some M > 0, (ii) G(n) ≥ 1 for all n ∈ ZZ, n ≥ 0, and (iii) for some C > 0,

    I(t) ≡ ∫_t^∞ G(s) ds ≤ C G(t).

Note that since I(t) is finite and decreasing we must have I(t) → 0 as t → ∞, and there must exist a sequence {tk} with tk → ∞ and G(tk) → 0. G should be pictured as a rapidly decreasing function of t on which have been superimposed very thin bumps at the integers; we leave it as an exercise to verify that

    G(t) = e−t + ∑_{n=1}^{∞} 1/(1 + 10^n (t − n)²)

is one possible choice. Let g(t) = [G(t)]^{1/2}; then again 0 is a stable but not asymptotically stable solution of (5.28). For a ≥ 0 we define

    Va(t, x) = (x²/G(t)) [a + I(t)];

by direct calculation we find that V̇a(t, x) = −x², so that (U2) is satisfied. If a = 0 then V0 ≤ Cx², so that (U3) is satisfied, but V0(n, x) ≤ I(n)x², so that (U1) fails. On the other hand, if a > 0 then Va(t, x) ≥ ax²/M, so that (U1) is satisfied, but Va(tk, x) ≥ ax²/G(tk), so that (U3) fails.

We conclude that all uniformity hypotheses in Theorem 5.14 are necessary.

Proof of Theorem 5.14: We begin with a preliminary observation. Suppose that W(x) is positive definite in {|x| < ρ}, and that ε > 0 satisfies ε < ρ. Then

    λ(ε) ≡ inf_{ε≤|x|<ρ} W(x)    (5.29)

is strictly positive, or, more precisely, we may without loss of generality insure that λ(ε) > 0 by decreasing ρ slightly if necessary. We will always assume that this has been done. Then (5.29) says that for any ε > 0 there exists a λ = λ(ε) such that if W(x) < λ (and |x| < ρ) then |x| < ε.

We now prove part (a) of the theorem. Since by hypothesis V(t, x) is positive definite, there exists a positive definite W(x) with W(x) ≤ V(t, x) in Dρ. Given ε > 0, we must find a δ = δ(ε) > 0 such that, if |x1| < δ and x(t) is a solution of (5.27) with x(0) = x1, defined on a maximal interval (a, b), then b = ∞ and |x(t)| < ε for all t ≥ 0. We may suppose without loss of generality that ε < ρ. Let λ > 0 be a number such that |x| < ε if W(x) < λ, and let δ > 0 be chosen so that |x1| < δ implies V(0, x1) < λ. Since V(0, x1) < λ and (d/dt)V(t, x(t)) = V̇(t, x(t)) ≤ 0, we have λ > V(t, x(t)) ≥ W(x(t)) and hence |x(t)| < ε, for all t ∈ [0, b). Now the standard argument from our extension theorem (Theorem 2.19) shows that b = ∞.

We now turn to part (b) of the theorem. We are given that W(x) ≤ V(t, x) ≤ W1(x) and that V̇(t, x) ≤ −W2(x) in Dρ, for some positive definite functions W, W1, and W2. Take ε < ρ; we know from (a) that there is a δ > 0 such that if |x1| < δ and x(t) is a solution of (5.27) with x(0) = x1 then |x(t)| < ε for all t > 0. We will verify that in this case also limt→∞ x(t) = 0. Because V̇ ≤ 0, V(t, x(t)) is monotonic non-increasing; if V(t, x(t)) decreases to zero then so does W(x(t)), and the observation above implies that x(t) → 0. Suppose then that V(t, x(t)) ↘ λ > 0. Then W1(x(t)) ≥ λ for all t, and continuity of W1 implies that there exists an ε′ > 0 with |x(t)| > ε′ for all t. But now, again by the observation above, there is a λ′ with W2(x(t)) > λ′ for all t; this implies that V̇(t, x(t)) ≤ −λ′ for all t and hence that V(t, x(t)) ≤ V(0, x1) − λ′t, contradicting V(t, x(t)) ≥ λ.

Remark 5.6: We can use the preceding theorem to give a new proof of Theorem 5.2, which asserted that the solution x ≡ 0 of

x′ = Ax+ h(t, x),

is asymptotically stable if all eigenvalues of A have negative real part and if h(t, x) satisfies condition (Cη) for some sufficiently small η. To do so, let −µ < 0 be the maximum real part of any eigenvalue, choose γ with 0 < γ < µ, let A = QKQ−1 with K in γ-modified real canonical form, let x have coordinates xij in the basis formed by the columns of Q, and define

    V(x) = ∑_{ij} xTij xij.

V is clearly positive definite with infinitesimal upper bound (take W = W1 = V in the notation of the proof of Theorem 5.14(b)), and

    V̇(t, x) = 2 ∑_{ij} [ xTij Λi xij + γ xTij xi,j+1 + xTij hij(t, x) ] ≤ −2[µ − γ − η√n] V(x),

where we have estimated just as in the proof of Theorem 5.4. Thus if η is chosen so small that µ − γ − η√n is positive, then V̇ is negative definite and Theorem 5.2 follows from Theorem 5.14.

The subject of Lyapunov’s second method is a wide one, and there are many results not given here: other types of stability may be established via Lyapunov functions, the hypotheses of Theorem 5.14 may be altered, “inverse theorems” may be proved which show that a Lyapunov function must exist if stability holds, Lyapunov functions may be used to prove instability, etc. Some of these are discussed in Cronin or may be tracked down through the references there; a few are included in our problems.

5.5 Stable and Unstable Manifolds

In this section we sketch an introduction to the important topic of stable and unstable manifolds by considering the special case of a critical point of a C1 autonomous system. As usual, we write the system as

x′ = f(x), (5.30)

and for simplicity take the critical point to be x = 0. We will suppose throughout that x = 0 is a hyperbolic critical point (see Remark 4.3), that is, that A ≡ Df0 has no eigenvalues with real part zero. We let k denote the total multiplicity of all eigenvalues with negative real part, so that by hyperbolicity n − k is the total multiplicity of eigenvalues with positive real part.

We begin by discussing the stable and unstable manifolds for the linearized system

x′ = Ax. (5.31)

We decompose IRn into the (direct) sum of two subspaces, writing IRn = Ls ⊕ Lu, where Ls is the k dimensional subspace spanned by all eigenvectors and generalized eigenvectors of A corresponding to eigenvalues of negative real part, and Lu, of dimension n − k, is defined similarly for the remaining eigenvalues. Ls and Lu are invariant under A and hence under etA, that is, since x(t; t0, x0) = e(t−t0)A x0, they are invariant sets for (5.31) in the sense of Chapter IV. If x0 ∈ Ls then limt→∞ x(t; 0, x0) = 0; on the other hand, if x0 ∉ Ls, then the expansion of x0 in generalized eigenfunctions of A will contain some generalized eigenfunctions for eigenvalues with positive real part, so that limt→∞ |x(t)| = ∞. Thus

    Ls = {x0 ∈ IRn | limt→∞ x(t; 0, x0) = 0}.    (5.32a)

For this reason Ls is called the stable subspace or stable manifold for the origin in the system (5.31). Similarly, Lu may be characterized by

    Lu = {x0 ∈ IRn | limt→−∞ x(t; 0, x0) = 0},    (5.32b)

and is called the unstable manifold.
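Numerically, Ls and Lu can be computed from an ordered (real) Schur decomposition: with the eigenvalues of negative real part grouped first, the leading Schur vectors span Ls. The matrix below is a hypothetical example.

```python
import numpy as np
from scipy.linalg import schur

# Hypothetical hyperbolic example: eigenvalues 1, -1, -2, so k = 2.
A = np.array([[ 1.0, 2.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [ 0.0, 0.0,-2.0]])

# ordered real Schur form: eigenvalues in the left half plane come first
T, Z, k = schur(A, output='real', sort='lhp')
print("dim L_s =", k)
print("orthonormal basis of L_s (columns):\n", Z[:, :k])

# the unstable subspace, analogously, from the right-half-plane ordering
T2, Z2, k2 = schur(A, output='real', sort='rhp')
print("dim L_u =", k2)
print("orthonormal basis of L_u (columns):\n", Z2[:, :k2])
```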

Remark 5.7: (a) Note that Lu is not the set of initial conditions x0 for which x(t; 0, x0) is unbounded as t → ∞; the latter set is just the complement of Ls and is not a linear subspace. Initial conditions which do not lie in either Ls or Lu lead to trajectories which become unbounded both as t approaches ∞ and as t approaches −∞.
(b) Figure 5.2(a) shows a typical configuration of Ls and Lu in the case n = 2, k = 1; this is just the saddle point we studied in Section 4.2. Figure 5.2(b) illustrates the case n = 3, k = 2. In this latter case, the configuration within Ls might equally well be a stable node rather than the stable spiral shown.

We now ask to what extent this picture persists in the full system (5.30). The answeris that near the critical point 0 it survives with one minor change—the linear subspaces Lsand Lu are replaced by more general manifolds of the same dimension. To describe these

87

Page 21: Eugene Speer - Lecture 5

NOTES: ODE SPRING 1998

.................................................................................................................................................................................................................................................................................................................................................................................................................................. ..........................

....................................................

[Figure: two panels, (a) and (b), with the subspaces Ls and Lu indicated.]
Figure 5.2: Stable and unstable subspaces.

manifolds it is convenient to use a special norm ‖ · ‖ in IR^n, which we will define shortly. Then for ε > 0 we define local stable and unstable sets, W^ε_s and W^ε_u respectively, for the critical point 0, by

W^ε_s = {x0 ∈ IR^n | ‖x(t; 0, x0)‖ ≤ ε for t ≥ 0, and lim_{t→∞} x(t; 0, x0) = 0},   (5.33a)

W^ε_u = {x0 ∈ IR^n | ‖x(t; 0, x0)‖ ≤ ε for t ≤ 0, and lim_{t→−∞} x(t; 0, x0) = 0}.   (5.33b)
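For orientation (a remark added here for illustration, not part of the notes): if f is linear, f(x) = Ax, then x(t; 0, x0) = e^{tA}x0, and using the estimates (5.35) below one checks that W^ε_s = {x0 ∈ Ls | ‖x0‖ ≤ ε} and W^ε_u = {x0 ∈ Lu | ‖x0‖ ≤ ε}; the local stable and unstable sets are then exactly the pieces of Ls and Lu inside the ε-ball, as in Figure 5.2.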

Our main theorem describes these sets very explicitly. To express it we introduce the following notation: since we know that each x ∈ IR^n may be written uniquely as x = y + z with y ∈ Ls and z ∈ Lu, we treat y and z as coordinates and write x = (y, z). We will write Bε ≡ {x ∈ IR^n | ‖x‖ ≤ ε}.
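For example (a matrix chosen here just to illustrate the coordinates): for the 2 × 2 matrix A with rows (1, 3) and (1, −1), the eigenvalues are ±2, Ls = span{(1, −1)} and Lu = span{(3, 1)}, and every x ∈ IR² has a unique decomposition x = y + z with y ∈ Ls and z ∈ Lu; in general these coordinates are not the standard ones.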

Theorem 5.15: For sufficiently small ε there exists a C1 mapping φ : Bε ∩ Ls → Lu, with φ(0) = 0 and Dφ0 = 0, such that

W^ε_s = {(y, φ(y)) | y ∈ Bε ∩ Ls}.   (5.34a)

Similarly, there exists a ψ : Bε ∩ Lu → Ls with ψ(0) = 0 and Dψ0 = 0, such that

W^ε_u = {(ψ(z), z) | z ∈ Bε ∩ Lu}.   (5.34b)
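As a worked example (supplied here for illustration; it does not appear in the notes), consider the planar system

y′ = −y,   z′ = z + y²,

so that A = diag(−1, 1), Ls is the y-axis, Lu is the z-axis, and g(y, z) = (0, y²) satisfies g(0) = 0, Dg(0) = 0. The curve z = −y²/3 is invariant, since along it z + y² = (2/3)y² = (d/dt)(−y²/3), and solutions on it satisfy y(t) = y(0)e^{−t} → 0, z(t) = z(0)e^{−2t} → 0. Solving the z equation explicitly gives z(t) = e^{t}[z(0) + y(0)²/3] − (y(0)²/3)e^{−2t}, so these are the only solutions which remain bounded as t → ∞; thus here φ(y) = −y²/3, and indeed φ(0) = 0 and Dφ0 = 0 as the theorem asserts. Similarly the z-axis is invariant and ψ ≡ 0. This example is revisited below to illustrate formula (5.39).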

Remark 5.8: (a) Theorem 5.15 essentially presents the local stable manifold as the graph of the C1 function φ. This graph is a surface or manifold of the same dimension, k, as Ls. Because φ(0) and Dφ0 vanish, the manifold is tangent to Ls at the origin. The situation is illustrated in Figure 5.3. Similar observations apply to the unstable manifold. Thus the


[Figure: axes y ∈ Ls and z ∈ Lu, with the curve z = φ(y) labeled W^ε_s.]
Figure 5.3. The local stable manifold as a graph.

[Figure: two panels, (a) and (b), with the local manifolds W^ε_s and W^ε_u labeled.]
Figure 5.4: Local stable and unstable manifolds.

typical pictures of Figure 5.2 are modified for a general autonomous system as shown in Figure 5.4. We may also think of the set W^ε_s as the zero set of the C1 function F : Bε → Lu, defined by F(y, z) = z − φ(y).

(b) We may define global stable and unstable sets by

Ws = {x0 ∈ IR^n | lim_{t→∞} x(t; 0, x0) = 0},

Wu = {x0 ∈ IR^n | lim_{t→−∞} x(t; 0, x0) = 0}.
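In the worked example above (y′ = −y, z′ = z + y²) these global sets are explicit: Ws is the entire parabola z = −y²/3 and Wu is the entire z-axis, of which W^ε_s and W^ε_u are just the portions lying in Bε.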

However, these can be much more complicated than the local sets. For example, in the case shown in Figure 5.5, Ws and Wu are identical! In higher dimensions the global picture may be extremely difficult to unravel.
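A concrete instance of the situation in Figure 5.5 (again an example supplied here, not taken from the notes) is the planar system x′ = v, v′ = x − x³. The origin is a hyperbolic saddle, and since the energy H(x, v) = v²/2 − x²/2 + x⁴/4 is conserved, the global stable and unstable sets of the origin both coincide with the level set H = 0, a figure-eight consisting of two homoclinic orbits.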

Before proving the theorem we must define the norm ‖ · ‖; we begin by introducing some additional notation. We let Ps and Pu be the projections of IR^n onto Ls and Lu, respectively; in the notation above, Psx = (y, 0) and Pux = (0, z) for x = (y, z). Pu and Ps are linear maps of IR^n to itself which commute with both A and e^{tA} and satisfy Ps² = Ps, Pu² = Pu, and Ps + Pu = I. We write e^{tA} = Ys(t) + Yu(t), where

Ys(t) = Ps e^{tA} Ps = e^{tA} Ps = Ps e^{tA}   and   Yu(t) = Pu e^{tA} Pu = e^{tA} Pu = Pu e^{tA}.


Note that Ys′(t) = AYs(t) and Ys(t)Ys(t′) = Ys(t + t′), and that the same equations hold for Yu.
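In the worked example above, where A = diag(−1, 1), these objects are simply Ps = diag(1, 0), Pu = diag(0, 1), Ys(t) = diag(e^{−t}, 0), and Yu(t) = diag(0, e^{t}); the identities just listed can be checked at a glance.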

[Figure: left, an orbit on which W^ε_s = W^ε_u, together with the circle ‖x‖ = ε; right, the disk in expanded view with the arcs W^ε_s and W^ε_u labeled.]
Figure 5.5. The disk ‖x‖ < ε is shown in expanded view at the right.

Hyperbolicity of the fixed point implies that, for some C and γ > 0, |Ys(t)x| ≤ C e^{−γt}|x| if t ≥ 0, and |Yu(t)x| ≤ C e^{γt}|x| if t ≤ 0 (see the proof of Theorem 5.2). We choose ν with 0 < ν < γ and define norms ‖ · ‖s and ‖ · ‖u on Ls and Lu, respectively, by

‖y‖s = ∫_0^∞ e^{ντ} |Ys(τ)y| dτ,   ‖z‖u = ∫_0^∞ e^{ντ} |Yu(−τ)z| dτ;

it is elementary to verify that these are norms and that

‖Ys(t)x‖ ≤ e^{−νt}‖x‖, if t ≥ 0;   ‖Yu(t)x‖ ≤ e^{νt}‖x‖, if t ≤ 0.   (5.35)

Finally, we define ‖x‖ = max{‖Psx‖s, ‖Pux‖u}. We will now let ‖T‖ denote the operator norm on n × n matrices which is defined using the norm ‖ · ‖ on IR^n.
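To make the construction concrete (a computation added here for illustration): in the example A = diag(−1, 1) one may take C = 1 and γ = 1, and choosing ν = 1/2 gives ‖y‖s = ∫_0^∞ e^{τ/2} e^{−τ}|y| dτ = 2|y| and likewise ‖z‖u = 2|z|. Then for y ∈ Ls and t ≥ 0, ‖Ys(t)y‖ = 2e^{−t}|y| = e^{−t}‖y‖ ≤ e^{−t/2}‖y‖, as asserted in (5.35).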

Proof of Theorem 5.15: We give only an outline of the proof of the existence of the map φ defining the local stable manifold.

(i) Let f(x) = Ax + g(x); then g is C1 and satisfies g(0) = 0, Dg(0) = 0. By the variation of parameters formula (3.7) and (3.8), a solution x(t) of (5.30) will satisfy

x(t) = e^{tA} x(0) + ∫_0^t e^{(t−τ)A} g(x(τ)) dτ.   (5.36)

Suppose now that x(t) is defined and satisfies ‖x(t)‖ < ε for all t ≥ 0 and some appropriately small ε; in particular, this will be true if x(0) ∈ W^ε_s. Then if we write x(0) = (y, z) and e^{tA} = Ys(t) + Yu(t), (5.36) becomes

x(t) = Ys(t)y + ∫_0^t Ys(t − τ) g(x(τ)) dτ − ∫_t^∞ Yu(t − τ) g(x(τ)) dτ
         + Yu(t) [ z + ∫_0^∞ Yu(−τ) g(x(τ)) dτ ].   (5.37)
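To fill in the step from (5.36) to (5.37) (the computation is routine but is not written out in the notes): split e^{(t−τ)A} = Ys(t − τ) + Yu(t − τ) in the integral, write e^{tA}x(0) = Ys(t)y + Yu(t)z, and use Yu(t − τ) = Yu(t)Yu(−τ) to rewrite

∫_0^t Yu(t − τ) g(x(τ)) dτ = Yu(t) ∫_0^∞ Yu(−τ) g(x(τ)) dτ − ∫_t^∞ Yu(t − τ) g(x(τ)) dτ,

which accounts for the last two terms of (5.37); the improper integrals converge because ‖x(t)‖ stays below ε (so g(x(τ)) is bounded) while ‖Yu(−τ)‖ decays like e^{−ντ} for τ ≥ 0.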


From (5.35) and the boundedness of x(t) it is clear that the first three terms in (5.37) are bounded in t. On the other hand, because Yu contains only growing exponentials, the last term cannot be bounded unless the quantity in brackets (which is independent of t) vanishes. We conclude: if (y, z) ∈ W^ε_s for sufficiently small ε, then the solution of (5.30) with initial value (y, z), a solution which we now write as x(t; y), must satisfy the integral equation

x(t; y) = Ys(t)y + ∫_0^t Ys(t − τ) g(x(τ; y)) dτ − ∫_t^∞ Yu(t − τ) g(x(τ; y)) dτ.   (5.38)

Moreover, z must satisfy z = φ(y), where

φ(y) = − ∫_0^∞ Yu(−τ) g(x(τ; y)) dτ.   (5.39)
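Returning to the worked example y′ = −y, z′ = z + y² (added above for illustration): there x(τ; y) = (y e^{−τ}, −y² e^{−2τ}/3), so g(x(τ; y)) = (0, y² e^{−2τ}) and Yu(−τ) g(x(τ; y)) = (0, y² e^{−3τ}); formula (5.39) then gives φ(y) = −∫_0^∞ y² e^{−3τ} dτ = −y²/3, in agreement with the direct computation.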

(ii) Next we show, by straightforward application of the method of successive approximations, that for ‖y‖ < ε with ε > 0 sufficiently small, (5.38) has a unique solution x(t; y) satisfying ‖x(t; y)‖ < ε for all t ≥ 0 (one natural iteration scheme is sketched after the proof). Moreover, x(t; y) solves the original system (5.30), is continuous in y, is exponentially decreasing according to

‖x(t; y)‖ ≤ ‖y‖ e^{−νt/2},   (5.40)

and satisfies the initial condition x(0; y) = (y, φ(y)), where φ(y) is given by (5.39) (note that this last conclusion follows directly from (5.38)). This verifies (5.34a). Moreover, x(t; 0) ≡ 0 by inspection, so that φ(0) = 0.

(iii) We next prove that φ is C1; since (y, φ(y)) = x(0; y), it certainly suffices to show that x(t; y) is C1 in y. To verify this, we mimic the proof of Theorem 2.15 (differentiability in initial conditions). That is, we first derive, by formal differentiation of (5.38), the integral equation which should be satisfied by Dy x(t; y), then show, by successive approximations, that this integral equation has a continuous, exponentially decreasing solution, say K(t; y). The last step is to show that

lim_{v→0} ‖v‖^{−1} [ x(t; y + v) − x(t; y) − K(t; y)v ] = 0   (5.41)

(Gronwall’s inequality, which we used at this stage in the earlier proof, is not available here, but a somewhat similar trick works). Equation (5.41) implies that Dy x = K.

(iv) Finally, it follows from (5.39), (5.40), and ‖g(x)‖ = o(‖x‖) that Dφ0 = 0.
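The successive approximations referred to in step (ii) are not written out in the notes; one natural choice (a sketch added here) is to iterate the right-hand side of (5.38):

x_0(t; y) = Ys(t)y,   x_{n+1}(t; y) = Ys(t)y + ∫_0^t Ys(t − τ) g(x_n(τ; y)) dτ − ∫_t^∞ Yu(t − τ) g(x_n(τ; y)) dτ,

working in the space of continuous functions on [0, ∞) with ‖x(t)‖ ≤ ε. By (5.35) the two integral operators have norm at most 1/ν, and since Dg(0) = 0 the Lipschitz constant of g on Bε tends to 0 with ε; thus for ε (and ‖y‖) sufficiently small the map is a contraction and the iterates converge, much as in the earlier existence proofs by successive approximations.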


Appendix to Chapter V

Inspection of the references for this course reveals various definitions of stability and asymptotic stability, differing in the permitted choices of time at which to specify the initial closeness of solutions. Moreover, some authors define a concept of uniform stability or define various sorts of uniform asymptotic stability; in these definitions, some uniformity in the initial time or initial condition is imposed on the degree of initial closeness required or on the rate of convergence for asymptotic stability. In this appendix we summarize some of the possibilities. The discussion is based in part on: Jose L. Massera, “Contributions to Stability Theory,” Ann. Math. 64, 1956, 182–206.

Throughout this appendix we take x(t) to be a solution of (5.1) defined on some open interval which contains [τ, ∞). In particular, this implies that the domain D contains an open neighborhood of the trajectory {(t, x(t)) | t ≥ τ}. We will on occasion refer to various possible additional hypotheses:

(U) f(t, x) is such that solutions of (5.1) are unique.

(N) D contains a uniform neighborhood {(t, x) | t ≥ τ, |x − x(t)| < ρ} of {(t, x(t))}.

(P) f(t, x) is periodic in t.

(A) The system is autonomous: f(t, x) is independent of t.

(C) The solution x(t) is constant.

(AC) Both (A) and (C) hold, i.e., the solution is a critical point of an autonomous system.

The various stability definitions we will discuss are given in the boxed display below. (Notation: for (t0, x0) ∈ D we let x(t; t0, x0) denote any solution of (5.1) satisfying x(t0; t0, x0) = x0 and defined on a maximal interval; under assumption (U), x(t; t0, x0) is uniquely determined.) Roughly speaking, (i) and (iii) are the definitions of stability and asymptotic stability that we have used. (ii) adds to the stability definition the condition that δ may be chosen independently of the time t0 at which initial conditions are imposed. (iv)–(vi) add uniformity conditions to the definition of asymptotic stability: (iv) requires that δ may be chosen independently of t0, (v) that the rate of decay to x(t) be independent of the initial condition x1, and (vi) that (iv) and (v) hold and that the decay rate also be independent of t0. (vii) requires that the convergence to x(t) be exponential and that all choices be uniform.

Remark 5.9: (a) It is easy to verify the implications

(i.c) ⇒ (i.b) ⇒ (i.a)
(ii.c) ⇒ (ii.b) ⇒ (ii.a)
(iii.c) ⇒ (iii.b) ⇒ (iii.a)
(iv.b) ⇒ (iv.a)

Under the uniqueness assumption (U), continuous dependence on parameters implies that all these implications become equivalences. We will generally make this assumption in what follows, and therefore refer simply to (i), (ii), (iii), and (iv). The distinctions have been introduced here primarily because different authors give the definitions in different forms.

(b) Each of (ii), (iv), (vi), and (vii) implies assumption (N).


STABILITY DEFINITIONS

(i) Stability: (a) For any ε > 0 there exists a t0 ≥ τ and a δ = δ(ε, t0) > 0 such that, if |x1 − x(t0)| < δ, then x(t; t0, x1) is defined for t ≥ t0 and satisfies |x(t; t0, x1) − x(t)| < ε there.

(b) For any ε > 0 there exists a δ = δ(ε) > 0 such that, if |x1 − x(τ)| < δ, then x(t; τ, x1) is defined for t ≥ τ and satisfies |x(t; τ, x1) − x(t)| < ε there.

(c) For any ε > 0 and every t0 ≥ τ there exists a δ = δ(ε, t0) > 0 such that, if |x1 − x(t0)| < δ, then x(t; t0, x1) is defined for t ≥ t0 and satisfies |x(t; t0, x1) − x(t)| < ε there.

(ii) Uniform stability: (a) For any ε > 0 there exists a tε ≥ τ and a δ = δ(ε) > 0 such that, if |x1 − x(t0)| < δ and t0 ≥ tε, then x(t; t0, x1) is defined for t ≥ t0 and satisfies |x(t; t0, x1) − x(t)| < ε there.

(b) There exists a T > τ such that for any ε > 0 and any t0 ≥ T there exists a δ = δ(ε) > 0 such that, if |x1 − x(t0)| < δ, then x(t; t0, x1) is defined for t ≥ t0 and satisfies |x(t; t0, x1) − x(t)| < ε there.

(c) For any ε > 0 and every t0 ≥ τ there exists a δ = δ(ε) > 0 such that, if |x1 − x(t0)| < δ, then x(t; t0, x1) is defined for t ≥ t0 and satisfies |x(t; t0, x1) − x(t)| < ε there.

(iii) Asymptotic Stability: (a) (i.a) is satisfied and (with t0 from (i.a)) there exists a δ(t0) > 0 such that, if |x1 − x(t0)| < δ, then lim_{t→∞} |x(t; t0, x1) − x(t)| = 0.

(b) (i.b) is satisfied and there exists a δ > 0 such that, if |x1 − x(τ)| < δ, then lim_{t→∞} |x(t; τ, x1) − x(t)| = 0.

(c) (i.c) is satisfied and, for every t0 ≥ τ, there exists a δ(t0) > 0 such that, if |x1 − x(t0)| < δ, then lim_{t→∞} |x(t; t0, x1) − x(t)| = 0.

(iv) (a) (ii.a) is satisfied and there exists a δ such that if t0 ≥ tε and |x1 − x(t0)| < δ, then lim_{t→∞} |x(t; t0, x1) − x(t)| = 0.

(b) (ii.c) is satisfied and there exists a δ such that if t0 ≥ τ and |x1 − x(t0)| < δ, then lim_{t→∞} |x(t; t0, x1) − x(t)| = 0.

(v) Equiasymptotic stability: There is a δ > 0 and, for each ε > 0, a Tε such that if |x1 − x(τ)| < δ and t ≥ Tε, then |x(t; τ, x1) − x(t)| < ε.

(vi) Uniform asymptotic stability: (ii.c) holds, and there exists a δ > 0 and, for each ε > 0, an Sε such that if t0 ≥ τ, |x1 − x(t0)| < δ, and t ≥ t0 + Sε, then |x(t; t0, x1) − x(t)| < ε.

(vii) Exponential asymptotic stability: There exists a µ > 0 and, for each ε > 0, a δ = δ(ε) > 0, such that if t0 ≥ τ and |x1 − x(t0)| < δ, then |x(t; t0, x1) − x(t)| < ε exp[−µ(t − t0)].


(c) The conditions (i) through (vii) are related by

                 ⇒ (v) ⇒
(vii) ⇒ (vi)               (iii)
                 ⇒ (iv) ⇒
          ⇓                   ⇓
        (ii)       =⇒        (i)

These implications are quite straightforward to verify. It is instructive to construct examples showing that, in general, the reverse implications fail.

(d) Massera points out that there are further implications among these conditions when f satisfies additional assumptions. For example, under assumption (P) or assumption (A), (i) ⇒ (ii) and (iii) ⇒ (vi).
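For instance (an example added here; take τ = 0), the scalar equation x′ = −x/(1 + t) has solutions x(t) = x(t0)(1 + t0)/(1 + t). Since |x(t)| ≤ |x(t0)| for t ≥ t0 and x(t) → 0, the zero solution satisfies (i), (ii), and (iii); but for t = t0 + S one has |x(t)| = |x(t0)|(1 + t0)/(1 + t0 + S), which tends to |x(t0)| as t0 → ∞ for fixed S, so no choice of Sε independent of t0 can work and (vi) fails.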

A summary of the definitions given by authors in our references is given in the table below.


Author                    Terminology                        Page   Definition   Assumptions

Arnold                    Stability                          155    (i.b)        (AC)(U)
                          Asymptotic stability               156    (iii.b)      (AC)(U)
Birkhoff and Rota         Stability                          121    (i.b)        (AC)(U)
                          Strict stability                   121    (iii.b)      (AC)(U)
Coddington and Levinson   Stability                          314    (i.b)
                          Asymptotic stability               314    (iii.b)
Cronin                    Stability                          151    (i.a)
                          Uniform stability                  179    (ii.b)       (A)
                          Asymptotic stability               151    (iii.a)
Hale                      Stability                          26     (i.c)        (N)
                          Uniform stability                  26     (ii.c)       (N)
                          Asymptotic stability               26     (iii.b)      (N)
                          Uniform asymptotic stability       26     (vi)         (N)
Hartman                   Stability                          40     (ii.a)       (N)(C)
                          Uniform stability                  40     (ii.c)       (N)(C)
                          Asymptotic stability               40     (iv.a)       (N)(C)
                          Uniform asymptotic stability       40     (iv.b)       (N)(C)
Hirsch and Smale          Stability                          185    (i.b)        (AC)(U)
                          Asymptotic stability               186    (iii.b)      (AC)(U)
Lefschetz                 Stability                          78     (i.c)        (U)(N)(C)
                          Uniform stability                  78     (ii.c)       (U)(N)(C)
                          Asymptotic stability               78     (iii.b)      (U)(N)(C)
                          Uniform asymptotic stability       78     (vi)         (U)(N)(C)
                          Stability                          83     (ii.c)       (U)
                          Asymptotic stability               83     (iv.b)       (U)
                          Equiasymptotic stability           85     (v)          (U)(N)(C)
                          Exponential asymptotic stability   85     (vi)         (U)(N)(C)
Petrovski                 Stability                          151    (i.b)        (N)
                          Asymptotic stability               151    (iii.b)      (N)

Note: (a) Definitions related to orbital stability are not included; there may be other omissions.

(b) An assumption that f(t, x) is differentiable is indicated here by the weaker assumption (U). Recall that (AC) ⇒ (N); otherwise, (N) is indicated explicitly when mentioned by the author.
