Eigenvalues and Eigenvectors


A nonzero vector x is an eigenvector (or characteristic vector) of a square matrix A if there exists a scalar λ such that Ax = λx. Then λ is an eigenvalue (or characteristic value) of A.

Note: The zero vector cannot be an eigenvector, even though A0 = λ0. But λ = 0 can be an eigenvalue.

Example: Show that x = [2; 1] is an eigenvector of A = [2  -4; 3  -6].

Solution: Ax = [2  -4; 3  -6][2; 1] = [2(2) + (-4)(1); 3(2) + (-6)(1)] = [0; 0].

For λ = 0, λx = 0[2; 1] = [0; 0], so Ax = 0x.

Thus, x is an eigenvector of A, and λ = 0 is an eigenvalue.
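A quick numerical check of this example (a minimal sketch using NumPy; the library and the check itself are not part of the original slides):

```python
import numpy as np

A = np.array([[2.0, -4.0],
              [3.0, -6.0]])
x = np.array([2.0, 1.0])

# Ax = [0, 0] = 0*x, so x is an eigenvector with eigenvalue 0
print(A @ x)                     # [0. 0.]
print(np.linalg.eigvals(A))      # approximately [ 0. -4.]: 0 is an eigenvalue of this singular matrix
```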

An n×n matrix A multiplied by an n×1 vector x results in another n×1 vector y = Ax. Thus, A can be considered a transformation matrix.

In general, a matrix acts on a vector by changing both its magnitude and its direction. However, a matrix may act on certain vectors by changing only their magnitude, and leaving their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of the matrix.

A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is positive if its direction is unchanged and negative if its direction is reversed. This factor is the eigenvalue associated with that eigenvector.

Example 1: Find the eigenvalues of A = [2  -12; 1  -5].

|λI - A| = |λ-2  12; -1  λ+5| = (λ - 2)(λ + 5) + 12 = λ^2 + 3λ + 2 = (λ + 1)(λ + 2) = 0,

so A has two eigenvalues: λ1 = -1 and λ2 = -2.

Note: The roots of the characteristic equation can be repeated. That is, λ1 = λ2 = … = λk. If that happens, the eigenvalue is said to be of multiplicity k.

Example 2: Find the eigenvalues of A = [2  1  0; 0  2  0; 0  0  2].

|λI - A| = |λ-2  -1  0; 0  λ-2  0; 0  0  λ-2| = (λ - 2)^3 = 0,

so λ = 2 is an eigenvalue of multiplicity 3.
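Both characteristic polynomials can be checked numerically (a minimal NumPy sketch, not part of the original slides):

```python
import numpy as np

# Example 1: eigenvalues of A1 should be -1 and -2
A1 = np.array([[2.0, -12.0],
               [1.0,  -5.0]])
print(np.sort(np.linalg.eigvals(A1)))    # [-2. -1.]

# Example 2: the triangular matrix A2 has the eigenvalue 2 repeated three times
A2 = np.array([[2.0, 1.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 2.0]])
print(np.linalg.eigvals(A2))             # [2. 2. 2.]
```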

Example 1 (cont.): Find the eigenvectors of A = [2  -12; 1  -5].

λ1 = -1:  (-1)I - A = [-3  12; -1  4] ~ [1  -4; 0  0],

so x1 - 4x2 = 0. With x2 = t, x1 = 4t, and

x = [x1; x2] = t[4; 1],  t ≠ 0.

λ2 = -2:  (-2)I - A = [-4  12; -1  3] ~ [1  -3; 0  0],

so x1 = 3x2, and x = s[3; 1],  s ≠ 0.
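The same eigenvectors can be recovered numerically. np.linalg.eig returns unit-length eigenvectors, so the sketch below (not part of the original slides) rescales each one so its second component is 1, to compare with t[4; 1] and s[3; 1]:

```python
import numpy as np

A = np.array([[2.0, -12.0],
              [1.0,  -5.0]])
vals, vecs = np.linalg.eig(A)    # columns of vecs are unit eigenvectors
for i in range(2):
    v = vecs[:, i] / vecs[1, i]  # rescale so the second component equals 1
    print(vals[i], v)            # -1.0 [4. 1.]  and  -2.0 [3. 1.]  (order may vary)
```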

To each distinct eigenvalue of a matrix A there corresponds at least one eigenvector, which can be found by solving the appropriate set of homogeneous equations. If λi is an eigenvalue, then the corresponding eigenvectors xi are the nonzero solutions of

(A - λiI)xi = 0.

Example 2 (cont.): Find the eigenvectors of A = [2  1  0; 0  2  0; 0  0  2].

Recall that λ = 2 is an eigenvalue of multiplicity 3. Solve the homogeneous linear system represented by (2I - A)x = 0:

[0  -1  0; 0  0  0; 0  0  0][x1; x2; x3] = [0; 0; 0].

Let x1 = s and x3 = t. Then x2 = 0, and the eigenvectors of λ = 2 are of the form

x = [x1; x2; x3] = s[1; 0; 0] + t[0; 0; 1],  s and t not both zero.
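The geometric multiplicity (two here, even though the algebraic multiplicity is 3) can be computed as the dimension of the null space of A - λI. A minimal NumPy sketch, not part of the original slides:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
lam = 2.0
# geometric multiplicity = n - rank(A - lam*I)
geo_mult = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(3))
print(geo_mult)   # 2: only two linearly independent eigenvectors for lam = 2
```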

Definition: The trace of a matrix A, designated by tr(A), is the sum of the elements on the main diagonal.

Property 1: The sum of the eigenvalues of a matrix equals the trace of the matrix.

Property 2: A matrix is singular if and only if it has a zero eigenvalue.

Property 3: The eigenvalues of an upper (or lower) triangular matrix are the elements on the main diagonal.

Property 4: If λ is an eigenvalue of A and A is invertible, then 1/λ is an eigenvalue of matrix A-1.

Property 5: If λ is an eigenvalue of A then kλ is an eigenvalue of kA where k is any arbitrary scalar.

Property 6: If λ is an eigenvalue of A then λ^k is an eigenvalue of A^k for any positive integer k.

Property 8: If λ is an eigenvalue of A then λ is an eigenvalue of AT.

Property 9: The product of the eigenvalues (counting multiplicity) of a matrix equals the determinant of the matrix.
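Several of these properties are easy to confirm numerically for the matrix of Example 1 (a sketch using NumPy, not part of the original slides):

```python
import numpy as np

A = np.array([[2.0, -12.0],
              [1.0,  -5.0]])
vals = np.linalg.eigvals(A)                           # -1 and -2, in some order

print(np.isclose(vals.sum(), np.trace(A)))            # Property 1: sum of eigenvalues = trace
print(np.isclose(vals.prod(), np.linalg.det(A)))      # Property 9: product of eigenvalues = det
print(np.sort(np.linalg.eigvals(np.linalg.inv(A))))   # Property 4: reciprocals, [-1. -0.5]
print(np.sort(np.linalg.eigvals(A.T)))                # Property 8: same eigenvalues as A
```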

The algebraic multiplicity of an eigenvalue λ is defined as the order of the eigenvalue as a root of the characteristic equation and is denoted by multa(λ) (or Mλ).

The geometric multiplicity of λ is defined as the number of linearly independent eigenvectors corresponding to λ and is denoted by multg(λ) (or mλ).

Theorem: If A is a square matrix, then for every eigenvalue of A the algebraic multiplicity is greater than or equal to the geometric multiplicity.

The Cayley–Hamilton theorem states that every square matrix A satisfies its own characteristic equation; that is, if

λ^n + k_{n-1}λ^(n-1) + k_{n-2}λ^(n-2) + … + k_1λ + k_0 = 0

is the characteristic equation of an n×n matrix A, then

A^n + k_{n-1}A^(n-1) + k_{n-2}A^(n-2) + … + k_1A + k_0I = 0.
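A numerical check of the Cayley–Hamilton theorem for the 2×2 matrix of Example 1 (a NumPy sketch, not from the original slides; np.poly returns the characteristic-polynomial coefficients of a square matrix):

```python
import numpy as np

A = np.array([[2.0, -12.0],
              [1.0,  -5.0]])
k = np.poly(A)          # coefficients of det(lambda*I - A): [1., 3., 2.]

# Evaluate the characteristic polynomial at A itself: A^2 + 3A + 2I
residual = k[0] * (A @ A) + k[1] * A + k[2] * np.eye(2)
print(np.allclose(residual, 0))   # True: A satisfies its own characteristic equation
```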


If P = [ p1 p2 ], then AP = PD even if A is not diagonalizable.

Here A = [-3  2; 5  0] has eigenvalues λ1 = -5 and λ2 = 2, with eigenvectors p1 = [1; -1] and p2 = [2; 5]. Then

AP = A[p1  p2] = [-3  2; 5  0][1  2; -1  5] = [Ap1  Ap2] = [-5  4; 5  10]

= [-5(1)  2(2); -5(-1)  2(5)] = [-5p1  2p2] = [p1  p2][-5  0; 0  2]

= [1  2; -1  5][-5  0; 0  2] = PD.


So, our two distinct eigenvalues both have algebraic multiplicity 1 and geometric multiplicity 1. This ensures that p1 and p2 are not scalar multiples of each other; thus, p1 and p2 are linearly independent eigenvectors of A.

Since A is 2 x 2 and there are two linearly independent eigenvectors from the solution of the eigenvalue problem, A is diagonalizable and P-1AP = D.

We can now construct P, P^-1, and D. Let

P = [p1  p2] = [1  2; -1  5].

Then P^-1 = [5/7  -2/7; 1/7  1/7], and D = [λ1  0; 0  λ2] = [-5  0; 0  2].

Note that if we multiply both sides of AP = PD on the left by P^-1, it becomes

P^-1AP = [5/7  -2/7; 1/7  1/7][-3  2; 5  0][1  2; -1  5]

= [5/7  -2/7; 1/7  1/7][-5  4; 5  10] = [-35/7  0/7; 0/7  14/7] = [-5  0; 0  2] = D.
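This diagonalization can be verified in a few lines (a NumPy sketch, not part of the original slides):

```python
import numpy as np

A = np.array([[-3.0, 2.0],
              [ 5.0, 0.0]])
P = np.array([[ 1.0, 2.0],
              [-1.0, 5.0]])        # columns are the eigenvectors p1, p2
D = np.diag([-5.0, 2.0])           # corresponding eigenvalues on the diagonal

print(np.allclose(A @ P, P @ D))   # AP = PD -> True
print(np.linalg.inv(P) @ A @ P)    # P^-1 A P = [[-5. 0.] [0. 2.]] = D
```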


Example 2: Find the eigenvalues and eigenvectors for A.

Step 1: Find the eigenvalues for A.

Recall: The determinant of a triangular matrix is the product of the elements on its main diagonal. Thus, with

A = [3  4  0; 0  3  0; 0  0  1],

the characteristic equation of A is

p(λ) = det(λI - A) = |λI - A| = |λ-3  -4  0; 0  λ-3  0; 0  0  λ-1| = (λ - 3)^2(λ - 1) = 0.

λ1 = 1 has algebraic multiplicity 1 and λ2 = 3 has algebraic multiplicity 2.

Step 2: Use Gaussian elimination with back-substitution to solve (λI - A)x = 0. For λ1 = 1, the augmented matrix for the system is

[1I - A | 0] = [-2  -4  0 | 0; 0  -2  0 | 0; 0  0  0 | 0]

-1/2 r1 → r1:  [1  2  0 | 0; 0  -2  0 | 0; 0  0  0 | 0]

-1/2 r2 → r2:  [1  2  0 | 0; 0  1  0 | 0; 0  0  0 | 0].

Column 3 is not a leading column, so x3 = t is a free variable. Since there is only one free variable, the geometric multiplicity of λ1 = 1 is one. Back-substitution gives x2 = 0 and x1 = -2x2 = 0.

The eigenvector corresponding to λ1 = 1 is

x = [x1; x2; x3] = [0; 0; t] = t[0; 0; 1]. If we choose t = 1, then p1 = [0; 0; 1] is our choice for the eigenvector, and B1 = {p1} is a basis for the eigenspace Eλ1, with dim(Eλ1) = 1.

The dimension of the eigenspace is 1 because the eigenvalue has only one linearly independent eigenvector. Thus, the geometric multiplicity is 1 and the algebraic multiplicity is 1 for λ1 = 1.

The augmented matrix for the system with λ2 = 3 is

[3I - A | 0] = [0  -4  0 | 0; 0  0  0 | 0; 0  0  2 | 0]

-1/4 r1 → r1:  [0  1  0 | 0; 0  0  0 | 0; 0  0  2 | 0]

r2 ↔ r3:       [0  1  0 | 0; 0  0  2 | 0; 0  0  0 | 0]

1/2 r2 → r2:   [0  1  0 | 0; 0  0  1 | 0; 0  0  0 | 0].

Column 1 is not a leading column, so x1 = t is a free variable. Since there is only one free variable, the geometric multiplicity of λ2 = 3 is one.

Back-substitution gives x2 = x3 = 0, so the eigenvector corresponding to λ2 = 3 is

x = [x1; x2; x3] = [t; 0; 0] = t[1; 0; 0]. If we choose t = 1, then p2 = [1; 0; 0] is our choice for the eigenvector, and B2 = {p2} is a basis for the eigenspace Eλ2, with dim(Eλ2) = 1.

The dimension of the eigenspace is 1 because the eigenvalue has only one linearly independent eigenvector. Thus, the geometric multiplicity is 1 while the algebraic multiplicity is 2 for λ2 = 3. This means there are not enough linearly independent eigenvectors for A to be diagonalizable; in general, a matrix is not diagonalizable whenever the geometric multiplicity is less than the algebraic multiplicity for any eigenvalue.

This time,

AP = A[p1  p2  p2] = [Ap1  Ap2  Ap2] = [λ1p1  λ2p2  λ2p2] = [p1  p2  p2][λ1  0  0; 0  λ2  0; 0  0  λ2] = PD,

but P^-1 does not exist, since the columns of P are not linearly independent. It is not possible to solve for D = P^-1AP, so A is not diagonalizable.
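Numerically, the shortage of independent eigenvectors shows up as a rank-deficient eigenvector matrix (a NumPy sketch, not part of the original slides):

```python
import numpy as np

A = np.array([[3.0, 4.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(vals)                          # [3. 3. 1.]
# The two eigenvector columns returned for the repeated eigenvalue 3 are
# parallel, so the eigenvector matrix cannot be inverted to form P.
print(np.linalg.matrix_rank(vecs))   # 2, not 3 -> A is not diagonalizable
```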

In general, for A an n×n matrix with characteristic polynomial roots λ1, λ2, …, λn (the eigenvalues of A) and corresponding eigenvectors p1, p2, …, pn,

AP = A[p1  p2  …  pn] = [Ap1  Ap2  …  Apn] = [λ1p1  λ2p2  …  λnpn] = [p1  p2  …  pn] diag(λ1, λ2, …, λn) = PD.

P is invertible iff the eigenvectors that form its columns are linearly independent, iff dim(Eλi) = geometric multiplicity = algebraic multiplicity for each distinct λi.

In that case there are n linearly independent eigenvectors for the columns of P, so P^-1 exists and

P^-1AP = P^-1PD = D = diag(λ1, λ2, …, λn).

Recall that square matrices S and T are similar iff there exists a nonsingular P such that S = P^-1TP or PSP^-1 = T. Since A is similar to a diagonal matrix, A is diagonalizable.
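This criterion translates directly into a rough numerical test: A is diagonalizable exactly when its eigenvector matrix has full rank. A sketch (the helper name is illustrative, not from the slides):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-10):
    """Rough numerical test: an n x n matrix is diagonalizable iff the
    matrix of its eigenvectors has n linearly independent columns."""
    n = A.shape[0]
    _, vecs = np.linalg.eig(A)
    return np.linalg.matrix_rank(vecs, tol=tol) == n

print(is_diagonalizable(np.array([[-3.0, 2.0], [5.0, 0.0]])))   # True
print(is_diagonalizable(np.array([[3.0, 4.0, 0.0],
                                  [0.0, 3.0, 0.0],
                                  [0.0, 0.0, 1.0]])))           # False
```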


Example 4: Solve the eigenvalue problem Ax = λx and find the eigenspace, algebraic multiplicity, and geometric multiplicity for each eigenvalue.

Step 1: Write down the characteristic equation of A and solve for its eigenvalues.

A = [-4  -3  -6; 0  -1  0; 3  3  5]

p(λ) = |λI - A| = |λ+4  3  6; 0  λ+1  0; -3  -3  λ-5| = (λ + 1)(-1)^(2+2)|λ+4  6; -3  λ-5|   (expanding along the second row)

= (λ + 1)(λ + 4)(λ - 5) + 18(λ + 1) = 0

(λ^2 + 5λ + 4)(λ - 5) + 18(λ + 1) = 0

(λ^3 + 5λ^2 + 4λ - 5λ^2 - 25λ - 20) + 18λ + 18 = 0

λ^3 - 3λ - 2 = (λ + 1)(λ^2 - λ - 2) = (λ + 1)(λ - 2)(λ + 1) = (λ - 2)(λ + 1)^2 = 0.

So the eigenvalues are λ1 = 2 and λ2 = -1.

Since the factor (λ - 2) appears to the first power, λ1 = 2 is not a repeated root; it has algebraic multiplicity 1. On the other hand, the factor (λ + 1) is squared, so λ2 = -1 is a repeated root with algebraic multiplicity 2.


Step 2: Use Gaussian elimination with back-substitution to solve (λI - A) x = 0 for λ1 and λ2.

For λ1 = 2 , the augmented matrix for the system is

[2I - A | 0] = [6  3  6 | 0; 0  3  0 | 0; -3  -3  -3 | 0]

1/6 r1 → r1, 1/3 r2 → r2:   [1  1/2  1 | 0; 0  1  0 | 0; -3  -3  -3 | 0]

3r1 + r3 → r3:              [1  1/2  1 | 0; 0  1  0 | 0; 0  -3/2  0 | 0]

3/2 r2 + r3 → r3:           [1  1/2  1 | 0; 0  1  0 | 0; 0  0  0 | 0].

In this case, x3 = r is free, x2 = 0, and x1 = -1/2(0) - r = -r.

Thus, the eigenvector corresponding to λ1 = 2 is

x = [x1; x2; x3] = [-r; 0; r] = r[-1; 0; 1],  r ≠ 0. If we choose p1 = [-1; 0; 1], then B1 = {[-1; 0; 1]} is a basis for the eigenspace of λ1 = 2: Eλ1 = span({p1}) and dim(Eλ1) = 1, so the geometric multiplicity is 1.

Check (Ax = 2x, or (2I - A)x = 0):

[-4  -3  -6; 0  -1  0; 3  3  5][-1; 0; 1] = [-2; 0; 2] = 2[-1; 0; 1].

For λ2 = -1, the augmented matrix for the system is

[(-1)I - A | 0] = [3  3  6 | 0; 0  0  0 | 0; -3  -3  -6 | 0]

1/3 r1 → r1:     [1  1  2 | 0; 0  0  0 | 0; -3  -3  -6 | 0]

3r1 + r3 → r3:   [1  1  2 | 0; 0  0  0 | 0; 0  0  0 | 0].

Here x3 = t and x2 = s are free, and x1 = -s - 2t. Thus, the solution has two linearly independent eigenvectors for λ2 = -1:

x = [x1; x2; x3] = [-s - 2t; s; t] = s[-1; 1; 0] + t[-2; 0; 1],  s and t not both zero.

If we choose p2 = [-1; 1; 0] and p3 = [-2; 0; 1], then B2 = {[-1; 1; 0], [-2; 0; 1]} is a basis for Eλ2 = span({p2, p3}) and dim(Eλ2) = 2, so the geometric multiplicity is 2.

Thus, we have AP = PD as follows:

AP = [-4  -3  -6; 0  -1  0; 3  3  5][-1  -1  -2; 0  1  0; 1  0  1] = [-2  1  2; 0  -1  0; 2  0  -1]

PD = [-1  -1  -2; 0  1  0; 1  0  1][2  0  0; 0  -1  0; 0  0  -1] = [-2  1  2; 0  -1  0; 2  0  -1].

Since the geometric multiplicity is equal to the algebraic multiplicity for each distinct eigenvalue, we found three linearly independent eigenvectors. The matrix A is diagonalizable, since P = [p1  p2  p3] is nonsingular.

We can find P^-1 by row-reducing [P | I]:

[P | I] = [-1  -1  -2 | 1  0  0; 0  1  0 | 0  1  0; 1  0  1 | 0  0  1]

~ [-1  -1  -2 | 1  0  0; 0  1  0 | 0  1  0; 0  -1  -1 | 1  0  1]

~ [-1  0  -2 | 1  1  0; 0  1  0 | 0  1  0; 0  0  -1 | 1  1  1]

~ [1  0  0 | 1  1  2; 0  1  0 | 0  1  0; 0  0  1 | -1  -1  -1].

So, P^-1 = [1  1  2; 0  1  0; -1  -1  -1].

AP = PD gives us A = APP^-1 = PDP^-1. Thus,

PDP^-1 = [-1  -1  -2; 0  1  0; 1  0  1][2  0  0; 0  -1  0; 0  0  -1][1  1  2; 0  1  0; -1  -1  -1]

= [-2  1  2; 0  -1  0; 2  0  -1][1  1  2; 0  1  0; -1  -1  -1] = [-4  -3  -6; 0  -1  0; 3  3  5] = A.

Note that A and D are similar matrices.

Also, D = P^-1AP:

P^-1AP = [1  1  2; 0  1  0; -1  -1  -1][-4  -3  -6; 0  -1  0; 3  3  5][-1  -1  -2; 0  1  0; 1  0  1]

= [2  2  4; 0  -1  0; 1  1  1][-1  -1  -2; 0  1  0; 1  0  1] = [2  0  0; 0  -1  0; 0  0  -1] = D.

So, A and D are similar with D = P^-1AP and A = PDP^-1.
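The whole 3×3 example can be double-checked numerically (a NumPy sketch, not part of the original slides):

```python
import numpy as np

A = np.array([[-4.0, -3.0, -6.0],
              [ 0.0, -1.0,  0.0],
              [ 3.0,  3.0,  5.0]])
P = np.array([[-1.0, -1.0, -2.0],
              [ 0.0,  1.0,  0.0],
              [ 1.0,  0.0,  1.0]])       # columns are p1, p2, p3
D = np.diag([2.0, -1.0, -1.0])

print(np.allclose(A @ P, P @ D))                   # AP = PD -> True
print(np.allclose(np.linalg.inv(P) @ A @ P, D))    # P^-1 A P = D -> True
print(np.allclose(P @ D @ np.linalg.inv(P), A))    # P D P^-1 = A -> True
```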

If P is an orthogonal matrix, its inverse is its transpose, P^-1 = P^T. This is because

P^TP = [p1^T; p2^T; p3^T][p1  p2  p3] = [p1·p1  p1·p2  p1·p3; p2·p1  p2·p2  p2·p3; p3·p1  p3·p2  p3·p3] = I,

since pi·pj = 0 for i ≠ j and pi·pj = 1 for i = j (i, j = 1, 2, 3). So, P^-1 = P^T.

A is a symmetric matrix if A = A^T. Let A be diagonalizable, so that A = PDP^-1. If P is orthogonal, so that P^-1 = P^T, then

A^T = (PDP^-1)^T = (PDP^T)^T = (P^T)^T D^T P^T = PDP^T = A.

This shows that for a symmetric matrix A to be diagonalized in this way, P must be orthogonal.

If P^-1 ≠ P^T, the eigenvectors of A are mutually orthogonal but not orthonormal. This means the eigenvectors must be scaled to unit vectors so that P is orthogonal, i.e. composed of orthonormal columns.
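For symmetric matrices, numpy.linalg.eigh already returns orthonormal eigenvectors, so the orthogonal P comes for free (a sketch using the matrix of Example 5 below; not part of the original slides):

```python
import numpy as np

A = np.array([[ 5.0, -1.0,  0.0],
              [-1.0,  5.0,  0.0],
              [ 0.0,  0.0, -2.0]])             # symmetric: A == A.T
vals, P = np.linalg.eigh(A)                    # eigh is intended for symmetric/Hermitian input
print(vals)                                    # [-2. 4. 6.]
print(np.allclose(P.T @ P, np.eye(3)))         # orthonormal columns -> True
print(np.allclose(P.T @ A @ P, np.diag(vals))) # P^T A P = D -> True
```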


Example 5: Determine if the symmetric matrix A is diagonalizable; if it is, then find the orthogonal matrix P that orthogonally diagonalizes the symmetric matrix A.

Let A = [5  -1  0; -1  5  0; 0  0  -2]. Then

det(λI - A) = |λ-5  1  0; 1  λ-5  0; 0  0  λ+2| = (-1)^(3+3)(λ + 2)|λ-5  1; 1  λ-5| = (λ + 2)(λ - 5)^2 - (λ + 2)

= λ^3 - 8λ^2 + 4λ + 48 = (λ - 4)(λ^2 - 4λ - 12) = (λ - 4)(λ + 2)(λ - 6) = 0.

Thus, λ1 = 4, λ2 = -2, λ3 = 6.

Since we have three distinct eigenvalues, we will see that we are guaranteed to have three linearly independent eigenvectors.


Since λ1 = 4, λ2 = -2, and λ3 = 6, are distinct eigenvalues, each of the eigenvalues has algebraic multiplicity 1.

An eigenvalue must have geometric multiplicity of at least one; otherwise it would admit only the trivial solution, which is not an eigenvector. Thus, we have three linearly independent eigenvectors.

We will use Gaussian elimination with back-substitution as follows:

For λ1 = 4:

[λ1I - A | 0] = [-1  1  0 | 0; 1  -1  0 | 0; 0  0  6 | 0] ~ [1  -1  0 | 0; 0  0  0 | 0; 0  0  1 | 0] ~ [1  -1  0 | 0; 0  0  1 | 0; 0  0  0 | 0],

so x2 = s, x3 = 0, x1 = s, and

x = [x1; x2; x3] = s[1; 1; 0],  or, normalized,  p1 = [1/√2; 1/√2; 0].

For λ2 = -2:

[λ2I - A | 0] = [-7  1  0 | 0; 1  -7  0 | 0; 0  0  0 | 0] ~ [1  -7  0 | 0; 0  1  0 | 0; 0  0  0 | 0],

so x3 = s, x2 = 0, x1 = 0, and

x = [x1; x2; x3] = s[0; 0; 1],  or  p2 = [0; 0; 1].

For λ3 = 6:

[λ3I - A | 0] = [1  1  0 | 0; 1  1  0 | 0; 0  0  8 | 0] ~ [1  1  0 | 0; 0  0  1 | 0; 0  0  0 | 0],

so x2 = s, x3 = 0, x1 = -s, and

x = [x1; x2; x3] = s[-1; 1; 0],  or, normalized,  p3 = [-1/√2; 1/√2; 0].

As we can see, the eigenvalues of A are distinct, so {p1, p2, p3} is linearly independent, P^-1 exists for P = [p1  p2  p3], and AP = PD gives A = PDP^-1. Thus A is diagonalizable.

Since A = A^T (A is a symmetric matrix) and P is orthogonal with appropriate scaling of p1, p2, p3, we have P^-1 = P^T:

PP^-1 = PP^T = [1/√2  0  -1/√2; 1/√2  0  1/√2; 0  1  0][1/√2  1/√2  0; 0  0  1; -1/√2  1/√2  0] = [1  0  0; 0  1  0; 0  0  1] = I.


Note that A and D are similar matrices:

PDP^-1 = PDP^T = [1/√2  0  -1/√2; 1/√2  0  1/√2; 0  1  0][4  0  0; 0  -2  0; 0  0  6][1/√2  1/√2  0; 0  0  1; -1/√2  1/√2  0]

= [4/√2  0  -6/√2; 4/√2  0  6/√2; 0  -2  0][1/√2  1/√2  0; 0  0  1; -1/√2  1/√2  0] = [5  -1  0; -1  5  0; 0  0  -2] = A.

Also, D = P^-1AP = P^TAP:

P^TAP = [1/√2  1/√2  0; 0  0  1; -1/√2  1/√2  0][5  -1  0; -1  5  0; 0  0  -2][1/√2  0  -1/√2; 1/√2  0  1/√2; 0  1  0]

= [4/√2  4/√2  0; 0  0  -2; -6/√2  6/√2  0][1/√2  0  -1/√2; 1/√2  0  1/√2; 0  1  0] = [4  0  0; 0  -2  0; 0  0  6] = D.

So, A and D are similar with D = P^TAP and A = PDP^T.

Finally,

A^T = (PDP^-1)^T = (PDP^T)^T = (P^T)^T D^T P^T = PD^T P^T = PDP^T = A,

which is the same product computed above. This shows that if A is a symmetric matrix, P must be orthogonal with P^-1 = P^T.

From Example 1, the diagonal matrix for matrix A = [-3  2; 5  0] is

D = [λ1  0; 0  λ2] = [-5  0; 0  2],  with P^-1AP = D and A = PDP^-1,

and

A^3 = PDP^-1PDP^-1PDP^-1 = PD^3P^-1 = [1  2; -1  5][(-5)^3  0; 0  (2)^3][5/7  -2/7; 1/7  1/7]

= [-125  16; 125  40][5/7  -2/7; 1/7  1/7] = [-609/7  266/7; 665/7  -210/7] = [-87  38; 95  -30].

For A^3, the eigenvalues are λ1^3 = (-5)^3 = -125 and λ2^3 = 2^3 = 8.

In general, the power of a matrix is A^k = PD^kP^-1, and its eigenvalues are λi^k, where λi is on the main diagonal of D.
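The power formula is easy to confirm numerically (a NumPy sketch, not part of the original slides):

```python
import numpy as np

A = np.array([[-3.0, 2.0],
              [ 5.0, 0.0]])
P = np.array([[ 1.0, 2.0],
              [-1.0, 5.0]])
D = np.diag([-5.0, 2.0])

A3 = P @ np.linalg.matrix_power(D, 3) @ np.linalg.inv(P)   # A^3 = P D^3 P^-1
print(A3)                                                  # [[-87. 38.] [ 95. -30.]]
print(np.allclose(A3, np.linalg.matrix_power(A, 3)))       # True
print(np.sort(np.linalg.eigvals(A3)))                      # [-125. 8.] = (-5)^3 and 2^3
```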