
Linear Predictive Analysis

Lecturer: 虞台文

Contents

- Introduction
- Basic Principles of Linear Predictive Analysis
- The Autocorrelation Method
- The Covariance Method
- More on the above Methods
- Solution of the LPC Equations
- Lattice Formulations

Linear Predictive Analysis

Introduction

Linear Predictive Analysis

A powerful speech analysis technique for estimating speech parameters:

- Pitch
- Formants
- Spectra
- Vocal tract area functions

It is especially useful for compression. Linear predictive analysis is often referred to as linear predictive coding, or LPC.

Basic Idea

A speech sample can be approximated as a linear combination of past speech samples.

The resulting prediction-error filter is an all-zero model, and its inverse matches the all-pole vocal tract model developed earlier.

LPC vs. System Identification

Linear prediction has long been used in control and information theory under the names system estimation and system identification.

Using LPC methods, the resulting system is modeled as an all-pole linear system.

LPC to Speech Processing

LPC models the speech waveform. There are several formulations; the differences among them are often matters of philosophy, or of how the problem is viewed, and they almost all lead to the same result.

Formulations:

- The covariance method
- The autocorrelation formulation
- The lattice method
- The inverse filter formulation
- The spectral estimation formulation
- The maximum likelihood formulation
- The inner product formulation

Linear Predictive Analysis

Basic Principles of Linear Predictive Analysis

Speech Production Model

An impulse train generator (voiced speech) or a random noise generator (unvoiced speech) produces the excitation $u(n)$, which is scaled by the gain $G$ and drives a time-varying digital filter controlled by the vocal tract parameters, producing the speech $s(n)$:

$$H(z) = \frac{S(z)}{U(z)} = \frac{G}{1 - \sum_{k=1}^{p} a_k z^{-k}}$$

Equivalently, in the time domain:

$$s(n) = \sum_{k=1}^{p} a_k\, s(n-k) + G\,u(n)$$

Linear Prediction Model

Linear prediction:

$$\hat{s}(n) = \sum_{k=1}^{p} \alpha_k\, s(n-k)$$

Error compensation: $s(n) = \hat{s}(n) + e(n)$, so that

$$s(n) = \sum_{k=1}^{p} \alpha_k\, s(n-k) + e(n)$$

Speech Production vs. Linear Prediction

Speech production (vocal tract + excitation):

$$s(n) = \sum_{k=1}^{p} a_k\, s(n-k) + G\,u(n)$$

Linear prediction (linear predictor + error):

$$s(n) = \sum_{k=1}^{p} \alpha_k\, s(n-k) + e(n)$$

Ideally, $a_k = \alpha_k$.

Prediction Error Filter

$$e(n) = s(n) - \sum_{k=1}^{p} \alpha_k\, s(n-k)$$

$$E(z) = \Big(1 - \sum_{k=1}^{p} \alpha_k z^{-k}\Big) S(z) = A(z)\,S(z)$$

The prediction error filter $A(z) = 1 - \sum_{k=1}^{p} \alpha_k z^{-k}$ maps $s(n)$ to $e(n)$.

With the convention $\alpha_0 = -1$, the error can be written compactly as

$$e(n) = -\sum_{k=0}^{p} \alpha_k\, s(n-k)$$

(the overall sign is immaterial, since the error will be squared).

Prediction Error Filter

Goal: minimize $\sum_n e^2(n)$.

$$\sum_n e^2(n) = \sum_n \Big(\sum_{k=0}^{p} \alpha_k s(n-k)\Big)^2 = \sum_{i=0}^{p}\sum_{j=0}^{p} \alpha_i \alpha_j \sum_n s(n-i)\,s(n-j) = \sum_{i=0}^{p}\sum_{j=0}^{p} \alpha_i\, c_{ij}\, \alpha_j$$

where

$$c_{ij} = \sum_n s(n-i)\,s(n-j)$$

Suppose that the $c_{ij}$'s can be estimated from the speech samples. Our goal now is to find the $\alpha_k$'s that minimize the sum of squared errors.

Prediction Error Filter

Let $\partial E/\partial\alpha_k = 0$ for $k = 1, 2, \ldots, p$ and solve the resulting equations ($\alpha_0 = -1$ stays fixed). Using the fact $c_{ij} = c_{ji}$,

$$\frac{\partial E}{\partial\alpha_k} = \sum_{j=0}^{p} c_{kj}\alpha_j + \sum_{i=0}^{p} \alpha_i c_{ik} = 2\sum_{i=0}^{p} c_{ki}\,\alpha_i = 0, \qquad k = 1, 2, \ldots, p$$

Written out, for $k = 1, \ldots, p$:

$$c_{k0}\alpha_0 + c_{k1}\alpha_1 + c_{k2}\alpha_2 + \cdots + c_{kp}\alpha_p = 0$$

and since $\alpha_0 = -1$,

$$c_{k1}\alpha_1 + c_{k2}\alpha_2 + \cdots + c_{kp}\alpha_p = c_{k0}$$

Prediction Error Filter

In matrix form,

$$\begin{pmatrix} c_{11} & c_{12} & \cdots & c_{1p} \\ c_{21} & c_{22} & \cdots & c_{2p} \\ \vdots & & \ddots & \vdots \\ c_{p1} & c_{p2} & \cdots & c_{pp} \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_p \end{pmatrix} = \begin{pmatrix} c_{10} \\ c_{20} \\ \vdots \\ c_{p0} \end{pmatrix}$$

i.e., $\Phi\alpha = \Psi$, so $\alpha = \Phi^{-1}\Psi$. Remember this equation.

Such a formulation is, in fact, unrealistic. Why? (The sums defining the $c_{ij}$'s run over all time, but the vocal tract is time-varying.)

Error Energy

$$E = \sum_n e^2(n) = \sum_{i=0}^{p}\sum_{j=0}^{p} \alpha_i\, c_{ij}\, \alpha_j = \sum_{i=0}^{p}\alpha_i\Big(\sum_{j=0}^{p} c_{ij}\alpha_j\Big)$$

By the normal equations, the inner sum vanishes for $i = 1, \ldots, p$ ($\sum_j c_{ij}\alpha_j = 0$ for $i \neq 0$); only the $i = 0$ term survives. With $\alpha_0 = -1$,

$$E = -\sum_{j=0}^{p} c_{0j}\alpha_j = c_{00} - \sum_{k=1}^{p} \alpha_k\, c_{0k}$$

Short-Time Analysis

Original goal: minimize $E = \sum_n e^2(n)$.

The vocal tract is a slowly time-varying system, so minimizing the error energy over the whole speech signal is unreasonable.

New goal: minimize, for each analysis frame at time $n$,

$$E_n = \sum_m e_n^2(m) = \sum_m \big(s_n(m) - \hat{s}_n(m)\big)^2 = \sum_m \Big(\sum_{k=0}^{p}\alpha_k\, s_n(m-k)\Big)^2$$

Linear Predictive Analysis

The Autocorrelation Method

The Autocorrelation Method

$$s_n(m) = s(m+n)\,w(m), \qquad w(m) = 0 \text{ for } m \notin [0, N-1]$$

Usually, we use a Hamming window.

Error energy:

$$E_n = \sum_{m=0}^{N-1+p} e_n^2(m)$$

Since $s_n(m)$ is nonzero only for $0 \le m \le N-1$, the error $e_n(m)$ is nonzero only for $0 \le m \le N-1+p$. So the original formulation can be applied directly to find the prediction coefficients.

The Autocorrelation Method

The normal equations $\Phi_n\alpha = \Psi_n$, i.e. $\alpha = \Phi_n^{-1}\Psi_n$, now use

$$c_{ij}^{(n)} = \sum_m s_n(m-i)\,s_n(m-j)$$

How are the $c_{ij}^{(n)}$'s estimated? What properties do they have? For convenience, the sub/superscript $n$ is dropped in the following discussion.

Properties of the $c_{ij}$'s

$$c_{ij} = \sum_m s(m-i)\,s(m-j)$$

Property 1: $c_{ij} = c_{ji}$.

Property 2: substituting $m \to m+i$,

$$c_{ij} = \sum_m s(m)\,s\big(m-(j-i)\big) = c_{0,(j-i)}$$

so its value depends only on the difference $|i-j|$: write $c_{ij} = r_{|i-j|}$.

The Equations for the Autocorrelation Method

Substituting $c_{ij}^{(n)} = r_{|i-j|}^{(n)}$ into $\Phi_n\alpha = \Psi_n$ gives (shown for $p = 4$, superscripts $n$ omitted):

$$\begin{pmatrix} r_0 & r_1 & r_2 & r_3 \\ r_1 & r_0 & r_1 & r_2 \\ r_2 & r_1 & r_0 & r_1 \\ r_3 & r_2 & r_1 & r_0 \end{pmatrix}\begin{pmatrix}\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4\end{pmatrix} = \begin{pmatrix} r_1\\r_2\\r_3\\r_4\end{pmatrix}$$

The coefficient matrix is a Toeplitz matrix: $\Phi_n\alpha = \Psi_n$, $\alpha = \Phi_n^{-1}\Psi_n$.
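The Toeplitz system can be formed and solved numerically. A sketch assuming NumPy (the helper name, the AR(2) test source, and the window length are illustrative choices, not from the lecture):

```python
import numpy as np

def autocorr_lpc(frame, p):
    """Build r_0..r_p from a windowed frame and solve the Toeplitz
    normal equations R alpha = (r_1, ..., r_p)^T directly."""
    N = len(frame)
    r = np.array([np.dot(frame[:N - k], frame[k:]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
    alphas = np.linalg.solve(R, r[1:])
    return alphas, r

# Synthetic AR(2) source: s(m) = 1.2 s(m-1) - 0.5 s(m-2) + u(m).
rng = np.random.default_rng(0)
s = np.zeros(4000)
u = rng.standard_normal(4000)
for m in range(2, 4000):
    s[m] = 1.2 * s[m - 1] - 0.5 * s[m - 2] + u[m]
alphas, r = autocorr_lpc(s * np.hamming(len(s)), 2)
# alphas recovers approximately (1.2, -0.5).
```

In practice the Toeplitz structure is exploited (Durbin's recursion, later in this section) instead of a general linear solver.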

The Error Energy

$$E_n = \sum_{m=0}^{N-1+p} e_n^2(m) = \sum_m \Big(s_n(m) - \sum_{k=1}^{p}\alpha_k s_n(m-k)\Big)^2 = r_0^{(n)} - 2\sum_{k=1}^{p}\alpha_k r_k^{(n)} + \sum_{i=1}^{p}\sum_{j=1}^{p}\alpha_i\alpha_j\, r_{|i-j|}^{(n)}$$

Using the normal equations $\sum_{i=1}^{p}\alpha_i\, r_{|k-i|}^{(n)} = r_k^{(n)}$, the double sum collapses to $\sum_{k=1}^{p}\alpha_k r_k^{(n)}$, so

$$E_n = r_0^{(n)} - 2\sum_{k=1}^{p}\alpha_k r_k^{(n)} + \sum_{k=1}^{p}\alpha_k r_k^{(n)} = r_0^{(n)} - \sum_{k=1}^{p}\alpha_k\, r_k^{(n)}$$

The Error Energy

Combining the normal equations and the error-energy expression into one augmented system (shown for $p = 4$, superscripts $n$ omitted):

$$\begin{pmatrix} r_0 & r_1 & r_2 & r_3 & r_4\\ r_1 & r_0 & r_1 & r_2 & r_3\\ r_2 & r_1 & r_0 & r_1 & r_2\\ r_3 & r_2 & r_1 & r_0 & r_1\\ r_4 & r_3 & r_2 & r_1 & r_0 \end{pmatrix} \begin{pmatrix} 1 \\ -\alpha_1 \\ -\alpha_2 \\ -\alpha_3 \\ -\alpha_4 \end{pmatrix} = \begin{pmatrix} E_n \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$

The first row gives the error energy $E_n$; the remaining rows are the normal equations.
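The augmented form is easy to check numerically. A small deterministic sketch (the AR(1)-style autocorrelation $r_k = 0.9^k$ is an illustrative choice):

```python
import numpy as np

# For r_k = 0.9^k (an AR(1) autocorrelation) and p = 2, the optimal
# predictor is alpha = (0.9, 0), so the augmented Toeplitz system should
# map (1, -alpha_1, -alpha_2) to (E_n, 0, 0).
r = 0.9 ** np.arange(3)
R_aug = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
a = np.array([1.0, -0.9, 0.0])
out = R_aug @ a
# out = (0.19, 0, 0): the first entry is E_n = r_0 - alpha_1 r_1 = 1 - 0.81,
# and the remaining entries vanish, as the normal equations require.
```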

Linear Predictive Analysis

The Covariance Method

$$s_n(m) = s(m+n)$$

Goal: minimize

$$E_n = \sum_{m=0}^{N-1} e_n^2(m)$$

The range over which the error energy is evaluated is different from the autocorrelation method: no window is applied, and the sum runs only over $0 \le m \le N-1$.

The Covariance Method

$$E_n = \sum_{m=0}^{N-1}\Big(\sum_{k=0}^{p}\alpha_k\, s_n(m-k)\Big)^2 = \sum_{i=0}^{p}\sum_{j=0}^{p}\alpha_i\alpha_j\sum_{m=0}^{N-1} s_n(m-i)\,s_n(m-j) = \sum_{i=0}^{p}\sum_{j=0}^{p}\alpha_i\, c_{ij}^{(n)}\,\alpha_j$$

where now

$$c_{ij}^{(n)} = \sum_{m=0}^{N-1} s_n(m-i)\,s_n(m-j)$$

The Covariance Method

Shifting the summation index,

$$c_{ij}^{(n)} = \sum_{m=0}^{N-1} s_n(m-i)\,s_n(m-j) = \sum_{m=-i}^{N-1-i} s_n(m)\,s_n\big(m-(j-i)\big) = \sum_{m=-j}^{N-1-j} s_n(m)\,s_n\big(m-(i-j)\big)$$

Property: $c_{ij}^{(n)} = c_{ji}^{(n)}$.

$c_{ij}$ is, in fact, a cross-correlation function, and its value depends on both $i$ and $j$, not just on the difference $|i-j|$. The samples involved in computing the $c_{ij}$'s are the values of $s_n(m)$ in the interval $-p \le m \le N-1$.

The Equations for the Covariance Method

$$\begin{pmatrix} c_{11} & c_{12} & c_{13} & \cdots & c_{1p}\\ c_{21} & c_{22} & c_{23} & \cdots & c_{2p}\\ c_{31} & c_{32} & c_{33} & \cdots & c_{3p}\\ \vdots & & & \ddots & \vdots \\ c_{p1} & c_{p2} & c_{p3} & \cdots & c_{pp} \end{pmatrix}\begin{pmatrix}\alpha_1\\\alpha_2\\\alpha_3\\\vdots\\\alpha_p\end{pmatrix} = \begin{pmatrix}c_{10}\\c_{20}\\c_{30}\\\vdots\\c_{p0}\end{pmatrix}$$

The matrix is symmetric but not Toeplitz: $\Phi_n\alpha = \Psi_n$, $\alpha = \Phi_n^{-1}\Psi_n$.
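The covariance system can likewise be formed and solved directly. A sketch assuming NumPy (the helper name and the AR(2) test data are illustrative, not from the lecture):

```python
import numpy as np

def covariance_lpc(s, p):
    """Covariance method: s holds s(-p), ..., s(N-1) for the frame, since
    the sums over 0 <= m <= N-1 reach back p samples before the frame.
    c[i][j] = sum_m s(m-i) s(m-j); solve the symmetric non-Toeplitz system."""
    N = len(s) - p
    c = np.array([[np.dot(s[p - i:p - i + N], s[p - j:p - j + N])
                   for j in range(p + 1)] for i in range(p + 1)])
    alphas = np.linalg.solve(c[1:, 1:], c[1:, 0])   # Phi alpha = Psi
    return alphas, c

# Synthetic AR(2) source, as before: x(m) = 1.2 x(m-1) - 0.5 x(m-2) + u(m).
rng = np.random.default_rng(1)
x = np.zeros(2000)
u = rng.standard_normal(2000)
for m in range(2, 2000):
    x[m] = 1.2 * x[m - 1] - 0.5 * x[m - 2] + u[m]
alphas, c = covariance_lpc(x[98:1100], 2)   # 1000-sample frame plus 2 history samples
# alphas recovers approximately (1.2, -0.5); c is symmetric but not Toeplitz.
```

No window is applied here; the frame simply carries $p$ extra history samples, which is exactly the interval $-p \le m \le N-1$ noted above.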

The Error Energy

$$E_n = \sum_{m=0}^{N-1} e_n^2(m) = c_{00}^{(n)} - 2\sum_{k=1}^{p}\alpha_k\, c_{0k}^{(n)} + \sum_{i=1}^{p}\sum_{j=1}^{p}\alpha_i\alpha_j\, c_{ij}^{(n)}$$

Using the normal equations $\sum_{i=1}^{p}\alpha_i\, c_{ki}^{(n)} = c_{k0}^{(n)}$, the double sum collapses as before:

$$E_n = c_{00}^{(n)} - \sum_{k=1}^{p}\alpha_k\, c_{0k}^{(n)}$$

Linear Predictive Analysis

More on the above Methods

The Equations to be Solved

Both methods solve $\Phi_n\alpha = \Psi_n$, i.e. $\alpha = \Phi_n^{-1}\Psi_n$, but with different $\Phi_n$:

- The autocorrelation method: $\Phi_n$ is the Toeplitz matrix built from $r_0^{(n)}, r_1^{(n)}, \ldots, r_{p-1}^{(n)}$.
- The covariance method: $\Phi_n$ is the symmetric, non-Toeplitz matrix of the $c_{ij}^{(n)}$'s.

$\Phi_n$ and $\Psi_n$ for the Autocorrelation Method

Define $\mathbf{S}_n$ as the matrix whose row $i$ (for $i = 0, 1, \ldots, p$) holds the windowed frame $s_n(0), s_n(1), \ldots, s_n(N-1)$ shifted right by $i$ positions and padded with zeros, so that each row is a delayed copy of the frame. Then the augmented correlation matrix factors as

$$\begin{pmatrix} r_0 & \Psi_n^{T} \\ \Psi_n & \Phi_n \end{pmatrix} = \mathbf{S}_n\mathbf{S}_n^{T}$$

so $\Phi_n$ is positive definite. Why? (For any $x \neq 0$, $x^{T}\mathbf{S}_n\mathbf{S}_n^{T}x = \|\mathbf{S}_n^{T}x\|^2 \ge 0$.)

$\Phi_n$ and $\Psi_n$ for the Covariance Method

Define $\mathbf{S}_n$ as the matrix whose row $i$ (for $i = 0, 1, \ldots, p$) holds $s_n(-i), s_n(1-i), \ldots, s_n(N-1-i)$, the frame delayed by $i$ samples, with no windowing and no zero padding. Then again

$$\begin{pmatrix} c_{00} & \Psi_n^{T} \\ \Psi_n & \Phi_n \end{pmatrix} = \mathbf{S}_n\mathbf{S}_n^{T}$$

and $\Phi_n$ is positive definite. Why?

Linear Predictive Analysis

Solution of the LPC Equations

Covariance Method --- Cholesky Decomposition Method

Also called the square-root method. We must solve $\Phi\alpha = \Psi$, where $\Phi$ is symmetric and positive definite.

Covariance Method --- Cholesky Decomposition Method

Factor $\Phi = VDV^{T}$, where $V$ is lower triangular with unit diagonal and $D$ is diagonal:

$$V = \begin{pmatrix} 1 & & & \\ v_{21} & 1 & & \\ v_{31} & v_{32} & 1 & \\ \vdots & & & \ddots \end{pmatrix}, \qquad D = \operatorname{diag}(d_1, d_2, \ldots, d_p)$$

so the system becomes $VDV^{T}\alpha = \Psi$.

Covariance Method --- Cholesky Decomposition Method

Write $Y = DV^{T}\alpha$, so that $VY = \Psi$. Since $V$ is unit lower triangular,

$$y_1 = \psi_1, \qquad y_2 = \psi_2 - v_{21}y_1, \qquad y_3 = \psi_3 - v_{31}y_1 - v_{32}y_2, \;\ldots$$

in general

$$y_i = \psi_i - \sum_{j=1}^{i-1} v_{ij}\,y_j$$

so $Y$ can be solved recursively.

Covariance Method --- Cholesky Decomposition Method

With $Y$ known, solve $DV^{T}\alpha = Y$, i.e. $V^{T}\alpha = D^{-1}Y$, by back substitution:

$$\alpha_p = y_p/d_p$$
$$\alpha_{p-1} = y_{p-1}/d_{p-1} - v_{p,p-1}\,\alpha_p$$
$$\alpha_{p-2} = y_{p-2}/d_{p-2} - v_{p-1,p-2}\,\alpha_{p-1} - v_{p,p-2}\,\alpha_p$$

in general

$$\alpha_i = y_i/d_i - \sum_{j=i+1}^{p} v_{ji}\,\alpha_j, \qquad i = p-1, \ldots, 1$$

Covariance Method --- Cholesky Decomposition Method

How are $V$ and $D$ obtained? Multiplying out $\Phi = VDV^{T}$ (with $v_{ii} = 1$), for $i > j$:

$$c_{ij} = \sum_{k=1}^{j} v_{ik}\,d_k\,v_{jk} = v_{ij}\,d_j + \sum_{k=1}^{j-1} v_{ik}\,d_k\,v_{jk}$$

so

$$v_{ij}\,d_j = c_{ij} - \sum_{k=1}^{j-1} v_{ik}\,d_k\,v_{jk}$$

Considering the diagonal elements ($i = j$):

$$d_i = c_{ii} - \sum_{k=1}^{i-1} v_{ik}^2\,d_k, \qquad d_1 = c_{11}$$

Covariance Method --- Cholesky Decomposition Method

These recursions start from the first column and proceed column by column:

$$d_1 = c_{11}, \qquad v_{i1} = c_{i1}/d_1$$
$$d_2 = c_{22} - v_{21}^2\,d_1, \qquad v_{i2} = \big(c_{i2} - v_{i1}\,d_1\,v_{21}\big)/d_2$$

The story is, then, continued.
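The column-by-column recursions, the forward pass for $Y$, and the back substitution for $\alpha$ translate directly into code. A sketch of the square-root method assuming NumPy (`ldl_solve` and the small test matrix are illustrative, not from the lecture):

```python
import numpy as np

def ldl_solve(Phi, Psi):
    """Square-root (V D V^T) solution of Phi alpha = Psi for symmetric
    positive-definite Phi: factor, then V y = Psi forward, then
    D V^T alpha = y backward."""
    p = Phi.shape[0]
    V = np.eye(p)
    d = np.zeros(p)
    for j in range(p):                          # column-by-column factorization
        d[j] = Phi[j, j] - np.dot(V[j, :j] ** 2, d[:j])
        for i in range(j + 1, p):
            V[i, j] = (Phi[i, j] - np.dot(V[i, :j] * V[j, :j], d[:j])) / d[j]
    y = np.zeros(p)
    for i in range(p):                          # forward: V y = Psi
        y[i] = Psi[i] - np.dot(V[i, :i], y[:i])
    alpha = np.zeros(p)
    for i in reversed(range(p)):                # backward: D V^T alpha = y
        alpha[i] = y[i] / d[i] - np.dot(V[i + 1:, i], alpha[i + 1:])
    return alpha, V, d, y

Phi = np.array([[4.0, 2.0, 1.0], [2.0, 3.0, 0.5], [1.0, 0.5, 2.0]])
Psi = np.array([1.0, 2.0, 3.0])
alpha, V, d, y = ldl_solve(Phi, Psi)
```

No square roots actually appear in this variant (the $d_k$ stay unsquare-rooted), which is why the factorization is numerically convenient for the covariance method.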

Covariance Method --- Cholesky Decomposition Method

Error energy:

$$E_n = c_{00} - \sum_{k=1}^{p}\alpha_k\, c_{0k} = c_{00} - \Psi^{T}\alpha$$

Using $\Psi = VY$ and $V^{T}\alpha = D^{-1}Y$:

$$\Psi^{T}\alpha = Y^{T}V^{T}\alpha = Y^{T}D^{-1}Y$$

so

$$E_n = c_{00} - \sum_{k=1}^{p} y_k^2/d_k$$

which lets the error energy be computed directly from the intermediate quantities $y_k$ and $d_k$.

Autocorrelation Method --- Durbin's Recursive Solution

The recursive solution proceeds in steps: at each step we already have the solution for a lower-order predictor, and we use that solution to compute the coefficients of the next higher-order predictor.

Notations:

- Coefficients of the $n$th-order predictor, in augmented form: $\boldsymbol{\alpha}^{(n)} = \big(1, -\alpha_1^{(n)}, -\alpha_2^{(n)}, \ldots, -\alpha_n^{(n)}\big)^{T}$, with $\boldsymbol{\alpha}^{(0)} = (1)$.
- Error energy of the $n$th-order predictor: $E^{(n)}$. What is $E^{(0)}$?
- The Toeplitz matrix of the $n$th-order problem:

$$R^{(n)} = \begin{pmatrix} r_0 & r_1 & \cdots & r_n\\ r_1 & r_0 & \cdots & r_{n-1}\\ \vdots & & \ddots & \vdots\\ r_n & r_{n-1} & \cdots & r_0 \end{pmatrix}, \qquad R^{(0)} = (r_0)$$

Autocorrelation Method --- Durbin's Recursive Solution

The equation for the autocorrelation method, in augmented form:

$$R^{(n)}\boldsymbol{\alpha}^{(n)} = \big(E^{(n)}, 0, \ldots, 0\big)^{T}$$

with base case $R^{(0)} = (r_0)$, $\boldsymbol{\alpha}^{(0)} = (1)$, $E^{(0)} = r_0$. How does the procedure proceed recursively?

Permutation Matrix

$$P_4 = \begin{pmatrix} 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0 \end{pmatrix}$$

$P_4 A$ reverses the rows of $A$ (row inversion); $A P_4$ reverses its columns (column inversion).

Property of a Toeplitz Matrix

For the symmetric Toeplitz matrix $R^{(n)}$,

$$P_{n+1}\, R^{(n)}\, P_{n+1} = R^{(n)}$$

(reversing both the rows and the columns leaves it unchanged).
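This reversal property is easy to verify numerically. A small sketch (the 4×4 example values are illustrative):

```python
import numpy as np

# P is the exchange matrix: P @ A reverses rows, A @ P reverses columns.
# For a symmetric Toeplitz R, reversing both leaves R unchanged: P R P = R.
r = np.array([1.0, 0.9, 0.81, 0.729])
R = np.array([[r[abs(i - j)] for j in range(4)] for i in range(4)])
P = np.eye(4)[::-1]        # identity with rows reversed = exchange matrix
# P @ R @ P equals R, and P is its own inverse.
```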

Autocorrelation Method --- Durbin's Recursive Solution

Applying the permutation to the order-$n$ system $R^{(n)}\boldsymbol{\alpha}^{(n)} = (E^{(n)}, 0, \ldots, 0)^{T}$:

$$P_{n+1}R^{(n)}\boldsymbol{\alpha}^{(n)} = P_{n+1}\big(E^{(n)}, 0, \ldots, 0\big)^{T}$$

and using $P_{n+1}R^{(n)} = R^{(n)}P_{n+1}$ (from $P R P = R$ and $P^2 = I$),

$$R^{(n)}\,P_{n+1}\boldsymbol{\alpha}^{(n)} = \big(0, \ldots, 0, E^{(n)}\big)^{T}$$

So the order-reversed coefficient vector solves the same Toeplitz system with the right-hand side reversed.

Autocorrelation Method --- Durbin's Recursive Solution

Partition $R^{(n)}$ around the order-$(n-1)$ problem and append a zero to $\boldsymbol{\alpha}^{(n-1)}$:

$$R^{(n)}\begin{pmatrix}\boldsymbol{\alpha}^{(n-1)}\\0\end{pmatrix} = \begin{pmatrix} R^{(n-1)} & \tilde{r}^{(n)}\\ \tilde{r}^{(n)T} & r_0\end{pmatrix}\begin{pmatrix}\boldsymbol{\alpha}^{(n-1)}\\0\end{pmatrix} = \begin{pmatrix}E^{(n-1)}\\0\\\vdots\\0\\q\end{pmatrix}$$

where $\tilde{r}^{(n)} = (r_n, r_{n-1}, \ldots, r_1)^{T}$ and $q = \tilde{r}^{(n)T}\boldsymbol{\alpha}^{(n-1)}$.

Autocorrelation Method --- Durbin's Recursive Solution

Similarly, by the reversal property,

$$R^{(n)}P_{n+1}\begin{pmatrix}\boldsymbol{\alpha}^{(n-1)}\\0\end{pmatrix} = \begin{pmatrix}q\\0\\\vdots\\0\\E^{(n-1)}\end{pmatrix}$$

Now form the combination

$$\boldsymbol{\alpha}^{(n)} = \begin{pmatrix}\boldsymbol{\alpha}^{(n-1)}\\0\end{pmatrix} - k_n\,P_{n+1}\begin{pmatrix}\boldsymbol{\alpha}^{(n-1)}\\0\end{pmatrix}$$

so that, by linearity,

$$R^{(n)}\boldsymbol{\alpha}^{(n)} = \begin{pmatrix}E^{(n-1)} - k_n q\\0\\\vdots\\0\\q - k_n E^{(n-1)}\end{pmatrix}$$

This is what we want, provided the last entry vanishes.

Autocorrelation Method --- Durbin's Recursive Solution

Note that the combined vector has the required augmented form: the reversed vector $P_{n+1}(\boldsymbol{\alpha}^{(n-1)T}, 0)^{T}$ has a $0$ in its first entry, so the first entry of $\boldsymbol{\alpha}^{(n)}$ remains $1$:

$$\boldsymbol{\alpha}^{(n)} = \begin{pmatrix}1\\-\alpha_1^{(n)}\\\vdots\\-\alpha_n^{(n)}\end{pmatrix} = \begin{pmatrix}\boldsymbol{\alpha}^{(n-1)}\\0\end{pmatrix} - k_n\,P_{n+1}\begin{pmatrix}\boldsymbol{\alpha}^{(n-1)}\\0\end{pmatrix}$$

Autocorrelation Method --- Durbin's Recursive Solution

To recover the required form $R^{(n)}\boldsymbol{\alpha}^{(n)} = (E^{(n)}, 0, \ldots, 0)^{T}$, the last entry must vanish. Choose

$$k_n = \frac{q}{E^{(n-1)}}, \qquad q = \tilde{r}^{(n)T}\boldsymbol{\alpha}^{(n-1)}$$

so that $q - k_n E^{(n-1)} = 0$, and then

$$E^{(n)} = E^{(n-1)} - k_n\, q$$

Autocorrelation Method --- Durbin's Recursive Solution

Writing $q$ out in terms of the coefficients,

$$k_n = \frac{1}{E^{(n-1)}}\Big(r_n - \sum_{i=1}^{n-1}\alpha_i^{(n-1)}\, r_{n-i}\Big)$$

and, since $q = k_n E^{(n-1)}$,

$$E^{(n)} = E^{(n-1)} - k_n\big(k_n E^{(n-1)}\big) = \big(1 - k_n^2\big)\,E^{(n-1)}$$

)1(

)1()( 0

0 nn

nn k

α

αα

)1(

)1()( 0

0 nn

nn k

α

αα

1)(0 n

nik ninn

ni

ni ,,2,1,)1()1()(

nn

n k)(

(0)0E r

(0)0E r (0) (0) 1 α α

(0) (0) 1 α α

(0)0rR (0)

0rR

Autocorrelation Method---Durbin’s Recursive Solution

)1( nn E

qk )1()1(

1

nn

kn

n

kk Er 1)(

0 n

qkEE nnn )1()( )( )1()1( n

nnn EkkE

)1(2 )1( nn Ek

)1(

)1()( 0

0 nn

nn k

α

αα

)1(

)1()( 0

0 nn

nn k

α

αα What can you say about kn?What can you say about kn?

11 nk 11 nk1)(0 n

nik ninn

ni

ni ,,2,1,)1()1()(

nn

n k)(

(0)0E r

(0)0E r (0) (0) 1 α α

(0) (0) 1 α α

(0)0rR (0)

0rR

Autocorrelation Method --- Durbin's Recursive Solution

Summary: constructing a $p$th-order linear predictor.

Step 1. Compute the values $r_0, r_1, \ldots, r_p$.

Step 2. Set $E^{(0)} = r_0$.

Step 3. Recursively compute the following terms, for $n = 1$ to $p$:

$$k_n = \frac{1}{E^{(n-1)}}\Big(r_n - \sum_{i=1}^{n-1}\alpha_i^{(n-1)}\, r_{n-i}\Big)$$
$$\alpha_n^{(n)} = k_n$$
$$\alpha_i^{(n)} = \alpha_i^{(n-1)} - k_n\,\alpha_{n-i}^{(n-1)}, \qquad i = 1, \ldots, n-1$$
$$E^{(n)} = \big(1 - k_n^2\big)\,E^{(n-1)}$$

Linear Predictive Analysis

Lattice Formulations

The Steps for Finding LPC Coefficients

Both the covariance and the autocorrelation methods consist of two steps:

- Computation of a matrix of correlation values.
- Solution of a set of linear equations.

The lattice method combines the two into one.

The Clue from the Autocorrelation Method

Consider the system function of the $n$th-order linear predictor:

$$A^{(n)}(z) = 1 - \sum_{i=1}^{n} \alpha_i^{(n)} z^{-i}$$

The recursion from the autocorrelation method, $\alpha_i^{(n)} = \alpha_i^{(n-1)} - k_n\alpha_{n-i}^{(n-1)}$ for $i = 1, \ldots, n-1$ and $\alpha_n^{(n)} = k_n$, gives

$$A^{(n)}(z) = 1 - \sum_{i=1}^{n-1}\big(\alpha_i^{(n-1)} - k_n\,\alpha_{n-i}^{(n-1)}\big)z^{-i} - k_n z^{-n}$$

The Clue from the Autocorrelation Method

Changing the summation index $i \to n-i$ in the $k_n$ terms,

$$A^{(n)}(z) = A^{(n-1)}(z) - k_n z^{-n}\Big(1 - \sum_{i=1}^{n-1}\alpha_i^{(n-1)} z^{i}\Big)$$

i.e.

$$A^{(n)}(z) = A^{(n-1)}(z) - k_n\, z^{-n} A^{(n-1)}(z^{-1})$$

Interpretation

Multiplying by $S(z)$,

$$E^{(n)}(z) = A^{(n)}(z)S(z) = A^{(n-1)}(z)S(z) - k_n\, z^{-n}A^{(n-1)}(z^{-1})S(z)$$

In the time domain this is $e^{+}_n(m) = e^{+}_{n-1}(m) - k_n\, e^{-}_{n-1}(m)$, where

$$e^{+}_{n-1}(m) = s(m) - \sum_{i=1}^{n-1}\alpha_i^{(n-1)} s(m-i)$$

is the order-$(n-1)$ forward prediction error, and

$$e^{-}_{n-1}(m) = s(m-n) - \sum_{i=1}^{n-1}\alpha_i^{(n-1)} s(m-n+i)$$

predicts $s(m-n)$ from the $n-1$ samples that follow it: the backward prediction error.

Interpretation

Forward prediction error filter: predicts $s(m)$ from the preceding samples $s(m-1), \ldots, s(m-n+1)$, using $\alpha_1^{(n-1)}, \ldots, \alpha_{n-1}^{(n-1)}$.

Backward prediction error filter: predicts $s(m-n)$ from the following samples $s(m-n+1), \ldots, s(m-1)$, using the same coefficients applied in reverse order.

Backward Prediction Defined

Define

$$B^{(n)}(z) = z^{-(n+1)}\,A^{(n)}(z^{-1}) = z^{-(n+1)}\Big(1 - \sum_{i=1}^{n}\alpha_i^{(n)} z^{i}\Big)$$

With $A^{(0)}(z) = 1$, the base cases are $A^{(0)}(z) = 1$ and $B^{(0)}(z) = z^{-1}$.

Forward Prediction vs. Backward Prediction

The order recursion $A^{(n)}(z) = A^{(n-1)}(z) - k_n z^{-n}A^{(n-1)}(z^{-1})$ then reads

$$A^{(n)}(z) = A^{(n-1)}(z) - k_n\, B^{(n-1)}(z)$$

and substituting $z \to z^{-1}$ into the recursion and multiplying by $z^{-(n+1)}$,

$$B^{(n)}(z) = z^{-1}\big[B^{(n-1)}(z) - k_n\, A^{(n-1)}(z)\big]$$

The Prediction Errors

Define the forward and backward prediction errors as the outputs of $A^{(n)}(z)$ and $B^{(n)}(z)$:

$$E^{+}_n(z) = A^{(n)}(z)\,S(z), \qquad e^{+}_n(m) = s(m) - \sum_{i=1}^{n}\alpha_i^{(n)}\, s(m-i)$$
$$E^{-}_n(z) = B^{(n)}(z)\,S(z), \qquad e^{-}_n(m) = s(m-n-1) - \sum_{i=1}^{n}\alpha_i^{(n)}\, s(m-n-1+i)$$

The filter recursions translate into

$$e^{+}_n(m) = e^{+}_{n-1}(m) - k_n\, e^{-}_{n-1}(m)$$
$$e^{-}_n(m) = e^{-}_{n-1}(m-1) - k_n\, e^{+}_{n-1}(m-1)$$

with initial conditions

$$e^{+}_0(m) = s(m), \qquad e^{-}_0(m) = s(m-1)$$

The Lattice Structure

The recursions

$$e^{+}_n(m) = e^{+}_{n-1}(m) - k_n\, e^{-}_{n-1}(m)$$
$$e^{-}_n(m) = e^{-}_{n-1}(m-1) - k_n\, e^{+}_{n-1}(m-1)$$

with $e^{+}_0(m) = s(m)$ and $e^{-}_0(m) = s(m-1)$, define a lattice filter: $s(m)$ enters at stage 0, each stage $n$ uses one delay element $z^{-1}$ and two cross multipliers $-k_n$, and the final forward output is the residual $e_p(m)$.

The Lattice Structure

$k_i = ?$ Throughout the discussion we have assumed that the $k_i$'s are the same as those developed for the autocorrelation method. So the $k_i$'s can be found using the autocorrelation method.

Another Approach to Find the $k_i$'s

For the $n$th-order predictor, our goal is to minimize $\sum_{m=0}^{N-1}\big[e^{+}_n(m)\big]^2$. Since $e^{+}_n(m) = e^{+}_{n-1}(m) - k_n e^{-}_{n-1}(m)$, we want to minimize

$$E^{(n)} = \sum_{m=0}^{N-1}\big[e^{+}_{n-1}(m) - k_n\, e^{-}_{n-1}(m)\big]^2$$

Another Approach to Find the $k_i$'s

Set $\partial E^{(n)}/\partial k_n = 0$:

$$\frac{\partial E^{(n)}}{\partial k_n} = -2\sum_{m=0}^{N-1}\big[e^{+}_{n-1}(m) - k_n\, e^{-}_{n-1}(m)\big]\,e^{-}_{n-1}(m) = 0$$

$$\Longrightarrow\quad k_n = \frac{\displaystyle\sum_{m=0}^{N-1} e^{+}_{n-1}(m)\,e^{-}_{n-1}(m)}{\displaystyle\sum_{m=0}^{N-1}\big[e^{-}_{n-1}(m)\big]^2}$$

Another Approach to Find the $k_i$'s

Fact: for these error sequences,

$$\sum_{m=0}^{N-1}\big[e^{-}_{n-1}(m)\big]^2 = \sum_{m=0}^{N-1}\big[e^{+}_{n-1}(m)\big]^2$$

so the solution can be written in the symmetric, normalized-correlation form

$$k_n = \frac{\displaystyle\sum_{m=0}^{N-1} e^{+}_{n-1}(m)\,e^{-}_{n-1}(m)}{\sqrt{\displaystyle\sum_{m=0}^{N-1}\big[e^{+}_{n-1}(m)\big]^2\;\sum_{m=0}^{N-1}\big[e^{-}_{n-1}(m)\big]^2}}$$
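The normalized-correlation formula and the error recursions can be run together, stage by stage, directly on the samples. A sketch assuming NumPy (the function name, zero-padding at the frame edge, and the AR(1) test signal are illustrative choices):

```python
import numpy as np

def parcor(s, p):
    """Lattice (PARCOR) estimation of k_1..k_p directly from the samples,
    propagating forward (e+) and backward (e-) prediction errors."""
    e = s.astype(float)                         # e0+(m) = s(m)
    b = np.concatenate(([0.0], s[:-1]))         # e0-(m) = s(m-1), zero-padded
    ks = np.zeros(p)
    for n in range(p):
        # normalized cross-correlation of forward and backward errors
        k = np.dot(e, b) / np.sqrt(np.dot(e, e) * np.dot(b, b))
        ks[n] = k
        e_new = e - k * b                              # e_n+(m) = e+(m) - k e-(m)
        b = np.concatenate(([0.0], (b - k * e)[:-1]))  # e_n-(m): delayed update
        e = e_new
    return ks

# AR(1) with coefficient 0.9: k_1 should be near 0.9, higher k's near 0.
rng = np.random.default_rng(2)
s = np.zeros(5000)
u = rng.standard_normal(5000)
for m in range(1, 5000):
    s[m] = 0.9 * s[m - 1] + u[m]
ks = parcor(s, 3)
```

Note that no correlation matrix is formed and no linear system is solved: the matrix computation and the equation solving are combined into one pass, which is the point of the lattice method.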

PARCOR

Each lattice stage thus computes $k_n$ as the correlation (CORR) between its forward and backward error inputs: $s(m) = e^{+}_0(m)$ enters, the stages produce $e^{\pm}_1(m), \ldots, e^{\pm}_{p}(m)$ in turn, and $e(m) = e^{+}_p(m)$ is the final residual. The overall analysis filter is

$$\frac{E_p(z)}{S(z)} = A(z) = 1 - \sum_{i=1}^{p}\alpha_i z^{-i}, \qquad \frac{S(z)}{E_p(z)} = \frac{1}{A(z)} = \frac{1}{1 - \sum_{i=1}^{p}\alpha_i z^{-i}}$$

Given the $k_n$'s, can you find the $\alpha_i$'s? (Yes: run the Durbin coefficient update $\alpha_n^{(n)} = k_n$, $\alpha_i^{(n)} = \alpha_i^{(n-1)} - k_n\alpha_{n-i}^{(n-1)}$ for $n = 1, \ldots, p$.)

All-Pole Lattice

For synthesis we invert the lattice: given the residual, recover the signal. Rearranging the forward recursion,

$$e^{+}_{n-1}(m) = e^{+}_n(m) + k_n\, e^{-}_{n-1}(m)$$
$$e^{-}_n(m) = e^{-}_{n-1}(m-1) - k_n\, e^{+}_{n-1}(m-1)$$

or, in the $z$-domain,

$$E^{+}_{n-1}(z) = E^{+}_n(z) + k_n\, E^{-}_{n-1}(z)$$
$$E^{-}_n(z) = z^{-1}\big[E^{-}_{n-1}(z) - k_n\, E^{+}_{n-1}(z)\big]$$

Each stage now takes $e^{+}_n$ and returns $e^{+}_{n-1}$, while the backward errors propagate through the delays.

All-Pole Lattice

Running the stages from $n = p$ down to $n = 1$, with input $e(m) = e^{+}_p(m)$ and output $s(m) = e^{+}_0(m)$, the synthesis filter is

$$V(z) = \frac{S(z)}{E(z)} = \frac{E^{+}_0(z)}{E^{+}_p(z)} = \frac{1}{A(z)} = \frac{1}{1 - \sum_{k=1}^{p}\alpha_k z^{-k}}$$

an all-pole system realized directly from the reflection coefficients $k_1, \ldots, k_p$.
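The two mirrored structures can be sketched side by side (assuming NumPy; the function names, the fixed $k$ values, and the random test signal are illustrative). The analysis lattice produces the residual; the synthesis lattice inverts it sample by sample:

```python
import numpy as np

def lattice_analyze(s, ks):
    """Analysis (PARCOR) lattice: s(m) -> residual e_p+(m), given k_1..k_p."""
    p = len(ks)
    bmem = np.zeros(p + 1)            # delayed backward errors, one per stage
    e = np.zeros(len(s))
    for m, x in enumerate(s):
        f = x                          # e_0+(m) = s(m)
        newb = np.zeros(p + 1)
        newb[0] = x                    # becomes the stage-0 delayed input next sample
        for n in range(1, p + 1):
            fn = f - ks[n - 1] * bmem[n - 1]          # forward update
            newb[n] = bmem[n - 1] - ks[n - 1] * f     # backward update
            f = fn
        bmem = newb
        e[m] = f                       # e_p+(m)
    return e

def lattice_synthesize(e, ks):
    """All-pole synthesis lattice: residual e(m) -> s(m), same k_1..k_p."""
    p = len(ks)
    bmem = np.zeros(p + 1)
    s = np.zeros(len(e))
    for m, x in enumerate(e):
        f = x                          # e_p+(m)
        for n in range(p, 0, -1):
            f = f + ks[n - 1] * bmem[n - 1]           # e_{n-1}+(m)
            bmem[n] = bmem[n - 1] - ks[n - 1] * f     # updated backward error
        s[m] = f                       # e_0+(m) = s(m)
        bmem[0] = f
    return s

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
res = lattice_analyze(x, [0.5, -0.3])
y = lattice_synthesize(res, [0.5, -0.3])
# y reconstructs x exactly: the synthesis lattice is the inverse system.
```

The synthesis loop runs the stages in the opposite order and flips one sign, but shares the same multipliers and delay states, which is exactly the mirrored relationship of the two figures.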

Comparison

The analysis (PARCOR) lattice runs $s(m)$ forward through stages $1, \ldots, p$ to the residual $e(m) = e^{+}_p(m)$; the all-pole synthesis lattice runs $e(m)$ back through the same stages to $s(m)$. The two structures use the same multipliers $k_1, \ldots, k_p$ and the same delays, mirrored.

Normalized Lattice

Each lattice section $n$ implements

$$E^{+}_n(z) = E^{+}_{n-1}(z) - k_n\, E^{-}_{n-1}(z)$$
$$E^{-}_n(z) = z^{-1}\big[E^{-}_{n-1}(z) - k_n\, E^{+}_{n-1}(z)\big]$$

Cascading sections $1$ through $p$, with $E^{+}_0(z) = S(z)$, $E^{-}_0(z) = z^{-1}S(z)$, and $E(z) = E^{+}_p(z)$, reproduces the analysis filter.

Normalized Lattice

Substituting $E^{+}_{n-1}(z) = E^{+}_n(z) + k_n E^{-}_{n-1}(z)$ into the backward recursion eliminates $E^{+}_{n-1}$:

$$E^{-}_n(z) = z^{-1}\big[E^{-}_{n-1}(z) - k_n E^{+}_n(z) - k_n^2 E^{-}_{n-1}(z)\big] = z^{-1}\big[(1-k_n^2)\,E^{-}_{n-1}(z) - k_n\, E^{+}_n(z)\big]$$

This yields the three-multiplier form of a section, with multipliers $k_n$, $-k_n$, and $1-k_n^2$.

Normalized Lattice

Three-multiplier form: multipliers $k_n$, $-k_n$, $1-k_n^2$, as derived above.

Four-multiplier form: let $k_n = \sin\theta_n$, so that $\sqrt{1-k_n^2} = \cos\theta_n$; each section then uses the four multipliers $\pm\sin\theta_n$, $\cos\theta_n$, $\cos\theta_n$ and acts as a lossless rotation.

Kelly-Lochbaum form: multipliers $1+k_n$ and $1-k_n$ together with $\pm k_n$, the form that arises in acoustic-tube models of the vocal tract.

Normalized Lattice

With $A(z) = 1 - \sum_{i=1}^{p}\alpha_i z^{-i}$, the basic all-pole lattice realizes

$$V(z) = \frac{1}{A(z)}$$

The four-multiplier (normalized) form realizes

$$V(z) = \frac{\prod_{n=1}^{p}\cos\theta_n}{A(z)}$$

and the Kelly-Lochbaum form realizes

$$V(z) = \frac{\prod_{n=1}^{p}(1+k_n)}{A(z)}$$

The pole locations are the same in every case; the forms differ only in overall gain.