CS344: Introduction to Artificial Intelligence (associated lab: CS386)


Page 1

CS344: Introduction to Artificial Intelligence

(associated lab: CS386)
Pushpak Bhattacharyya

CSE Dept., IIT Bombay

Lecture 25: Perceptrons; # of regions; training and convergence

14th March, 2011

Page 2

Functions in one-input Perceptron

w·0 > θ (i.e., 0 > θ) and w·1 > θ (i.e., w > θ): True function

w·0 ≤ θ (i.e., θ ≥ 0) and w·1 ≤ θ (i.e., w ≤ θ): False function

w·0 ≤ θ (i.e., θ ≥ 0) and w·1 > θ (i.e., w > θ): Identity function

w·0 > θ (i.e., 0 > θ) and w·1 ≤ θ (i.e., w ≤ θ): Complement function

[Diagram: one-input perceptron with input x, weight w, threshold θ, and output y]
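To make the four cases concrete, here is a minimal Python sketch (mine, not from the lecture; the (w, θ) pairs are illustrative values chosen to satisfy each pair of inequalities):

```python
# One-input perceptron: y = 1 iff w*x > theta.
def perceptron(x, w, theta):
    return 1 if w * x > theta else 0

# Illustrative (w, theta) choices, one per case above.
cases = {
    "True":       (0.0, -0.5),   # 0 > theta and w > theta
    "False":      (0.0,  0.5),   # theta >= 0 and w <= theta
    "Identity":   (1.0,  0.5),   # theta >= 0 and w > theta
    "Complement": (-1.0, -0.5),  # 0 > theta and w <= theta
}

for name, (w, theta) in cases.items():
    print(name, [perceptron(x, w, theta) for x in (0, 1)])
# True [1, 1], False [0, 0], Identity [0, 1], Complement [1, 0]
```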

Page 3

Functions in Simple Perceptron

Page 4

Multiple representations of a concept

AND: y = x1 & x2

Linear inequalities:
θ ≥ 0
w1 ≤ θ
w2 ≤ θ
w1 + w2 > θ

One solution: θ = 0.5, w1 = 0.4, w2 = 0.4

Truth table:

x1  x2 | y
 0   0 | 0
 0   1 | 0
 1   0 | 0
 1   1 | 1

[Diagram: two-input perceptron with weights w1, w2 and threshold θ]

The concept of ANDing can be captured in multiple representations: plain English, linear inequalities, a truth table, hyperplanes, algebra, and a perceptron.
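As a quick sanity check, a short Python sketch (not from the slides) verifying that θ = 0.5, w1 = w2 = 0.4 reproduces the truth table:

```python
# Verify that w1 = w2 = 0.4, theta = 0.5 computes AND.
w1, w2, theta = 0.4, 0.4, 0.5

for x1 in (0, 1):
    for x2 in (0, 1):
        y = 1 if w1 * x1 + w2 * x2 > theta else 0
        assert y == (x1 & x2)      # matches the truth table row
        print(x1, x2, "->", y)
```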

Page 5

Inductive Bias

Once we have decided to use a particular representation, we have assumed an "inductive bias".

The inductive bias of a learning algorithm is the set of assumptions that the learner uses to predict outputs given inputs that it has not encountered (Mitchell, 1980).

You can refer to: L. G. Valiant, "A Theory of the Learnable", Communications of the ACM, 1984.

Page 6

Fundamental Observation

The number of threshold functions (TFs) computable by a perceptron is equal to the number of regions produced by the 2^n hyperplanes obtained by plugging the values <x1, x2, x3, …, xn> into the equation

∑_{i=1..n} wi xi = θ

Page 7

The geometrical observation

Problem: given m linear surfaces called hyperplanes (each hyperplane is (d−1)-dimensional) in d-dimensional space, what is the maximum number of regions produced by their intersection?

i.e., R_{m,d} = ?

Page 8

Co-ordinate Spaces

We work in the <x1, x2> space or the <w1, w2, θ> space.

[Diagram: the <x1, x2> plane with the points (0,0), (1,0), (0,1), (1,1), and the <w1, w2, θ> space with its three axes]

Hyperplane (a line in 2-D): with w1 = w2 = 1 and θ = 0.5, the line is x1 + x2 = 0.5.

General equation of a hyperplane: ∑ wi xi = θ

Page 9

Regions produced by lines

[Diagram: four lines L1, L2, L3, L4 in the <x1, x2> plane]

Regions produced by lines not necessarily passing through the origin:
L1: 2
L2: 2+2 = 4
L3: 2+2+3 = 7
L4: 2+2+3+4 = 11

New regions created = number of intersections made on the incoming line by the original lines + 1 (each intersection-bounded segment of the incoming line splits one existing region in two). Total number of regions = original number of regions + new regions created.
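The rule can be checked mechanically; a small Python sketch (assuming the lines are in general position, so the k-th line meets all k−1 earlier lines):

```python
# Max regions from m lines in general position in the plane:
# the k-th incoming line is cut into (k-1) + 1 = k segments,
# and each segment splits one existing region in two.
def regions(m):
    total = 1                      # the empty plane is one region
    for k in range(1, m + 1):
        total += k                 # k new regions from line k
    return total

for m in range(1, 5):
    print(f"L{m}: {regions(m)}")   # 2, 4, 7, 11 as above
```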

Page 10

Number of computable functions by a neuron

Plugging each of the four input vectors <x1, x2> into w1·x1 + w2·x2 = θ gives one plane per input:

(0,0): θ = 0 — plane P1
(0,1): w2 = θ — plane P2
(1,0): w1 = θ — plane P3
(1,1): w1 + w2 = θ — plane P4

P1, P2, P3 and P4 are planes in the <w1, w2, θ> space.

[Diagram: neuron with inputs x1, x2, weights w1, w2, threshold θ, and output y]

Page 11

Number of computable functions by a neuron (cont…)

P1 produces 2 regions P2 is intersected by P1 in a line. 2 more new

regions are produced.Number of regions = 2+2 = 4

P3 is intersected by P1 and P2 in 2 intersecting lines. 4 more regions are produced.Number of regions = 4 + 4 = 8

P4 is intersected by P1, P2 and P3 in 3 intersecting lines. 6 more regions are produced.Number of regions = 8 + 6 = 14

Thus, a single neuron can compute 14 Boolean functions which are linearly separable.

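The count of 14 can also be confirmed by brute force; a Python sketch (the finite weight/threshold grid is my assumption, chosen so that it happens to realize every linearly separable function of two inputs):

```python
from itertools import product

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
weights = [-1.0, 0.0, 1.0]
thetas  = [-1.5, -0.5, 0.5, 1.5]

computable = set()
for w1, w2, theta in product(weights, weights, thetas):
    # Truth table realized by this (w1, w2, theta) setting.
    table = tuple(1 if w1*x1 + w2*x2 > theta else 0 for x1, x2 in inputs)
    computable.add(table)

print(len(computable))             # 14
print((0, 1, 1, 0) in computable)  # False: XOR is not computable
print((1, 0, 0, 1) in computable)  # False: XNOR is not computable
```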

Page 12

Points in the same region

If <w1, w2, θ> and <w1', w2', θ'> share a region, then for every input <x1, x2>,
w1·x1 + w2·x2 > θ if and only if w1'·x1 + w2'·x2 > θ',
so the two settings compute the same function.

Page 13

No. of Regions produced by Hyperplanes

Page 14

The number of regions formed by n hyperplanes in d dimensions, all passing through the origin, is given by the following recurrence relation:

R_{n,d} = R_{n-1,d} + R_{n-1,d-1}

Boundary conditions:
R_{1,d} = 2: one hyperplane in d-dim produces 2 regions.
R_{n,1} = 2: n hyperplanes in 1-dim through the origin all reduce to the origin point, producing 2 regions.

We use a generating function as an operating function:

f(x, y) = ∑_{n≥1} ∑_{d≥1} R_{n,d} xⁿ y^d
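A direct Python transcription of the recurrence and its boundary conditions (a sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def R(n, d):
    """Regions formed by n hyperplanes through the origin in d dimensions."""
    if n == 1 or d == 1:                 # boundary conditions
        return 2
    return R(n - 1, d) + R(n - 1, d - 1)

# Page 11's case: 4 planes in the 3-dim <w1, w2, theta> space.
print(R(4, 3))  # 14
```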

Page 15

From the recurrence relation we have

R_{n,d} − R_{n-1,d} − R_{n-1,d-1} = 0

R_{n-1,d} corresponds to 'shifting' n by 1 place ⇒ multiplication by x.
R_{n-1,d-1} corresponds to 'shifting' n and d by 1 place ⇒ multiplication by xy.

On expanding f(x, y) we get

f(x, y) = R_{1,1}xy + R_{1,2}xy² + R_{1,3}xy³ + … + R_{1,d}xy^d + …
        + R_{2,1}x²y + R_{2,2}x²y² + R_{2,3}x²y³ + … + R_{2,d}x²y^d + …
        + …
        + R_{n,1}xⁿy + R_{n,2}xⁿy² + R_{n,3}xⁿy³ + … + R_{n,d}xⁿy^d + …

Page 16

Splitting off the boundary terms and shifting indices:

f(x, y) = ∑_{n≥1} ∑_{d≥1} R_{n,d} xⁿ y^d
        = ∑_{n≥2} ∑_{d≥2} R_{n,d} xⁿ y^d + ∑_{d≥1} R_{1,d} x y^d + ∑_{n≥2} R_{n,1} xⁿ y

x·f(x, y) = ∑_{n≥2} ∑_{d≥1} R_{n-1,d} xⁿ y^d
          = ∑_{n≥2} ∑_{d≥2} R_{n-1,d} xⁿ y^d + ∑_{n≥2} R_{n-1,1} xⁿ y

xy·f(x, y) = ∑_{n≥2} ∑_{d≥2} R_{n-1,d-1} xⁿ y^d

With R_{1,d} = 2 for all d:

∑_{d≥1} R_{1,d} x y^d = 2x(y + y² + y³ + …) = 2xy/(1 − y)

Page 17

After all this expansion,

f(x, y) − x·f(x, y) − xy·f(x, y)
  = ∑_{n≥2} ∑_{d≥2} (R_{n,d} − R_{n-1,d} − R_{n-1,d-1}) xⁿ y^d
    + ∑_{n≥2} (R_{n,1} − R_{n-1,1}) xⁿ y
    + ∑_{d≥1} R_{1,d} x y^d
  = 2xy/(1 − y)

since the other two terms become zero: the double sum by the recurrence relation, and the single sum because R_{n,1} = R_{n-1,1} = 2.

Page 18

This implies

[1 − x(1 + y)] f(x, y) = 2xy/(1 − y)

f(x, y) = 2xy · (1 − y)⁻¹ · (1 − x(1 + y))⁻¹
        = 2xy [1 + y + y² + …] [1 + x(1 + y) + x²(1 + y)² + … + xⁿ⁻¹(1 + y)ⁿ⁻¹ + …]

Also we have

f(x, y) = ∑_{n≥1} ∑_{d≥1} R_{n,d} xⁿ y^d

Comparing coefficients of each term in the RHS we get:

Page 19

Comparing coefficients we get

R_{n,d} = 2 · ∑_{i=0}^{d−1} C(n−1, i)

where C(n−1, i) is the binomial coefficient.
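A quick Python consistency check (sketch) that the closed form agrees with the recurrence from Page 14:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def R(n, d):
    if n == 1 or d == 1:
        return 2
    return R(n - 1, d) + R(n - 1, d - 1)

def R_closed(n, d):
    return 2 * sum(comb(n - 1, i) for i in range(d))

assert all(R(n, d) == R_closed(n, d)
           for n in range(1, 10) for d in range(1, 10))
print(R_closed(4, 3))  # 14, matching Page 11
```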

Page 20

Perceptron Training Algorithm (PTA)

Preprocessing:

1. The computation law is modified to
   y = 1 if ∑ wi xi > θ
   y = 0 if ∑ wi xi < θ

[Diagrams: the neuron's 0-class test changes from ∑ wi xi ≤ θ to ∑ wi xi < θ]

Page 21

PTA – preprocessing cont…

2. Absorb θ as a weight.

3. Negate all the zero-class examples.

[Diagrams: θ is absorbed as weight w0 = θ on an extra input x0 = −1, so the test becomes ∑_{i=0..n} wi xi > 0]
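Steps 2 and 3 as a Python sketch (the helper name preprocess is mine); the OR example on the next page is used as input:

```python
def preprocess(one_class, zero_class):
    """Absorb theta via x0 = -1, then negate the 0-class examples."""
    vectors = [(-1,) + tuple(x) for x in one_class]   # augment 1-class
    for x in zero_class:
        v = (-1,) + tuple(x)                          # augment...
        vectors.append(tuple(-c for c in v))          # ...and negate
    return vectors

print(preprocess([(1, 1), (1, 0), (0, 1)], [(0, 0)]))
# [(-1, 1, 1), (-1, 1, 0), (-1, 0, 1), (1, 0, 0)]
```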

Page 22

Example to demonstrate preprocessing: OR perceptron

1-class: <1,1>, <1,0>, <0,1>
0-class: <0,0>

Augmented x vectors:

1-class: <-1,1,1>, <-1,1,0>, <-1,0,1>
0-class: <-1,0,0>

Negate 0-class: <1,0,0>

Page 23

Example to demonstrate preprocessing cont…

Now the vectors are:

     x0  x1  x2
X1   -1   0   1
X2   -1   1   0
X3   -1   1   1
X4    1   0   0

Page 24

Perceptron Training Algorithm

1. Start with a random value of w, e.g. <0,0,0,…>.
2. Test for w·xi > 0. If the test succeeds for i = 1, 2, …, n, then return w.
3. Otherwise modify w: w_next = w_prev + x_fail, and go to step 2.

Page 25

Tracing PTA on the OR example

w=<0,0,0>   w·X1 fails   w=<-1,0,1>  w·X4 fails
w=<0,0,1>   w·X2 fails   w=<-1,1,1>  w·X4 fails
w=<0,1,1>   w·X4 fails   w=<1,1,1>   w·X1 fails
w=<0,1,2>   w·X4 fails   w=<1,1,2>   w·X2 fails
w=<0,2,2>   w·X4 fails   w=<1,2,2>   success
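A runnable Python sketch of the algorithm on these vectors (correcting on the first failing example each time is my assumption; with it, the updates reproduce the trace above):

```python
def pta(vectors, w=(0, 0, 0), max_iters=100):
    w = list(w)
    for _ in range(max_iters):
        # First example with w.x <= 0 (i.e., the test w.x > 0 fails).
        fail = next((x for x in vectors
                     if sum(wi * xi for wi, xi in zip(w, x)) <= 0), None)
        if fail is None:
            return w                                 # all tests succeed
        w = [wi + xi for wi, xi in zip(w, fail)]     # w_next = w_prev + x_fail
        print("w =", w)
    raise RuntimeError("did not converge")

X = [(-1, 0, 1), (-1, 1, 0), (-1, 1, 1), (1, 0, 0)]  # X1..X4
print("final w =", pta(X))  # converges to [1, 2, 2]
```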