EECS 126: Probability & Random Processes Fall 2020
PageRank
Shyam Parekh
• Originally used by Google for ranking the pages from a keyword search.
• π(i) = Σ_{j∈𝒳} π(j) P(j, i), ∀ i ∈ 𝒳 ⇔ π = πP
• Original algorithm also used a damping factor.
• With the normalization condition Σ_i π(i) = 1, π = (1/39)[12, 9, 10, 6, 2].
• This is similar to a Markov Chain (defined next).
• P(A, B) = 1/2, P(D, E) = 1/3, … (transition probabilities from the state diagram)
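The balance equations π = πP can be solved iteratively (power iteration, as in the original PageRank). A minimal sketch; the 3-page link matrix below is an assumption for illustration, not the 5-page example whose answer is (1/39)[12, 9, 10, 6, 2]:

```python
import numpy as np

# Hypothetical 3-page "web" transition matrix (each row sums to 1).
P = np.array([[0.0, 0.5, 0.5],   # page 0 links to pages 1 and 2
              [1.0, 0.0, 0.0],   # page 1 links back to page 0
              [0.5, 0.5, 0.0]])  # page 2 links to pages 0 and 1

pi = np.full(3, 1.0 / 3.0)       # start uniform, so sum_i pi(i) = 1
for _ in range(1000):
    pi = pi @ P                  # pi_{n+1} = pi_n P

# pi now satisfies the balance equations pi = pi P (this assumed chain
# is irreducible and aperiodic, so the iteration converges).
```

Since each row of P sums to 1, the iteration preserves the normalization Σ_i π(i) = 1 automatically.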
• DTMC {X(n), n ≥ 0} over state space 𝒳 with P = [P(i, j)] as the transition probability matrix, where P(i, j) = P[X(n + 1) = j | X(n) = i].
– State transition diagram.
– Memoryless Property: P[X(n + 1) = j | X(n) = i, X(m), m < n] = P(i, j), ∀ i, j.
• π_{n+1}(i) = Σ_{j∈𝒳} π_n(j) P(j, i) ⇔ π_{n+1} = π_n P ⇔ π_n = π_0 P^n, ∀ n ≥ 0.
– A normalized π is an invariant distribution ⇔ π = πP.
– Irreducible: the MC can reach any state from any other state (possibly in multiple steps).
– Aperiodic: Let d(i) ≜ g.c.d.{n ≥ 1 : P^n(i, i) > 0}. An irreducible DTMC is aperiodic if d(i) = 1, ∀ i. (Fact: In an irreducible DTMC, d(i) is the same ∀ i.)
Discrete Time Markov Chain (DTMC)
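A normalized invariant distribution can also be computed directly, by solving the linear system (P^T − I)π = 0 together with Σ_i π(i) = 1. A sketch, assuming a hypothetical 3-state chain (not from the slides):

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
n = P.shape[0]

# Stack the balance equations (P^T - I) pi = 0 with the normalization
# row 1^T pi = 1, then solve the overdetermined system by least squares.
A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For an irreducible finite chain this system has a unique solution, matching part (a) of the theorem that follows.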
[State diagrams: Irreducible & Periodic; Irreducible & Aperiodic; Reducible]
• Theorem: Consider an irreducible DTMC over a finite state space. Then,
(a) There is a unique invariant distribution π.
(b) Long-term fraction of time (X(n) = i) ≜ lim_{N→∞} (1/N) Σ_{n=0}^{N−1} 1{X(n) = i} = π(i).
(c) If the DTMC is aperiodic, π_n → π.
• Example showing aperiodicity is necessary for (c).
• In part (c), π_n → π irrespective of the initial distribution π_0.
– This implies each row of P^n converges to π as n → ∞.
Big Theorem for Finite DTMC
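Part (b) can be checked by simulation: run the chain and compare the empirical time fractions against π. A sketch assuming a hypothetical 3-state chain whose invariant distribution is π = [4/9, 1/3, 2/9]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain (an assumption, not from the slides).
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

N = 100_000
counts = np.zeros(3)
x = 0                            # arbitrary initial state
for _ in range(N):
    counts[x] += 1
    x = rng.choice(3, p=P[x])    # one step of the chain

frac = counts / N                # empirical fraction of time in each state
# frac is close to pi = [4/9, 1/3, 2/9], as part (b) predicts.
```

Note the initial state does not matter here, which mirrors the remark that the limit is independent of π_0.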
[Diagram: two-state chain on {0, 1}]
Illustrations
1. Mean Hitting Time: β(i) ≜ E[T_A | X(0) = i], i ∈ 𝒳, A ⊂ 𝒳.
– First Step Equations (FSEs) are: β(i) = 1 + Σ_j P(i, j) β(j), if i ∉ A;
  β(i) = 0, if i ∈ A.
2. Probability of hitting a state before another: α(i) ≜ P[T_A < T_B | X(0) = i], i ∈ 𝒳, A, B ⊂ 𝒳, A ∩ B = ∅.
– FSEs are: α(i) = Σ_j P(i, j) α(j), if i ∉ A ∪ B;
  α(i) = 1, if i ∈ A; α(i) = 0, if i ∈ B.
3. Average discounted reward: δ(i) ≜ E[Z | X(0) = i], i ∈ 𝒳, where Z = Σ_{n=0}^{T_A} β^n h(X(n)), A ⊂ 𝒳, with 0 < β ≤ 1 as the discount factor.
– FSEs are: δ(i) = h(i) + β Σ_j P(i, j) δ(j), if i ∉ A;
  δ(i) = h(i), if i ∈ A.
Hitting Times
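All three FSE systems are linear in the unknowns, so each can be solved with one call to a linear solver. A sketch on an assumed "gambler's ruin" chain over {0, 1, 2, 3} (fair steps from the interior states, absorbing at 0 and 3); the chain, reward h, and target sets are illustration choices, not from the slides:

```python
import numpy as np

# Assumed gambler's-ruin chain: fair coin from states 1 and 2,
# states 0 and 3 absorbing.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
free = [1, 2]                    # non-absorbed states (the unknowns)
I = np.eye(len(free))
Pff = P[np.ix_(free, free)]      # transitions among the free states

# 1. Mean hitting time of A = {0, 3}: (I - Pff) beta = 1.
beta = np.zeros(4)
beta[free] = np.linalg.solve(I - Pff, np.ones(len(free)))

# 2. alpha(i) = P(hit 3 before 0): RHS collects one-step jumps into {3}.
alpha = np.zeros(4)
alpha[3] = 1.0
rhs = P[np.ix_(free, [3])].sum(axis=1)
alpha[free] = np.linalg.solve(I - Pff, rhs)

# 3. Discounted reward with discount beta_d = 0.9 (renamed to avoid
#    clashing with the hitting-time vector) and assumed rewards h.
beta_d, h = 0.9, np.array([0.0, 1.0, 2.0, 0.0])
A = [0, 3]
bnd = beta_d * P[np.ix_(free, A)] @ h[A]   # boundary term (zero here)
delta = h.copy()
delta[free] = np.linalg.solve(I - beta_d * Pff, h[free] + bnd)
```

For this chain the closed forms are β(1) = β(2) = 2 and α = [0, 1/3, 2/3, 1], which the solver reproduces.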
• Canonical Sample Space with outcome ω being the Markov Chain trajectory/realization:
• P[X(0) = i_0, X(1) = i_1, …, X(n) = i_n] = π_0(i_0) P(i_0, i_1) ⋯ P(i_{n−1}, i_n)
• There exists a consistent probability measure over the sample space.
Markov Chain Trajectory/Realization
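The trajectory probability is just the product of the initial probability and the one-step transition probabilities along the path. A sketch with an assumed 3-state chain and a deterministic initial distribution:

```python
import numpy as np

# Assumed 3-state chain and initial distribution (not from the slides).
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
pi0 = np.array([1.0, 0.0, 0.0])  # start deterministically in state 0

# P[X(0)=0, X(1)=1, X(2)=0, X(3)=2] = pi0(0) P(0,1) P(1,0) P(0,2)
traj = [0, 1, 0, 2]
prob = pi0[traj[0]]
for a, b in zip(traj, traj[1:]):
    prob *= P[a, b]
# prob = 1 * 0.5 * 1 * 0.5 = 0.25
```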
• Let {X(n), n ≥ 1} be a sequence of Independent and Identically Distributed (IID) RVs with mean μ, and let Y(n) = Σ_{m=1}^{n} X(m). Assume E|X(n)| < ∞.
• Strong Law of Large Numbers (SLLN): P(lim_{n→∞} Y(n)/n = μ) = 1.
– Y(n)/n converges to μ almost surely.
• Weak Law of Large Numbers (WLLN): lim_{n→∞} P(|Y(n)/n − μ| > ε) = 0, ∀ ε > 0.
– Y(n)/n converges to μ in probability.
• Almost sure convergence implies convergence in probability.
• SLLN Example: M(n) = Y(n)/n, where X(i) = outcome of the ith roll of a balanced die.
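The die example can be simulated directly: the running average M(n) settles near μ = 3.5. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# X(1), ..., X(n): outcomes of n rolls of a balanced die.
rolls = rng.integers(1, 7, size=100_000)

# M(n) = Y(n)/n, the running average after each roll.
M = rolls.cumsum() / np.arange(1, len(rolls) + 1)
# M[-1] is close to mu = 3.5, as the SLLN predicts.
```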
• Part (b) of the Big Theorem indicates almost sure convergence.
– For an irreducible DTMC over a finite numerical state space, (1/N) Σ_{n=0}^{N−1} X(n) converges almost surely to E(X), where X is an RV whose distribution is the invariant distribution π.
Laws of Large Numbers
• Let {X(n), n ≥ 1} be a sequence of Independent and Identically Distributed (IID) RVs with mean μ, and let Y(n) = Σ_{m=1}^{n} X(m). Assume E[X(n)^4] < ∞.
• Strong Law of Large Numbers (SLLN): P(lim_{n→∞} Y(n)/n = μ) = 1.
SLLN vs. WLLN, SLLN Proof
• T_i = time to return to state i. The T_i's are IID.
Key Points from the Big Theorem Proof
[Timeline: successive return times T_0, T_1, T_2, T_3, T_4, T_5]
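The IID return times can be observed by simulation; a standard fact behind the renewal argument is that, for an irreducible finite DTMC, the mean return time to state i equals 1/π(i). A sketch assuming a hypothetical 3-state chain with π(0) = 4/9:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed 3-state chain (not from the slides); its invariant
# distribution has pi(0) = 4/9, so the mean return time to 0 is 9/4.
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

x, t, returns = 0, 0, []         # start at state 0, count steps until return
for _ in range(100_000):
    x = rng.choice(3, p=P[x])
    t += 1
    if x == 0:                   # a return to 0 completes one T_i
        returns.append(t)
        t = 0

mean_return = np.mean(returns)   # close to 1/pi(0) = 2.25
```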