Nanjing University: Randomized Algorithms (《随机算法》) lecture slides: Markov Chain

Stochastic Process

A stochastic process {X_t | t ∈ T} is a collection of random variables X_t ∈ Ω, indexed by time t ∈ T, where Ω is the state space and x ∈ Ω a state.
• discrete time: T is countable, T = {0, 1, 2, ...}, giving a sequence X_0, X_1, X_2, ...
• discrete space: Ω is finite or countably infinite

Markov Property

Dependency structure of X_0, X_1, X_2, ...
• Markov property (memorylessness): X_{t+1} depends only on X_t:
  Pr[X_{t+1} = y | X_0 = x_0, ..., X_{t-1} = x_{t-1}, X_t = x] = Pr[X_{t+1} = y | X_t = x]
  for all t = 0, 1, 2, ... and all x_0, x_1, ..., x_{t-1}, x, y ∈ Ω.
• Markov chain: a discrete-time, discrete-space stochastic process with the Markov property.
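The memoryless property is visible directly in simulation code: sampling the next state consults only the current state, never the earlier history. A minimal pure-Python sketch (the 2-state matrix here is an illustrative example, not from the slides):

```python
import random

def simulate(P, x0, steps, seed=0):
    """Sample a trajectory X_0, X_1, ..., X_steps of a Markov chain.

    Memorylessness shows up directly: each draw uses only the current
    state's row P[x], never the path taken to reach it.
    """
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x = rng.choices(range(len(P)), weights=P[x])[0]
        path.append(x)
    return path

# illustrative 2-state chain: from either state, move to each state w.p. 1/2
P = [[0.5, 0.5],
     [0.5, 0.5]]
path = simulate(P, x0=0, steps=10)
```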

Transition Matrix

Markov chain X_0, X_1, X_2, ...:
Pr[X_{t+1} = y | X_0 = x_0, ..., X_{t-1} = x_{t-1}, X_t = x] = Pr[X_{t+1} = y | X_t = x] = P^{(t)}_{xy} = P_{xy}   (time-homogeneous: the transition probabilities do not depend on t)
P is the transition matrix, with rows indexed by the current state x ∈ Ω and columns by the next state y ∈ Ω. It is a stochastic matrix: its entries are nonnegative and every row sums to 1, i.e. P1 = 1.
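The condition P1 = 1 says each row of P is a probability distribution over next states. A small checker (`is_stochastic` is a hypothetical helper for illustration), applied to the 3-state example chain used later in these slides:

```python
def is_stochastic(P, tol=1e-9):
    """Check that P is row-stochastic: nonnegative entries, and each
    row sums to 1 (equivalently, P·1 = 1 for the all-ones vector 1)."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in P
    )

# the 3-state example chain from these slides
P = [[0, 1, 0],
     [1/3, 0, 2/3],
     [1/3, 1/3, 1/3]]
```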

chain: X_0, X_1, X_2, ...
distributions: π^{(0)}, π^{(1)}, π^{(2)}, ... ∈ [0,1]^Ω with Σ_{x∈Ω} π_x = 1, where π^{(t)}_x = Pr[X_t = x].
One-step update: π^{(t+1)} = π^{(t)} P, since
π^{(t+1)}_y = Pr[X_{t+1} = y] = Σ_{x∈Ω} Pr[X_t = x] · Pr[X_{t+1} = y | X_t = x] = Σ_{x∈Ω} π^{(t)}_x P_{xy} = (π^{(t)} P)_y.
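The derivation above is a single vector-matrix product. A direct transcription of the formula (πP)_y = Σ_x π_x P_{xy}, using the 3-state example chain that appears later in these slides:

```python
def step(pi, P):
    """One-step distribution update: (πP)_y = Σ_x π_x · P_xy."""
    n = len(P)
    return [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]

# the 3-state example chain from these slides
P = [[0, 1, 0],
     [1/3, 0, 2/3],
     [1/3, 1/3, 1/3]]
pi0 = [1.0, 0.0, 0.0]   # start in state 1 with certainty
pi1 = step(pi0, P)      # equals row 1 of P: the chain moves to state 2
```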

π^{(0)} --P→ π^{(1)} --P→ ··· --P→ π^{(t)} --P→ π^{(t+1)} --P→ ···
• initial distribution: π^{(0)}
• transition matrix: P
Iterating the one-step update gives π^{(t)} = π^{(0)} P^t.
A Markov chain is specified by the pair M = (Ω, P).
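Computing π^{(t)} = π^{(0)} P^t by t successive one-step updates avoids forming the matrix power explicitly. A sketch, using an illustrative 2-state chain (not from the slides) whose stationary distribution is (2/3, 1/3):

```python
def evolve(pi0, P, t):
    """π(t) = π(0)·P^t, computed as t successive updates π ← πP."""
    pi = list(pi0)
    n = len(P)
    for _ in range(t):
        pi = [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]
    return pi

# illustrative 2-state chain (not from the slides)
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi_t = evolve([1.0, 0.0], P, 200)   # far future: close to (2/3, 1/3)
```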

Example: a chain on three states {1, 2, 3} with transition matrix

P = [  0    1    0
      1/3   0   2/3
      1/3  1/3  1/3 ]

State 1 always moves to 2; state 2 moves to 1 with probability 1/3 and to 3 with probability 2/3; state 3 moves to each of 1, 2, 3 with probability 1/3.

Convergence

For the example chain

P = [  0    1    0
      1/3   0   2/3
      1/3  1/3  1/3 ]

the powers of P converge:

P^2  ≈ [ 0.3333  0       0.6667
         0.2222  0.5556  0.2222
         0.2222  0.4444  0.3333 ]

P^5  ≈ [ 0.2469  0.4074  0.3457
         0.2510  0.3621  0.3868
         0.2510  0.3663  0.3827 ]

P^10 ≈ [ 0.2500  0.3747  0.3752
         0.2500  0.3751  0.3749
         0.2500  0.3751  0.3749 ]

P^20 ≈ [ 0.2500  0.3750  0.3750
         0.2500  0.3750  0.3750
         0.2500  0.3750  0.3750 ]

All rows of P^20 agree, so for every distribution π, πP^20 ≈ (1/4, 3/8, 3/8): the chain forgets its initial distribution.
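The convergence of the powers of P can be checked numerically by repeated matrix multiplication (a pure-Python sketch; `mat_mul` and `mat_pow` are illustrative helpers):

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, t):
    """P^t by repeated multiplication, starting from the identity."""
    n = len(P)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(t):
        R = mat_mul(R, P)
    return R

P = [[0, 1, 0],
     [1/3, 0, 2/3],
     [1/3, 1/3, 1/3]]
P20 = mat_pow(P, 20)
# every row of P^20 is numerically the same distribution (1/4, 3/8, 3/8)
```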

Stationary Distribution

Markov chain M = (Ω, P).
• A stationary distribution is a distribution π with πP = π (a fixed point of the update π ↦ πP).
• Existence follows from the Perron-Frobenius Theorem:
  • P is a stochastic matrix, so P1 = 1, i.e. 1 is a (right) eigenvalue of P;
  • therefore 1 is also a left eigenvalue of P (an eigenvalue of P^T);
  • the associated left eigenvector, satisfying πP = π, can be chosen nonnegative;
  • normalizing it to sum to 1 yields a distribution, so a stationary distribution always exists.
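To make the fixed-point condition concrete, one can verify πP = π for the example chain, where π = (1/4, 3/8, 3/8) is the distribution the powers of P converged to on the Convergence slide:

```python
# the example chain and its claimed stationary distribution π = (1/4, 3/8, 3/8)
P = [[0, 1, 0],
     [1/3, 0, 2/3],
     [1/3, 1/3, 1/3]]
pi = [1/4, 3/8, 3/8]

# fixed-point check: (πP)_y = Σ_x π_x P_xy should reproduce π
piP = [sum(pi[x] * P[x][y] for x in range(3)) for y in range(3)]
```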

Perron-Frobenius

Perron-Frobenius Theorem: let A be a nonnegative n×n matrix with spectral radius ρ(A). Then:
• ρ(A) is an eigenvalue of A;
• there is a nonnegative (left and right) eigenvector associated with ρ(A);
• if furthermore A is irreducible, then there is a positive (left and right) eigenvector associated with ρ(A), and ρ(A) is an eigenvalue of multiplicity 1;
• for a stochastic matrix A, the spectral radius is ρ(A) = 1.
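A rough numerical companion to the theorem: power iteration on a nonnegative matrix estimates ρ(A), since the growth rate of A^k v approaches the spectral radius. This is a sketch, not a robust eigensolver; it assumes A is primitive (e.g. irreducible and aperiodic) so that the iteration converges:

```python
def spectral_radius_estimate(A, iters=200):
    """Estimate ρ(A) for a nonnegative matrix A by power iteration,
    starting from the all-ones vector and renormalizing each step.
    Assumes A is primitive, so the iterates converge."""
    n = len(A)
    v = [1.0] * n
    rho = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        m = max(w)          # growth factor of this step
        rho, v = m, [x / m for x in w]
    return rho

# the stochastic example chain: Perron-Frobenius gives ρ(P) = 1
P = [[0, 1, 0],
     [1/3, 0, 2/3],
     [1/3, 1/3, 1/3]]
```

For a stochastic P the all-ones start vector is already the right Perron eigenvector (P·1 = 1), so the estimate is exact from the first step.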
