On Local Distributed Sampling and Counting
Yitong Yin (Nanjing University)
Joint work with Weiming Feng (Nanjing University)

Counting and Sampling

[Image: title page of "Random Generation of Combinatorial Structures from a Uniform Distribution" by Mark R. Jerrum, Leslie G. Valiant, and Vijay V. Vazirani]

[Jerrum-Valiant-Vazirani '86]: For self-reducible problems,
approx. counting is tractable ⟺ (approx., exact) sampling is tractable.

Computational Phase Transition

Sampling an almost-uniform independent set in graphs with maximum degree ∆:
• [Weitz, STOC'06]: If ∆ ≤ 5, poly-time.
• [Sly, best paper in FOCS'10]: If ∆ ≥ 6, no poly-time algorithm unless NP=RP.
A phase transition occurs when ∆: 5 → 6.
Local computation?

Local Computation "What can be computed locally?"[Naor,Stockmeyer'93] the LOCAL model [Linial '87]: ● Communications are 0 synchronized. 0 In each round,each node can: o exchange unbounded messages with all neighbors 0 o perform unbounded local computation o read/write to unbounded local memory. In t rounds:each node can collect information up to distance t

Local Computation • Communications are synchronized. • In each round, each node can: exchange unbounded messages with all neighbors perform unbounded local computation read/write to unbounded local memory. • In t rounds: each node can collect information up to distance t. the LOCAL model [Linial ’87]: “What can be computed locally?” [Naor, Stockmeyer ’93]
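A minimal Python sketch of one way to simulate the LOCAL model (the function and variable names here are illustrative, not from the talk): in each synchronized round every node forwards everything it knows to all neighbors, so after t rounds each node has seen exactly its distance-t ball.

def local_rounds(adj, t):
    """adj: {node: set of neighbors}; returns {node: set of nodes seen}."""
    knowledge = {v: {v} for v in adj}              # round 0: own id only
    for _ in range(t):
        # every node sends its entire knowledge to all neighbors
        received = {v: set().union(*(knowledge[u] for u in adj[v]))
                    for v in adj}
        for v in adj:
            knowledge[v] |= received[v]
    return knowledge

# Example: on the path 0-1-2-3, two rounds let node 0 see {0, 1, 2}.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(local_rounds(path, 2)[0])                    # {0, 1, 2}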

Example: Sample Independent Set

network G(V,E)
µ: uniform distribution of independent sets in G; Y ∈ {0,1}^V indicates an independent set.
• Each v ∈ V returns a Y_v ∈ {0,1}, such that Y = (Y_v)_{v∈V} ∼ µ;
• or: dTV(Y, µ) < 1/poly(n) (a brute-force reference sketch follows).
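As a centralized reference point (a brute-force sketch, not the distributed algorithm the talk is about), one can enumerate the independent sets of a small graph and pick one uniformly; a LOCAL algorithm must realize this same joint law with each Y_v computed at node v.

import itertools, random

def independent_sets(n, edges):
    """All Y in {0,1}^n in which no edge has both endpoints set to 1."""
    return [Y for Y in itertools.product((0, 1), repeat=n)
            if all(not (Y[u] and Y[v]) for u, v in edges)]

def sample_uniform_independent_set(n, edges):
    return random.choice(independent_sets(n, edges))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle
print(len(independent_sets(4, edges)))     # 7 independent sets in total
print(sample_uniform_independent_set(4, edges))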

Inference (Local Counting)

network G(V,E)
µ: uniform distribution of independent sets in G.
µ_v^σ: marginal distribution at v conditioning on σ ∈ {0,1}^S:
  ∀y ∈ {0,1}: µ_v^σ(y) = Pr_{Y∼µ}[Y_v = y | Y_S = σ].
• Each v ∈ S receives σ_v as input.
• Each v ∈ V returns a marginal distribution µ̂_v^σ such that:
  dTV(µ̂_v^σ, µ_v^σ) ≤ 1/poly(n).
Counting reduces to inference:
  1/Z = µ(∅) = ∏_{i=1}^n Pr_{Y∼µ}[Y_{v_i} = 0 | ∀j<i: Y_{v_j} = 0],
where Z = # of independent sets in G (a brute-force sketch follows).
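The identity above turns marginal inference into counting. A brute-force sketch for illustration only (the point of the talk is to compute these conditional marginals locally): multiply the conditional probabilities of 0 along any vertex ordering and invert.

import itertools

def independent_sets(n, edges):
    return [Y for Y in itertools.product((0, 1), repeat=n)
            if all(not (Y[u] and Y[v]) for u, v in edges)]

def count_via_marginals(n, edges):
    """Recover Z from the chain of conditional marginals Pr[Y_i = 0 | ...]."""
    sets = independent_sets(n, edges)
    inv_Z = 1.0
    for i in range(n):
        # restrict to assignments with v_0 = ... = v_{i-1} = 0
        cond = [Y for Y in sets if all(Y[j] == 0 for j in range(i))]
        inv_Z *= sum(1 for Y in cond if Y[i] == 0) / len(cond)
    return round(1 / inv_Z)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle
print(count_via_marginals(4, edges))       # 7
print(len(independent_sets(4, edges)))     # 7, direct check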

Gibbs Distribution (with pairwise interactions)

network G(V,E):
• Each vertex v ∈ V corresponds to a variable with finite domain [q].
• Each edge e=(u,v) ∈ E has a matrix (binary constraint): A_e: [q] × [q] → [0,1].
• Each vertex v ∈ V has a vector (unary constraint): b_v: [q] → [0,1].
• Gibbs distribution µ: ∀σ ∈ [q]^V,
  µ(σ) ∝ ∏_{e=(u,v)∈E} A_e(σ_u, σ_v) · ∏_{v∈V} b_v(σ_v).

Gibbs Distribution (with pairwise interactions)

Gibbs distribution µ: ∀σ ∈ [q]^V,
  µ(σ) ∝ ∏_{e=(u,v)∈E} A_e(σ_u, σ_v) · ∏_{v∈V} b_v(σ_v).
• independent set: A_e = [1 1; 1 0], b_v = [1 1] (see the sketch after this list).
• local conflict colorings [Fraigniaud, Heinrich, Kosowski, FOCS'16]:
  A_e: [q] × [q] → {0,1}, b_v: [q] → {0,1}.
• general case: A_e: [q] × [q] → [0,1], b_v: [q] → [0,1].
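A minimal sketch of the pairwise Gibbs weight (function names are illustrative): with the independent-set constraints A_e = [1 1; 1 0] and b_v = [1 1], the weight of σ is exactly the 0/1 indicator that σ is an independent set, so µ is uniform over independent sets.

import itertools

def gibbs_weight(sigma, edges, A, b):
    """Unnormalized weight: prod_e A[sigma_u][sigma_v] * prod_v b[sigma_v]."""
    w = 1.0
    for u, v in edges:
        w *= A[sigma[u]][sigma[v]]   # binary constraint on edge (u, v)
    for x in sigma:
        w *= b[x]                    # unary constraint at each vertex
    return w

A = [[1, 1],
     [1, 0]]    # A[1][1] = 0: adjacent vertices cannot both be occupied
b = [1, 1]      # no unary preference

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle
Z = sum(gibbs_weight(s, edges, A, b)
        for s in itertools.product((0, 1), repeat=4))
print(Z)        # 7.0: the number of independent sets of the 4-cycle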

Gibbs Distribution

network G(V,E):
Gibbs distribution µ: ∀σ ∈ [q]^V,
  µ(σ) ∝ ∏_{(f,S)∈F} f(σ_S).
Each (f,S) ∈ F is a local constraint (factor):
  f: [q]^S → R≥0, where S ⊆ V with diam_G(S) = O(1) (sketch below).
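The same sketch generalizes to arbitrary local factors (again with illustrative names; each factor is a Python callable here): the pairwise model is recovered when every S is an edge.

import itertools

def factor_weight(sigma, factors):
    """factors: list of (f, S); f maps the tuple sigma restricted to S
    to a nonnegative weight."""
    w = 1.0
    for f, S in factors:
        w *= f(tuple(sigma[v] for v in S))
    return w

# Example: a "not-all-equal" factor on vertices {0, 1, 2}, plus a unary
# bias at vertex 0 that favors the value 1.
factors = [
    (lambda t: 0.0 if len(set(t)) == 1 else 1.0, (0, 1, 2)),
    (lambda t: 2.0 if t[0] == 1 else 1.0,        (0,)),
]
Z = sum(factor_weight(s, factors)
        for s in itertools.product((0, 1), repeat=3))
print(Z)   # 9.0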

A Motivation: Distributed Machine Learning

• Data are stored in a distributed system.
• Distributed algorithms for:
  • sampling from a joint distribution (specified by a probabilistic graphical model);
  • inferring according to a probabilistic graphical model.
