《Simulations Moléculaires》 Cours04VI

Monte Carlo Method (2)

Monte Carlo Integration: hit and miss method

$I = \int_a^b \mathrm{d}x\, g(x)$

When $-d \le g(x) \le c$, we can always first make a shift $f(x) = g(x) + d$ so that $f(x) \ge 0$; the algorithm below therefore assumes $0 \le g(x) \le c$.

Algorithm:
1) generate $N$ uniform random numbers $\xi_i$ ($i = 1, 2, \ldots, N$) on $(a, b)$ and $N$ uniform random numbers $\eta_i$ ($i = 1, 2, \ldots, N$) on $(0, c)$;
2) calculate $g(\xi_i)$ and count the number of cases, $N_{\mathrm{hit}}$, for which $\eta_i < g(\xi_i)$;
3) $I$ can be estimated by $I_{\mathrm{est}} = c(b - a) N_{\mathrm{hit}} / N$.
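A minimal runnable sketch of this algorithm, assuming $0 \le g(x) \le c$; the helper name `hit_and_miss` and the test integrand are illustrative, not from the slides:

```python
import random

def hit_and_miss(g, a, b, c, n):
    """Estimate I = int_a^b g(x) dx by counting points under the curve."""
    n_hit = 0
    for _ in range(n):
        xi = random.uniform(a, b)    # xi_i ~ U(a, b)
        eta = random.uniform(0, c)   # eta_i ~ U(0, c)
        if eta < g(xi):              # a "hit": the point lies under g
            n_hit += 1
    return c * (b - a) * n_hit / n   # I_est = c (b - a) N_hit / N

# Example: int_0^1 x^2 dx = 1/3, with the bound c = 1
print(hit_and_miss(lambda x: x * x, 0.0, 1.0, 1.0, 100_000))
```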

Geometrical illustration:

[Figure: the rectangle $(a, b) \times (0, c)$ with the curve $g(x)$; points below the curve are hits, points above are misses.]

$\hat{p} = \frac{N_{\mathrm{hit}}}{N} = \frac{\int_a^b \mathrm{d}x\, g(x)}{c(b - a)}$

Variance:

$\operatorname{var}(I_{\mathrm{est}}) = c^{2}(b - a)^{2} \operatorname{var}(\hat{p}) = \frac{1}{N}\left[c(b - a)\, I - I^{2}\right]$

Remark:
- $N_{\mathrm{hit}}$ has a binomial distribution.
- $\operatorname{var}(I_{\mathrm{est}})$ increases with $c(b - a) - I$ (i.e., the numerical error of the method increases with $c(b - a) - I$).
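The step behind this variance formula, spelled out (standard binomial algebra): with hit probability $p = I/[c(b - a)]$,

```latex
\operatorname{var}(\hat p) = \frac{p(1 - p)}{N}
\quad\Longrightarrow\quad
\operatorname{var}(I_{\mathrm{est}})
  = c^{2}(b - a)^{2}\,\operatorname{var}(\hat p)
  = \frac{1}{N}\Bigl[c(b - a)\,I - I^{2}\Bigr].
```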

Simple Mean-Value Method:

$I = \int_a^b \mathrm{d}x\, g(x) = (b - a) \int_a^b \frac{\mathrm{d}x\, g(x)}{b - a} = (b - a) \langle g(x) \rangle$

Remark: $\langle g(x) \rangle$ is the mean value of $g(x)$ for a uniform random distribution on $(a, b)$.

Algorithm:
- generate $N$ random numbers, $x_i$, distributed uniformly on $(a, b)$;
- $I$ can be estimated by $I_{\mathrm{est}} = \frac{b - a}{N} \sum_{i=1}^{N} g(x_i)$.

Variance:

$\operatorname{var}(I_{\mathrm{est}}) = \frac{(b - a)^{2}}{N^{2}} \operatorname{var}\!\left(\sum_{i=1}^{N} g(x_i)\right) = \frac{(b - a)^{2}}{N} \operatorname{var}(g(x)) = \frac{1}{N}\left[(b - a) \int_a^b \mathrm{d}x\, g^{2}(x) - I^{2}\right]$
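The same conventions give an equally short sketch of the mean-value estimator (again illustrative names, not from the slides):

```python
import random

def mean_value(g, a, b, n):
    """Estimate I = int_a^b g(x) dx as (b - a) times the sample mean of g."""
    total = sum(g(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n       # I_est = (b - a)/N * sum_i g(x_i)

# Example: int_0^1 x^2 dx = 1/3
print(mean_value(lambda x: x * x, 0.0, 1.0, 100_000))
```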

Efficiency of Monte Carlo Integration

The smaller the variance of the estimated result, the higher the efficiency of the method.

Illustration: $\operatorname{var}(I_{\mathrm{est}}) \sim 1/N$. For fixed $N$, a smaller variance leads to a smaller calculation error.

[Figure: two plots of $I_{\mathrm{est}}$ versus $N$ converging to $I$, one with large variance, one with small variance.]
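An illustrative numerical check of this scaling (not from the slides): the statistical error goes as the square root of the variance, so quadrupling $N$ should roughly halve the spread of repeated estimates.

```python
import random
import statistics

def mean_value(g, a, b, n):
    """Simple mean-value estimate of int_a^b g(x) dx."""
    return (b - a) * sum(g(random.uniform(a, b)) for _ in range(n)) / n

# Spread of 200 independent estimates for growing N: stdev ~ 1/sqrt(N)
for n in (1_000, 4_000, 16_000):
    runs = [mean_value(lambda x: x * x, 0.0, 1.0, n) for _ in range(200)]
    print(n, statistics.stdev(runs))
```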

Comparison of the efficiency of the hit-miss method with that of the simple mean-value method

1) Hit-miss method:

$\operatorname{var}(I_{\mathrm{est}})_{\text{hit-miss}} = \frac{1}{N}\left[c(b - a) \int_a^b \mathrm{d}x\, g(x) - I^{2}\right]$

2) Simple mean-value method:

$\operatorname{var}(I_{\mathrm{est}})_{\text{simple}} = \frac{1}{N}\left[(b - a) \int_a^b \mathrm{d}x\, g(x) g(x) - I^{2}\right]$

Since $c \ge g(x)$, $\operatorname{var}(I_{\mathrm{est}})_{\text{hit-miss}} \ge \operatorname{var}(I_{\mathrm{est}})_{\text{simple}}$.

The simple mean-value method is more efficient than the hit-miss method.

Remark: Different Monte Carlo integration methods do not have the same efficiency.
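An empirical version of this comparison (a sketch under the same illustrative conventions as above): both estimators target $\int_0^1 x^2\,\mathrm{d}x = 1/3$, and the hit-miss estimates should show the larger spread.

```python
import random
import statistics

def hit_and_miss(g, a, b, c, n):
    """Hit-and-miss estimate of int_a^b g(x) dx, assuming 0 <= g <= c."""
    hits = sum(random.uniform(0, c) < g(random.uniform(a, b)) for _ in range(n))
    return c * (b - a) * hits / n

def mean_value(g, a, b, n):
    """Simple mean-value estimate of int_a^b g(x) dx."""
    return (b - a) * sum(g(random.uniform(a, b)) for _ in range(n)) / n

g = lambda x: x * x
hm = [hit_and_miss(g, 0.0, 1.0, 1.0, 10_000) for _ in range(200)]
mv = [mean_value(g, 0.0, 1.0, 10_000) for _ in range(200)]
print("hit-and-miss stdev:", statistics.stdev(hm))
print("mean-value   stdev:", statistics.stdev(mv))
```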

How to improve the efficiency of the Monte Carlo integration method?

One widely used approach is based on variance reduction.

Importance sampling method:

Basic idea:

$I = \int \mathrm{d}x\, g(x) = \int \mathrm{d}x\, f(x) \frac{g(x)}{f(x)} = \left\langle \frac{g(x)}{f(x)} \right\rangle_f, \qquad f(x) > 0$

Intuitively, one expects a small variance when $g(x)/f(x)$ is nearly constant.

Variance:

$\operatorname{var}\!\left[\frac{g(x)}{f(x)}\right] = \int \mathrm{d}x\, f(x) \left(\frac{g(x)}{f(x)}\right)^{2} - I^{2} = \int \mathrm{d}x\, \frac{g^{2}(x)}{f(x)} - I^{2}$

When $g(x) > 0$ and $f(x) = g(x) / \int \mathrm{d}x\, g(x)$, $\operatorname{var}[g(x)/f(x)] = 0$. In more general cases, when $f(x) = |g(x)| / \int \mathrm{d}x\, |g(x)|$, $\operatorname{var}[g(x)/f(x)]$ reaches a minimum.
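A minimal importance-sampling sketch (illustrative, not from the slides): for $g(x) = x^2$ on $(0, 1)$, the choice $f(x) = 3x^2$ is exactly the optimal pdf above, so $g/f$ is constant and the variance vanishes. The sampler for $f$ uses inversion: if $u \sim U(0,1)$ then $x = u^{1/3}$ has pdf $3x^2$.

```python
import random

def importance_sampling(g, f_pdf, f_sample, n):
    """Average g(x)/f(x) over n points x drawn from the pdf f."""
    return sum(g(x) / f_pdf(x) for x in (f_sample() for _ in range(n))) / n

g = lambda x: x * x                                # integrand on (0, 1)
f_pdf = lambda x: 3.0 * x * x                      # optimal pdf: g / int g
f_sample = lambda: random.random() ** (1.0 / 3.0)  # inversion of CDF x^3
print(importance_sampling(g, f_pdf, f_sample, 10_000))  # exactly 1/3
```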

Geometrical implication:

[Figure: left panel (Simple Mean-Value Method): the integrand $g(x)$ varies strongly over the sampling interval. Right panel (Importance Sampling Method): $f(x)$ follows $g(x)$, so the ratio $g(x)/f(x)$ is nearly flat.]

MR²T² Algorithm (Metropolis Algorithm):

Reference: N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).

Typical integrals often encountered in statistical mechanics. Example: the internal energy

$\langle E \rangle = \int \mathrm{d}\mathbf{r}^N\, E(\mathbf{r}^N)\, \rho(\mathbf{r}^N)$

$\rho(\mathbf{r}^N) = \exp(-E(\mathbf{r}^N)/kT)/Q$

$Q = \int \mathrm{d}\mathbf{r}^N \exp(-E(\mathbf{r}^N)/kT)$

Remark: $\rho(\mathbf{r}^N)$ is a distribution function with a narrow peak near $\langle E \rangle$. This is why a simple mean-value method will fail badly for calculating $\langle E \rangle$ and $Q$ accurately.

Basic idea: Generate more points in the region where the probability density function (pdf) is large.

Difficulty: Since we do not know the distribution function completely (its normalization constant is unknown), it is impossible to use the usual techniques, e.g., the transformation method or the rejection method.

Solution: Metropolis et al. proposed an iterative method to circumvent the problem; a minimal sketch follows below.
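A minimal sketch of that iterative idea for a one-dimensional unnormalized weight $w(x)$ (illustrative only; the slides concern $N$-particle configurations). The key point is that only the ratio $w(x')/w(x)$ enters the acceptance test, so the unknown normalization constant cancels.

```python
import math
import random

def metropolis(w, x0, step, n):
    """Generate n points distributed according to the pdf proportional to w."""
    samples, x = [], x0
    for _ in range(n):
        x_new = x + random.uniform(-step, step)      # trial move
        if random.random() < min(1.0, w(x_new) / w(x)):
            x = x_new                                # accept; otherwise keep x
        samples.append(x)
    return samples

# Example: unnormalized Gaussian weight w(x) = exp(-x^2 / 2)
w = lambda x: math.exp(-0.5 * x * x)
xs = metropolis(w, 0.0, 1.0, 50_000)
print(sum(x * x for x in xs) / len(xs))  # ~ 1, the variance of the target
```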
