Introduction to Theoretical Chemistry (English edition), Chapter 7: Statistical Mechanics

Chapter 7. Statistical Mechanics

When one is faced with a condensed-phase system, usually containing many molecules, that is at or near thermal equilibrium, it is not necessary or even wise to try to describe it in terms of quantum wave functions or even classical trajectories of all of the constituent molecules. Instead, the powerful tools of statistical mechanics allow one to focus on quantities that describe the most important features of the many-molecule system. In this Chapter, you will learn about these tools and see some important examples of their application.

I. Collections of Molecules at or Near Equilibrium

As noted in Chapter 5, the approach one takes in studying a system composed of a very large number of molecules at or near thermal equilibrium can be quite different from how one studies systems containing a few isolated molecules. In principle, it is possible to conceive of computing the quantum energy levels and wave functions of a collection of many molecules, but doing so becomes impractical once the number of atoms in the system reaches a few thousand or if the molecules have significant intermolecular interactions. Also, as noted in Chapter 5, following the time evolution of such a large number of molecules can be "confusing" if one focuses on the short-time behavior of any single molecule (e.g., one sees "jerky" changes in its energy, momentum, and angular momentum). By examining, instead, the long-time average behavior of each molecule or, alternatively, the average properties of a significantly large number of

molecules, one is often better able to understand, interpret, and simulate such condensed-media systems. This is where the power of statistical mechanics comes into play.

A. The Distribution of Energy Among Levels

One of the most important concepts of statistical mechanics involves how a specified amount of total energy E can be shared among a collection of molecules and among the internal (translational, rotational, vibrational, electronic) degrees of freedom of these molecules. The primary outcome of asking what is the most probable distribution of energy among a large number N of molecules within a container of volume V that is maintained in equilibrium at a specified temperature T is the most important equation in statistical mechanics, the Boltzmann population formula:

$$P_j = \Omega_j \exp(-E_j/kT)/Q.$$

This equation expresses the probability P_j of finding the system (which, in the case introduced above, is the whole collection of N interacting molecules) in its jth quantum state, where E_j is the energy of this quantum state, T is the temperature in K, Ω_j is the degeneracy of the jth state, and the denominator Q is the so-called partition function:

$$Q = \sum_j \Omega_j \exp(-E_j/kT).$$

The classical mechanical equivalent of the above quantum Boltzmann population formula for a system with M coordinates (collectively denoted q) and M momenta (denoted p) is:

$$P(q,p) = h^{-M} \exp(-H(q,p)/kT)/Q,$$

where H is the classical Hamiltonian, h is Planck's constant, and the classical partition function Q is

$$Q = h^{-M} \int \exp(-H(q,p)/kT)\, dq\, dp.$$

Notice that the Boltzmann formula does not say that only those states of a given energy can be populated; it gives non-zero probabilities for populating all states from the lowest to the highest. However, it does say that states of higher energy E_j are disfavored by the exp(-E_j/kT) factor, but if states of higher energy have larger degeneracies Ω_j (which they usually do), the overall population of such states may not be low. That is, there is a competition between the state degeneracy Ω_j, which tends to grow as the state's energy grows, and exp(-E_j/kT), which decreases with increasing energy. If the number of particles N is huge, the degeneracy Ω grows as a high power (let's denote this power as K) of E because the degeneracy is related to the number of ways the energy can be distributed among the N molecules. In fact, K grows at least as fast as N. As a result of Ω growing as E^K, the product function P(E) = E^K exp(-E/kT) has the form shown in Fig. 7.1 (for K = 10).
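Both formulas are easy to explore numerically. The following is a minimal Python sketch; the three-level system (its energies, degeneracies, and the kT value) is invented for illustration and is not from the text. It evaluates the Boltzmann populations P_j = Ω_j exp(-E_j/kT)/Q and then evaluates the competition between Ω(E) ~ E^K and exp(-E/kT) that gives Fig. 7.1 its shape.

```python
import numpy as np

def boltzmann_populations(energies, degeneracies, kT):
    """Quantum Boltzmann populations P_j = Omega_j exp(-E_j/kT) / Q."""
    weights = degeneracies * np.exp(-energies / kT)
    return weights / weights.sum()          # weights.sum() is the partition function Q

# Hypothetical three-level system (energies in units of kT).
E = np.array([0.0, 1.0, 2.0])
g = np.array([1.0, 5.0, 25.0])              # higher states more degenerate
print(boltzmann_populations(E, g, kT=1.0))  # the g = 25 level dominates despite its energy

# The competition Omega(E) ~ E^K versus exp(-E/kT) behind Fig. 7.1.
K, kT = 10, 1.0
E_grid = np.linspace(0.01, 100.0, 2001)
P_E = E_grid**K * np.exp(-E_grid / kT)
print(E_grid[P_E.argmax()])                 # peak near E* = K kT = 10
```

The first print shows the point made above: the highest level, though disfavored by its exp(-E_j/kT) factor, carries the largest population because of its degeneracy.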

Figure 7.1 Probability Weighting Factor P(E) as a Function of E for K = 10.

By taking the derivative of this function P(E) with respect to E, and finding the energy at which this derivative vanishes, one can show that this probability function has a peak at E* = K kT, and that at this energy value

$$P(E^*) = (KkT)^K \exp(-K).$$

By then asking at what energy E' the function P(E) drops to exp(-1) of this maximum value P(E*):

$$P(E') = \exp(-1)\, P(E^*),$$

one finds

$$E' = KkT\,(1 + (2/K)^{1/2}).$$

So the width of the P(E) graph, measured as the change in energy needed to cause P(E) to drop to exp(-1) of its maximum value divided by the value of the energy at which P(E) assumes this maximum value, is

$$(E' - E^*)/E^* = (2/K)^{1/2}.$$

This width gets smaller and smaller as K increases. The primary conclusion is that as the number N of molecules in the sample grows, which, as discussed earlier, causes K to grow, the energy probability function becomes more and more sharply peaked about the most probable energy E*. This, in turn, suggests that we may be able to model, aside from infrequent fluctuations, the behavior of systems with many molecules by focusing on the most probable situation (i.e., having the energy E*) and ignoring deviations from this case.

It is for the reasons just shown that for so-called macroscopic systems near equilibrium, in which N (and hence K) is extremely large (e.g., N ~ 10^10 to 10^24), only the most probable distribution of the total energy among the N molecules need be considered. This is the situation in which the equations of statistical mechanics are so useful.
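As a sanity check on these width formulas, here is a small numerical sketch (the grid sizes are arbitrary choices, and ln P is used in place of P to avoid overflow at large K) that locates the energy E' where P(E) falls to exp(-1) of P(E*) and compares the relative width against (2/K)^{1/2}:

```python
import numpy as np

def relative_width(K, kT=1.0):
    """Relative width (E'-E*)/E* at which ln P has dropped by 1 from its
    peak, for P(E) = E^K exp(-E/kT); ln P avoids overflow at large K."""
    E_star = K * kT                                # analytic peak, E* = K kT
    E = np.linspace(E_star, 3.0 * E_star, 200000)  # P decreases on this side
    lnP = K * np.log(E) - E / kT                   # ln of E^K exp(-E/kT)
    lnP_max = K * np.log(E_star) - K               # ln P(E*) = K ln(K kT) - K
    E_prime = E[np.searchsorted(-lnP, -(lnP_max - 1.0))]
    return (E_prime - E_star) / E_star

# The width shrinks as (2/K)^(1/2), so the distribution sharpens with K.
for K in (10, 100, 10000):
    print(K, round(relative_width(K), 4), round(np.sqrt(2.0 / K), 4))
```

Because E' = KkT(1 + (2/K)^{1/2}) follows from a second-order expansion of ln P about E*, the numerical and analytic widths agree better and better as K grows.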

Certainly, there are fluctuations (as evidenced by the finite width of the above graph) in the energy content of the N-molecule system about its most probable value. However, these fluctuations become less and less important as the system size (i.e., N) becomes larger and larger.

To understand how this narrow Boltzmann distribution of energies arises when the number of molecules N in the sample is large, we consider a system composed of M identical containers, each having volume V, and each made out of a material that allows for efficient heat transfer to its surroundings but that does not allow the N molecules in each container to escape. These containers are arranged into a regular lattice as shown in Fig. 7.2 in a manner that allows their thermally conducting walls to come into contact. Finally, the entire collection of M such containers is surrounded by a perfectly insulating material that assures that the total energy (of all N×M molecules) cannot change. So, this collection of M identical containers each containing N molecules constitutes a closed (i.e., with no molecules coming or going) and isolated (i.e., so total energy is constant) system.

Figure 7.2 Collection of M Identical Cells Having Energy-Conducting Walls That Do Not Allow Molecules to Pass Between Cells. Each Cell Contains N Molecules in Volume V; There Are M Such Cells, and the Total Energy of These M Cells Is E.

One of the fundamental assumptions of statistical mechanics is that, for a closed isolated system at equilibrium, all quantum states of the system having an energy equal to the energy E with which the system is prepared are equally likely to be occupied. This is called the assumption of equal a priori probability for such energy-allowed quantum states. The quantum states relevant to this case are not the states of individual molecules. Nor are they the states of N of the molecules in one of the containers of volume V. They are the quantum states of the entire system comprised of N×M molecules. Because our system consists of M identical containers, each with N molecules in it, we can describe

the quantum states of the entire system in terms of the quantum states of each such container.

In particular, let's pretend that we know the quantum states that pertain to N molecules in a container of volume V as shown in Fig. 7.2, and let's label these states by an index J. That is, J = 1 labels the first energy state of N molecules in the container of volume V, J = 2 labels the second such state, and so on. I understand that it may seem daunting to think of how one actually finds these N-molecule eigenstates. However, we are just deriving a general framework that gives the probabilities of being in each such state. In so doing, we are allowed to pretend that we know these states. In any actual application, we will, of course, have to use approximate expressions for such energies.

An energy labeling for states of the entire collection of M containers can be realized by giving the number of containers that exist in each single-container J-state. This is possible because the energy of each M-container state is a sum of the energies of the M single-container states that comprise that M-container state. For example, if M = 9, the label 1, 1, 2, 2, 1, 3, 4, 1, 2 specifies the energy of this 9-container state in terms of the energies {ε_J} of the states of the 9 containers:

$$E = 4\varepsilon_1 + 3\varepsilon_2 + \varepsilon_3 + \varepsilon_4.$$

Notice that this 9-container state has the same energy as several other 9-container states; for example, 1, 2, 1, 2, 1, 3, 4, 1, 2 and 4, 1, 3, 1, 2, 2, 1, 1, 2 have the same energy although they are different individual states. What differs among these distinct states is which box occupies which single-box quantum state.

The above example illustrates that an energy level of the M-container system can have a high degree of degeneracy because its total energy can be achieved by having the various single-container states appear in various orders. That is, which container is in which state can be permuted without altering the total energy E.
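The degeneracy of this 9-container example can be counted directly. The short sketch below (a brute-force check, feasible only because M = 9 is small, and anticipating the permutational formula introduced next) enumerates the distinct orderings of the label 1, 1, 2, 2, 1, 3, 4, 1, 2 and compares the count with M!/(n_1! n_2! n_3! n_4!):

```python
from itertools import permutations
from math import factorial

# The 9-container label from the text: four containers in state 1,
# three in state 2, and one each in states 3 and 4.
label = (1, 1, 2, 2, 1, 3, 4, 1, 2)

# Brute-force count of distinct orderings (9! = 362880 tuples to sift).
distinct = len(set(permutations(label)))

# Permutational formula M!/(n1! n2! n3! n4!) with n = (4, 3, 1, 1).
formula = factorial(9) // (factorial(4) * factorial(3) * factorial(1) * factorial(1))

print(distinct, formula)  # both give 2520 equal-energy 9-container states
```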

The formula for how many ways the M container states can be permuted such that there are n_J containers appearing in single-container state J, with a total of M containers, is

$$\Omega(n) = \frac{M!}{\prod_J n_J!}.$$

Here n = {n_1, n_2, n_3, ... n_J, ...} denotes the number of containers existing in single-container states 1, 2, 3, ... J, .... This combinatorial formula reflects the permutational degeneracy arising from placing n_1 containers into state 1, n_2 containers into state 2, etc.

If we imagine an extremely large number of containers and we view M as well as the {n_J} as being large numbers (n.b., we will soon see that this is the case), we can ask for what choices of the variables {n_1, n_2, n_3, ... n_J, ...} this degeneracy function Ω(n) is a maximum. Moreover, we can examine Ω(n) at its maximum and compare its value at values of the {n} parameters changed only slightly from the values that maximized Ω(n). As we will see, Ω is very strongly peaked at its maximum and decreases extremely rapidly for values of {n} that differ only slightly from the "optimal" values. It is this property that gives rise to the very narrow energy distribution discussed earlier in this Section. So, let's take a closer look at how this energy distribution formula arises.
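The sharp peaking of Ω(n) is easy to see in a toy case. Here is a minimal sketch, assuming a hypothetical M = 1000 containers shared between just two states with the energy constraint ignored for the moment, so that the even split maximizes Ω; lgamma is used so that the factorials of large numbers do not overflow:

```python
from math import lgamma

def ln_omega(ns):
    """ln Omega(n) = ln( M!/prod_J n_J! ), computed via lgamma
    since ln x! = lgamma(x + 1)."""
    M = sum(ns)
    return lgamma(M + 1) - sum(lgamma(n + 1) for n in ns)

# Hypothetical toy case: M = 1000 containers, two states, no energy
# constraint, so the even split n = (500, 500) maximizes Omega.
peak = ln_omega((500, 500))
for shift in (0, 10, 50, 100):
    n = (500 - shift, 500 + shift)
    print(shift, ln_omega(n) - peak)  # ln Omega drops by about 2*shift**2/M
```

Even a 10% shift in the occupations lowers Ω by a factor of roughly e^20, about 5×10^8, which foreshadows how narrow the optimal distribution found below will be.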

We want to know what values of the variables {n_1, n_2, n_3, ... n_J, ...} make Ω = M!/{∏_J n_J!} a maximum. However, all of the {n_1, n_2, n_3, ... n_J, ...} variables are not independent; they must add up to M, the total number of containers, so we have a constraint

$$\sum_J n_J = M$$

that the variables must obey. The {n_J} variables are also constrained to give the total energy E of the M-container system when summed as

$$\sum_J n_J \varepsilon_J = E.$$

We have two problems: (i) how to maximize Ω and (ii) how to impose these constraints. Because Ω takes on values greater than unity for any choice of the {n_J}, Ω will experience its maximum where ln Ω has its maximum, so we can maximize ln Ω if doing so helps. Because the n_J variables are assumed to take on large numbers (when M is large), we can use Stirling's approximation ln X! ≈ X ln X - X to approximate ln Ω as follows:

$$\ln \Omega = \ln M! - \sum_J (n_J \ln n_J - n_J).$$

This expression will prove useful because we can take its derivative with respect to the n_J variables, which we need to do to search for the maximum of ln Ω. To impose the constraints Σ_J n_J = M and Σ_J n_J ε_J = E, we use the technique of Lagrange multipliers. That is, we seek to find values of {n_J} that maximize the following function:

$$F = \ln \Omega - \alpha \Big(\sum_J n_J - M\Big) - \beta \Big(\sum_J n_J \varepsilon_J - E\Big),$$

where α and β are the Lagrange multipliers associated with the two constraints.
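One can verify numerically that this constrained maximum of ln Ω has the Boltzmann form n_J ∝ exp(-βε_J). The sketch below is a numerical stand-in for the Lagrange-multiplier algebra, not the analytic route itself; the level energies ε_J and the target values of M and E are invented for illustration. It maximizes the Stirling form of ln Ω subject to the two constraints and then checks that ln n_J is linear in ε_J:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-container level energies (in units of kT) and targets.
eps = np.array([0.0, 1.0, 2.0, 3.0])
M, E_total = 1000.0, 800.0        # constraints: sum n_J = M, sum n_J*eps_J = E

def neg_ln_omega(n):
    # Minus the Stirling form: ln Omega ~ (M ln M - M) - sum_J (n_J ln n_J - n_J)
    return -((M * np.log(M) - M) - np.sum(n * np.log(n) - n))

constraints = ({"type": "eq", "fun": lambda n: n.sum() - M},
               {"type": "eq", "fun": lambda n: n @ eps - E_total})
result = minimize(neg_ln_omega, x0=np.full(4, M / 4),
                  bounds=[(1e-9, None)] * 4, constraints=constraints)

n_opt = result.x
# For a Boltzmann form n_J ~ exp(-beta*eps_J), ln(n_J) is linear in eps_J,
# so successive differences of ln(n_J) all equal the same constant, -beta.
print(np.diff(np.log(n_opt)) / np.diff(eps))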