Infinite Random Matrix Theory: course readings (English) – Handout 8, Thursday, September

18.338J/16.394J: The Mathematics of Infinite Random Matrices
Project Ideas
Alan Edelman
Handout #8, Thursday, September 30, 2004

A requirement of this course is that the students experiment with a random matrix problem that is of interest to them. Often these explorations take the shape of a little bit of theory and a little bit of computational experimentation. Some of you might already have some ideas that are relevant to your current research. Regardless, we thought we'd put together some ideas for projects. Feel free to adapt them based on your interests; if you want more information about a particular idea, please feel free to contact us.

Mid-term project presentations are tentatively scheduled for October 28th and November 2nd, as indicated on the course calendar. We will provide more details about this in a subsequent email to the class. The basic purpose of this mid-term project is to get your feet wet thinking about a problem that is of interest to you, using the tools you've learned about so far, or to learn new tools for that purpose.

1 Covariance Matrices in Signal Processing

Sample covariance matrices come up in many signal processing applications, such as adaptive filtering. Let G be the n x N "Gaussian" random matrix that we've encountered in class. In signal processing language, the Wishart matrix

    W = \frac{1}{N} G G^H    (1)

is a rectangularly windowed estimator. In adaptive filtering applications, the covariance matrix that comes up can be expressed in terms of the Wishart matrix as A = W^{-1}. We've already seen how the Marčenko–Pastur theorem can be used to characterize the density of the eigenvalues of the Wishart matrix. The density of A can hence be computed using standard Jacobian tricks.

A more general covariance matrix estimator is referred to as being exponentially windowed. To see where the "windowing" comes in, let us first rewrite the Wishart matrix as

    W = \frac{1}{N} \sum_{i=1}^{N} g_i g_i^H    (2)

where g_i is the i-th column of G. The rectangular windowing comes in from the fact that each of the N vectors g_i is weighted equally by 1/N in forming W. Hence, the exponentially weighted estimator is simply

    W_\lambda = \frac{1}{N} \sum_{i=1}^{N} \lambda^{N-i} g_i g_i^H    (3)

where \lambda < 1 is the weighting factor. In many signal processing publications, the consensus is that, loosely speaking,

    E[W_\lambda^{-1}] = (1 - \lambda) \, E[W^{-1}].    (4)

The proofs for this involve perturbation theory arguments. We feel that we can get some insight on this problem using random matrix techniques such as the ones you've learned in class.

Project Idea: Use random matrix arguments to rederive or verify this proof. Indicate what values of \lambda this is valid for. Discuss any convergence issues, i.e., for what values of n and N is this approximately valid?
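Before attacking the project itself, it is easy to check numerically the Marčenko–Pastur prediction mentioned above for the rectangularly windowed estimator. The sketch below is our illustration, not part of the handout; it uses a real rather than complex Gaussian G for simplicity, and compares the empirical spectrum of W against the predicted support and mean.

```python
import numpy as np

# Monte Carlo sketch (assumption: real Gaussian entries suffice to illustrate):
# the eigenvalues of W = (1/N) G G^T should, for large n and N with c = n/N
# fixed, lie in the Marcenko-Pastur support [(1-sqrt(c))^2, (1+sqrt(c))^2]
# and average to 1.
rng = np.random.default_rng(0)
n, N = 100, 400
c = n / N
G = rng.standard_normal((n, N))
W = (G @ G.T) / N
eigs = np.linalg.eigvalsh(W)

lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(f"mean eigenvalue = {eigs.mean():.3f} (theory: 1)")
print(f"eigenvalue range = [{eigs.min():.3f}, {eigs.max():.3f}], "
      f"MP support = [{lo:.3f}, {hi:.3f}]")
```

The same loop, repeated over many draws of G with W replaced by W_\lambda, is a natural starting point for the project: one can tabulate E[W_\lambda^{-1}] empirically and see for which \lambda, n, and N the claimed relation holds.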

2 Stochastic Perturbation Theory

Allow us to sincerely flatter G. W. (Pete) Stewart by including his words on this subject verbatim from his survey paper on stochastic perturbation theory [1], which has been included in the course reader (Random Matrices - II).

Let A be a matrix and let F be a matrix-valued function of A. Two principal problems of matrix perturbation theory are the following. Given a matrix E, presumed small,

1. Approximate F(A + E),
2. Bound \|F(A + E) - F(A)\| in terms of \|E\|.

Here \|\cdot\| is some norm of interest. The first problem is usually, but not always, solved by assuming that F is differentiable at A with derivative F'_A. Then

    F(A + E) = F(A) + F'_A(E) + o(\|E\|),    (5)

so that for E sufficiently small F'_A(E) is the required approximation. The problem then reduces to finding tractable expressions for F'_A(E), which in itself is often a nontrivial task. The second problem may be treated a variety of ways, the basic idea being that the size of the perturbation is used as a bound even though it is likely to be an overestimate. Stochastic perturbation theory is different from this approach because it is, in some sense, intermediate of the other two. We let E be a stochastic matrix and compute expectations of quantities derived from the perturbation expansion.

Project Idea: Pete Stewart's paper was published in 1990. The work of Marčenko and Pastur was rediscovered around that time; hence you will notice that there are no references to their work. An interesting project would be to condense what Pete Stewart has to say and to determine if what we learned in class can strengthen any of the ideas presented in his paper. Thinking of applications of this would be a bonus.

3 Other ideas

These are some other ideas based on the survey papers on the website. If you want any clarification, feel free to contact us.

• What is the replica method?
• What is the connection between statistical physics, spin glasses, and random matrices?
• What can random matrix theory tell us about real-world graphs?
• Suppose we were given the moments of the semicircle (numerically). Could you compute its density using techniques described by what is known as the "Classical Moment Problem"?
• Construct a bijection between the MANOVA matrix and McKay's theorem (ask us for the exact relationship).
• Derive Jonsson's result in a more direct manner using the bipartite graph bijection (possibly).
• What is the connection between Jack polynomials and free probability?
• What aspects of random matrix theory can be useful for principal component analysis?

References

[1] G. W. Stewart. Stochastic perturbation theory. SIAM Rev., 32(4):579–610, 1990.
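One of the bullets above asks whether the semicircle density can be recovered from numerically given moments. As a warm-up for that project (our sketch, not part of the handout), recall that the even moments of the semicircle density \sqrt{4 - x^2}/(2\pi) on [-2, 2] are the Catalan numbers, which is easy to verify by simple quadrature:

```python
import numpy as np
from math import comb, pi

# Sketch (ours): the 2k-th moment of the semicircle law on [-2, 2] is the
# Catalan number C_k = binom(2k, k)/(k+1); odd moments vanish by symmetry.
# A Riemann sum on a fine symmetric grid is enough to check this, and the
# resulting moment table is the input to any "recover the density from its
# moments" experiment.
x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
density = np.sqrt(4.0 - x**2) / (2.0 * pi)

moments = [float(np.sum(x**k * density) * dx) for k in range(7)]
catalan = [comb(2 * k, k) // (k + 1) for k in range(4)]

print("even moments   :", [round(m, 4) for m in moments[::2]])
print("Catalan numbers:", catalan)
```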