University of Electronic Science and Technology of China, Bayesian Learning and Random Matrices with Applications in Wireless Communications (BI-RM-AWC), Lecture Slides 02: Types of Matrices and Local Non-Asymptotic Results

2. Types of Matrices and Local Non-Asymptotic Results

Overview
• Classical Types of Random Matrices
  - Gaussian Matrices
  - Wigner Matrices
  - Wishart Matrices
• Local Non-Asymptotic Regime: study the distribution and spectrum of random matrices with small, fixed dimension

2.1 Gaussian Vectors

Definition 2.1: A random vector $\mathbf{x} \in \mathbb{C}^{m}$ in which the real part and the imaginary part of each entry are both Gaussian distributed is called a Gaussian vector.

Theorem 2.1: For a Gaussian vector $\mathbf{x} \in \mathbb{C}^{m}$ satisfying $\mathbb{E}[\mathbf{x}] = \boldsymbol{\mu}$ and $\mathbb{E}[(\mathbf{x}-\boldsymbol{\mu})(\mathbf{x}-\boldsymbol{\mu})^{H}] = \boldsymbol{\Sigma}$, the joint PDF can be expressed as

$$ f_{\mathbf{x}}(\mathbf{z}) = \frac{1}{\pi^{m}\det(\boldsymbol{\Sigma})}\, e^{-(\mathbf{z}-\boldsymbol{\mu})^{H}\boldsymbol{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu})} $$

We can degenerate this to the i.i.d. case, in which each entry of $\mathbf{x} \in \mathbb{C}^{m}$ is an independent random variable with zero mean and unit variance, so that $\mathbb{E}[\mathbf{x}] = \mathbf{0}$ and the covariance matrix is $\boldsymbol{\Sigma} = \mathbf{I}$. The joint PDF then becomes

$$ f_{\mathbf{x}}(\mathbf{z}) = \frac{1}{\pi^{m}}\, e^{-\mathbf{z}^{H}\mathbf{z}} $$

Degenerating further to a single Gaussian variable $x \in \mathbb{C}$, the PDF is

$$ f_{x}(x) = \frac{1}{\pi}\, e^{-|x|^{2}} $$
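As a quick numerical companion to Theorem 2.1 (not part of the slides), the sketch below assumes NumPy and the usual circularly-symmetric construction $\mathbf{x} = \boldsymbol{\mu} + \mathbf{L}\mathbf{g}$ with $\boldsymbol{\Sigma} = \mathbf{L}\mathbf{L}^{H}$ and i.i.d. $\mathcal{CN}(0,1)$ entries in $\mathbf{g}$; it draws one Gaussian vector and evaluates the joint PDF above at that point. The mean and covariance values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_gaussian_vector(mu, Sigma, rng):
    """Draw x = mu + L g, with Sigma = L L^H and g having i.i.d. CN(0,1) entries."""
    m = len(mu)
    g = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
    L = np.linalg.cholesky(Sigma)          # Sigma = L L^H
    return mu + L @ g

def complex_gaussian_pdf(z, mu, Sigma):
    """Joint PDF of Theorem 2.1: exp(-(z-mu)^H Sigma^{-1} (z-mu)) / (pi^m det(Sigma))."""
    m = len(mu)
    d = z - mu
    quad = np.real(d.conj() @ np.linalg.solve(Sigma, d))
    return np.exp(-quad) / (np.pi ** m * np.linalg.det(Sigma).real)

mu = np.array([1.0 + 1.0j, 0.0])
Sigma = np.array([[2.0, 0.5j], [-0.5j, 1.0]])   # Hermitian positive definite (illustrative)
x = complex_gaussian_vector(mu, Sigma, rng)
print("sample:", x, " pdf at sample:", complex_gaussian_pdf(x, mu, Sigma))
```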

2.2 Gaussian Matrices

Definition 2.2: A random matrix $\mathbf{X} \in \mathbb{C}^{m\times n}$ in which the real part and the imaginary part of each entry are both Gaussian distributed is called a Gaussian matrix.

Theorem 2.2: For a Gaussian matrix $\mathbf{X} \in \mathbb{C}^{m\times n}$ satisfying $\mathbb{E}[\mathbf{X}] = \mathbf{M}$, with covariance matrices $\mathbb{E}[(\mathbf{X}-\mathbf{M})(\mathbf{X}-\mathbf{M})^{H}] = \boldsymbol{\Sigma}$ and $\mathbb{E}[(\mathbf{X}-\mathbf{M})^{H}(\mathbf{X}-\mathbf{M})] = \boldsymbol{\Omega}$, the PDF can be expressed as

$$ f_{\mathbf{X}}(\mathbf{Z}) = \frac{1}{\pi^{mn}\det(\boldsymbol{\Sigma})^{n}\det(\boldsymbol{\Omega})^{m}}\, e^{-\mathrm{tr}\left[(\mathbf{Z}-\mathbf{M})^{H}\boldsymbol{\Sigma}^{-1}(\mathbf{Z}-\mathbf{M})\boldsymbol{\Omega}^{-1}\right]} $$

We can degenerate this to the i.i.d. case, in which each entry of $\mathbf{X} \in \mathbb{C}^{m\times n}$ is an independent random variable with zero mean and unit variance, whose covariance matrices are $\mathbb{E}[\mathbf{X}\mathbf{X}^{H}] = \mathbf{I}$ and $\mathbb{E}[\mathbf{X}^{H}\mathbf{X}] = \mathbf{I}$ (i.e., $\boldsymbol{\Sigma} = \mathbf{I}$ and $\boldsymbol{\Omega} = \mathbf{I}$ in Theorem 2.2). The joint PDF then becomes

$$ f_{\mathbf{X}}(\mathbf{Z}) = \frac{1}{\pi^{mn}}\, e^{-\mathrm{tr}\left[\mathbf{Z}^{H}\mathbf{Z}\right]} $$

Degenerating further to a Gaussian vector $\mathbf{x} \in \mathbb{C}^{m}$, the PDF is

$$ f_{\mathbf{x}}(\mathbf{z}) = \frac{1}{\pi^{m}}\, e^{-\mathbf{z}^{H}\mathbf{z}} $$
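A sampling sketch for Theorem 2.2 (not from the slides): assuming the standard matrix-variate construction $\mathbf{X} = \mathbf{M} + \mathbf{A}\mathbf{G}\mathbf{B}^{H}$ with $\boldsymbol{\Sigma} = \mathbf{A}\mathbf{A}^{H}$, $\boldsymbol{\Omega} = \mathbf{B}\mathbf{B}^{H}$, and i.i.d. $\mathcal{CN}(0,1)$ entries in $\mathbf{G}$, the resulting density has exactly the trace-form exponent stated above.

```python
import numpy as np

rng = np.random.default_rng(1)

def complex_gaussian_matrix(M, Sigma, Omega, rng):
    """Draw X = M + A G B^H with Sigma = A A^H, Omega = B B^H, and i.i.d. CN(0,1)
    entries in G, so that the exponent of the density is
    -tr[(X-M)^H Sigma^{-1} (X-M) Omega^{-1}], as in Theorem 2.2."""
    m, n = M.shape
    G = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    A = np.linalg.cholesky(Sigma)   # m x m factor of the row covariance
    B = np.linalg.cholesky(Omega)   # n x n factor of the column covariance
    return M + A @ G @ B.conj().T

# i.i.d. case of the slide: M = 0, Sigma = I_m, Omega = I_n.
M = np.zeros((3, 4), dtype=complex)
X = complex_gaussian_matrix(M, np.eye(3), np.eye(4), rng)
print(X.shape, np.round(X, 2))
```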

2.3 Wigner Matrices

Definition 2.3: An $n\times n$ Hermitian matrix $\mathbf{W}$ is a Wigner matrix if its upper-triangular entries are independent zero-mean random variables with identical variance. If the variance is $1/n$, then $\mathbf{W}$ is a standard Wigner matrix.

Theorem 2.3: Let $\mathbf{W}$ be an $n \times n$ complex Wigner matrix whose (diagonal and upper-triangular) entries are i.i.d. zero-mean Gaussian with unit variance. Then its PDF is

$$ f(\mathbf{W}) = \frac{1}{2^{n/2}\pi^{n^{2}/2}}\, e^{-\mathrm{tr}\left[\mathbf{W}^{2}\right]/2}, $$

while the joint PDF of its ordered eigenvalues $\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{n}$ is

$$ \frac{n!}{(2\pi)^{n/2}\prod_{i=1}^{n} i!}\, e^{-\frac{1}{2}\sum_{i=1}^{n}\lambda_{i}^{2}} \prod_{i<j}\left(\lambda_{i}-\lambda_{j}\right)^{2}. $$
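In the local non-asymptotic spirit of this lecture, the ordered-eigenvalue density can be checked directly at a small fixed dimension. The sketch below (not from the slides; it assumes NumPy and the construction $\mathbf{W} = (\mathbf{G}+\mathbf{G}^{H})/\sqrt{2}$ with i.i.d. $\mathcal{CN}(0,1)$ entries in $\mathbf{G}$, which reproduces the entry statistics of Theorem 2.3) verifies the normalization and the mean eigenvalue gap for $n = 2$.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def complex_wigner(n, rng):
    """Hermitian matrix with real N(0,1) diagonal and CN(0,1) (unit-variance)
    upper-triangular entries, matching the model of Theorem 2.3."""
    G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (G + G.conj().T) / np.sqrt(2)

# Non-asymptotic check at a small fixed dimension, n = 2.
n = 2
const = math.factorial(n) / ((2 * np.pi) ** (n / 2)
                             * np.prod([math.factorial(i) for i in range(1, n + 1)]))
grid = np.linspace(-8.0, 8.0, 801)
d = grid[1] - grid[0]
L1, L2 = np.meshgrid(grid, grid, indexing="ij")
f = const * np.exp(-(L1**2 + L2**2) / 2) * (L1 - L2) ** 2 * (L1 >= L2)

# The stated ordered-eigenvalue density should integrate to 1 over {l1 >= l2}.
print("density integrates to ~1:", f.sum() * d * d)

# Monte Carlo: the mean eigenvalue gap should match the integral of (l1 - l2) f.
gaps = [np.diff(np.linalg.eigvalsh(complex_wigner(n, rng)))[0] for _ in range(20000)]
print("MC mean gap:", np.mean(gaps), " theory:", ((L1 - L2) * f).sum() * d * d)
```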

2.4 Wishart Matrices

Definition 2.4: Let $\mathbf{H}$ be an $m \times n$ Gaussian matrix whose expectation matrix is $\mathbf{M}$ and whose covariance matrix is $\boldsymbol{\Sigma}$. Then the random matrix

$$ \mathbf{A} = \begin{cases} \mathbf{H}\mathbf{H}^{\dagger}, & m < n \\ \mathbf{H}^{\dagger}\mathbf{H}, & m \ge n \end{cases} $$

is a Wishart matrix, i.e. $\mathbf{A} \sim \mathcal{W}_{m}(n, \mathbf{M}, \boldsymbol{\Sigma})$ for $n \ge m$.

Remark: For a Wishart matrix $\mathbf{A} = \mathbf{H}\mathbf{H}^{\dagger}$, if $\mathbf{M} = \mathbf{0}$, we call $\mathbf{A}$ a central Wishart matrix.

The PDF of the central complex Wishart matrix $\mathbf{A} \sim \mathcal{W}_{m}(n, \mathbf{0}, \boldsymbol{\Sigma})$ for $n \ge m$ is

$$ f_{\mathbf{A}}(\mathbf{X}) = \frac{\pi^{-m(m-1)/2}}{\det(\boldsymbol{\Sigma})^{n}\prod_{i=1}^{m}(n-i)!}\, e^{-\mathrm{tr}\left[\boldsymbol{\Sigma}^{-1}\mathbf{X}\right]} \det(\mathbf{X})^{n-m}. $$
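A small simulation sketch (not from the slides), assuming the common convention that the $n$ columns of $\mathbf{H}$ are i.i.d. $\mathcal{CN}(\mathbf{0},\boldsymbol{\Sigma})$ vectors: each draw $\mathbf{A} = \mathbf{H}\mathbf{H}^{\dagger}$ is Hermitian positive semidefinite, and under this convention $\mathbb{E}[\mathbf{A}] = n\boldsymbol{\Sigma}$, which the sample mean should approach.

```python
import numpy as np

rng = np.random.default_rng(3)

def central_wishart(m, n, Sigma, rng):
    """One draw of A = H H^dagger where the n columns of H are i.i.d. CN(0, Sigma),
    a common convention for the central Wishart matrix W_m(n, 0, Sigma)."""
    L = np.linalg.cholesky(Sigma)
    G = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    H = L @ G
    return H @ H.conj().T

m, n = 3, 6
Sigma = np.array([[2.0, 0.3j, 0.0],
                  [-0.3j, 1.0, 0.1],
                  [0.0, 0.1, 0.5]])      # Hermitian positive definite (illustrative)
samples = [central_wishart(m, n, Sigma, rng) for _ in range(5000)]

# A is Hermitian positive semidefinite: eigenvalues >= 0 up to floating-point error.
print("min eigenvalue over draws:", min(np.linalg.eigvalsh(A).min() for A in samples))

# Under this convention E[A] = n * Sigma, so mean(A)/n should be close to Sigma.
print("||mean(A)/n - Sigma||:", np.linalg.norm(np.mean(samples, axis=0) / n - Sigma))
```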

Wishart matrices are widely used in wireless communications.

• MIMO capacity:
$$ C = \log_{2}\det\left[\mathbf{I} + \gamma\,\mathbf{H}\mathbf{H}^{\dagger}\right] $$

• Covariance matrix of samples in spectrum sensing systems:
$$ \mathbf{W} = \frac{\mathbf{Y}\mathbf{Y}^{\dagger}}{N}, \qquad \text{where } \mathbf{Y} = \begin{cases} \mathbf{H}\mathbf{X} + \mathbf{N}, & \text{spectrum is occupied} \\ \mathbf{N}, & \text{spectrum is available} \end{cases} $$

• Linear precoding or detection (illustrated in the sketch below):
$$ \mathbf{P}_{\mathrm{ZF}} = (\mathbf{H}^{\dagger}\mathbf{H})^{-1}\mathbf{H}^{\dagger}, \qquad \mathbf{P}_{\mathrm{MMSE}} = (\mathbf{H}^{\dagger}\mathbf{H} + \sigma^{2}\mathbf{I})^{-1}\mathbf{H}^{\dagger} $$
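To make these quantities concrete, the sketch below (not from the slides) evaluates them for one randomly drawn 4x4 channel with i.i.d. $\mathcal{CN}(0,1)$ entries; the SNR $\gamma = 10$ and the noise level $\sigma^{2} = 1/\gamma$ are illustrative assumptions, not values from the course.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 4x4 MIMO channel with i.i.d. CN(0,1) entries; gamma is the SNR.
m, n, gamma = 4, 4, 10.0
H = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

# MIMO capacity C = log2 det(I + gamma * H H^dagger), in bits/s/Hz.
C = np.log2(np.linalg.det(np.eye(m) + gamma * H @ H.conj().T).real)
print("capacity:", C)

# Linear detectors from the slide; sigma2 plays the role of the noise variance.
sigma2 = 1.0 / gamma
P_zf = np.linalg.solve(H.conj().T @ H, H.conj().T)                         # (H^H H)^-1 H^H
P_mmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n), H.conj().T)  # (H^H H + sigma2 I)^-1 H^H

# Sanity check: ZF perfectly inverts the channel in the noiseless case.
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
print("ZF recovery error:", np.linalg.norm(P_zf @ (H @ x) - x))
```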

Theorem 2.4.1: Let the entries of $\mathbf{H} \in \mathbb{C}^{m\times n}$ be i.i.d. Gaussian variables with zero mean and unit variance, with $n \ge m$, so that $\mathbf{W} = \mathbf{H}\mathbf{H}^{\dagger}$ is a complex central Wishart matrix with $\mathbb{E}[\mathbf{H}\mathbf{H}^{\dagger}] = \mathbf{I}_{m}$ and $\mathbb{E}[\mathbf{H}^{\dagger}\mathbf{H}] = \mathbf{I}_{n}$. Then, for $k \le m$ and $k \le n$,

$$ \mathbb{E}\left[\det\!\big(\mathbf{H}^{i_{1},\dots,i_{k}}_{u_{1},\dots,u_{k}}\big)\,\det\!\big((\mathbf{H}^{\dagger})^{j_{1},\dots,j_{k}}_{v_{1},\dots,v_{k}}\big)\right] = \begin{cases} k!, & \text{if } i_{1}=v_{1},\ j_{1}=u_{1},\ \dots,\ i_{k}=v_{k},\ j_{k}=u_{k} \\ 0, & \text{otherwise} \end{cases} $$

where $\det\!\big(\mathbf{X}^{i_{1},\dots,i_{k}}_{j_{1},\dots,j_{k}}\big)$ denotes the minor of $\mathbf{X}$ formed from rows $i_{1},\dots,i_{k}$ and columns $j_{1},\dots,j_{k}$.
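A Monte Carlo sketch of Theorem 2.4.1 (not from the slides; the index sets are arbitrary illustrative choices) for $m = 3$, $n = 4$, $k = 2$: when the index sets match as in the theorem, the product of minors averages to $k!$, while a mismatched choice averages to zero.

```python
import math
import numpy as np

rng = np.random.default_rng(5)

def minor(X, rows, cols):
    """Determinant of the submatrix of X selected by the given row and column indices."""
    return np.linalg.det(X[np.ix_(rows, cols)])

m, n, k, trials = 3, 4, 2, 50000
rows, cols = (0, 2), (1, 3)     # row / column index sets for the k x k minor of H
other_cols = (0, 3)             # deliberately mismatched column set
acc_match, acc_mismatch = 0.0, 0.0

for _ in range(trials):
    H = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    Hd = H.conj().T
    # Matching case: the H^dagger minor is the conjugate transpose of the H minor,
    # so the product is |det|^2 and its mean should be k!.
    acc_match += (minor(H, rows, cols) * minor(Hd, cols, rows)).real
    # Mismatched index sets: the expectation should vanish (real part tracked here).
    acc_mismatch += (minor(H, rows, cols) * minor(Hd, other_cols, rows)).real

print("matching case   :", acc_match / trials, " (theory:", math.factorial(k), ")")
print("mismatched case :", acc_mismatch / trials, " (theory: 0)")
```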
