From Random Matrices to Stochastic Operators
We propose that classical random matrix models are properly viewed as finite
difference schemes for stochastic differential operators. Three particular
stochastic operators commonly arise, each associated with a familiar class of
local eigenvalue behavior. The stochastic Airy operator displays soft edge
behavior, associated with the Airy kernel. The stochastic Bessel operator
displays hard edge behavior, associated with the Bessel kernel. The article
concludes with suggestions for a stochastic sine operator, which would display
bulk behavior, associated with the sine kernel.

Comment: 41 pages, 5 figures. Submitted to Journal of Statistical Physics. Changes in this revision: recomputed Monte Carlo simulations, added reference [19], fit into margins, performed minor editing.
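The soft-edge claim can be illustrated numerically with the noise-free part of the stochastic Airy operator, A = -d^2/dx^2 + x: discretized by central differences on a truncated interval, its smallest eigenvalue approaches -a_1 ≈ 2.33811, where a_1 is the first zero of the Airy function Ai. This is a minimal sketch; the domain length and grid size below are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Discretize the noise-free Airy operator A = -d^2/dx^2 + x on [0, L]
# with Dirichlet boundary conditions, using second-order central
# differences on a uniform grid. The smallest eigenvalue of A is
# -a_1, where a_1 ~ -2.33811 is the first zero of the Airy function Ai.
L, n = 18.0, 1500          # truncation length and grid size (arbitrary)
h = L / (n + 1)
x = h * np.arange(1, n + 1)
A = (np.diag(2.0 / h**2 + x)
     - np.diag(np.ones(n - 1) / h**2, 1)
     - np.diag(np.ones(n - 1) / h**2, -1))
eigs = np.linalg.eigvalsh(A)
print(eigs[0])             # close to 2.33811
```

Adding the paper's white-noise term to the diagonal, scaled by the grid spacing, turns this deterministic scheme into a sample of the stochastic Airy operator.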
Asymptotic behavior of random determinants in the Laguerre, Gram and Jacobi ensembles
We consider properties of determinants of some random symmetric matrices
issued from multivariate statistics: Wishart/Laguerre ensemble (sample
covariance matrices), Uniform Gram ensemble (sample correlation matrices) and
Jacobi ensemble (MANOVA). If n is the size of the sample, p ≤ n the number of variates and X_{n,p} such a matrix, a generalization of the Bartlett-type theorems gives a decomposition of det X_{n,p} into a product of independent gamma or beta random variables. For p fixed, we study the evolution as n grows, and then take the limit of large p and n with p/n → t. We derive limit theorems for the resulting sequence of processes with independent increments. Since the logarithm of the determinant is a linear statistic of the empirical spectral distribution, we connect the results for marginals (fixed t) with those obtained by the spectral method. Actually, all the results hold true for general β models, if we define the determinant as the product of charges.

Comment: 51 pages; it replaces and extends arXiv:math/0607767 and arXiv:math/0509021. Third version: corrected constants in Theorem 3.
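The Bartlett-type decomposition can be checked by simulation in the Wishart/Laguerre case: for an n x p matrix X with iid standard Gaussian entries, det(X^T X) has the same law as a product of independent chi-square variables with n, n-1, ..., n-p+1 degrees of freedom. A small Monte Carlo sketch (the values of n, p and the number of replicates are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of the Bartlett decomposition for the
# Wishart/Laguerre ensemble: log det(X^T X), with X an n x p matrix of
# iid N(0,1) entries, matches in law the sum of logs of independent
# chi-square variables with n, n-1, ..., n-p+1 degrees of freedom.
rng = np.random.default_rng(0)
n, p, reps = 20, 5, 4000

logdets = []
for _ in range(reps):
    X = rng.standard_normal((n, p))
    logdets.append(np.linalg.slogdet(X.T @ X)[1])

# Independent chi-square factors with the degrees of freedom above.
chis = [np.log(rng.chisquare(n - np.arange(p))).sum() for _ in range(reps)]

print(np.mean(logdets), np.mean(chis))  # the two sample means agree closely
```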
The generalized Cartan decomposition for classical random matrix ensembles
We present a completed classification of the classical random matrix
ensembles: Hermite (Gaussian), Laguerre (Wishart), Jacobi (MANOVA) and Circular
by introducing the concept of the generalized Cartan decomposition and the
double coset space. Previous authors associate a symmetric space G/K with a random matrix density on the double coset structure K\G/K. However, this is incomplete. Complete coverage requires the double coset structure K_1\G/K_2, where G/K_1 and G/K_2 are two symmetric spaces. Furthermore, we show how the matrix factorization obtained by the generalized Cartan decomposition plays a crucial role in sampling algorithms and in the derivation of the joint probability density.

Comment: 26 pages.
The stochastic operator approach to random matrix theory
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 2005. Includes bibliographical references (p. 147-150) and index.

Classical random matrix models are formed from dense matrices with Gaussian entries. Their eigenvalues have features that have been observed in combinatorics, statistical mechanics, quantum mechanics, and even the zeros of the Riemann zeta function. However, their eigenvectors are Haar-distributed, i.e., completely random. Therefore, these classical random matrices are rarely considered as operators.

The stochastic operator approach to random matrix theory, introduced here, shows that it is actually quite natural and quite useful to view random matrices as random operators. The first step is to perform a change of basis, replacing the traditional Gaussian random matrix models by carefully chosen distributions on structured, e.g., tridiagonal, matrices. These structured random matrix models were introduced by Dumitriu and Edelman, and of course have the same eigenvalue distributions as the classical models, since they are equivalent up to similarity transformation. This dissertation shows that these structured random matrix models, appropriately rescaled, are finite difference approximations to stochastic differential operators. Specifically, as the size of one of these matrices approaches infinity, it looks more and more like an operator constructed from either the Airy operator, ..., or one of the Bessel operators, ..., plus noise.

One of the major advantages of the stochastic operator approach is a new method for working in "general β" random matrix theory. In the stochastic operator approach, there is always a parameter β which is inversely proportional to the variance of the noise. In contrast, the traditional Gaussian random matrix models identify the parameter β with the real dimension of the division algebra of elements, limiting much study to the cases β = 1 (real entries), β = 2 (complex entries), and β = 4 (quaternion entries).

An application to general β random matrix theory is presented, specifically regarding the universal largest eigenvalue distributions. In the cases β = 1, 2, 4, Tracy and Widom derived exact formulas for these distributions. However, little is known about the general β case. In this dissertation, the stochastic operator approach is used to derive a new asymptotic expansion for the mean, valid near β = ∞. The expression is built from the eigendecomposition of the Airy operator, suggesting the intrinsic role of differential operators. This dissertation also introduces a new matrix model for the Jacobi ensemble, solving a problem posed by Dumitriu and Edelman, and enabling the extension of the stochastic operator approach to the Jacobi case.

by Brian D. Sutton. Ph.D.
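The structured tridiagonal models of Dumitriu and Edelman referred to in the abstract can be written down directly: the β-Hermite ensemble is a symmetric tridiagonal matrix with Gaussian diagonal and chi-distributed off-diagonal entries, defined for any real β > 0. A brief sketch (matrix size and β value are arbitrary choices):

```python
import numpy as np

# Dumitriu-Edelman tridiagonal beta-Hermite model: diagonal entries
# N(0, 2) and off-diagonal entries chi_{beta(n-1)}, ..., chi_beta,
# all divided by sqrt(2). For beta = 1, 2, 4 the eigenvalue law matches
# GOE/GUE/GSE; beta may be any positive real.
def beta_hermite(n, beta, rng):
    diag = np.sqrt(2.0) * rng.standard_normal(n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return H / np.sqrt(2.0)

rng = np.random.default_rng(3)
H = beta_hermite(50, 2.0, rng)
eigs = np.linalg.eigvalsh(H)   # one draw from a "general beta" spectrum
```

Sampling eigenvalues from the tridiagonal model in this way never forms a dense Gaussian matrix, and rescaling such matrices near the spectral edge is exactly how the finite difference approximation to the stochastic Airy operator arises.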
Computing the complete CS decomposition
An algorithm is developed to compute the complete CS decomposition (CSD) of a
partitioned unitary matrix. Although the existence of the CSD has been
recognized since 1977, prior algorithms compute only a reduced version (the
2-by-1 CSD) that is equivalent to two simultaneous singular value
decompositions. The algorithm presented here computes the complete 2-by-2 CSD,
which requires the simultaneous diagonalization of all four blocks of a unitary
matrix partitioned into a 2-by-2 block structure. The algorithm appears to be
the only fully specified algorithm available. The computation occurs in two
phases. In the first phase, the unitary matrix is reduced to bidiagonal block
form, as described by Sutton and Edelman. In the second phase, the blocks are
simultaneously diagonalized using techniques from bidiagonal SVD algorithms of
Golub, Kahan, and Demmel. The algorithm has a number of desirable numerical
features.

Comment: New in v3: additional discussion on efficiency, Wilkinson shifts, and the connection with tridiagonal QR iteration. New in v2: additional figures and a reorganization of the text.
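The simultaneous-diagonalization structure that the algorithm exploits can be seen in a small example: when a unitary matrix is partitioned into four p x p blocks, the singular values of the (1,1) block are cosines, and those of the (2,1) block are sines, of one common set of principal angles. A quick numpy sketch of this cosine-sine pairing (SciPy's scipy.linalg.cossin computes the full factorization; here we only check the pairing):

```python
import numpy as np

# For a unitary Q split into four p x p blocks, unitarity gives
# Q11^* Q11 + Q21^* Q21 = I, so the singular values of Q11 (cosines)
# and Q21 (sines) pair up as c_i^2 + s_i^2 = 1.
rng = np.random.default_rng(1)
p = 3
A = rng.standard_normal((2 * p, 2 * p)) + 1j * rng.standard_normal((2 * p, 2 * p))
Q, _ = np.linalg.qr(A)                                    # a random unitary
c = np.linalg.svd(Q[:p, :p], compute_uv=False)            # cosines, descending
s = np.sort(np.linalg.svd(Q[p:, :p], compute_uv=False))   # sines, ascending
print(np.allclose(c**2 + s**2, 1.0))                      # True
```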
Angles between subspaces and their tangents
Principal angles between subspaces (PABS) (also called canonical angles)
serve as a classical tool in mathematics, statistics, and applications, e.g.,
data mining. Traditionally, PABS are introduced via their cosines. The cosines
and sines of PABS are commonly defined using the singular value decomposition.
We utilize the same idea for the tangents, i.e., explicitly construct matrices,
such that their singular values are equal to the tangents of PABS, using
several approaches: orthonormal and non-orthonormal bases for subspaces, as
well as projectors. Such a construction has applications, e.g., in analysis of
convergence of subspace iterations for eigenvalue problems.

Comment: 15 pages, 1 figure, 2 tables. Accepted to Journal of Numerical Mathematics.
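One tangent construction of this kind can be verified numerically: if X and Y are orthonormal bases of two subspaces and X^T Y is invertible, the singular values of (I - X X^T) Y (X^T Y)^{-1} equal the tangents of the principal angles, whose cosines are the singular values of X^T Y. A small sketch with arbitrary dimensions:

```python
import numpy as np

# Tangents of principal angles via an explicitly constructed matrix:
# the singular values of (I - X X^T) Y (X^T Y)^{-1} equal tan(theta_i),
# where cos(theta_i) are the singular values of X^T Y.
rng = np.random.default_rng(2)
n, k = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal basis
Y, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal basis

cosines = np.clip(np.linalg.svd(X.T @ Y, compute_uv=False), 0.0, 1.0)
T = (np.eye(n) - X @ X.T) @ Y @ np.linalg.inv(X.T @ Y)
tangents = np.linalg.svd(T, compute_uv=False)

print(np.allclose(np.sort(tangents),
                  np.sort(np.tan(np.arccos(cosines)))))  # True
```

The identity follows from unitary invariance: with M = Y (X^T Y)^{-1} one has M^T M = (Y^T X X^T Y)^{-1}, whose eigenvalues are 1/cos^2(theta_i), and projecting out range(X) subtracts exactly 1 from each, leaving tan^2(theta_i).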