93,775 research outputs found
An Introduction to Wishart Matrix Moments
These lecture notes provide a comprehensive, self-contained introduction to
the analysis of Wishart matrix moments. This study may act as an introduction
to some particular aspects of random matrix theory, or as a self-contained
exposition of Wishart matrix moments. Random matrix theory plays a central role
in statistical physics, computational mathematics and engineering sciences,
including data assimilation, signal processing, combinatorial optimization,
compressed sensing, econometrics and mathematical finance, among numerous
others. The mathematical foundations of the theory of random matrices lie at
the intersection of combinatorics, non-commutative algebra, geometry,
multivariate functional and spectral analysis, and of course statistics and
probability theory. As a result, most of the classical topics in random matrix
theory are technical, and mathematically difficult to penetrate for non-experts
and regular users and practitioners. The technical aim of these notes is to
review and extend some important results in random matrix theory in the
specific context of real random Wishart matrices. This special class of
Gaussian-type sample covariance matrices plays an important role in multivariate
analysis and in statistical theory. We derive non-asymptotic formulae for the
full matrix moments of real-valued Wishart random matrices. As a corollary, we
derive and extend a number of spectral and trace-type results for the case of
non-isotropic Wishart random matrices. We also derive the full matrix moment
analogues of some classic spectral and trace-type moment results. For example,
we derive semi-circle and Marchenko-Pastur-type laws in the non-isotropic and
full matrix cases. Laplace matrix transforms and matrix moment estimates are
also studied, along with new spectral and trace concentration-type
inequalities.
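As a concrete illustration of the objects studied in these notes, the following
minimal Python sketch (not part of the notes; the covariance Sigma and the
sample sizes are arbitrary choices) samples real Wishart matrices W = X X^T,
where the columns of X are i.i.d. N(0, Sigma) vectors, and empirically checks
the first matrix moment E[W] = n Sigma in the non-isotropic case.

    # Minimal sketch: empirical check of the first Wishart matrix moment
    # E[W] = n * Sigma for W = X X^T, columns of X i.i.d. N(0, Sigma).
    # Sigma and the sample sizes below are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    p, n, trials = 4, 50, 2000

    A = rng.standard_normal((p, p))
    Sigma = A @ A.T / p                       # a non-isotropic covariance
    L = np.linalg.cholesky(Sigma)

    acc = np.zeros((p, p))
    for _ in range(trials):
        X = L @ rng.standard_normal((p, n))   # columns ~ N(0, Sigma)
        acc += X @ X.T                        # real Wishart matrix W = X X^T

    print("empirical E[W]:\n", acc / trials)
    print("n * Sigma:\n", n * Sigma)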
The ensemble of random Markov matrices
The ensemble of random Markov matrices is introduced as a set of Markov or
stochastic matrices with the maximal Shannon entropy. The statistical
properties of the stationary distribution pi, the average entropy growth rate
and the second largest eigenvalue nu across the ensemble are studied. It is
shown and heuristically argued that the entropy growth rate and the second
largest eigenvalue of Markov matrices scale on average with the matrix dimension
d as h ~ log(O(d)) and nu ~ d^(-1/2), respectively, yielding the asymptotic
relation h tau_c ~ 1/2 between the entropy h and the correlation decay time
tau_c = -1/log|nu|. Additionally, the correlation between h and tau_c is
analysed and found to decrease with increasing dimension d. Comment: 12 pages, 6 figures
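To make these quantities concrete, the following illustrative Python sketch (not
the paper's code; rows drawn from a flat Dirichlet distribution are used here as
a simple stand-in for the maximal-entropy ensemble) computes the stationary
distribution pi, the entropy growth rate h, the second largest eigenvalue
modulus nu and the decay time tau_c = -1/log|nu|, so that h*tau_c can be
compared with the asymptotic value 1/2.

    # Illustrative sketch: random row-stochastic matrices and the quantities
    # h, nu, tau_c from the abstract. Sampling rows from a flat Dirichlet is
    # an assumption made here for simplicity, not necessarily the paper's ensemble.
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_markov(d):
        """Random d x d row-stochastic matrix with Dirichlet(1, ..., 1) rows."""
        return rng.dirichlet(np.ones(d), size=d)

    def entropy_rate_and_tau(M):
        vals, vecs = np.linalg.eig(M.T)
        order = np.argsort(-np.abs(vals))
        pi = np.real(vecs[:, order[0]])            # stationary distribution (eigenvalue 1)
        pi = pi / pi.sum()
        h = -np.sum(pi[:, None] * M * np.log(M))   # entropy growth rate
        nu = np.abs(vals[order[1]])                # second largest |eigenvalue|
        tau_c = -1.0 / np.log(nu)                  # correlation decay time
        return h, tau_c

    for d in (50, 200, 800):
        h, tau_c = entropy_rate_and_tau(sample_markov(d))
        print(f"d={d:4d}  h={h:.3f}  tau_c={tau_c:.3f}  h*tau_c={h * tau_c:.3f}")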
Spectral Norm of Random Kernel Matrices with Applications to Privacy
Kernel methods are an extremely popular set of techniques used for many
important machine learning and data analysis applications. In addition to
having good practical performances, these methods are supported by a
well-developed theory. Kernel methods use an implicit mapping of the input data
into a high dimensional feature space defined by a kernel function, i.e., a
function returning the inner product between the images of two data points in
the feature space. Central to any kernel method is the kernel matrix, which is
built by evaluating the kernel function on a given sample dataset.
In this paper, we initiate the study of non-asymptotic spectral theory of
random kernel matrices. These are n x n random matrices whose (i,j)th entry is
obtained by evaluating the kernel function on x_i and x_j, where x_1, ..., x_n
are a set of n independent random high-dimensional vectors. Our
main contribution is to obtain tight upper bounds on the spectral norm (largest
eigenvalue) of random kernel matrices constructed by commonly used kernel
functions based on polynomials and Gaussian radial basis.
As an application of these results, we provide lower bounds on the distortion
needed for releasing the coefficients of kernel ridge regression under
attribute privacy, a general privacy notion which captures a large class of
privacy definitions. Kernel ridge regression is a standard method for performing
non-parametric regression that regularly outperforms traditional regression
approaches in various domains. Our privacy distortion lower bounds are the
first for any kernel technique, and our analysis assumes realistic scenarios
for the input, unlike all previous lower bounds for other release problems
which only hold under very restrictive input settings. Comment: 16 pages, 1 figure
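For readers unfamiliar with the setup, the following minimal Python sketch (not
taken from the paper; the dimensions, scaling and kernel parameters are
arbitrary illustrative choices) builds random kernel matrices from n independent
high-dimensional Gaussian vectors with a degree-2 polynomial kernel and a
Gaussian radial basis kernel, and computes their spectral norms.

    # Minimal sketch: random kernel matrices and their spectral norms.
    # The data distribution and kernel parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    n, d = 300, 500                                  # n sample points in d dimensions
    X = rng.standard_normal((n, d)) / np.sqrt(d)     # rows x_1, ..., x_n

    G = X @ X.T                                      # inner products <x_i, x_j>
    sq_dists = np.add.outer(np.diag(G), np.diag(G)) - 2 * G

    K_poly = (1.0 + G) ** 2                          # degree-2 polynomial kernel
    K_rbf = np.exp(-sq_dists / 2.0)                  # Gaussian RBF kernel

    for name, K in (("polynomial", K_poly), ("gaussian", K_rbf)):
        spec = np.linalg.norm(K, ord=2)              # spectral norm (largest eigenvalue)
        print(f"{name:10s} kernel: spectral norm = {spec:.2f}")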