
    An Introduction to Wishart Matrix Moments

    These lecture notes provide a comprehensive, self-contained introduction to the analysis of Wishart matrix moments. They may serve either as an introduction to particular aspects of random matrix theory or as a stand-alone exposition of Wishart matrix moments. Random matrix theory plays a central role in statistical physics, computational mathematics and engineering sciences, including data assimilation, signal processing, combinatorial optimization, compressed sensing, econometrics and mathematical finance, among numerous others. The mathematical foundations of the theory of random matrices lie at the intersection of combinatorics, non-commutative algebra, geometry, multivariate functional and spectral analysis, and of course statistics and probability theory. As a result, most of the classical topics in random matrix theory are technical and mathematically difficult for non-experts, regular users, and practitioners to penetrate. The technical aim of these notes is to review and extend some important results in random matrix theory in the specific context of real random Wishart matrices. This special class of Gaussian-type sample covariance matrices plays an important role in multivariate analysis and in statistical theory. We derive non-asymptotic formulae for the full matrix moments of real-valued Wishart random matrices. As a corollary, we derive and extend a number of spectral and trace-type results for the case of non-isotropic Wishart random matrices. We also derive the full matrix moment analogues of some classic spectral and trace-type moment results. For example, we derive semi-circle and Marchenko-Pastur-type laws in the non-isotropic and full matrix cases. Laplace matrix transforms and matrix moment estimates are also studied, along with new spectral and trace concentration-type inequalities.
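    The Marchenko-Pastur law mentioned above is easy to check numerically. Below is a minimal Monte Carlo sketch (our own illustration, not taken from the lecture notes) that estimates the normalized trace moments (1/p) E[tr(W^k)] of an isotropic real Wishart matrix and compares them with the limiting Marchenko-Pastur moments for aspect ratio c = p/n; all parameter choices are illustrative.

```python
import numpy as np
from math import comb

# Monte Carlo estimate of the normalized trace moments (1/p) E[tr(W^k)]
# of an isotropic real Wishart matrix W = X X^T / n, where X is p x n
# with i.i.d. N(0, 1) entries.
rng = np.random.default_rng(0)
p, n, trials = 50, 200, 500
c = p / n  # aspect ratio

moments = np.zeros(4)
for _ in range(trials):
    X = rng.standard_normal((p, n))
    eig = np.linalg.eigvalsh(X @ X.T / n)
    moments += np.array([np.mean(eig**k) for k in range(1, 5)])
moments /= trials

# Limiting Marchenko-Pastur moments in Narayana form:
# m_k = sum_{r=0}^{k-1} c^r / (r + 1) * C(k, r) * C(k - 1, r).
mp = [sum(c**r / (r + 1) * comb(k, r) * comb(k - 1, r) for r in range(k))
      for k in range(1, 5)]

print("empirical moments:", np.round(moments, 3))
print("MP limit         :", np.round(mp, 3))
```

    For p = 50 and n = 200 the empirical and limiting moments already agree to two or three decimal places, which is the non-asymptotic behavior the notes quantify.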

    Partially Adaptive Estimation via Maximum Entropy Densities

    We propose a partially adaptive estimator based on information-theoretic maximum entropy estimates of the error distribution. Maximum entropy (maxent) densities have simple yet flexible functional forms that nest most commonly used distributions. Unlike nonparametric fully adaptive estimators, our parametric estimators do not involve choosing a bandwidth or trimming, and require estimating only a small number of nuisance parameters, which is desirable when the sample size is small. Monte Carlo simulations suggest that the proposed estimators fare well with non-normal error distributions. When the errors are normal, the efficiency loss due to redundant nuisance parameters is negligible because the proposed error densities nest the normal. The proposed partially adaptive estimator compares favorably with existing methods, especially when the sample size is small. We apply the estimator to a bio-pharmaceutical example and a stochastic frontier model.
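    To make the maxent construction concrete, here is a hedged sketch (our own illustration, not the authors' code) of fitting a maximum entropy density p(x) proportional to exp(l1*x + l2*x^2 + l3*x^3 + l4*x^4) to residuals by minimizing the convex dual log Z(lam) - lam . m, so that the fitted density matches the first four sample moments; the grid, optimizer, and starting values are arbitrary demo choices.

```python
import numpy as np
from scipy.optimize import minimize

# Fit a maxent density p(x) ~ exp(l1*x + l2*x^2 + l3*x^3 + l4*x^4)
# whose first four moments match those of a residual sample, by
# minimizing the convex dual  log Z(lam) - lam . m  on a fixed grid.
rng = np.random.default_rng(1)
resid = rng.standard_t(df=5, size=2000)      # heavy-tailed stand-in errors
resid = (resid - resid.mean()) / resid.std()

grid = np.linspace(-8.0, 8.0, 4001)
basis = np.vstack([grid**k for k in range(1, 5)])       # shape (4, len(grid))
m = np.array([np.mean(resid**k) for k in range(1, 5)])  # sample moments

def dual(lam):
    logits = lam @ basis
    top = logits.max()                       # for numerical stability
    z = np.trapz(np.exp(logits - top), grid)
    return top + np.log(z) - lam @ m

res = minimize(dual, x0=np.array([0.0, -0.5, 0.0, 0.0]), method="Nelder-Mead")
u = res.x @ basis
dens = np.exp(u - u.max())
dens /= np.trapz(dens, grid)                 # normalized maxent density
print("fitted lambdas:", np.round(res.x, 3))
```

    At the dual optimum the fitted density reproduces the sample moments, and a negative quartic coefficient captures the heavy tails that a normal error density would miss, which is the flexibility the abstract refers to.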

    Approximations of Shannon Mutual Information for Discrete Variables with Applications to Neural Population Coding

    Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may make applications of information theory to many practical and theoretical problems more convenient.
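    For reference, the quantity being approximated can be computed exactly when the stimulus-response table is small. The sketch below is our own illustration, not the paper's metrics (those are KL- and Rényi-divergence-based approximations for large populations, where this direct sum becomes intractable); the Poisson encoder and all parameters are assumptions for the demo.

```python
import numpy as np
from scipy.stats import poisson

def mutual_information(p_joint):
    """Exact I(S;R) in bits from a joint probability table p_joint[s, r]."""
    p_s = p_joint.sum(axis=1, keepdims=True)
    p_r = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask]
                        * np.log2(p_joint[mask] / (p_s @ p_r)[mask])))

# Toy encoder: 4 stimuli, responses are spike counts 0..9 drawn from a
# truncated, renormalized Poisson with stimulus-dependent rate.
rates = np.array([1.0, 3.0, 5.0, 7.0])
counts = np.arange(10)
p_r_given_s = poisson.pmf(counts[None, :], rates[:, None])
p_r_given_s /= p_r_given_s.sum(axis=1, keepdims=True)
p_joint = p_r_given_s / len(rates)           # uniform stimulus prior

print(f"I(S;R) = {mutual_information(p_joint):.3f} bits")
```

    The direct sum scales with the number of response patterns, which grows exponentially with population size; that is exactly why the approximation formulas in the paper are needed.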

    On the Analytic Wavelet Transform

    An exact and general expression for the analytic wavelet transform of a real-valued signal is constructed, resolving the time-dependent effects of non-negligible amplitude and frequency modulation. The analytic signal is first locally represented as a modulated oscillation, demodulated by its own instantaneous frequency, and then Taylor-expanded at each point in time. The terms in this expansion, called the instantaneous modulation functions, are time-varying functions that quantify, at increasingly higher orders, the local departures of the signal from a uniform sinusoidal oscillation. Closed-form expressions for these functions are found in terms of Bell polynomials and derivatives of the signal's instantaneous frequency and bandwidth. The analytic wavelet transform is shown to depend upon the interaction between the signal's instantaneous modulation functions and frequency-domain derivatives of the wavelet, inducing a hierarchy of departures of the transform away from a perfect representation of the signal. The form of these deviation terms suggests a set of conditions for matching the wavelet properties to the variability of the signal, in which case our expressions simplify considerably. One may then quantify the time-varying bias associated with signal estimation via wavelet ridge analysis, and choose wavelets to minimize this bias.
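    As a concrete illustration of the transform itself, here is a minimal sketch under our own conventions (not the authors' implementation): an analytic wavelet transform of a chirp computed in the frequency domain with a generalized Morse wavelet. The beta = gamma = 3 choice, the normalization, and the crude ridge extraction are all assumptions for the demo.

```python
import numpy as np

def morse_wavelet_fft(omega, beta=3.0, gamma=3.0):
    """Frequency-domain generalized Morse wavelet, supported on omega > 0,
    which is what makes the resulting transform analytic (complex-valued)."""
    psi = np.zeros_like(omega)
    pos = omega > 0
    psi[pos] = omega[pos]**beta * np.exp(-omega[pos]**gamma)
    return 2.0 * psi / psi.max()     # peak value 2, a common convention

def analytic_wavelet_transform(x, scales):
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n)
    X = np.fft.fft(x)
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Dilating the wavelet by s selects radian frequency near 1/s.
        W[i] = np.fft.ifft(X * morse_wavelet_fft(s * omega))
    return W

# Chirp whose instantaneous frequency ramps from 0.05 to 0.15 cycles/sample.
t = np.arange(2048)
x = np.cos(2.0 * np.pi * (0.05 * t + 0.05 * t**2 / len(t)))
scales = np.geomspace(0.8, 4.0, 40)
W = analytic_wavelet_transform(x, scales)
ridge = np.abs(W).argmax(axis=0)     # crude ridge: maximize amplitude in scale
# Near mid-signal f ~ 0.1, so the ridge scale should be ~ 1/(2*pi*0.1) ~ 1.6.
print("ridge scale at mid-signal:", scales[ridge[len(t) // 2]].round(2))
```

    Because the chirp's frequency is not constant, the amplitude and phase of W along the ridge deviate slightly from the signal's true envelope and instantaneous frequency; those deviations are exactly what the instantaneous modulation functions in the paper quantify.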