
    Finite Dimensional Statistical Inference

    In this paper, we derive the explicit series expansion of the eigenvalue distribution of various models, namely non-central Wishart distributions as well as correlated zero-mean Wishart distributions. The tools used extend those of the free probability framework, which has been quite successful for high-dimensional statistical inference (when the size of the matrices tends to infinity), also known as free deconvolution. This contribution focuses on the finite Gaussian case and proposes algorithmic methods to compute the moments. Cases where asymptotic results fail to apply are also discussed.
    Comment: 14 pages, 13 figures. Submitted to IEEE Transactions on Information Theory.
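As a rough illustration of the objects this abstract refers to (not the paper's series expansion or its algorithm), the following sketch draws a finite zero-mean Wishart matrix and computes its empirical eigenvalue moments; the dimensions p and n are assumed values chosen for the example.

```python
import numpy as np

# Minimal sketch: empirical eigenvalue moments of a finite-dimensional
# zero-mean Wishart matrix W = (1/n) X X^T, where X is p x n Gaussian.
# The paper derives explicit (non-asymptotic) expansions for such
# moments; this sketch only estimates them numerically.
rng = np.random.default_rng(0)
p, n = 8, 32                      # finite matrix sizes (assumed values)

X = rng.standard_normal((p, n))
W = X @ X.T / n                   # sample covariance / Wishart matrix

eigvals = np.linalg.eigvalsh(W)
moments = [np.mean(eigvals**k) for k in range(1, 5)]  # (1/p) tr(W^k)
print("empirical moments m_1..m_4:", np.round(moments, 3))
```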

    Statistical Mechanics of High-Dimensional Inference

    To model modern large-scale datasets, we need efficient algorithms to infer a set of P unknown model parameters from N noisy measurements. What are fundamental limits on the accuracy of parameter inference, given finite signal-to-noise ratios, limited measurements, prior information, and computational tractability requirements? How can we combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density α = N/P → ∞. However, these classical results are not relevant to modern high-dimensional inference problems, which instead occur at finite α. We formulate and analyze high-dimensional inference as a problem in the statistical physics of quenched disorder. Our analysis uncovers fundamental limits on the accuracy of inference in high dimensions, and reveals that widely cherished inference algorithms like maximum likelihood (ML) and maximum a posteriori (MAP) inference cannot achieve these limits. We further find optimal, computationally tractable algorithms that can achieve these limits. Intriguingly, in high dimensions, these optimal algorithms become computationally simpler than MAP and ML, while still outperforming them. For example, such optimal algorithms can lead to as much as a 20% reduction in the amount of data needed to achieve the same performance relative to MAP. Moreover, our analysis reveals simple relations between optimal high-dimensional inference and low-dimensional scalar Bayesian inference, insights into the nature of generalization and predictive power in high dimensions, information-theoretic limits on compressed sensing, phase transitions in quadratic inference, and connections to central mathematical objects in convex optimization theory and random matrix theory.
    Comment: See http://ganguli-gang.stanford.edu/pdf/HighDimInf.Supp.pdf for supplementary material.
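As a hedged illustration of inference at finite measurement density (not the paper's optimal algorithm), the sketch below compares ML (least squares) and MAP (ridge regression, i.e. a Gaussian prior) on a linear Gaussian model with α = N/P held fixed; all parameter names and values are assumptions for the example. Note that with a genuinely Gaussian prior, MAP already coincides with the Bayes-optimal estimate; the paper's suboptimality results concern more general settings.

```python
import numpy as np

# Minimal sketch: ML vs. MAP estimation of P parameters from N noisy
# linear measurements at finite measurement density alpha = N / P.
# Values are illustrative, not taken from the paper.
rng = np.random.default_rng(1)
P, alpha, sigma = 200, 2.0, 1.0
N = int(alpha * P)

w_true = rng.standard_normal(P)              # unit Gaussian prior on parameters
X = rng.standard_normal((N, P)) / np.sqrt(P) # measurement matrix, unit-SNR scaling
y = X @ w_true + sigma * rng.standard_normal(N)

w_ml = np.linalg.lstsq(X, y, rcond=None)[0]                        # maximum likelihood
w_map = np.linalg.solve(X.T @ X + sigma**2 * np.eye(P), X.T @ y)   # MAP / ridge

for name, w in [("ML ", w_ml), ("MAP", w_map)]:
    err = np.mean((w - w_true) ** 2)
    print(f"{name} per-parameter MSE at alpha={alpha}: {err:.3f}")
```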

    On the consistency of Fréchet means in deformable models for curve and image analysis

    A new class of statistical deformable models is introduced to study high-dimensional curves or images. In addition to the standard measurement error term, these deformable models include an extra error term modeling the individual variations in intensity around a mean pattern. It is shown that an appropriate tool for statistical inference in such models is the notion of sample Fréchet means, which leads to estimators of the deformation parameters and the mean pattern. The main contribution of this paper is to study how the behavior of these estimators depends on the number n of design points and the number J of observed curves (or images). Numerical experiments are given to illustrate the finite-sample performance of the procedure.
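To make the notion of a sample Fréchet mean concrete (in a toy setting, not the paper's deformable model), the sketch below computes the Fréchet mean of J angles on the unit circle, i.e. the minimizer of the sum of squared geodesic distances over candidate means; all names and values here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: sample Frechet mean of J points on the unit circle.
# On a nonlinear space this generally differs from the arithmetic mean.
rng = np.random.default_rng(2)
theta = rng.normal(loc=3.0, scale=0.4, size=20) % (2 * np.pi)  # J = 20 angles

def geodesic_dist(a, b):
    """Shorter arc length between angles a and b."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

# Grid search over candidate means (adequate for an illustration).
grid = np.linspace(0.0, 2 * np.pi, 10000, endpoint=False)
costs = [(geodesic_dist(g, theta) ** 2).sum() for g in grid]
frechet_mean = grid[int(np.argmin(costs))]
print(f"sample Frechet mean: {frechet_mean:.3f} rad")
```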