2,518 research outputs found

    Signal Processing in Large Systems: a New Paradigm

    Full text link
    For a long time, detection and parameter estimation methods for signal processing have relied on asymptotic statistics as the number $n$ of observations of a population grows large compared to the population size $N$, i.e. $n/N \to \infty$. Modern technological and societal advances now demand the study of sometimes extremely large populations and simultaneously require fast signal processing due to accelerated system dynamics. This results in not-so-large practical ratios $n/N$, sometimes even smaller than one. A disruptive change in classical signal processing methods has therefore been initiated in the past ten years, mostly spurred by the field of large dimensional random matrix theory. The early works in random matrix theory for signal processing applications are however scarce and highly technical. This tutorial provides an accessible methodological introduction to the modern tools of random matrix theory and to the signal processing methods derived from them, with an emphasis on simple illustrative examples.
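    To make the not-so-large $n/N$ regime concrete, the Python sketch below (illustrative only, not from the tutorial; matrix sizes are arbitrary) draws white Gaussian data with true covariance $I_N$ and compares the spread of the sample covariance eigenvalues to the Marčenko-Pastur support $[(1-\sqrt{N/n})^2, (1+\sqrt{N/n})^2]$: for $n/N \gg 1$ the eigenvalues concentrate near 1, while for $n/N$ close to one they do not.

```python
# A minimal sketch (not from the tutorial) of why the n/N >> 1 assumption matters:
# for white Gaussian data with true covariance I_N, the sample covariance
# eigenvalues spread over the Marchenko-Pastur support instead of concentrating at 1.
import numpy as np

rng = np.random.default_rng(0)

def sample_cov_eigs(N, n):
    """Eigenvalues of the sample covariance of n i.i.d. N(0, I_N) observations."""
    X = rng.standard_normal((N, n))
    return np.linalg.eigvalsh(X @ X.T / n)

for N, n in [(10, 10000), (100, 200), (200, 200)]:  # illustrative sizes
    eigs = sample_cov_eigs(N, n)
    c = N / n
    a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2  # MP support edges
    print(f"N={N:4d}, n={n:5d}: eig range [{eigs.min():.3f}, {eigs.max():.3f}], "
          f"MP support ~ [{a:.3f}, {b:.3f}]")
```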

    On the probability that all eigenvalues of Gaussian, Wishart, and double Wishart random matrices lie within an interval

    Full text link
    We derive the probability that all eigenvalues of a random matrix $\mathbf{M}$ lie within an arbitrary interval $[a,b]$, $\psi(a,b) \triangleq \Pr\{a \leq \lambda_{\min}(\mathbf{M}),\ \lambda_{\max}(\mathbf{M}) \leq b\}$, when $\mathbf{M}$ is a real or complex finite dimensional Wishart, double Wishart, or Gaussian symmetric/Hermitian matrix. We give efficient recursive formulas allowing the exact evaluation of $\psi(a,b)$ for Wishart matrices, even with a large number of variates and degrees of freedom. We also prove that the probability that all eigenvalues are within the limiting spectral support (given by the Marčenko-Pastur or the semicircle laws) tends for large dimensions to the universal values 0.6921 and 0.9397 for the real and complex cases, respectively. Applications include improved bounds for the probability that a Gaussian measurement matrix has a given restricted isometry constant in compressed sensing. Comment: IEEE Transactions on Information Theory, 201
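    A brute-force Monte Carlo check of the quoted limits (a sketch under assumed moderate matrix sizes, not the paper's exact recursive formulas) simply counts how often all eigenvalues of a real or complex Wishart matrix fall inside the Marčenko-Pastur support; at finite dimensions the estimates only approach 0.6921 and 0.9397.

```python
# Monte Carlo sketch (not the paper's exact formulas): estimate the probability
# that all eigenvalues of a Wishart matrix lie inside the Marchenko-Pastur support,
# for real and complex Gaussian data. Sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def all_eigs_in_mp_support(N, n, complex_case, trials=2000):
    c = N / n
    a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2  # MP support edges
    hits = 0
    for _ in range(trials):
        if complex_case:
            X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
        else:
            X = rng.standard_normal((N, n))
        eigs = np.linalg.eigvalsh(X @ X.conj().T / n)  # sample eigenvalues
        hits += (eigs.min() >= a) and (eigs.max() <= b)
    return hits / trials

N, n = 100, 300  # moderate sizes: expect values near (not exactly) 0.6921 / 0.9397
print("real   :", all_eigs_in_mp_support(N, n, complex_case=False))
print("complex:", all_eigs_in_mp_support(N, n, complex_case=True))
```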

    Eigen-Inference for Energy Estimation of Multiple Sources

    Full text link
    In this paper, a new method is introduced to blindly estimate the transmit power of multiple signal sources in multi-antenna fading channels, when the number of sensing devices and the number of available samples are sufficiently large compared to the number of sources. Recent advances in large dimensional random matrix theory are used to derive a simple and computationally efficient consistent estimator of the power of each source. A criterion to determine the minimum number of sensors and the minimum number of samples required to achieve source separation is then introduced. Simulations are performed that corroborate the theoretical claims and show that the proposed power estimator largely outperforms alternative power inference techniques. Comment: to appear in IEEE Trans. on Information Theory, 17 pages, 13 figures
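    The following toy simulation (hypothetical model and parameter choices, not the paper's estimator) illustrates why eigen-inference is possible at all: with enough sensors and samples, the sample covariance spectrum shows one eigenvalue per source separated from the noise bulk, and those spikes carry the power information that an eigenvalue-based estimator can exploit.

```python
# Toy simulation (hypothetical model choices, not the paper's estimator):
# K sources with powers P_k transmit through an N-antenna Gaussian channel;
# with enough sensors and samples, K sample-covariance eigenvalues separate
# from the noise bulk, which is what makes blind eigen-inference possible.
import numpy as np

rng = np.random.default_rng(2)

N, M, K = 64, 512, 3                 # sensors, samples, sources (illustrative)
powers = np.array([4.0, 2.0, 1.0])   # transmit powers to be inferred
sigma2 = 0.1                          # noise variance

H = rng.standard_normal((N, K)) / np.sqrt(N)        # i.i.d. channel, unit-norm columns on average
S = rng.standard_normal((K, M))                      # unit-power source signals
W = np.sqrt(sigma2) * rng.standard_normal((N, M))    # additive white noise
Y = H @ np.diag(np.sqrt(powers)) @ S + W             # received samples

eigs = np.sort(np.linalg.eigvalsh(Y @ Y.T / M))[::-1]
print("largest sample eigenvalues:", np.round(eigs[:K + 2], 3))
print("approx. noise bulk edge   :", round(sigma2 * (1 + np.sqrt(N / M)) ** 2, 3))
```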

    Limits on Sparse Data Acquisition: RIC Analysis of Finite Gaussian Matrices

    Full text link
    One of the key issues in the acquisition of sparse data by means of compressed sensing (CS) is the design of the measurement matrix. Gaussian matrices have been proven to be information-theoretically optimal in terms of minimizing the required number of measurements for sparse recovery. In this paper we provide a new approach for the analysis of the restricted isometry constant (RIC) of finite dimensional Gaussian measurement matrices. The proposed method relies on the exact distributions of the extreme eigenvalues for Wishart matrices. First, we derive the probability that the restricted isometry property is satisfied for a given sufficient recovery condition on the RIC, and propose a probabilistic framework to study both the symmetric and asymmetric RICs. Then, we analyze the recovery of compressible signals in noise through the statistical characterization of stability and robustness. The presented framework determines limits on various sparse recovery algorithms for finite size problems. In particular, it provides a tight lower bound on the maximum sparsity order of the acquired data allowing signal recovery with a given target probability. Also, we derive simple approximations for the RICs based on the Tracy-Widom distribution. Comment: 11 pages, 6 figures, accepted for publication in IEEE Transactions on Information Theory
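    As a rough illustration of the link between the RIC and Wishart extreme eigenvalues (a crude Monte Carlo lower bound over randomly sampled supports, not the paper's exact finite-dimensional analysis; all sizes are arbitrary assumptions), one can measure how far the extreme eigenvalues of random k-column Gram submatrices of a Gaussian measurement matrix deviate from 1.

```python
# Monte Carlo sketch (a crude lower bound, not the paper's exact Wishart-based
# analysis): the order-k symmetric RIC is the worst deviation from 1 of the
# extreme eigenvalues of k-column Gram submatrices, so sampling random supports
# gives a feel for how those Wishart-type eigenvalues drive recovery guarantees.
import numpy as np

rng = np.random.default_rng(3)

def ric_lower_bound(A, k, trials=5000):
    """Lower-bound the order-k RIC of A by sampling random column supports."""
    m, p = A.shape
    worst = 0.0
    for _ in range(trials):
        T = rng.choice(p, size=k, replace=False)   # random support of size k
        G = A[:, T].T @ A[:, T]                    # k x k Gram (Wishart-type) matrix
        eigs = np.linalg.eigvalsh(G)
        worst = max(worst, abs(eigs[0] - 1), abs(eigs[-1] - 1))
    return worst

m, p, k = 128, 512, 10                             # illustrative sizes
A = rng.standard_normal((m, p)) / np.sqrt(m)       # Gaussian measurement matrix
print("estimated RIC lower bound, k=10:", round(ric_lower_bound(A, k), 3))
```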