10,038 research outputs found

    Entropy-based covariance determinant estimation

    An information-theoretic approach is described for estimating the determinant of the covariance matrix of a random vector sequence, a common task in a wide range of estimation and detection problems in signal processing for communications. The method is based on a prior entropy-based processing of the data using kernels and offers robustness against small-entropy contamination. The trade-off between optimality, accuracy, and robustness is analyzed, along with the impact of the relative kernel bandwidth and the data size.
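    As context for this abstract, a minimal sketch of the standard (non-robust) computation such estimators build on: the log determinant of a sample covariance matrix obtained from its Cholesky factor, which avoids the overflow/underflow of forming the raw determinant. The paper's entropy-based kernel preprocessing and its contamination model are not reproduced here; `cov_logdet` is an illustrative name only.

    ```python
    # Numerically stable log-determinant of a sample covariance matrix.
    # This is the plain baseline; the entropy-based preprocessing described
    # in the abstract is NOT implemented here.
    import numpy as np

    def cov_logdet(X):
        """X: (n, p) array of n observations of a p-dimensional vector.
        Returns log det of the sample covariance via its Cholesky factor."""
        S = np.cov(np.asarray(X, dtype=float), rowvar=False)  # p x p sample covariance
        L = np.linalg.cholesky(S)                             # S = L @ L.T, needs S > 0
        return 2.0 * np.sum(np.log(np.diag(L)))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 5))   # identity covariance, so log det is near 0
    print(cov_logdet(X))
    ```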

    Law of Log Determinant of Sample Covariance Matrix and Optimal Estimation of Differential Entropy for High-Dimensional Gaussian Distributions

    The differential entropy and the log determinant of the covariance matrix of a multivariate Gaussian distribution have many applications in coding, communications, signal processing, and statistical inference. In this paper we consider, in the high-dimensional setting, optimal estimation of the differential entropy and the log determinant of the covariance matrix. We first establish a central limit theorem for the log determinant of the sample covariance matrix in the high-dimensional setting where the dimension p(n) can grow with the sample size n. An estimator of the differential entropy and the log determinant is then considered, and the optimal rate of convergence is obtained. It is shown that in the case p(n)/n → 0 the estimator is asymptotically sharp minimax. The ultra-high-dimensional setting where p(n) > n is also discussed. Comment: 19 pages
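    To make this concrete, here is a hedged sketch of the standard digamma-based bias correction for the log determinant of a Gaussian sample covariance, the kind of corrected plug-in estimator this line of work analyzes; the correction follows from the Bartlett decomposition of the Wishart distribution. The paper's exact centering and its treatment of the p(n) > n regime may differ, and `entropy_estimate` is an illustrative name.

    ```python
    # Bias-corrected plug-in estimate of log det(Sigma) and of the Gaussian
    # differential entropy h = p/2 * log(2*pi*e) + 1/2 * log det(Sigma).
    import numpy as np
    from scipy.special import digamma

    def entropy_estimate(X):
        """X: (n, p) i.i.d. Gaussian sample with n > p."""
        n, p = X.shape
        S = np.cov(X, rowvar=False)           # sample covariance, divisor n - 1
        _, logdet_S = np.linalg.slogdet(S)
        # Bartlett decomposition: E[log det S] - log det Sigma
        #   = sum_{i=1}^p psi((n - i) / 2) + p * log(2 / (n - 1))
        bias = digamma((n - np.arange(1, p + 1)) / 2.0).sum() + p * np.log(2.0 / (n - 1))
        logdet_hat = logdet_S - bias
        return 0.5 * p * np.log(2 * np.pi * np.e) + 0.5 * logdet_hat
    ```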

    Maximum Entropy Kernels for System Identification

    A new nonparametric approach to system identification has recently been proposed in which the impulse response is modeled as the realization of a zero-mean Gaussian process whose covariance (kernel) has to be estimated from data. In this scheme, the quality of the estimates crucially depends on the parametrization of the covariance of the Gaussian process. A family of kernels that has been shown to be particularly effective in the system identification framework is the family of Diagonal/Correlated (DC) kernels. Maximum entropy properties of a related family, the Tuned/Correlated (TC) kernels, have recently been pointed out in the literature. In this paper we show that the maximum entropy properties indeed extend to the whole family of DC kernels. The maximum entropy interpretation can be exploited, in conjunction with results on matrix completion problems from the graphical models literature, to shed light on the structure of the DC kernel. In particular, we prove that the DC kernel admits a closed-form factorization, inverse, and determinant. These results can be exploited both to improve numerical stability and to reduce the computational complexity associated with computing the DC estimator. Comment: Extends results of the 2014 IEEE MSC Conference Proceedings (arXiv:1406.5706)
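    The closed-form determinant claim can be illustrated numerically. Assuming the common DC parametrization K[i, j] = c · lam^((i + j) / 2) · rho^|i − j| (this exact form is an assumption; the paper's parametrization may differ in details), K factors as D R D with D diagonal and R an AR(1) correlation matrix, giving det K = c^n · lam^(n(n+1)/2) · (1 − rho²)^(n−1):

    ```python
    # Numerical check of the closed-form determinant of a DC-type kernel.
    import numpy as np

    def dc_kernel(n, c, lam, rho):
        """K[i, j] = c * lam**((i + j) / 2) * rho**|i - j|, indices 1..n."""
        i = np.arange(1, n + 1)
        return (c * lam ** ((i[:, None] + i[None, :]) / 2.0)
                  * rho ** np.abs(i[:, None] - i[None, :]))

    n, c, lam, rho = 8, 1.5, 0.8, 0.4
    K = dc_kernel(n, c, lam, rho)
    closed_form = c ** n * lam ** (n * (n + 1) / 2) * (1 - rho ** 2) ** (n - 1)
    print(np.linalg.det(K), closed_form)   # the two values agree
    ```

    The same factorization hints at the closed-form inverse: R is the correlation matrix of an AR(1) process, whose precision matrix is tridiagonal.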