
    Communications-Inspired Projection Design with Application to Compressive Sensing

    We consider the recovery of an underlying signal x \in C^m from projection measurements of the form y = Mx + w, where y \in C^l and w is measurement noise; we are interested in the case l < m. The signal model p(x) is assumed known, and w ~ CN(w; 0, S_w) for known S_w. The objective is to design a projection matrix M \in C^{l x m} that maximizes key information-theoretic quantities with operational significance, including the mutual information between the signal and the projections, I(x;y), or the Rényi entropy of the projections, h_a(y) (Shannon entropy is a special case). By capitalizing on explicit characterizations of the gradients of the information measures with respect to the projection matrix, where we also partially extend the well-known results of Palomar and Verdú from the mutual information to the Rényi entropy domain, we unveil the key operations carried out by the optimal projection designs: mode exposure and mode alignment. Experiments are considered for compressive sensing (CS) applied to imagery. In this context, we demonstrate the performance improvement that the novel projection designs afford over conventional ones, and we justify a fast online projection design method with which state-of-the-art adaptive CS signal recovery is achieved.

    Comment: 25 pages, 7 figures; parts of this material appeared in IEEE ICASSP 2012; submitted to SIIM
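
    The gradient-based design described above is easiest to see in the real-valued Gaussian special case, where I(x;y) has a closed form and its gradient with respect to M is S_y^{-1} M S_x with S_y = M S_x M^T + S_w (consistent with the Palomar-Verdú identity ∇_M I = S_w^{-1} M E, E the MMSE matrix). The sketch below is our own illustration, not the paper's code: all function names are ours, and the paper's general signal models p(x) and Rényi-entropy objectives are not covered. It runs projected gradient ascent under a fixed sensing-energy constraint and compares the result with a random projection:

```python
import numpy as np

def mutual_info(M, S_x, S_w):
    """I(x; y) in nats for y = M x + w with Gaussian x ~ N(0, S_x), w ~ N(0, S_w):
    I = 0.5 * (logdet(M S_x M^T + S_w) - logdet(S_w))."""
    S_y = M @ S_x @ M.T + S_w
    return 0.5 * (np.linalg.slogdet(S_y)[1] - np.linalg.slogdet(S_w)[1])

def design_projection(S_x, S_w, l, n_iter=500, step=0.05, seed=0):
    """Projected gradient ascent on I(x; y) over M with ||M||_F held fixed.
    In the Gaussian case the gradient of I w.r.t. M is S_y^{-1} M S_x."""
    rng = np.random.default_rng(seed)
    m = S_x.shape[0]
    M = rng.standard_normal((l, m))
    M *= np.sqrt(l) / np.linalg.norm(M)          # fix the sensing energy
    for _ in range(n_iter):
        S_y = M @ S_x @ M.T + S_w
        grad = np.linalg.solve(S_y, M @ S_x)     # S_y^{-1} M S_x
        M = M + step * grad
        M *= np.sqrt(l) / np.linalg.norm(M)      # project back onto the energy ball
    return M

# Toy demo: signal covariance with fast-decaying spectrum, white noise.
m, l = 32, 8
rng = np.random.default_rng(1)
U = rng.standard_normal((m, m))
S_x = U @ np.diag(np.linspace(1.0, 0.01, m) ** 2) @ U.T / m
S_w = 0.01 * np.eye(l)
M_opt = design_projection(S_x, S_w, l)
M_rand = rng.standard_normal((l, m))
M_rand *= np.sqrt(l) / np.linalg.norm(M_rand)
print(mutual_info(M_rand, S_x, S_w), mutual_info(M_opt, S_x, S_w))
```

    The "mode exposure and mode alignment" interpretation is visible here: the ascent concentrates the rows of M on the dominant eigendirections of S_x, so the optimized design typically reports noticeably higher mutual information than the random baseline.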

    Optimal projection of observations in a Bayesian setting

    Optimal dimensionality reduction methods are proposed for Bayesian inference in a Gaussian linear model with additive noise in the presence of overabundant data. Three optimal projections of the observations are proposed, based on information theory: the projection that minimizes the Kullback-Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback-Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace, and the solution is therefore computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace that minimizes the entropy of the posterior distribution of the reduced model; that a basis of this subspace can be computed as the solution to a generalized eigenvalue problem; that an a priori error estimate on the mutual information is available for this particular solution; and that the dimensionality of the subspace needed to exactly conserve the mutual information between the input and the output of the models is no greater than the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches and to highlight their advantages over standard approaches based on principal component analysis of the observations.
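
    As a concrete illustration of the mutual-information result, consider the Gaussian linear model y = G theta + e with theta ~ N(0, S_theta) and e ~ N(0, S_e). The following is a minimal sketch under our assumption that the optimal k-dimensional basis is given by the top generalized eigenvectors of (G S_theta G^T + S_e, S_e); all names are ours, not the paper's notation. It also exhibits the claim that a subspace of dimension equal to the number of parameters conserves the mutual information exactly:

```python
import numpy as np
from scipy.linalg import eigh

def mi(theta_cov, G, noise_cov, P=None):
    """I(theta; P^T y) in nats for y = G theta + e, all quantities Gaussian."""
    S_y = G @ theta_cov @ G.T + noise_cov
    if P is not None:
        S_y, noise_cov = P.T @ S_y @ P, P.T @ noise_cov @ P
    return 0.5 * (np.linalg.slogdet(S_y)[1] - np.linalg.slogdet(noise_cov)[1])

rng = np.random.default_rng(0)
p, n, k = 3, 50, 3                     # parameters, observations, reduced dim
G = rng.standard_normal((n, p))
theta_cov = np.eye(p)
noise_cov = 0.1 * np.eye(n)

# Generalized eigenproblem S_y v = lam * noise_cov v; keep the top-k eigenvectors.
S_y = G @ theta_cov @ G.T + noise_cov
lam, V = eigh(S_y, noise_cov)          # eigenvalues in ascending order
P = V[:, -k:]                          # basis of the k-dimensional subspace

print("full:   ", mi(theta_cov, G, noise_cov))
print("reduced:", mi(theta_cov, G, noise_cov, P))   # equal when k >= p
```

    Because G S_theta G^T has rank at most p, all but p generalized eigenvalues equal one and contribute nothing to the mutual information, which is why k = p measurements of the projected model already match the full model here.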

    U-statistics and random subgraph counts: Multivariate normal approximation via exchangeable pairs and embedding

    In a recent paper by the authors, a new approach, called the "embedding method", was introduced, which allows one to make use of exchangeable pairs for normal and multivariate normal approximation with Stein's method in cases where the corresponding couplings do not satisfy a certain linearity condition. The key idea is to embed the problem into a higher-dimensional space in such a way that the linearity condition is then satisfied. Here we apply the embedding method to U-statistics as well as to subgraph counts in random graphs.
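
    The paper's contribution is the Stein's-method bound itself, which a code snippet cannot reproduce; purely as an illustration of the objects involved, the sketch below (all names ours) evaluates an order-2 U-statistic and standardized triangle counts in Erdős-Rényi random graphs, whose approximate normality is the kind of statement the embedding method quantifies:

```python
import numpy as np
from itertools import combinations

def u_statistic(sample, kernel):
    """Order-2 U-statistic: average of a symmetric kernel over all unordered pairs."""
    return np.mean([kernel(x, y) for x, y in combinations(sample, 2)])

def triangle_count(n, p, rng):
    """Number of triangles in an Erdos-Renyi G(n, p) graph via trace(A^3) / 6."""
    A = np.triu(rng.random((n, n)) < p, k=1)
    A = (A | A.T).astype(float)
    return np.trace(A @ A @ A) / 6.0

rng = np.random.default_rng(0)

# The unbiased sample variance is the U-statistic with kernel h(x, y) = (x - y)^2 / 2.
xs = rng.standard_normal(200)
print(u_statistic(xs, lambda x, y: 0.5 * (x - y) ** 2), np.var(xs, ddof=1))

# Standardized triangle counts over many graphs look approximately normal.
counts = np.array([triangle_count(40, 0.3, rng) for _ in range(500)])
z = (counts - counts.mean()) / counts.std()
print(np.mean(np.abs(z) < 1))  # close to 0.68, as for a standard normal law
```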