
    Entropy Message Passing

    The paper proposes a new message passing algorithm for cycle-free factor graphs. The proposed "entropy message passing" (EMP) algorithm may be viewed as sum-product message passing over the entropy semiring, which has previously appeared in automata theory. The primary use of EMP is to compute the entropy of a model. However, EMP can also be used to compute expressions that appear in expectation maximization and in gradient descent algorithms. Comment: 5 pages, 1 figure, to appear in IEEE Transactions on Information Theory
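
    As a quick illustration of the semiring idea (not the paper's algorithm itself), the sketch below runs a sum-product forward pass over the entropy semiring on a small three-variable Markov chain and checks the result against a brute-force entropy computation; the chain parameters and helper names are made up for the example.

        import numpy as np

        # Entropy semiring element: (p, r). Sum-product over this semiring
        # yields (1, H) for a normalized model, where H is the entropy.
        def e_add(a, b):                       # semiring "plus"
            return (a[0] + b[0], a[1] + b[1])

        def e_mul(a, b):                       # semiring "times"
            return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

        def lift(p):                           # embed a probability as a semiring weight
            return (p, -p * np.log(p) if p > 0 else 0.0)

        # Toy chain model: p(x1) * p(x2|x1) * p(x3|x2), each variable binary.
        p1 = np.array([0.6, 0.4])
        T = np.array([[0.7, 0.3],
                      [0.2, 0.8]])             # T[i, j] = p(x_next = j | x_prev = i)

        # Forward pass (sum-product) over the entropy semiring.
        msg = [lift(p1[i]) for i in range(2)]
        for _ in range(2):                     # two transitions: x1->x2, x2->x3
            new = []
            for j in range(2):
                acc = (0.0, 0.0)
                for i in range(2):
                    acc = e_add(acc, e_mul(msg[i], lift(T[i, j])))
                new.append(acc)
            msg = new

        Z, H = (0.0, 0.0)
        for m in msg:
            Z, H = e_add((Z, H), m)
        print(Z, H)          # Z ~ 1.0, H = entropy of the joint chain distribution

        # Brute-force check
        H_bf = 0.0
        for x1 in range(2):
            for x2 in range(2):
                for x3 in range(2):
                    p = p1[x1] * T[x1, x2] * T[x2, x3]
                    H_bf -= p * np.log(p)
        print(H_bf)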

    Distributed Maximum Likelihood for Simultaneous Self-localization and Tracking in Sensor Networks

    We show that the sensor self-localization problem can be cast as a static parameter estimation problem for Hidden Markov Models, and we implement fully decentralized versions of the Recursive Maximum Likelihood and online Expectation-Maximization algorithms to localize the sensor network simultaneously with target tracking. For linear Gaussian models, our algorithms can be implemented exactly using a distributed version of the Kalman filter and a novel message passing algorithm. The latter allows each node to compute the local derivatives of the likelihood or the sufficient statistics needed for Expectation-Maximization. In the non-linear case, a solution based on local linearization in the spirit of the Extended Kalman Filter is proposed. In numerical examples we demonstrate that the developed algorithms are able to learn the localization parameters. Comment: a shorter version is to appear in IEEE Transactions on Signal Processing; 22 pages, 15 figures
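
    A heavily simplified, single-sensor sketch of the Recursive Maximum Likelihood idea for a linear Gaussian model: a Kalman filter tracks the target while a stochastic-gradient step updates an unknown sensor offset theta using the exact filter sensitivities. The scalar model, constant step size, and parameter values are assumptions made for illustration; the paper's algorithms are distributed and use decreasing step sizes.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy 1-D linear Gaussian model:
        #   x_t = a * x_{t-1} + w_t,   y_t = x_t + theta + v_t,
        # where theta is an unknown sensor offset standing in for a localization parameter.
        a, q, r, theta_true = 0.5, 1.0, 0.5, 2.0

        T, x, ys = 5000, 0.0, []
        for _ in range(T):
            x = a * x + rng.normal(0.0, np.sqrt(q))
            ys.append(x + theta_true + rng.normal(0.0, np.sqrt(r)))

        # Recursive ML: Kalman filter plus the sensitivity of the filtered mean w.r.t. theta,
        # giving the per-step score used for a stochastic gradient ascent update.
        theta, x_f, P, dx_f, step = 0.0, 0.0, 1.0, 0.0, 0.1
        for y in ys:
            x_p, P_p = a * x_f, a * a * P + q          # prediction
            dx_p = a * dx_f
            e, S = y - x_p - theta, P_p + r            # innovation and its variance
            K = P_p / S
            grad = (dx_p + 1.0) * e / S                # d/dtheta of the per-step log-likelihood
            theta += step * grad                       # constant step size for simplicity
            x_f, P = x_p + K * e, (1.0 - K) * P_p      # measurement update
            dx_f = dx_p + K * (-dx_p - 1.0)            # propagate d x_f / d theta

        print(theta)   # should settle near theta_true = 2.0, up to gradient noise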

    Message-Passing Algorithms for Channel Estimation and Decoding Using Approximate Inference

    We design iterative receiver schemes for a generic wireless communication system by treating channel estimation and information decoding as an inference problem in graphical models. We introduce a recently proposed inference framework that combines belief propagation (BP) and the mean field (MF) approximation and includes these algorithms as special cases. We also show that the expectation propagation and expectation maximization algorithms can be embedded in the BP-MF framework with slight modifications. By applying the considered inference algorithms to our probabilistic model, we derive four different message-passing receiver schemes. Our numerical evaluation demonstrates that the receiver based on the BP-MF framework and its variant based on BP-EM yield the best compromise between performance, computational complexity and numerical stability among all candidate algorithms. Comment: Accepted for publication in the Proceedings of the 2012 IEEE International Symposium on Information Theory
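
    The toy sketch below is only a caricature of the joint channel-estimation/detection viewpoint: a naive mean-field scheme on y_n = h*x_n + noise alternates a Gaussian belief on the channel gain with independent soft symbol beliefs, with one pilot symbol breaking the sign ambiguity. It is not the BP-MF receiver derived in the paper; the model, parameter values, and variable names are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy joint channel estimation / detection:
        #   y_n = h * x_n + v_n,  x_n in {+1,-1},  h ~ N(0, sigma_h2),  v_n ~ N(0, sigma2)
        N, sigma2, sigma_h2 = 64, 0.2, 1.0
        h_true = 0.9                                     # fixed here; nominally drawn from the prior
        x_true = rng.choice([-1.0, 1.0], size=N)
        x_true[0] = 1.0                                  # known pilot symbol
        y = h_true * x_true + rng.normal(0.0, np.sqrt(sigma2), N)

        m_h, v_h = 0.0, sigma_h2                         # Gaussian belief on the channel gain
        s = np.zeros(N); s[0] = 1.0                      # soft symbol means E[x_n]
        for _ in range(20):
            # mean-field update of q(h): Gaussian, using E[x_n] and E[x_n^2] = 1
            prec = 1.0 / sigma_h2 + N / sigma2
            m_h, v_h = (y @ s / sigma2) / prec, 1.0 / prec
            # mean-field update of q(x_n): tanh soft decisions given the channel belief
            s = np.tanh(y * m_h / sigma2)
            s[0] = 1.0                                   # clamp the pilot

        print(h_true, m_h)                               # channel estimate close to h_true
        print(np.mean(np.sign(s) == x_true))             # fraction of correctly detected symbols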

    Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem

    In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities referred to as the Rectified Gaussian Scale Mixture (R-GSM) to model the sparsity-enforcing prior distribution for the solution. The R-GSM prior encompasses a variety of heavy-tailed densities such as the rectified Laplacian and rectified Student-t distributions with a proper choice of the mixing density. We utilize the hierarchical representation induced by the R-GSM prior and develop an evidence maximization framework based on the Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate the hyper-parameters and obtain a point estimate for the solution. We refer to the proposed method as rectified sparse Bayesian learning (R-SBL). We provide four R-SBL variants that offer a range of options for computational complexity and the quality of the E-step computation. These methods include the Markov chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate message passing and a diagonal approximation. Using numerical experiments, we show that the proposed R-SBL method outperforms existing S-NNLS solvers in terms of both signal and support recovery performance, and is also very robust against the structure of the design matrix. Comment: Under review by IEEE Transactions on Signal Processing
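
    A minimal generative sketch of the hierarchical R-GSM representation and a toy S-NNLS instance, not the R-SBL algorithm itself: mixing variables gamma_i are drawn from a mixing density (an exponential is assumed here, giving a rectified-Laplacian-like marginal), the non-negative coefficients are rectified Gaussians given gamma_i, and measurements follow y = A x + noise. All sizes and parameter choices are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def r_gsm_sample(n, scale=1.0):
            # Hierarchical draw: gamma ~ mixing density, x | gamma ~ rectified Gaussian.
            # The exponential mixing density is an assumed example; the paper covers a
            # family of mixing densities (rectified Laplacian, rectified Student-t, ...).
            gamma = rng.exponential(scale, size=n)
            x = np.abs(rng.normal(0.0, np.sqrt(gamma)))
            return x, gamma

        n, m, k = 100, 40, 8
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support], _ = r_gsm_sample(k)                     # sparse non-negative signal
        A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # random design matrix
        y = A @ x + rng.normal(0.0, 0.01, size=m)           # noisy measurements

        print(x[support])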

    Efficient High-Dimensional Inference in the Multiple Measurement Vector Problem

    In this work, a Bayesian approximate message passing algorithm is proposed for solving the multiple measurement vector (MMV) problem in compressive sensing, in which a collection of sparse signal vectors that share a common support are recovered from undersampled noisy measurements. The algorithm, AMP-MMV, is capable of exploiting temporal correlations in the amplitudes of non-zero coefficients, and provides soft estimates of the signal vectors as well as the underlying support. Central to the proposed approach is an extension of recently developed approximate message passing techniques to the amplitude-correlated MMV setting. Aided by these techniques, AMP-MMV offers a computational complexity that is linear in all problem dimensions. In order to allow for automatic parameter tuning, an expectation-maximization algorithm that complements AMP-MMV is described. Finally, a detailed numerical study demonstrates the power of the proposed approach and its particular suitability for application to high-dimensional problems. Comment: 28 pages, 9 figures
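
    For context, the sketch below implements plain approximate message passing with a soft-threshold denoiser on a single measurement vector; AMP-MMV extends this kind of iteration by coupling the per-vector messages through the shared support and a temporal model on the amplitudes. The threshold rule, dimensions, and parameter values are assumptions for illustration, not the paper's algorithm.

        import numpy as np

        rng = np.random.default_rng(0)

        def soft(v, t):
            # Element-wise soft-thresholding denoiser.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def amp(A, y, alpha=1.5, iters=30):
            # Plain AMP for a single sparse vector with a soft-threshold denoiser.
            m, n = A.shape
            x, z = np.zeros(n), y.copy()
            for _ in range(iters):
                r = x + A.T @ z                        # pseudo-data with roughly Gaussian noise
                theta = alpha * np.sqrt(np.mean(z ** 2))
                x = soft(r, theta)
                b = np.count_nonzero(x) / m            # Onsager correction term
                z = y - A @ x + b * z
            return x

        m, n, k = 250, 500, 15
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
        A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
        y = A @ x_true + rng.normal(0.0, 0.01, m)

        x_hat = amp(A, y)
        print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # should be small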

    Expectation-maximization Gaussian-mixture approximate message passing

    When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were a priori known, then one could use computationally efficient approximate message passing (AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, however, the distribution is unknown, motivating the use of robust algorithms like LASSO (which is nearly minimax optimal) at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal, according to the learned distribution, using AMP. In particular, we model the non-zero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments on a wide range of signal classes confirm the state-of-the-art performance of our approach, in both reconstruction error and runtime, in the high-dimensional regime, for most (but not all) sensing operators. Index Terms: compressed sensing, belief propagation, expectation maximization algorithms, Gaussian mixture model.
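
    The scalar nonlinearity at the heart of this approach is the MMSE denoiser for a Bernoulli-Gaussian-mixture prior under an AWGN pseudo-data model r = x + N(0, sigma2); a minimal sketch is below. In the paper the sparsity rate and the mixture weights, means, and variances are learned by EM from the AMP outputs, whereas here they are simply given, and the function name and example values are made up for illustration.

        import numpy as np

        def gm_denoise(r, sigma2, lam, omega, theta, phi):
            # MMSE estimate E[x | r] for the prior
            #   x ~ (1 - lam) * delta_0 + lam * sum_k omega_k * N(theta_k, phi_k)
            # observed through r = x + N(0, sigma2).
            r = np.atleast_1d(r)[:, None]                 # shape (N, 1)
            var = phi + sigma2                            # per-component evidence variance
            lik_k = omega * np.exp(-(r - theta) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            lik_0 = np.exp(-r ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
            mean_k = (phi * r + sigma2 * theta) / var     # posterior mean given component k
            num = lam * np.sum(lik_k * mean_k, axis=1)
            den = (1 - lam) * lik_0[:, 0] + lam * np.sum(lik_k, axis=1)
            return num / den

        # Example: 10% nonzeros drawn from a two-component mixture (values are arbitrary).
        r = np.array([-3.0, -0.1, 0.05, 2.5])
        print(gm_denoise(r, sigma2=0.1, lam=0.1,
                         omega=np.array([0.5, 0.5]),
                         theta=np.array([-2.0, 2.0]),
                         phi=np.array([0.5, 0.5])))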