
    Fixed Points of Generalized Approximate Message Passing with Arbitrary Matrices

    The estimation of a random vector with independent components, passed through a linear transform followed by a componentwise (possibly nonlinear) output map, arises in a range of applications. Approximate message passing (AMP) methods, based on Gaussian approximations of loopy belief propagation, have recently attracted considerable attention for such problems. For large random transforms, these methods exhibit fast convergence and admit precise analytic characterizations with testable conditions for optimality, even for certain non-convex problem instances. However, the behavior of AMP under general transforms is not fully understood. In this paper, we consider the generalized AMP (GAMP) algorithm and relate the method to more common optimization techniques. This analysis enables a precise characterization of the GAMP fixed points that applies to arbitrary transforms. In particular, we show that the fixed points of the so-called max-sum GAMP algorithm for MAP estimation are critical points of a constrained maximization of the posterior density. The fixed points of the sum-product GAMP algorithm for estimation of the posterior marginals can be interpreted as critical points of a certain mean-field variational optimization.
    Index Terms: Belief propagation, ADMM, variational optimization, message passing
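The fixed-point behavior described above can be illustrated with the simplest member of this algorithm family: the basic AMP recursion for sparse linear regression with a soft-threshold denoiser. The sketch below is a minimal illustration under assumed i.i.d. Gaussian measurements, not the GAMP variants analyzed in the paper; the threshold rule `lam + tau` and the function names are illustrative choices.

```python
import numpy as np

def soft_threshold(r, t):
    """Componentwise soft threshold: the MAP denoiser for a Laplace prior."""
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

def amp_lasso(A, y, lam=0.01, n_iter=100):
    """Basic AMP sketch for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(m)        # effective noise level
        x = soft_threshold(x + A.T @ z, lam + tau)  # denoise the pseudo-data
        b = np.count_nonzero(x) / m                 # average denoiser derivative
        z = y - A @ x + b * z                       # residual with Onsager term
    return x
```

At a fixed point, the update collapses to the stationarity condition of the LASSO objective, which is the flavor of critical-point characterization the paper extends to arbitrary matrices and general penalties.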

    Precoding via Approximate Message Passing with Instantaneous Signal Constraints

    This paper proposes a low-complexity precoding algorithm based on the recently proposed Generalized Least Square Error (GLSE) scheme with generic penalty and support. The algorithm iteratively constructs the transmit vector via Approximate Message Passing (AMP). Using the asymptotic decoupling property of GLSE precoders, we derive closed-form fixed-point equations to tune the parameters in the proposed algorithm for a general set of instantaneous signal constraints. The tuning strategy is then utilized to construct transmit vectors with restricted peak-to-average power ratios and to efficiently select a subset of transmit antennas. The numerical investigations show that the proposed algorithm tracks the large-system performance of GLSE precoders even for a moderate number of antennas.
    Comment: 2018 International Zurich Seminar on Information and Communication (IZS); 5 pages, 2 figures
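The peak-to-average power ratio (PAPR) constraint mentioned above can be made concrete with a short sketch. Phase-preserving clipping to a peak cap is a standard simple heuristic, shown here only as a stand-in; it is not the GLSE/AMP precoder of the paper, and a single clipping pass only approximately meets the target PAPR because it also lowers the average power.

```python
import numpy as np

def papr(x):
    """Peak-to-average power ratio of a (complex) transmit vector."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

def clip_to_papr(x, max_papr):
    """One-shot clip toward a target PAPR, preserving each entry's phase."""
    cap = np.sqrt(max_papr * np.mean(np.abs(x) ** 2))  # peak-amplitude cap
    mag = np.minimum(np.abs(x), cap)                   # clip magnitudes only
    return mag * np.exp(1j * np.angle(x))
```

Because clipping also reduces the mean power, one pass leaves the PAPR slightly above the target; practical schemes iterate or, as in the paper, enforce the constraint inside the precoder itself.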

    Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing

    For the problem of binary linear classification and feature selection, we propose algorithmic approaches to classifier design based on the generalized approximate message passing (GAMP) algorithm, recently proposed in the context of compressive sensing. We are particularly motivated by problems where the number of features greatly exceeds the number of training examples, but where only a few features suffice for accurate classification. We show that sum-product GAMP can be used to (approximately) minimize the classification error rate and max-sum GAMP can be used to minimize a wide variety of regularized loss functions. Furthermore, we describe an expectation-maximization (EM)-based scheme to learn the associated model parameters online, as an alternative to cross-validation, and we show that GAMP's state-evolution framework can be used to accurately predict the misclassification rate. Finally, we present a detailed numerical study to confirm the accuracy, speed, and flexibility afforded by our GAMP-based approaches to binary linear classification and feature selection.
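The kind of objective max-sum GAMP targets here can be grounded with its simplest instance, L1-regularized logistic loss, whose L1 penalty performs the feature selection. The sketch below minimizes it by proximal gradient descent (ISTA), a deliberately simpler stand-in for GAMP; the parameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(u):
    """Logistic function; the tanh form avoids overflow for large |u|."""
    return 0.5 * (1.0 + np.tanh(0.5 * u))

def l1_logistic(A, y, lam=0.05, step=0.1, n_iter=500):
    """Minimize sum_i log(1 + exp(-y_i * a_i.w)) + lam*||w||_1 via ISTA."""
    m, n = A.shape
    w = np.zeros(n)
    for _ in range(n_iter):
        margins = y * (A @ w)
        # gradient of the logistic loss; sigmoid(-margin) weights each example
        grad = -A.T @ (y * sigmoid(-margins))
        w = w - step * grad
        # soft threshold = proximal step for the feature-selecting L1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w
```

Max-sum GAMP attacks the same objective through quadratic message approximations rather than a global gradient step, which is what makes it attractive when the feature count far exceeds the number of training examples.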