5 research outputs found

    An Overview of Multi-Processor Approximate Message Passing

    Approximate message passing (AMP) is an algorithmic framework for solving linear inverse problems from noisy measurements, with exciting applications such as reconstructing images, audio, hyperspectral images, and various other signals, including those acquired in compressive signal acquisition systems. The growing prevalence of big data systems has increased interest in large-scale problems, which may involve huge measurement matrices that are unsuitable for conventional computing systems. To address the challenge of large-scale processing, multiprocessor (MP) versions of AMP have been developed. We provide an overview of two such MP-AMP variants. In row-MP-AMP, each computing node stores a subset of the rows of the matrix and processes the corresponding measurements. In column-MP-AMP, each node stores a subset of the columns and is solely responsible for reconstructing a portion of the signal. We discuss the pros and cons of both approaches, summarize recent research results for each, and explain when each one may be a viable approach. Highlighted aspects include recent results on state evolution for both MP-AMP algorithms and the use of data compression to reduce communication in the MP network.
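
    To make the distributed variants concrete, here is a minimal sketch of the plain (single-node) AMP recursion that both MP variants distribute. It is an illustration under simple assumptions (NumPy, a soft-thresholding denoiser, a fixed threshold `theta`), not the exact method of the paper; in row-MP-AMP each node would own a row block of `A` and its slice of `y`, while in column-MP-AMP each node would own a column block of `A` and its slice of `x`.

    ```python
    import numpy as np

    def soft_threshold(v, theta):
        # Component-wise soft-thresholding denoiser eta(v; theta).
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def amp(y, A, n_iter=30, theta=0.1):
        # Plain AMP with a soft-thresholding denoiser.
        M, N = A.shape
        x = np.zeros(N)
        z = y.copy()
        for _ in range(n_iter):
            # Pseudo-data: current estimate plus matched-filter residual.
            x_new = soft_threshold(x + A.T @ z, theta)
            # Onsager correction keeps the effective noise Gaussian,
            # which is what makes state evolution hold.
            z = y - A @ x_new + (z / M) * np.count_nonzero(x_new)
            x = x_new
        return x

    # Tiny synthetic sparse-recovery demo.
    rng = np.random.default_rng(0)
    M, N, k = 250, 500, 25
    A = rng.standard_normal((M, N)) / np.sqrt(M)
    x_true = np.zeros(N)
    x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true + 0.01 * rng.standard_normal(M)
    print("relative error:", np.linalg.norm(amp(y, A) - x_true) / np.linalg.norm(x_true))
    ```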

    Multiprocessor Approximate Message Passing with Column-Wise Partitioning

    Solving a large-scale regularized linear inverse problem using multiple processors is important in various real-world applications due to the limitations of individual processors and constraints on data sharing policies. This paper focuses on the setting where the matrix is partitioned column-wise. We extend the algorithmic framework and the theoretical analysis of approximate message passing (AMP), an iterative algorithm for solving linear inverse problems, whose asymptotic dynamics are characterized by state evolution (SE). In particular, we show that column-wise multiprocessor AMP (C-MP-AMP) obeys an SE under the same assumptions under which the SE for AMP holds. The SE results imply that (i) the SE of C-MP-AMP converges to a state that is no worse than that of AMP and (ii) the asymptotic dynamics of C-MP-AMP and AMP can be identical. Moreover, for a setting that is not covered by SE, numerical results show that damping can improve the convergence performance of C-MP-AMP.
    Comment: This document contains complete details of the previous version (i.e., arXiv:1701.02578v1), which was accepted for publication in ICASSP 2017.
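
    As a rough illustration of the column-wise layout and of damping, the sketch below distributes the columns of `A` across notional nodes and blends each new estimate with the previous one. It is schematic and rests on assumed simplifications (NumPy, a soft-thresholding denoiser, a single fused residual per iteration, a damping factor `damp`); it is not the paper's exact C-MP-AMP inner/outer-loop schedule.

    ```python
    import numpy as np

    def soft_threshold(v, theta):
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def cmp_amp_sketch(y, A_blocks, n_iter=50, theta=0.1, damp=0.7):
        # Each "node" p owns a column block A_blocks[p] and its slice of x.
        M = y.shape[0]
        xs = [np.zeros(A.shape[1]) for A in A_blocks]
        z = y.copy()
        for _ in range(n_iter):
            # Each node denoises its own slice from the shared residual.
            xs_new = [soft_threshold(x + A.T @ z, theta)
                      for x, A in zip(xs, A_blocks)]
            # Damping: convex combination of new and old estimates,
            # which can stabilize convergence when SE does not apply.
            xs = [damp * xn + (1 - damp) * x for xn, x in zip(xs_new, xs)]
            # Fusion: the partial products A_p @ x_p sum to the global
            # residual, with an Onsager-style correction over all blocks.
            nnz = sum(np.count_nonzero(x) for x in xs)
            z = y - sum(A @ x for A, x in zip(A_blocks, xs)) + (z / M) * nnz
        return np.concatenate(xs)
    ```

    Splitting a full matrix with `np.hsplit(A, P)` yields a suitable `A_blocks` input, and `damp=1.0` recovers the undamped update.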

    On Sparse Vector Recovery Performance in Structurally Orthogonal Matrices via LASSO

    In this paper, we consider the compressed sensing problem of reconstructing a sparse signal from an undersampled set of noisy linear measurements. The regularized least squares, or least absolute shrinkage and selection operator (LASSO), formulation is used for signal estimation. The measurement matrix is assumed to be constructed by concatenating several randomly orthogonal bases, which we refer to as structurally orthogonal matrices. Such a measurement matrix is highly relevant to large-scale compressive sensing applications because it facilitates rapid computation and parallel processing. Using the replica method in statistical physics, we derive the mean-squared-error (MSE) formula of reconstruction over the structurally orthogonal matrix in the large-system regime. Extensive numerical experiments are provided to verify the analytical result. We then use the analytical result to investigate the MSE behaviors of the LASSO over the structurally orthogonal matrix, with an emphasis on performance comparisons with matrices with independent and identically distributed (i.i.d.) Gaussian entries. We find that structurally orthogonal matrices are at least as good as their i.i.d. Gaussian counterparts. Thus, the use of structurally orthogonal matrices is attractive in practical applications.
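
    For intuition about the setup, the sketch below builds a structurally orthogonal matrix by concatenating randomly drawn orthogonal bases (via QR of Gaussian matrices) and solves the LASSO with plain ISTA. The construction, the regularization weight `lam`, and the ISTA solver are illustrative assumptions for a quick experiment, not the paper's replica analysis.

    ```python
    import numpy as np

    def random_orthogonal(m, rng):
        # Random orthogonal basis from the QR factorization of a Gaussian matrix.
        q, _ = np.linalg.qr(rng.standard_normal((m, m)))
        return q

    def structurally_orthogonal(m, k_blocks, rng):
        # Concatenate k random orthogonal bases column-wise: A is m x (k*m).
        return np.hstack([random_orthogonal(m, rng) for _ in range(k_blocks)])

    def lasso_ista(y, A, lam, n_iter=500):
        # LASSO via ISTA: minimize 0.5*||y - A x||^2 + lam*||x||_1.
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            v = x + A.T @ (y - A @ x) / L   # gradient step
            x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # shrinkage
        return x

    rng = np.random.default_rng(1)
    m, k = 128, 4                        # undersampling ratio m/n = 1/4
    A = structurally_orthogonal(m, k, rng)
    x_true = np.zeros(A.shape[1])
    x_true[rng.choice(A.shape[1], 15, replace=False)] = rng.standard_normal(15)
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    print("MSE:", np.mean((lasso_ista(y, A, lam=0.05) - x_true) ** 2))
    ```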