Analysis of Approximate Message Passing with a Class of Non-Separable Denoisers
Approximate message passing (AMP) is a class of efficient algorithms for
solving high-dimensional linear regression tasks where one wishes to recover an
unknown signal β₀ from noisy linear measurements y = A β₀ + w. When
applying a separable denoiser at each iteration, the performance of AMP (for
example, the mean squared error of its estimates) can be accurately tracked by
a simple, scalar iteration referred to as state evolution. Although separable
denoisers are sufficient if the unknown signal has independent and identically
distributed entries, in many real-world applications, like image or audio
signal reconstruction, the unknown signal contains dependencies between
entries. In these cases, a coordinate-wise independence structure is not a good
approximation to the true prior of the unknown signal. In this paper we assume
the unknown signal has dependent entries, and using a class of non-separable
sliding-window denoisers, we prove that a new form of state evolution still
accurately predicts AMP performance. This is an early step in understanding the
role of non-separable denoisers within AMP, and will lead to a characterization
of more general denoisers in problems including compressive image
reconstruction.

Comment: 37 pages, 1 figure. A shorter version of this paper to appear in the proceedings of ISIT 201
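The AMP recursion described above can be sketched as follows. This is a minimal separable-denoiser instance using soft thresholding (not the sliding-window denoiser analyzed in the paper); the threshold rule, parameters, and problem sizes are illustrative assumptions, and the empirical effective-noise estimate `tau` plays the role that state evolution tracks analytically.

```python
import numpy as np

def soft_threshold(u, t):
    """Separable (coordinate-wise) soft-thresholding denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, iters=30, alpha=2.0):
    """AMP for y = A @ beta0 + w with an i.i.d. Gaussian A (illustrative sketch)."""
    m, N = A.shape
    beta = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        tau = np.sqrt(np.mean(z ** 2))      # empirical effective noise level
        r = beta + A.T @ z                  # pseudo-data: beta0 plus ~Gaussian noise
        beta_new = soft_threshold(r, alpha * tau)
        # Onsager correction: divergence of soft threshold = (#nonzeros) / m
        b = np.count_nonzero(beta_new) / m
        z = y - A @ beta_new + b * z
        beta = beta_new
    return beta

# Demo on a synthetic sparse instance (made-up sizes).
rng = np.random.default_rng(0)
m, N, k, sigma = 250, 500, 25, 0.01
beta0 = np.zeros(N)
beta0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, N))
y = A @ beta0 + sigma * rng.normal(size=m)
beta_hat = amp(y, A)
mse = np.mean((beta_hat - beta0) ** 2)
```

The Onsager term `b * z` is what makes the pseudo-data `r` behave like the true signal plus i.i.d. Gaussian noise, which is the property that lets a scalar state-evolution recursion predict the per-iteration MSE.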
The Error Probability of Sparse Superposition Codes with Approximate Message Passing Decoding
Sparse superposition codes, or sparse regression codes (SPARCs), are a recent class of codes for reliable communication over the AWGN channel at rates approaching the channel capacity. Approximate
message passing (AMP) decoding, a computationally efficient technique for decoding SPARCs, has been proven to be asymptotically capacity-achieving for the AWGN channel. In this paper, we refine the asymptotic result by deriving a large deviations bound on the probability of AMP decoding error. This bound gives insight into the error performance of the AMP decoder for large but finite problem sizes, giving an error exponent as well as guidance on how the code parameters should be chosen at finite block lengths. For an appropriate choice of code parameters, we show that for any fixed rate less than the channel capacity, the decoding error probability decays exponentially in n/(log n)^{2T}, where T, the number of AMP iterations required for successful decoding, is bounded in terms of the gap from capacity.
Capacity-achieving Sparse Regression Codes via approximate message passing decoding
Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the AWGN channel at rates approaching the channel capacity. In these codes, the codewords are sparse linear combinations of columns of a design matrix. In this paper, we propose an approximate message passing decoder for sparse superposition codes. The complexity of the decoder scales linearly with the size of the design matrix. The performance of the decoder is rigorously analyzed and it is shown to asymptotically achieve the AWGN capacity. We also provide simulation results to demonstrate the performance of the decoder at finite block lengths, and introduce a power allocation that significantly improves the empirical performance.

RV would like to acknowledge support from a Marie Curie Career Integration Grant (GA Number 631489). AG is supported by an EPSRC Doctoral Training Award. This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1109/ISIT.2015.728280
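The codeword structure described above (one column selected per section of the design matrix) can be sketched as follows. This is a toy encoder with a flat power allocation and made-up parameters n, L, M; the paper's improved power allocation is non-flat, and the sizes here are illustrative only.

```python
import numpy as np

def sparc_encode(A, msg, L, M, n, P):
    """Toy SPARC encoder: one nonzero per section, flat power allocation."""
    beta = np.zeros(L * M)
    c = np.sqrt(n * P / L)            # each section carries power P / L
    for l, idx in enumerate(msg):     # msg[l] in {0, ..., M-1}
        beta[l * M + idx] = c
    return A @ beta, beta

rng = np.random.default_rng(1)
n, L, M, P = 256, 16, 32, 1.0         # rate R = L * log2(M) / n bits/channel use
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))
msg = rng.integers(0, M, size=L)      # the message: one column index per section
x, beta = sparc_encode(A, msg, L, M, n, P)
```

Decoding amounts to recovering the L nonzero locations of beta from y = x + w, which is exactly the high-dimensional linear regression problem AMP is designed for.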
Spatially Coupled Sparse Regression Codes: Design and State Evolution Analysis
We consider the design and analysis of spatially coupled sparse regression
codes (SC-SPARCs), which were recently introduced by Barbier et al. for
efficient communication over the additive white Gaussian noise channel.
SC-SPARCs can be efficiently decoded using an Approximate Message Passing (AMP)
decoder, whose performance in each iteration can be predicted via a set of
equations called state evolution. In this paper, we give an asymptotic
characterization of the state evolution equations for SC-SPARCs. For any given
base matrix (that defines the coupling structure of the SC-SPARC) and rate,
this characterization can be used to predict whether or not AMP decoding will
succeed in the large system limit. We then consider a simple base matrix
defined by two parameters (ω, Λ), and show that AMP decoding
succeeds in the large system limit for all rates R < C, where C is the AWGN channel capacity. The
asymptotic result also indicates how the parameters of the base matrix affect
the decoding progression. Simulation results are presented to evaluate the
performance of SC-SPARCs defined with the proposed base matrix.

Comment: 8 pages, 6 figures. A shorter version of this paper to appear in ISIT 201
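A band-diagonal base matrix of the kind described above can be sketched as follows. This is an illustrative construction with a coupling width omega and Lambda column blocks, normalized so that each column carries the same total as in the uncoupled (all-ones) case; the exact normalization and parameter conventions in the paper may differ.

```python
import numpy as np

def base_matrix(omega, Lambda):
    """Band-diagonal coupling sketch: column block c couples to row blocks
    c, ..., c + omega - 1, giving R = Lambda + omega - 1 row blocks."""
    R, C = Lambda + omega - 1, Lambda
    W = np.zeros((R, C))
    for c in range(C):
        W[c:c + omega, c] = R / omega   # omega equal nonzeros per column
    return W

W = base_matrix(3, 8)   # illustrative values of (omega, Lambda)
```

Each entry of W scales the power of one block of the SPARC design matrix, so the band structure is what creates the decoding wave that propagates inward from the low-rate edge blocks.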
Near-Optimal Coding for Many-user Multiple Access Channels
This paper considers the Gaussian multiple-access channel (MAC) in the
asymptotic regime where the number of users grows linearly with the code
length. We propose efficient coding schemes based on random linear models with
approximate message passing (AMP) decoding and derive the asymptotic error rate
achieved for a given user density, user payload (in bits), and user energy. The
tradeoff between energy-per-bit and achievable user density (for a fixed user
payload and target error rate) is studied, and it is demonstrated that in the
large system limit, a spatially coupled coding scheme with AMP decoding
achieves near-optimal tradeoffs for a wide range of user densities.
Furthermore, in the regime where the user payload is large, we also study the
spectral efficiency versus energy-per-bit tradeoff and discuss methods to
reduce decoding complexity at large payload sizes.

Comment: 35 pages, 4 figures. A shorter version of this paper appeared in ISIT 202
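The random linear model underlying the coding scheme above can be sketched as follows. This assumes one simple instance in which each user encodes its payload by picking one column of its own Gaussian codebook and the channel superposes all users; the paper's actual construction (and its spatially coupled variant) may differ, and all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K, B = 400, 30, 4        # code length, number of users, payload in bits
E = 2.0                     # per-user energy, so energy-per-bit is E / B

# Each user k has an n x 2^B codebook with columns of expected energy E.
codebooks = rng.normal(0.0, 1.0, size=(K, n, 2 ** B)) * np.sqrt(E / n)
payloads = rng.integers(0, 2 ** B, size=K)   # each user's B-bit message

# Channel: superposition of all users' codewords plus unit-variance AWGN.
x = sum(codebooks[k][:, payloads[k]] for k in range(K))
y = x + rng.normal(0.0, 1.0, size=n)
```

Stacking the K codebooks side by side turns joint decoding into a single linear regression y = A beta + w with a section-sparse beta, which is why AMP applies and why the user density K/n is the natural scaling parameter.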