An Iterative Receiver for OFDM With Sparsity-Based Parametric Channel Estimation
In this work we design a receiver that iteratively passes soft information
between the channel estimation and data decoding stages. The receiver
incorporates sparsity-based parametric channel estimation. State-of-the-art
sparsity-based iterative receivers simplify the channel estimation problem by
restricting the multipath delays to a grid. Our receiver does not impose such a
restriction. As a result it does not suffer from the leakage effect, which
destroys sparsity. Communication at near-capacity rates at high SNR requires a
large modulation order. Due to the close proximity of modulation symbols in
such systems, the grid-based approximation is of insufficient accuracy. We show
numerically that a state-of-the-art iterative receiver with grid-based sparse
channel estimation exhibits a bit-error-rate floor in the high-SNR regime. In
contrast, our receiver performs very close to the perfect channel state
information bound for all SNR values. We also demonstrate both theoretically
and numerically that parametric channel estimation works well in dense
channels, i.e., when the number of multipath components is large and each
individual component cannot be resolved.
Comment: Major revision, accepted for IEEE Transactions on Signal Processing
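The leakage effect can be illustrated with a short numerical sketch (our own toy example, not the paper's receiver): a multipath delay that falls between sampling-grid points spreads its energy across many grid taps, so the grid-based representation of the channel is no longer sparse.

```python
import numpy as np

# Toy illustration (assumed parameters, not from the paper): one multipath
# component with unit gain, observed over N subcarriers.
N = 64
k = np.arange(N)

def grid_taps(delay):
    """Channel taps obtained by IDFT of the frequency response
    H[k] = exp(-2j*pi*k*delay/N) of a single path with the given delay."""
    H = np.exp(-2j * np.pi * k * delay / N)
    return np.fft.ifft(H)

on_grid = grid_taps(5.0)    # delay exactly on the sampling grid
off_grid = grid_taps(5.5)   # delay between two grid points

def significant(h):
    """Number of taps holding non-negligible energy (> 1% of the peak)."""
    return int(np.sum(np.abs(h) > 0.01 * np.abs(h).max()))

print(significant(on_grid))   # a single tap: the representation is sparse
print(significant(off_grid))  # many taps: leakage destroys sparsity
```

The off-grid delay excites a Dirichlet-kernel pattern across the whole tap vector, which is exactly why grid-restricted sparse estimators lose accuracy at high modulation orders.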
One-Bit ExpanderSketch for One-Bit Compressed Sensing
Is it possible to obliviously construct a set of hyperplanes H such that you
can approximate a unit vector x when you are given the side on which the vector
lies with respect to every h in H? In the sparse recovery literature, where x
is approximately k-sparse, this problem is called one-bit compressed sensing
and has received a fair amount of attention over the last decade. In this paper we
obtain the first scheme that achieves almost optimal measurements and sublinear
decoding time for one-bit compressed sensing in the non-uniform case. For a
large range of parameters, we improve the state of the art in both the number
of measurements and the decoding time.
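A minimal sketch of the one-bit measurement model (our own illustration with a simple averaging estimator, not the paper's ExpanderSketch construction): each measurement reveals only the side of the unit vector x with respect to a random hyperplane.

```python
import numpy as np

# Illustrative parameters (assumed): dimension n, measurements m, sparsity k.
rng = np.random.default_rng(0)
n, m, k = 200, 4000, 5

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)           # one-bit measurements lose the norm of x

A = rng.standard_normal((m, n))  # rows are random hyperplane normals
y = np.sign(A @ x)               # one bit per measurement: which side of h

# A simple, non-optimal estimator: average the normals, each signed to
# point toward x's side, then keep the k largest coordinates.
z = A.T @ y / m
support = np.argsort(np.abs(z))[-k:]
x_hat = np.zeros(n)
x_hat[support] = z[support]
x_hat /= np.linalg.norm(x_hat)

print(float(x_hat @ x))          # cosine similarity with the true x
```

This decoder takes linear time in the matrix size; the point of the paper's scheme is to get comparable accuracy with near-optimal measurements and sublinear decoding time.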
Approximate Message-Passing Decoder and Capacity Achieving Sparse Superposition Codes
We study the approximate message-passing decoder for sparse superposition
coding on the additive white Gaussian noise channel and extend our preliminary
work [1]. We use heuristic statistical-physics-based tools such as the cavity
and the replica methods for the statistical analysis of the scheme. While
superposition codes asymptotically reach the Shannon capacity, we show that our
iterative decoder is limited by a phase transition similar to the one that
happens in low-density parity-check codes. We consider two solutions to this
problem, both of which allow reaching the Shannon capacity: i) a power allocation
strategy and ii) the use of spatial coupling, a novelty for these codes that
appears to be promising. In particular, we present simulations suggesting that
spatial coupling is more robust and allows for better reconstruction at finite
code lengths. Finally, we show empirically that the use of a fast
Hadamard-based operator allows for an efficient reconstruction, both in terms
of computational time and memory, and the ability to deal with very large
messages.
Comment: 40 pages, 18 figures
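The sectioned message structure of sparse superposition codes and a fast Hadamard-based operator can be sketched as follows (illustrative parameters and construction of our own; the paper's exact code design and AMP decoder are omitted):

```python
import numpy as np

# Assumed small demo sizes: L sections of size B, one nonzero per section,
# so each section carries log2(B) bits.
L, B = 8, 16
N = L * B

rng = np.random.default_rng(1)
positions = rng.integers(0, B, size=L)       # the information: L symbols
msg = np.zeros(N)
msg[np.arange(L) * B + positions] = 1.0      # exactly one nonzero per section

def fwht(v):
    """Iterative fast Walsh-Hadamard transform, O(N log N) time -- the
    reason Hadamard-based operators scale to very large messages."""
    v = v.copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v

# Codeword: a random subset of rows of the Hadamard transform of the
# randomly sign-flipped message, mimicking a random sensing operator
# without ever storing an N x N matrix.
signs = rng.choice([-1.0, 1.0], size=N)
rows = rng.choice(N, size=N // 2, replace=False)
codeword = fwht(signs * msg)[rows] / np.sqrt(N)
print(codeword.shape)                        # (64,) real channel inputs
```

Applying the operator costs O(N log N) time and O(N) memory, versus O(N^2) for an explicit Gaussian matrix, which is the efficiency gain the abstract refers to.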
Estimating Random Variables from Random Sparse Observations
Let X_1, ..., X_n be a collection of iid discrete random variables, and
Y_1, ..., Y_m a set of noisy observations of such variables. Assume each
observation Y_a to be a random function of a random subset of the X_i's,
and consider the conditional distribution of X_i given the observations, namely
\mu_i(x_i) \equiv P\{X_i = x_i | Y\} (a posteriori probability).
We establish a general relation between the distribution of \mu_i and the
fixed points of the associated density evolution operator. This relation holds
asymptotically in the large system limit, provided the average number of
variables an observation depends on is bounded. We discuss the relevance of our
result to a number of applications, ranging from sparse graph codes, to
multi-user detection, to group testing.
Comment: 22 pages, 1 eps figure, invited paper for European Transactions on
Telecommunications
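The objects the result concerns, the a posteriori marginals \mu_i, can be computed exactly by brute force on a small instance (a toy example of ours, with assumed noisy-parity observations):

```python
import itertools
import numpy as np

# Toy instance (assumed, not from the paper): binary X_i, and each Y_a is
# the parity of a small subset of the X_i's, flipped with probability eps.
n = 3                        # variables X_0, X_1, X_2, iid uniform on {0,1}
eps = 0.1                    # observation noise: flip probability
subsets = [(0, 1), (1,)]     # which X_i's each observation Y_a depends on
Y = [1, 0]                   # the observed (noisy) parities

def likelihood(x):
    """P(Y | X = x) under the noisy-parity observation model."""
    p = 1.0
    for S, y in zip(subsets, Y):
        parity = sum(x[i] for i in S) % 2
        p *= (1 - eps) if parity == y else eps
    return p

# Posterior marginal of X_0 by enumerating all 2^n configurations
# (the uniform prior cancels in the normalization).
num = np.zeros(2)
for x in itertools.product([0, 1], repeat=n):
    num[x[0]] += likelihood(x)
mu_0 = num / num.sum()
print(mu_0)                  # the random marginal mu_0 the paper studies
```

Enumeration is exponential in n; the paper's point is that in the large-system limit, with bounded-degree observations, the distribution of these marginals over random instances is characterized by the fixed points of density evolution.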
Mutual Information and Optimality of Approximate Message-Passing in Random Linear Estimation
We consider the estimation of a signal from the knowledge of its noisy linear
random Gaussian projections. A few examples where this problem is relevant are
compressed sensing, sparse superposition codes, and code division multiple
access. There have been a number of works considering the mutual information for
this problem using the replica method from statistical physics. Here we put
these considerations on a firm rigorous basis. First, we show, using a
Guerra-Toninelli type interpolation, that the replica formula yields an upper
bound to the exact mutual information. Secondly, for many relevant practical
cases, we present a converse lower bound via a method that uses spatial
coupling, state evolution analysis, and the I-MMSE theorem. This yields a
single-letter formula for the mutual information and the minimum mean-square error for
random Gaussian linear estimation of all discrete bounded signals. In addition,
we prove that the low-complexity approximate message-passing algorithm is
optimal outside of the so-called hard phase, in the sense that it
asymptotically reaches the minimum mean-square error. In this work spatial
coupling is used primarily as a proof technique. However our results also prove
two important features of spatially coupled noisy linear random Gaussian
estimation. First, there is no algorithmically hard phase, meaning that for
such systems approximate message-passing always reaches the minimum mean-square
error. Secondly, in a proper limit the mutual information associated with such
systems is the same as that of uncoupled linear random Gaussian estimation
- …
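A minimal AMP sketch for the linear estimation problem y = Ax + w (our own illustration, using a soft-thresholding denoiser rather than the Bayes-optimal denoiser analyzed in the paper):

```python
import numpy as np

# Assumed demo parameters: n unknowns, m < n Gaussian measurements,
# k-sparse Gaussian signal, small measurement noise.
rng = np.random.default_rng(2)
n, m, k, sigma = 500, 250, 25, 0.01

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + sigma * rng.standard_normal(m)

def soft(r, t):
    """Soft-thresholding denoiser."""
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

x_hat = np.zeros(n)
z = y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)   # empirical effective noise level
    r = x_hat + A.T @ z                    # pseudo-data: signal + ~Gaussian noise
    x_hat_new = soft(r, 1.5 * tau)         # threshold multiplier: a tuning choice
    # Onsager correction: (n/m) * z * average derivative of the denoiser.
    # This term is what distinguishes AMP from plain iterative thresholding
    # and what makes the state evolution analysis exact.
    onsager = (z / m) * np.count_nonzero(x_hat_new)
    z = y - A @ x_hat_new + onsager
    x_hat = x_hat_new

mse = float(np.mean((x_hat - x) ** 2))
print(mse)   # small: AMP converges well inside the easy phase
```

At this undersampling ratio and sparsity the problem sits well inside the algorithmically easy phase, so AMP converges in a few tens of iterations; the hard phase discussed in the abstract is where such iterations stall at a suboptimal fixed point.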