Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
We propose computationally efficient encoders and decoders for lossy
compression using a Sparse Regression Code. The codebook is defined by a design
matrix and codewords are structured linear combinations of columns of this
matrix. The proposed encoding algorithm sequentially chooses columns of the
design matrix to successively approximate the source sequence. It is shown to
achieve the optimal distortion-rate function for i.i.d. Gaussian sources under
the squared-error distortion criterion. For a given rate, the parameters of the
design matrix can be varied to trade off distortion performance with encoding
complexity. An example of such a trade-off as a function of the block length n
is the following. With computational resource (space or time) per source sample
of O((n/\log n)^2), for a fixed distortion-level above the Gaussian
distortion-rate function, the probability of excess distortion decays
exponentially in n. The Sparse Regression Code is robust in the following
sense: for any ergodic source, the proposed encoder achieves the optimal
distortion-rate function of an i.i.d. Gaussian source with the same variance.
Simulations show that the encoder has good empirical performance, especially at
low and moderate rates.
Comment: 14 pages; to appear in IEEE Transactions on Information Theory.
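The sequential column-selection encoder described in the abstract can be sketched as follows. This is an illustrative toy in our own notation (the function name, the fixed coefficient c, and the dimensions are assumptions, not the paper's actual parameters): each of L sections contributes the column most correlated with the current residual.

```python
import numpy as np

def sparc_encode(x, A, L, M, c):
    """Greedy successive-approximation encoder (illustrative sketch).

    x : length-n source sequence
    A : n x (L*M) design matrix, viewed as L sections of M columns
    c : per-section coefficient (a design parameter, chosen ad hoc here)
    Returns one chosen column index per section and the final residual.
    """
    residual = x.copy()
    indices = []
    for l in range(L):
        section = A[:, l * M:(l + 1) * M]
        # pick the column most correlated with the current residual
        j = int(np.argmax(section.T @ residual))
        indices.append(j)
        residual = residual - c * section[:, j]
    return indices, residual

rng = np.random.default_rng(0)
n, L, M = 64, 8, 32
A = rng.standard_normal((n, L * M)) / np.sqrt(n)   # unit-norm-ish columns
x = rng.standard_normal(n)
idx, r = sparc_encode(x, A, L, M, c=0.5)
print(len(idx), float(np.mean(r**2) / np.mean(x**2)))
```

Each chosen index carries log2(M) bits, so the rate and distortion can be traded off through L, M and the coefficient schedule.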
Construction of Capacity-Achieving Lattice Codes: Polar Lattices
In this paper, we propose a new class of lattices constructed from polar
codes, namely polar lattices, to achieve the capacity \frac{1}{2}\log(1+\SNR)
of the additive white Gaussian-noise (AWGN) channel. Our construction follows
the multilevel approach of Forney \textit{et al.}, where we construct a
capacity-achieving polar code on each level. The component polar codes are
shown to be naturally nested, thereby fulfilling the requirement of the
multilevel lattice construction. We prove that polar lattices are
\emph{AWGN-good}. Furthermore, using the technique of source polarization, we
propose discrete Gaussian shaping over the polar lattice to satisfy the power
constraint. Both the construction and shaping are explicit, and the overall
complexity of encoding and decoding is for any fixed target error
probability.
Comment: full version of the paper; to appear in IEEE Transactions on Communications.
An Iterative Receiver for OFDM With Sparsity-Based Parametric Channel Estimation
In this work we design a receiver that iteratively passes soft information
between the channel estimation and data decoding stages. The receiver
incorporates sparsity-based parametric channel estimation. State-of-the-art
sparsity-based iterative receivers simplify the channel estimation problem by
restricting the multipath delays to a grid. Our receiver does not impose such a
restriction. As a result it does not suffer from the leakage effect, which
destroys sparsity. Communication at near-capacity rates at high SNR requires a
large modulation order. Due to the close proximity of modulation symbols in
such systems, the grid-based approximation is of insufficient accuracy. We show
numerically that a state-of-the-art iterative receiver with grid-based sparse
channel estimation exhibits a bit-error-rate floor in the high SNR regime. In
contrast, our receiver performs very close to the perfect channel state
information bound for all SNR values. We also demonstrate both theoretically
and numerically that parametric channel estimation works well in dense
channels, i.e., when the number of multipath components is large and each
individual component cannot be resolved.
Comment: Major revision; accepted for IEEE Transactions on Signal Processing.
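The leakage effect mentioned above can be illustrated with a toy computation (our own sketch, not the paper's model; the subcarrier count is arbitrary): a path whose delay falls between grid points spreads over many taps, while an on-grid path occupies a single tap.

```python
import numpy as np

N = 64                       # number of subcarriers (assumed)
k = np.arange(N)

def grid_taps(delay):
    # frequency response of a single path, then IDFT back to the tap domain
    H = np.exp(-2j * np.pi * k * delay / N)
    return np.fft.ifft(H)

on_grid = np.abs(grid_taps(5.0))    # delay on an integer tap
off_grid = np.abs(grid_taps(5.5))   # delay halfway between taps

# count taps holding more than 1% of the peak magnitude
sparse_on = int(np.sum(on_grid > 0.01 * on_grid.max()))
sparse_off = int(np.sum(off_grid > 0.01 * off_grid.max()))
print(sparse_on, sparse_off)   # the off-grid path occupies far more taps
```

This is exactly why a grid-restricted sparse estimator loses accuracy: the off-grid path is no longer sparse in the tap domain.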
Spatially Coupled Sparse Regression Codes for Single- and Multi-user Communications
Sparse regression codes (SPARCs) are a class of channel codes for efficient communication over the single-user additive white Gaussian noise (AWGN) channel at rates approaching the channel capacity. In a standard SPARC, codewords are sparse linear combinations of columns of an i.i.d. Gaussian design matrix, and the user message is encoded in the indices of those columns. Techniques such as power allocation and spatial coupling have been proposed to improve the performance of low-complexity iterative decoding algorithms such as approximate message passing (AMP).
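A standard SPARC codeword, as described above, can be generated in a few lines (a minimal sketch in our own notation; the dimensions and the equal power allocation are illustrative choices, not the thesis parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n, L, M = 128, 16, 64            # code length, sections, columns per section
A = rng.standard_normal((n, L * M)) / np.sqrt(n)   # i.i.d. Gaussian design

# the message picks one index in {0, ..., M-1} per section (log2(M) bits each)
msg = rng.integers(0, M, size=L)

beta = np.zeros(L * M)
for l, j in enumerate(msg):
    beta[l * M + j] = np.sqrt(n / L)   # equal power split across sections
codeword = A @ beta                    # sparse linear combination of columns

rate = L * np.log2(M) / n              # bits per channel use
print(rate)
```

Decoding then amounts to recovering the L-sparse vector beta from a noisy version of the codeword, which is where AMP enters.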
In this thesis we investigate spatially coupled SPARCs, where the design matrix has a block-wise band-diagonal structure, and modulated SPARCs, which generalise standard SPARCs by introducing modulation to the encoding of user messages. We introduce a base matrix framework which provides a unified way to construct power-allocated and spatially coupled design matrices, and propose AMP decoders for modulated SPARCs constructed using base matrices.
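A band-diagonal base matrix of the kind mentioned above can be sketched as follows (a hypothetical construction in our own notation; the parameter names Lambda and omega and the equal within-band power split are our assumptions, not the thesis's exact design):

```python
import numpy as np

def base_matrix(Lambda, omega):
    """Lambda column blocks; each column spreads its power over omega
    consecutive row blocks, giving Lambda + omega - 1 row blocks."""
    R = Lambda + omega - 1
    W = np.zeros((R, Lambda))
    for c in range(Lambda):
        W[c:c + omega, c] = 1.0 / omega   # equal power within the band
    return W

W = base_matrix(Lambda=6, omega=3)
print(W.shape)            # (8, 6)
print(W.sum(axis=0))      # each column's power sums to 1
```

Each entry of the base matrix then scales the variance of one block of the full design matrix, which is what produces the block-wise band-diagonal structure.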
We prove that phase shift keying modulated and spatially coupled SPARCs with AMP decoding asymptotically achieve the capacity of the (complex) AWGN channel. We also show via numerical simulations that they can achieve lower error rates than standard coded modulation schemes at finite code lengths. A sliding window AMP decoder is proposed for spatially coupled SPARCs that significantly reduces the decoding latency and complexity.
We then investigate coding schemes based on random linear models and AMP decoding for the multi-user Gaussian multiple access channel in the asymptotic regime where the number of users grows linearly with the code length. For a fixed target error rate and message size per user (in bits), we obtain the exact trade-off between energy-per-bit and the user density achievable in the large system limit. We show that a coding scheme based on spatially coupled Gaussian matrices and AMP decoding achieves a near-optimal trade-off for a large range of user densities. To the best of our knowledge, this is the first efficient coding scheme to do so in this multiple access regime. Moreover, the spatially coupled coding scheme has a practical interpretation: it can be viewed as block-wise time-division with overlap.
Funded by a Doctoral Training Partnership Award from the Engineering and Physical Sciences Research Council.
High-Dimensional Inference on Dense Graphs with Applications to Coding Theory and Machine Learning
We are living in the era of "Big Data", characterized by a vast volume of available data. This abundance is mainly due to continuing advances in the computational capabilities for capturing, storing, transmitting and processing data. However, it is not always the volume of data that matters, but rather the "relevant" information that resides in it.
Exactly 70 years ago, Claude Shannon, the father of information theory, was able to quantify the amount of information in a communication scenario based on a probabilistic model of the data. It turns out that Shannon's theory can be adapted to various probability-based information processing fields, ranging from coding theory to machine learning. The computation of some information theoretic quantities, such as the mutual information, can help in setting fundamental limits and devising more efficient algorithms for many inference problems.
This thesis deals with two different, yet intimately related, inference problems in the fields of coding theory and machine learning. We use Bayesian probabilistic formulations for both problems, and we analyse them in the asymptotic high-dimensional regime. The goal of our analysis is to assess the algorithmic performance on the one hand and to predict the Bayes-optimal performance on the other, using an information theoretic approach. To this end, we employ powerful analytical tools from statistical physics.
The first problem concerns a recent forward-error-correction code called the sparse superposition code. We consider the extension of this code to a large class of noisy channels by exploiting its similarity with the compressed sensing paradigm. Moreover, we show that sparse superposition codes are amenable to joint distribution matching and channel coding.
In the second problem, we study symmetric rank-one matrix factorization, a prominent model in machine learning and statistics with many applications ranging from community detection to sparse principal component analysis. We provide an explicit expression for the normalized mutual information and the minimum mean-square error of this model in the asymptotic limit. This allows us to prove the optimality of a certain iterative algorithm on a large set of parameters.
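The symmetric rank-one model behind the second problem can be illustrated with a short sketch (our own notation and parameter values, not the thesis's setup): observe a rank-one signal buried in symmetric Gaussian noise, and note that above the spectral threshold the top eigenvector already correlates with the hidden signal.

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta = 200, 2.0                           # size and signal strength (assumed)
x = rng.choice([-1.0, 1.0], size=n)           # hidden signal, e.g. community labels
Z = rng.standard_normal((n, n))
W = (Z + Z.T) / np.sqrt(2 * n)                # symmetric (Wigner) noise, edge ~ 2
Y = (theta / n) * np.outer(x, x) + W          # rank-one spike plus noise

# above the spectral threshold (theta > 1), the top eigenvector of Y
# correlates with the hidden signal
vals, vecs = np.linalg.eigh(Y)
v = vecs[:, -1]                               # eigenvector of the largest eigenvalue
overlap = abs(v @ x) / np.sqrt(n)             # normalised correlation in [0, 1]
print(overlap)
```

The thesis's results characterise the mutual information and minimum mean-square error of this kind of model exactly in the large-n limit, which in turn certifies when iterative (message-passing) estimators are optimal.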
A common feature of the two problems stems from the fact that both of them are represented on dense graphical models. Hence, similar message-passing algorithms and analysis tools can be adopted. Furthermore, spatial coupling, a new technique introduced in the context of low-density parity-check (LDPC) codes, can be applied to both problems. Spatial coupling is used in this thesis as a "construction technique" to boost the algorithmic performance and as a "proof technique" to compute some information theoretic quantities.
Moreover, both of our problems retain close connections with spin glass models studied in the statistical mechanics of disordered systems. This allows us to use sophisticated techniques developed in statistical physics. In this thesis, we use the potential function predicted by the replica method in order to prove the threshold saturation phenomenon associated with spatially coupled models. In fact, one of the main contributions of this thesis is proving that the predictions given by the "heuristic" replica method are exact. Hence, our results could be of great interest for the statistical physics community as well, as they help to set a rigorous mathematical foundation for the replica predictions.