
    Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding

    We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance against encoding complexity. One example of this trade-off, as a function of the block length n, is the following: with computational resource (space or time) per source sample of O((n/\log n)^2), for a fixed distortion level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in n. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates.
    Comment: 14 pages; to appear in IEEE Transactions on Information Theory.
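    As an illustration of the successive column-selection idea described in this abstract, here is a minimal Python sketch. It is not the paper's exact algorithm: the function name srlc_encode, the flat coefficient schedule, and the parameter values are illustrative placeholders (the paper derives a specific, geometrically decaying schedule for the section coefficients).

```python
import numpy as np

def srlc_encode(x, A, L, M, coeffs):
    """Greedy successive encoder: pick one column per section of A.

    A      : n x (L*M) design matrix; section i is columns [i*M, (i+1)*M)
    coeffs : length-L positive weights for the chosen columns (placeholder
             here; the paper prescribes a geometrically decaying schedule)
    """
    r = np.asarray(x, dtype=float).copy()
    chosen = []
    for i in range(L):
        sec = A[:, i * M:(i + 1) * M]
        j = int(np.argmax(sec.T @ r))   # column most aligned with the residual
        chosen.append(i * M + j)
        r -= coeffs[i] * sec[:, j]      # peel off its contribution
    return chosen, r                    # indices define the codeword; r is the error

# Toy usage on an i.i.d. Gaussian source; the rate is R = L*log(M)/n nats.
rng = np.random.default_rng(0)
n, L, M = 128, 16, 64
A = rng.standard_normal((n, L * M)) / np.sqrt(n)      # ~unit-norm columns
x = rng.standard_normal(n)
coeffs = np.full(L, np.linalg.norm(x) / np.sqrt(L))   # flat placeholder schedule
idx, resid = srlc_encode(x, A, L, M, coeffs)
print(np.mean(resid ** 2))                            # empirical distortion
```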

    Construction of Capacity-Achieving Lattice Codes: Polar Lattices

    In this paper, we propose a new class of lattices constructed from polar codes, namely polar lattices, to achieve the capacity \frac{1}{2}\log(1+\mathrm{SNR}) of the additive white Gaussian noise (AWGN) channel. Our construction follows the multilevel approach of Forney et al., where we construct a capacity-achieving polar code on each level. The component polar codes are shown to be naturally nested, thereby fulfilling the requirement of the multilevel lattice construction. We prove that polar lattices are AWGN-good. Furthermore, using the technique of source polarization, we propose discrete Gaussian shaping over the polar lattice to satisfy the power constraint. Both the construction and the shaping are explicit, and the overall complexity of encoding and decoding is O(N\log N) for any fixed target error probability.
    Comment: Full version of the paper to appear in IEEE Transactions on Communications.
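    To make the multilevel idea concrete, the following is a minimal sketch of how per-level binary codewords combine into a Construction-D-style lattice point. It is an assumption-laden toy: the paper's component codes are nested polar codes, whereas here the codewords are arbitrary placeholders, and shaping is omitted.

```python
import numpy as np

def multilevel_point(level_codewords, z):
    """Combine per-level binary codewords into a lattice point.

    level_codewords : list of length-n 0/1 arrays, one per level; level l
                      selects a coset of the partition 2^l Z / 2^(l+1) Z
                      (the paper uses nested polar codewords here)
    z               : length-n integer vector picking the coset of 2^L Z^n
    """
    L = len(level_codewords)
    x = (2 ** L) * np.asarray(z, dtype=int)
    for l, c in enumerate(level_codewords):
        x += (2 ** l) * np.asarray(c, dtype=int)  # weight-2^l contribution of level l
    return x

# Toy usage: two levels, random stand-in "codewords" of length 8.
rng = np.random.default_rng(1)
n, L = 8, 2
cws = [rng.integers(0, 2, n) for _ in range(L)]
print(multilevel_point(cws, z=np.zeros(n, dtype=int)))
```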

    An Iterative Receiver for OFDM With Sparsity-Based Parametric Channel Estimation

    In this work, we design a receiver that iteratively passes soft information between the channel estimation and data decoding stages. The receiver incorporates sparsity-based parametric channel estimation. State-of-the-art sparsity-based iterative receivers simplify the channel estimation problem by restricting the multipath delays to a grid. Our receiver does not impose such a restriction and therefore does not suffer from the leakage effect, which destroys sparsity. Communication at near-capacity rates at high SNR requires a large modulation order. Due to the close proximity of modulation symbols in such systems, the grid-based approximation is insufficiently accurate. We show numerically that a state-of-the-art iterative receiver with grid-based sparse channel estimation exhibits a bit-error-rate floor in the high-SNR regime. In contrast, our receiver performs very close to the perfect-channel-state-information bound at all SNR values. We also demonstrate, both theoretically and numerically, that parametric channel estimation works well in dense channels, i.e., when the number of multipath components is large and individual components cannot be resolved.
    Comment: Major revision; accepted for IEEE Transactions on Signal Processing.
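    The leakage effect mentioned in this abstract is easy to reproduce. Below is a minimal sketch (the formula and parameter names are our own, not the paper's): the exact frequency response of a single path with an off-grid delay is computed, and transforming it back to time-domain taps shows the energy smeared across many taps, so the grid-based representation is no longer sparse.

```python
import numpy as np

def freq_response(gains, delays, N, Ts):
    """Frequency response of a sparse multipath channel at N subcarriers.

    H[k] = sum_p gains[p] * exp(-j*2*pi*k*delays[p] / (N*Ts)),
    with the delays unrestricted, i.e. not confined to multiples of Ts.
    """
    k = np.arange(N)
    return sum(g * np.exp(-2j * np.pi * k * tau / (N * Ts))
               for g, tau in zip(gains, delays))

# One path whose delay falls between sampling instants (3.4 * Ts);
# an on-grid delay (e.g. 3.0 * Ts) would yield exactly one nonzero tap.
N, Ts = 64, 1.0
H = freq_response([1.0], [3.4 * Ts], N, Ts)

# Grid-based view: equivalent time-domain taps via the inverse DFT.
taps = np.fft.ifft(H)
print(np.sum(np.abs(taps) > 0.05))  # many significant taps -> leakage
```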

    High-Dimensional Inference on Dense Graphs with Applications to Coding Theory and Machine Learning

    We are living in the era of "Big Data", an era characterized by a voluminous amount of available data. This abundance is mainly due to continuing advances in the computational capabilities for capturing, storing, transmitting and processing data. However, it is not always the volume of data that matters, but rather the "relevant" information that resides in it. Exactly 70 years ago, Claude Shannon, the father of information theory, was able to quantify the amount of information in a communication scenario based on a probabilistic model of the data. It turns out that Shannon's theory can be adapted to various probability-based information processing fields, ranging from coding theory to machine learning. The computation of certain information theoretic quantities, such as the mutual information, can help in setting fundamental limits and devising more efficient algorithms for many inference problems.

    This thesis deals with two different, yet intimately related, inference problems in the fields of coding theory and machine learning. We use Bayesian probabilistic formulations for both problems, and we analyse them in the asymptotic high-dimensional regime. The goal of our analysis is to assess the algorithmic performance on the one hand, and to predict the Bayes-optimal performance on the other, using an information theoretic approach. To this end, we employ powerful analytical tools from statistical physics.

    The first problem concerns a recent forward-error-correcting code called the sparse superposition code. We consider the extension of this code to a large class of noisy channels by exploiting its similarity with the compressed sensing paradigm. Moreover, we show that sparse superposition codes are amenable to joint distribution matching and channel coding. In the second problem, we study symmetric rank-one matrix factorization, a prominent model in machine learning and statistics with applications ranging from community detection to sparse principal component analysis. We provide an explicit expression for the normalized mutual information and the minimum mean-square error of this model in the asymptotic limit, which allows us to prove the optimality of a certain iterative algorithm over a large set of parameters.

    A common feature of the two problems stems from the fact that both are represented by dense graphical models, so similar message-passing algorithms and analysis tools can be adopted. Furthermore, spatial coupling, a technique introduced in the context of low-density parity-check (LDPC) codes, can be applied to both problems. Spatial coupling is used in this thesis as a "construction technique" to boost the algorithmic performance and as a "proof technique" to compute certain information theoretic quantities. Moreover, both of our problems retain close connections with spin glass models studied in the statistical mechanics of disordered systems, which allows us to use sophisticated techniques developed in statistical physics. In this thesis, we use the potential function predicted by the replica method to prove the threshold saturation phenomenon associated with spatially coupled models. Finally, one of the main contributions of this thesis is a proof that the predictions of the "heuristic" replica method are exact. Our results should therefore also be of interest to the statistical physics community, as they help to set a rigorous mathematical foundation for the replica predictions.
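    As a concrete anchor for the second problem, here is a minimal sketch of the symmetric rank-one ("spiked Wigner") model, with a plain eigenvector estimate as a simple baseline. All names and parameter values are illustrative assumptions; the thesis analyzes the Bayes-optimal and message-passing performance of this model, not the PCA baseline shown here.

```python
import numpy as np

def spiked_wigner(n, lam, rng):
    """Symmetric rank-one model: Y = sqrt(lam/n) * x x^T + W.

    x : planted +/-1 spike (one common choice of prior)
    W : symmetric Gaussian noise matrix
    """
    x = rng.choice([-1.0, 1.0], size=n)
    W = rng.standard_normal((n, n))
    W = (W + W.T) / np.sqrt(2)          # symmetrize, keep unit-variance entries
    Y = np.sqrt(lam / n) * np.outer(x, x) + W
    return Y, x

# Estimate the spike by the top eigenvector and measure its overlap with x.
rng = np.random.default_rng(2)
Y, x = spiked_wigner(n=500, lam=4.0, rng=rng)
_, vecs = np.linalg.eigh(Y)             # eigenvalues in ascending order
v = vecs[:, -1]                         # leading eigenvector
print(abs(v @ x) / np.sqrt(len(x)))     # normalized overlap in [0, 1]
```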