Approaching the Rate-Distortion Limit with Spatial Coupling, Belief propagation and Decimation
We investigate an encoding scheme for lossy compression of a binary symmetric
source based on simple spatially coupled Low-Density Generator-Matrix codes.
Check-node degrees are regular, while code-bit degrees are Poisson
distributed with an average that depends on the compression rate. The
performance of a low-complexity Belief Propagation Guided Decimation algorithm is
excellent. The algorithmic rate-distortion curve approaches the optimal curve
of the ensemble as the width of the coupling window grows. Moreover, as the
check degree grows both curves approach the ultimate Shannon rate-distortion
limit. The Belief Propagation Guided Decimation encoder is based on the
posterior measure of a binary symmetric test-channel. This measure can be
interpreted as a random Gibbs measure at a "temperature" directly related to
the "noise level of the test-channel". We investigate the links between the
algorithmic performance of the Belief Propagation Guided Decimation encoder and
the phase diagram of this Gibbs measure. The phase diagram is investigated
thanks to the cavity method of spin glass theory which predicts a number of
phase transition thresholds. In particular, the dynamical and condensation
"phase transition temperatures" (equivalently test-channel noise thresholds)
are computed. We observe that: (i) the dynamical temperature of the spatially
coupled construction saturates towards the condensation temperature; (ii) for
large degrees the condensation temperature approaches the temperature (i.e.
noise level) related to the information theoretic Shannon test-channel noise
parameter of rate-distortion theory. This provides heuristic insight into the
excellent performance of the Belief Propagation Guided Decimation algorithm.
The paper contains an introduction to the cavity method.
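As a brief illustration of the test-channel Gibbs measure mentioned above (the notation below is assumed, not taken verbatim from the abstract): for a binary symmetric test channel with flip probability p, spins x_i = ±1 constrained to a codeword set C, and source word y_i = ±1, the posterior takes the Boltzmann form

\[
\mu(x \mid y) \;=\; \frac{1}{Z(\beta)}\,
\mathbb{1}\{x \in \mathcal{C}\}\,
\exp\Big(\beta \sum_i x_i y_i\Big),
\qquad
\beta \;=\; \tfrac{1}{2}\ln\frac{1-p}{p},
\]

so the test-channel noise level p plays the role of a temperature T = 1/β: a noisier test channel corresponds to a higher temperature.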
Analysis of common attacks in LDPCC-based public-key cryptosystems
We analyze the security and reliability of a recently proposed class of
public-key cryptosystems against attacks by unauthorized parties who have
acquired partial knowledge of one or more of the private key components and/or
of the plaintext. Phase diagrams are presented, showing critical partial
knowledge levels required for unauthorized decryption. Comment: 14 pages, 6 figures
The Statistical Physics of Regular Low-Density Parity-Check Error-Correcting Codes
A variation of Gallager error-correcting codes is investigated using
statistical mechanics. In codes of this type, a given message is encoded into a
codeword which comprises Boolean sums of message bits selected by two randomly
constructed sparse matrices. The similarity of these codes to Ising spin
systems with random interaction makes it possible to assess their typical
performance by analytical methods developed in the study of disordered systems.
The typical case solutions obtained via the replica method are consistent with
those obtained in simulations using belief propagation (BP) decoding. We
discuss the practical implications of the results obtained and suggest a
computationally efficient construction for one of the more practical
configurations. Comment: 35 pages, 4 figures
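A minimal sketch of the encoding step the abstract describes, where codeword bits are Boolean (mod-2) sums of message bits selected by a randomly constructed binary matrix. The systematic construction below is an illustrative assumption, not the paper's exact ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 4, 8                                   # message bits, codeword bits
# Random binary matrix selecting which message bits enter each parity bit.
P = (rng.random((N - K, K)) < 0.5).astype(int)

G = np.hstack([np.eye(K, dtype=int), P.T])    # systematic generator [I | P^T]
H = np.hstack([P, np.eye(N - K, dtype=int)])  # parity-check matrix  [P | I]

s = rng.integers(0, 2, K)                     # random message
t = s @ G % 2                                 # codeword: mod-2 sums of message bits

print(t)
print(H @ t % 2)                              # all-zero syndrome: t is a valid codeword
```

Because H G^T = P + P = 0 (mod 2), every encoded word satisfies all parity checks; sparse choices of the selecting matrices are what make BP decoding tractable.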
Composite CDMA - A statistical mechanics analysis
Code Division Multiple Access (CDMA) in which the spreading code assignment
to users contains a random element has recently become a cornerstone of CDMA
research. The random element in the construction is particularly attractive, as it
provides robustness and flexibility in utilising multi-access channels, whilst
not making significant sacrifices in terms of transmission power. Random codes
are generated from some ensemble; here we consider the possibility of combining
two standard paradigms, sparsely and densely spread codes, in a single
composite code ensemble. The composite code analysis includes a replica
symmetric calculation of performance in the large system limit, and
investigation of finite systems through a composite belief propagation
algorithm. A variety of codes are examined with a focus on the high
multi-access interference regime. In both the large-system limit and finite
systems we demonstrate scenarios in which the composite code has typical
performance exceeding that of sparse and dense codes at equivalent
signal-to-noise ratio. Comment: 23 pages, 11 figures, Sigma Phi 2008 conference
submission, submitted to J. Stat. Mech.
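A toy sketch of a composite spreading ensemble: some users get dense ±1 codes, others sparse codes active on only a few chips, and all share one multi-access channel. All parameters (chip count, sparsity, noise level) and the simple matched-filter detector are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)

N, K_dense, K_sparse = 256, 4, 4         # chips per symbol; users per sub-ensemble
K = K_dense + K_sparse

# Dense spreading: every chip carries +-1/sqrt(N).
S_dense = rng.choice([-1.0, 1.0], size=(N, K_dense)) / np.sqrt(N)

# Sparse spreading: each user transmits on only C of the N chips.
C = 8
S_sparse = np.zeros((N, K_sparse))
for k in range(K_sparse):
    chips = rng.choice(N, size=C, replace=False)
    S_sparse[chips, k] = rng.choice([-1.0, 1.0], size=C) / np.sqrt(C)

S = np.hstack([S_dense, S_sparse])       # composite code ensemble, unit-energy columns

b = rng.choice([-1.0, 1.0], size=K)      # one BPSK symbol per user
y = S @ b + 0.1 * rng.standard_normal(N) # superposed transmissions plus AWGN

b_hat = np.sign(S.T @ y)                 # matched-filter (single-user) detection
print((b_hat == b).mean())
```

With N much larger than K, cross-correlations between codes are small, so even this naive detector separates the users; the interesting regime studied in the paper is high multi-access interference, where joint (BP-based) detection is needed.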
A Continuous-Time Recurrent Neural Network for Joint Equalization and Decoding – Analog Hardware Implementation Aspects
Equalization and channel decoding are “traditionally” two cascaded processes at the receiver side of a digital transmission. They aim to achieve a reliable and efficient transmission. For high data rates, the energy consumption of the corresponding algorithms is expected to become a limiting factor. For mobile devices with limited battery size, the energy consumption, mirrored in the lifetime of the battery, becomes even more crucial. Therefore, an energy-efficient implementation of equalization and decoding algorithms is desirable. The prevailing approach is to increase the energy efficiency of the underlying digital circuits. However, we address here promising alternatives offered by mixed (analog/digital) circuits. We are concerned with modeling joint equalization and decoding as a whole in a continuous-time framework. In doing so, continuous-time recurrent neural networks play an essential role because of their nonlinear characteristics and special suitability for analog very-large-scale integration (VLSI). Based on the proposed model, we show that the superiority of joint equalization and decoding (a well-known fact from the discrete-time case) carries over to the analog domain. Additionally, analog circuit design aspects such as adaptivity, connectivity and accuracy are discussed and linked to theoretical aspects of recurrent neural networks such as Lyapunov stability and simulated annealing.
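A minimal numerical sketch of the continuous-time recurrent dynamics underlying such networks: a Hopfield-style ODE integrated by forward Euler. Symmetric weights guarantee Lyapunov stability (convergence to a fixed point); the network size, weight scale, and time constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16
W = rng.standard_normal((n, n)) * 0.1
W = (W + W.T) / 2                  # symmetric weights -> Lyapunov-stable dynamics
b = rng.standard_normal(n) * 0.1   # external input (e.g. matched-filter outputs)

tau, dt, steps = 1.0, 0.01, 5000
x = rng.standard_normal(n)         # internal state

for _ in range(steps):
    # tau * dx/dt = -x + W @ tanh(x) + b   (continuous-time Hopfield-style RNN)
    x = x + (dt / tau) * (-x + W @ np.tanh(x) + b)

residual = np.linalg.norm(-x + W @ np.tanh(x) + b)
print(residual)                    # near zero: the state has settled at a fixed point
```

In an analog VLSI realization the same dynamics would be implemented by circuit time constants rather than discrete Euler steps; the fixed point plays the role of the joint equalization/decoding estimate.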
Statistical mechanics of error exponents for error-correcting codes
Error exponents characterize the exponential decay, when increasing message
length, of the probability of error of many error-correcting codes. To tackle
the long-standing problem of computing them exactly, we introduce a general,
thermodynamic, formalism that we illustrate with maximum-likelihood decoding of
low-density parity-check (LDPC) codes on the binary erasure channel (BEC) and
the binary symmetric channel (BSC). In this formalism, we apply the cavity
method for large deviations to derive expressions for both the average and
typical error exponents, which differ by the procedure used to select the codes
from specified ensembles. When decreasing the noise intensity, we find that two
phase transitions take place, at two different levels: a glass to ferromagnetic
transition in the space of codewords, and a paramagnetic to glass transition in
the space of codes. Comment: 32 pages, 13 figures
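For concreteness (standard definitions, with notation assumed rather than quoted from the paper): an error exponent E(R) captures the decay P_e ≈ e^{-N E(R)} of the error probability with block length N at rate R, and the average and typical exponents differ in where the ensemble average over codes C is taken:

\[
E_{\mathrm{av}}(R) \;=\; -\lim_{N\to\infty} \frac{1}{N}\,
\ln \mathbb{E}_{\mathcal{C}}\big[P_e(\mathcal{C})\big],
\qquad
E_{\mathrm{typ}}(R) \;=\; -\lim_{N\to\infty} \frac{1}{N}\,
\mathbb{E}_{\mathcal{C}}\big[\ln P_e(\mathcal{C})\big].
\]

By Jensen's inequality E_typ ≥ E_av: the average exponent can be dominated by rare, atypically bad codes, which is exactly the kind of large-deviation effect the cavity method is used to resolve.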
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing
In this work the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While there has been a
handful of previously proposed Bayesian dynamic CS algorithms in the
literature, the ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the result of applying DCS-AMP to
two real dynamic CS datasets, as well as a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems. Comment: 32 pages, 7 figures
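A minimal sketch of the generic approximate message passing (AMP) iteration that algorithms of the DCS-AMP family build on, here for a single static, noiseless CS problem with a soft-thresholding denoiser. The problem sizes and the threshold rule are illustrative assumptions; this is not the paper's DCS-AMP:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 250, 500, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)    # normalized measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)
y = A @ x_true                                  # sub-Nyquist linear measurements

def soft(v, t):
    """Soft-thresholding denoiser (promotes sparsity)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(50):
    sigma = np.linalg.norm(z) / np.sqrt(m)      # effective noise level estimate
    x = soft(x + A.T @ z, 1.5 * sigma)
    # The Onsager correction term on z distinguishes AMP from plain
    # iterative soft thresholding and keeps the effective noise Gaussian.
    z = y - A @ x + (np.count_nonzero(x) / m) * z

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)
```

Each iteration costs only matrix-vector products, i.e. it is linear in the problem dimensions, which is the property DCS-AMP extends to the dynamic setting by coupling such updates across time with amplitude- and support-correlation messages.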