Sharp Bounds for Optimal Decoding of Low Density Parity Check Codes
Consider communication over a binary-input memoryless output-symmetric
channel with low density parity check (LDPC) codes and maximum a posteriori
(MAP) decoding. The replica method of spin glass theory allows one to conjecture an
analytic formula for the average input-output conditional entropy per bit in
the infinite block length limit. Montanari proved a lower bound for this
entropy, in the case of LDPC ensembles with convex check degree polynomial,
which matches the replica formula. Here we extend this lower bound to any
irregular LDPC ensemble. The new feature of our work is an analysis of the
second derivative of the conditional input-output entropy with respect to
noise. A close relation arises between this second derivative and correlation
or mutual information of codebits. This allows us to extend the realm of the
interpolation method; in particular, we show how channel symmetry allows one to
control the fluctuations of the overlap parameters.

Comment: 40 pages. Submitted to IEEE Transactions on Information Theory.
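As a toy illustration of the decoding problem this abstract analyzes, the sketch below performs exact bitwise MAP decoding of a small linear code over a binary symmetric channel by enumerating the codebook. The parity-check matrix and crossover probability are hypothetical examples, not taken from the paper, and brute-force enumeration is feasible only at this toy scale; the paper's interest is the infinite-block-length limit, where such enumeration is impossible.

```python
import itertools

import numpy as np

# Hypothetical 3x6 parity-check matrix (illustration only, not from the paper).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
n = H.shape[1]

# Enumerate the codebook: all x with H x = 0 (mod 2).
codewords = [np.array(x) for x in itertools.product([0, 1], repeat=n)
             if not (H @ x % 2).any()]

p = 0.1  # BSC crossover probability


def likelihood(y, x):
    """P(y | x) for a memoryless binary symmetric channel."""
    d = int((np.asarray(y) != x).sum())  # Hamming distance
    return p ** d * (1 - p) ** (n - d)


def map_bit_decode(y):
    """Bitwise MAP: choose the most probable value of each code bit given y."""
    weights = [likelihood(y, x) for x in codewords]
    total = sum(weights)
    marginals = [sum(w for w, x in zip(weights, codewords) if x[i]) / total
                 for i in range(n)]
    return [int(m > 0.5) for m in marginals]


# Usage: send the all-zero codeword, flip one bit in the channel.
y = np.zeros(n, dtype=int)
y[2] = 1
print(map_bit_decode(y))  # → [0, 0, 0, 0, 0, 0]
```

The per-bit posterior marginals computed inside `map_bit_decode` are exactly the quantities whose entropy the replica formula describes in the large-block limit.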
Applications of correlation inequalities to low density graphical codes
This contribution is based on the contents of a talk delivered at the
Next-SigmaPhi conference held in Crete in August 2005. It is addressed to an
audience of physicists with diverse horizons and does not assume any background
in communications theory. Capacity approaching error correcting codes for
channel communication known as Low Density Parity Check (LDPC) codes have
attracted considerable attention from coding theorists in the last decade.
Surprisingly strong connections with the theory of diluted spin glasses have
been discovered. In this work we elucidate one new connection, namely that a
class of correlation inequalities valid for Gaussian spin glasses can be
applied to the theoretical analysis of LDPC codes. This allows for a rigorous
comparison between the so-called (optimal) maximum a posteriori decoder and the
computationally efficient belief propagation decoder. The main ideas of the
proofs are explained, and we refer to recent works for the lengthier
technical details.

Comment: 11 pages, 3 figures.
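To make the BP-versus-MAP comparison concrete, here is a minimal sketch of the belief propagation decoder in the one setting where it is especially transparent, the binary erasure channel: there BP reduces to iteratively "peeling" parity checks that involve exactly one erased bit. The 3x6 parity-check matrix is a hypothetical example, not taken from the talk.

```python
import numpy as np

# Hypothetical 3x6 parity-check matrix (illustration only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])


def bp_erasure_decode(y):
    """Peeling (BP) decoder for the binary erasure channel.

    y: list of 0, 1, or None (erasure).  On the BEC, belief propagation
    reduces to repeatedly solving checks with exactly one erased bit.
    """
    x = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [i for i in np.flatnonzero(row) if x[i] is None]
            if len(erased) == 1:  # this check pins down one bit
                known = sum(x[i] for i in np.flatnonzero(row)
                            if x[i] is not None) % 2
                x[erased[0]] = known  # overall parity must be 0
                progress = True
    return x


# Usage: all-zero codeword with two erasures.
print(bp_erasure_decode([0, None, 0, 0, None, 0]))  # → [0, 0, 0, 0, 0, 0]
```

When BP gets stuck (a "stopping set"), the MAP decoder may still succeed; the gap between the two is precisely what the correlation inequalities of the talk help control.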
Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations
This paper establishes information-theoretic limits in estimating a finite
field low-rank matrix given random linear measurements of it. These linear
measurements are obtained by taking inner products of the low-rank matrix with
random sensing matrices. Necessary and sufficient conditions on the number of
measurements required are provided. It is shown that these conditions are sharp
and the minimum-rank decoder is asymptotically optimal. The reliability
function of this decoder is also derived by appealing to de Caen's lower bound
on the probability of a union. The sufficient condition also holds when the
sensing matrices are sparse - a scenario that may be amenable to efficient
decoding. More precisely, it is shown that if the $n \times n$ sensing matrices
contain, on average, $\Omega(n \log n)$ nonzero entries, the number of measurements
required is the same as that when the sensing matrices are dense and contain
entries drawn uniformly at random from the field. Analogies are drawn between
the above results and rank-metric codes in the coding theory literature. In
fact, we are also strongly motivated by understanding when minimum rank
distance decoding of random rank-metric codes succeeds. To this end, we derive
distance properties of equiprobable and sparse rank-metric codes. These
distance properties provide a precise geometric interpretation of the fact that
the sparse ensemble requires as few measurements as the dense one. Finally, we
provide a non-exhaustive procedure to search for the unknown low-rank matrix.

Comment: Accepted to the IEEE Transactions on Information Theory; presented at
the IEEE International Symposium on Information Theory (ISIT) 201
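A brute-force sketch of the minimum-rank decoder described above, over GF(2): among all matrices consistent with the linear measurements, return one of lowest rank. The 2x2 dimensions, the particular rank-one matrix, and the random sensing matrices are hypothetical illustrations; at realistic sizes this search is exponential, which is why the paper studies fundamental limits rather than algorithms.

```python
import itertools

import numpy as np


def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]          # swap pivot row into place
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                   # eliminate column c elsewhere
        r += 1
    return r


def min_rank_decode(sensing, y, n=2):
    """Lowest-rank n x n GF(2) matrix consistent with y_i = <A_i, X> (mod 2).

    Exhaustive search over all 2^(n*n) candidates -- illustration only.
    """
    best = None
    for bits in itertools.product([0, 1], repeat=n * n):
        X = np.array(bits).reshape(n, n)
        if all(int((A * X).sum()) % 2 == yi for A, yi in zip(sensing, y)):
            if best is None or gf2_rank(X) < gf2_rank(best):
                best = X
    return best


# Usage with hypothetical random 2x2 sensing matrices.
rng = np.random.default_rng(0)
X_true = np.array([[1, 1], [1, 1]])            # rank 1 over GF(2)
sensing = [rng.integers(0, 2, (2, 2)) for _ in range(4)]
y = [int((A * X_true).sum()) % 2 for A in sensing]
X_hat = min_rank_decode(sensing, y)
```

By construction `X_hat` is consistent with every measurement and has rank at most that of `X_true`; whether it equals `X_true` depends on whether the number of measurements meets the sharp threshold the paper derives.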
Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions
In this paper, we study the problem of compressed sensing using binary
measurement matrices and $\ell_1$-norm minimization (basis pursuit) as the
recovery algorithm. We derive new upper and lower bounds on the number of
measurements to achieve robust sparse recovery with binary matrices. We
establish sufficient conditions for a column-regular binary matrix to satisfy
the robust null space property (RNSP), and show that the associated sparsity
bounds for robust sparse recovery obtained using the RNSP
are better by a constant factor than the
sufficient conditions obtained using the restricted isometry property (RIP).
Next, we derive universal lower bounds on the number of measurements
that any binary matrix needs to have in order to satisfy the weaker sufficient
condition based on the RNSP and show that bipartite graphs of girth six are
optimal. Then we display two classes of binary matrices, namely parity check
matrices of array codes and Euler squares, which have girth six and are nearly
optimal in the sense of almost satisfying the lower bound. In principle,
randomly generated Gaussian measurement matrices are "order-optimal". So we
compare the phase transition behavior of the basis pursuit formulation using
binary array codes and Gaussian matrices and show that (i) there is essentially
no difference between the phase transition boundaries in the two cases and (ii)
the CPU time of basis pursuit with binary matrices is hundreds of times smaller
than with Gaussian matrices, and the storage requirements are lower. Therefore it
is suggested that binary matrices are a viable alternative to Gaussian matrices
for compressed sensing using basis pursuit.

Comment: 28 pages, 3 figures, 5 tables.
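The basis pursuit recovery that this abstract benchmarks can be sketched as a linear program: splitting $x = u - v$ with $u, v \ge 0$ turns $\min \|x\|_1$ subject to $Ax = b$ into an LP in $(u, v)$. The small binary matrix below is a hypothetical example, not a parity-check matrix of an array code or an Euler square.

```python
import numpy as np
from scipy.optimize import linprog


def basis_pursuit(A, b):
    """min ||x||_1 subject to A x = b, via the LP split x = u - v, u, v >= 0."""
    _, n = A.shape
    c = np.ones(2 * n)              # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])       # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v


# Usage: a small binary measurement matrix (hypothetical) recovering a
# 1-sparse vector from 3 measurements.
A = np.array([[1., 0., 1., 0., 1.],
              [0., 1., 1., 0., 0.],
              [1., 1., 0., 1., 0.]])
x_true = np.zeros(5)
x_true[2] = 3.0
x_hat = basis_pursuit(A, A @ x_true)
```

SciPy's `linprog` (HiGHS backend) solves the LP; for this particular 1-sparse example the $\ell_1$ minimizer is unique and coincides with `x_true`. The paper's point is that with well-chosen binary $A$ this recovery scales to large problems far faster than with dense Gaussian matrices.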
Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View
These are the notes for a set of lectures delivered by the two authors at the
Les Houches Summer School on `Complex Systems' in July 2006. They provide an
introduction to the basic concepts in modern (probabilistic) coding theory,
highlighting connections with statistical mechanics. We also stress common
concepts with other disciplines dealing with similar problems that can be
generically referred to as `large graphical models'.
While most of the lectures are devoted to the classical channel coding
problem over simple memoryless channels, we present a discussion of more
complex channel models. We conclude with an overview of the main open
challenges in the field.

Comment: Lectures at the Les Houches Summer School on `Complex Systems', July
2006; 44 pages, 25 ps figures.