Characterization and Efficient Search of Non-Elementary Trapping Sets of LDPC Codes with Applications to Stopping Sets
In this paper, we propose a characterization for non-elementary trapping sets
(NETSs) of low-density parity-check (LDPC) codes. The characterization is based
on viewing an NETS as a hierarchy of embedded graphs starting from an
elementary trapping set (ETS). The
characterization corresponds to an efficient search algorithm that under
certain conditions is exhaustive. As an application of the proposed
characterization/search, we obtain lower and upper bounds on the stopping
distance of LDPC codes.
We examine a large number of regular and irregular LDPC codes, and
demonstrate the efficiency and versatility of our technique in finding lower
and upper bounds on, and in many cases the exact value of, the stopping
distance. Finding the stopping distance, or establishing search-based lower or
upper bounds on it, is out of the reach of any existing algorithm for many of
the examined codes.
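For readers unfamiliar with the object being bounded: a stopping set of an LDPC code with parity-check matrix H is a non-empty set S of variable nodes such that no check node is connected to exactly one member of S, and the stopping distance is the size of the smallest such set. A minimal brute-force sketch (exponential search, toy sizes only; the Hamming-code H below is an illustrative example, not one of the codes examined in the paper):

```python
from itertools import combinations

def is_stopping_set(H, S):
    """S is a stopping set if no row of H has exactly one 1 inside S."""
    return all(sum(H[r][v] for v in S) != 1 for r in range(len(H)))

def stopping_distance(H):
    """Size of the smallest non-empty stopping set, by exhaustive search."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return size
    return None

# Toy parity-check matrix: the [7,4] Hamming code.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]
```

For this H the support of any weight-3 codeword, e.g. {0, 1, 2}, is a stopping set, and no smaller one exists, so the stopping distance is 3.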
Decomposition Methods for Large Scale LP Decoding
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Directions Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error correcting codes
efficiently.
We present numerical results for LDPC codes of lengths more than 1000. The
waterfall region of LP decoding is seen to initiate at a slightly higher
signal-to-noise ratio than for sum-product BP; however, unlike BP, no error
floor is observed for LP decoding. Our implementation of
LP decoding using ADMM executes as fast as our baseline sum-product BP decoder,
is fully parallelizable, and can be seen to implement a type of message-passing
with a particularly simple schedule.
Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory.
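The parity polytope referred to above is the convex hull of all even-weight binary vectors of length d. Besides the box constraints 0 <= x_i <= 1, its facets are sum_{i in S} x_i - sum_{i not in S} x_i <= |S| - 1 for every odd-size subset S. A membership test can be sketched directly from that facet description (exponential enumeration over odd subsets, for illustration only; the paper's two-slice characterization is what makes the actual projection efficient):

```python
from itertools import combinations

def in_parity_polytope(z, tol=1e-9):
    """Check membership of z in the parity polytope PP_d (convex hull of the
    even-weight binary vectors) via its facet description: box constraints plus
    sum_{i in S} z_i - sum_{i not in S} z_i <= |S| - 1 for every odd-size S.
    Exponential in d -- illustration only."""
    d = len(z)
    if any(zi < -tol or zi > 1 + tol for zi in z):
        return False
    for k in range(1, d + 1, 2):              # odd subset sizes
        for S in combinations(range(d), k):
            inside = sum(z[i] for i in S)
            outside = sum(z) - inside
            if inside - outside > k - 1 + tol:
                return False
    return True
```

For example, [1, 1, 0, 0] (an even-weight vertex) and [0.5, 0.5, 0, 0] (the midpoint of two vertices) are members, while any odd-weight vertex is not.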
A survey of QoS-aware web service composition techniques
Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet increasingly complex user needs. Service composition has been effective in dealing with services of disparate functionalities; however, over the years the number of web services that exhibit similar functionalities but varying Quality of Service (QoS) has increased significantly. The problem then becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized or, in some cases, minimized; this selection constitutes an NP-hard problem. In this paper, we discuss the concepts of web service composition and present a holistic review of current service composition techniques proposed in the literature. Our review spans several publications in the field and can serve as a road map for future research.
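The selection problem described above can be made concrete with a tiny sketch: one candidate service per abstract task, a global cost budget, and end-to-end reliability as the QoS objective (modelled here, as a simplifying assumption, as a product over a sequential workflow). The task names and QoS numbers below are hypothetical; real instances are solved with the heuristics the survey reviews, not brute force:

```python
from itertools import product

# Hypothetical candidate services per task, each as (cost, reliability).
candidates = {
    "payment":  [(5, 0.99), (3, 0.95)],
    "shipping": [(4, 0.98), (2, 0.90)],
    "invoice":  [(1, 0.97), (2, 0.99)],
}

def best_composition(candidates, budget):
    """Pick one candidate per task so that total cost <= budget and end-to-end
    reliability (product over a sequential workflow) is maximized.
    Brute force over all combinations -- the general problem is NP-hard."""
    tasks = list(candidates)
    best, best_rel = None, -1.0
    for combo in product(*(candidates[t] for t in tasks)):
        cost = sum(c for c, _ in combo)
        rel = 1.0
        for _, r in combo:
            rel *= r
        if cost <= budget and rel > best_rel:
            best, best_rel = dict(zip(tasks, combo)), rel
    return best, best_rel
```

With a budget of 10, the most reliable services for payment and shipping still fit, but the cheaper invoice service must be chosen to stay within budget.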
HoloTrap: Interactive hologram design for multiple dynamic optical trapping
This work presents an application that generates real-time holograms to be
displayed on a holographic optical tweezers setup; a technique that allows the
manipulation of particles in the range from micrometres to nanometres. The
software is written in Java, and uses random binary masks to generate the
holograms. It allows customization of several parameters that are dependent on
the experimental setup, such as the specific characteristics of the device
displaying the hologram, or the presence of aberrations. We evaluate the
software's performance and conclude that real-time interaction is achieved. We
give our experimental results from manipulating 5-micron-diameter microspheres
using the program.
Comment: 17 pages, 6 figures.
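The random-binary-mask idea mentioned above can be sketched roughly as follows: each display pixel is randomly assigned to one of the traps and set to the binarized phase of a linear grating steering light toward that trap. This is a loose illustration, not the HoloTrap implementation; the function name, the (kx, ky) trap parametrization, and the cosine binarization rule are all assumptions:

```python
import math
import random

def binary_mask_hologram(width, height, traps, seed=0):
    """Sketch of random-mask multiplexing for multiple optical traps: every
    pixel is randomly assigned to one trap, then set to the binarized phase of
    a linear grating for that trap. `traps` lists (kx, ky) spatial frequencies
    in hypothetical units of cycles per pixel."""
    rng = random.Random(seed)
    holo = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            kx, ky = rng.choice(traps)    # random mask: pick a trap per pixel
            phase = 2 * math.pi * (kx * x + ky * y)
            holo[y][x] = 1 if math.cos(phase) >= 0 else 0   # binarize to 0/1
    return holo
```

Because only trivial arithmetic is done per pixel, this kind of mask can be regenerated every frame, which is what makes real-time interaction plausible.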
Design and Analysis of Time-Invariant SC-LDPC Convolutional Codes With Small Constraint Length
In this paper, we deal with time-invariant spatially coupled low-density
parity-check convolutional codes (SC-LDPC-CCs). Classic design approaches
usually start from quasi-cyclic low-density parity-check (QC-LDPC) block codes
and exploit suitable unwrapping procedures to obtain SC-LDPC-CCs. We show that
the direct design of the SC-LDPC-CCs syndrome former matrix or, equivalently,
the symbolic parity-check matrix, leads to codes with smaller syndrome former
constraint lengths with respect to the best solutions available in the
literature. We provide theoretical lower bounds on the syndrome former
constraint length for the most relevant families of SC-LDPC-CCs, under
constraints on the minimum length of cycles in their Tanner graphs. We also
propose new code design techniques that approach or achieve such theoretical
limits.
Comment: 30 pages, 5 figures, accepted for publication in IEEE Transactions on Communications.
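A standard example of the kind of cycle constraint mentioned above: for a fully connected QC-LDPC exponent matrix P with circulant size N, the Tanner graph is free of 4-cycles (girth at least 6) exactly when P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1] is nonzero mod N for every pair of rows and pair of columns (Fossorier's condition). This is a generic check, not the paper's design technique:

```python
from itertools import combinations

def has_girth_at_least_6(P, N):
    """Fossorier's 4-cycle condition for a fully connected QC-LDPC exponent
    matrix P with circulant size N: no 4-cycles iff
    P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1] != 0 (mod N)
    for every pair of rows (i1, i2) and pair of columns (j1, j2)."""
    rows, cols = len(P), len(P[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            if (P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1]) % N == 0:
                return False
    return True
```

For instance, the exponent matrix [[0, 0, 0], [0, 1, 2]] with N = 7 passes the check, while any matrix with a repeated 2x2 exponent pattern fails it.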
Instanton-based Techniques for Analysis and Reduction of Error Floors of LDPC Codes
We describe a family of instanton-based optimization methods developed
recently for the analysis of the error floors of low-density parity-check
(LDPC) codes. Instantons are the most probable configurations of the channel
noise which result in decoding failures. We show that the general idea and the
respective optimization technique are applicable broadly to a variety of
channels, discrete or continuous, and a variety of sub-optimal decoders.
Specifically, we consider: iterative belief propagation (BP) decoders, Gallager
type decoders, and linear programming (LP) decoders performing over the
additive white Gaussian noise channel (AWGNC) and the binary symmetric channel
(BSC).
The instanton analysis suggests that the underlying topological structures of
the most probable instantons of the same code under different channels and
decoders are related to each other. Armed with this understanding of the
graphical structure of the instanton and its relation to the decoding failures,
we suggest a method to construct codes whose Tanner graphs are free of these
structures, and thus have less significant error floors.
Comment: To appear in the IEEE JSAC issue on Capacity Approaching Codes. 11 pages and 6 figures.
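On the BSC the idea can be illustrated directly: an instanton is a minimum-weight error pattern that defeats a given sub-optimal decoder. The sketch below pairs a toy parity-check matrix with a simple parallel bit-flipping decoder (a stand-in for the Gallager-type decoders mentioned above, not the paper's optimization method) and finds the smallest failing weight by exhaustive search:

```python
from itertools import combinations

# Toy parity-check matrix (the [7,4] Hamming code), illustration only.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(H, y, max_iter=10):
    """Parallel bit-flipping: each round, flip every bit that participates in
    more unsatisfied than satisfied checks."""
    y = list(y)
    n, m = len(y), len(H)
    for _ in range(max_iter):
        syndrome = [sum(H[r][v] * y[v] for v in range(n)) % 2 for r in range(m)]
        if not any(syndrome):
            break
        flipped = False
        for v in range(n):
            checks = [r for r in range(m) if H[r][v]]
            if 2 * sum(syndrome[r] for r in checks) > len(checks):
                y[v] ^= 1
                flipped = True
        if not flipped:
            break
    return y

def bsc_instantons(H):
    """Exhaustively find the smallest weight w at which some BSC error pattern
    is not corrected (all-zero codeword assumed) and return (w, failures).
    Exponential -- toy sizes only."""
    n = len(H[0])
    for w in range(1, n + 1):
        fails = [e for e in combinations(range(n), w)
                 if any(bit_flip_decode(H, [1 if v in e else 0
                                            for v in range(n)]))]
        if fails:
            return w, fails
```

Even on this tiny code the sub-optimal decoder fails on some single-bit errors (e.g. position 0) while correcting others (e.g. position 4), which is exactly the decoder-dependence of instantons the abstract describes.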
Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations
This paper establishes information-theoretic limits in estimating a finite
field low-rank matrix given random linear measurements of it. These linear
measurements are obtained by taking inner products of the low-rank matrix with
random sensing matrices. Necessary and sufficient conditions on the number of
measurements required are provided. It is shown that these conditions are sharp
and the minimum-rank decoder is asymptotically optimal. The reliability
function of this decoder is also derived by appealing to de Caen's lower bound
on the probability of a union. The sufficient condition also holds when the
sensing matrices are sparse - a scenario that may be amenable to efficient
decoding. More precisely, it is shown that if the n × n sensing matrices
contain, on average, Ω(n log n) entries, the number of measurements
required is the same as that when the sensing matrices are dense and contain
entries drawn uniformly at random from the field. Analogies are drawn between
the above results and rank-metric codes in the coding theory literature. In
fact, we are also strongly motivated by understanding when minimum rank
distance decoding of random rank-metric codes succeeds. To this end, we derive
distance properties of equiprobable and sparse rank-metric codes. These
distance properties provide a precise geometric interpretation of the fact that
the sparse ensemble requires as few measurements as the dense one. Finally, we
provide a non-exhaustive procedure to search for the unknown low-rank matrix.
Comment: Accepted to the IEEE Transactions on Information Theory; presented at the IEEE International Symposium on Information Theory (ISIT) 201
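A toy instance of minimum-rank decoding over GF(2) can be sketched as follows. For tractability the sensing matrices here are single-entry indicators (a matrix-completion-style special case, not the dense or sparse random ensembles analyzed in the paper), and the decoder simply enumerates all binary matrices of the given shape:

```python
from itertools import product

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), via an XOR basis indexed by leading bit."""
    basis = {}
    for row in M:
        cur = int("".join(map(str, row)), 2)
        while cur:
            b = cur.bit_length() - 1
            if b not in basis:
                basis[b] = cur
                break
            cur ^= basis[b]
    return len(basis)

def min_rank_decode(y, sensing, shape=(3, 3)):
    """Brute-force minimum-rank decoder: among all binary matrices of the given
    shape consistent with y_i = <A_i, X> over GF(2), return one of least rank.
    Exponential -- toy sizes only."""
    n, m = shape
    best, best_rank = None, n + 1
    for bits in product([0, 1], repeat=n * m):
        X = [list(bits[r * m:(r + 1) * m]) for r in range(n)]
        ok = all(sum(A[i][j] * X[i][j] for i in range(n)
                     for j in range(m)) % 2 == yi
                 for A, yi in zip(sensing, y))
        if ok and gf2_rank(X) < best_rank:
            best, best_rank = X, gf2_rank(X)
    return best, best_rank

def E(i, j):
    """Single-entry sensing matrix (hypothetical simplification)."""
    A = [[0] * 3 for _ in range(3)]
    A[i][j] = 1
    return A

X_true = [[1, 0, 1], [1, 0, 1], [0, 0, 0]]          # rank 1 over GF(2)
positions = [(0, 0), (0, 1), (0, 2), (1, 0), (2, 0), (2, 2)]
sensing = [E(i, j) for i, j in positions]
y = [X_true[i][j] for i, j in positions]            # <E(i,j), X> = X[i][j]
```

Only 6 of the 9 entries are observed, yet the minimum-rank decoder recovers X_true, because it is the unique rank-1 matrix consistent with the measurements; this mirrors the paper's theme that low rank lets far fewer than n² measurements suffice.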