Lowering the Error Floor of LDPC Codes Using Cyclic Liftings
Cyclic liftings are proposed to lower the error floor of low-density
parity-check (LDPC) codes. The liftings are designed to eliminate dominant
trapping sets of the base code by removing the short cycles which form the
trapping sets. We derive a necessary and sufficient condition for the cyclic
permutations assigned to the edges of a cycle c of length ℓ(c) in the
base graph such that the inverse image of c in the lifted graph consists of
only cycles of length strictly larger than ℓ(c). The proposed method is
universal in the sense that it can be applied to any LDPC code over any channel
and for any iterative decoding algorithm. It also preserves important
properties of the base code such as degree distributions, encoder and decoder
structure, and in some cases, the code rate. The proposed method is applied to
both structured and random codes over the binary symmetric channel (BSC). The
error floor improves consistently by increasing the lifting degree, and the
results show significant improvements in the error floor compared to the base
code, a random code of the same degree distribution and block length, and a
random lifting of the same degree. Similar improvements are also observed when
the codes designed for the BSC are applied to the additive white Gaussian noise
(AWGN) channel.
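The cycle-elimination condition can be illustrated with a toy cyclic lifting. In the sketch below (the tiny base matrix, lifting degree, and shift values are illustrative choices, not taken from the paper), a 2x2 all-ones base matrix, whose Tanner graph is a single 4-cycle, is lifted by degree 5. The edge shifts are chosen so that the alternating sum of shifts around the 4-cycle is nonzero mod 5, which is the kind of condition under which the cycle's inverse image contains only strictly longer cycles:

```python
import numpy as np

def circulant(n, shift):
    """n x n cyclic permutation matrix: the identity with columns rolled by `shift`."""
    return np.roll(np.eye(n, dtype=int), shift, axis=1)

def cyclic_lift(H_base, shifts, n):
    """Lift a base parity-check matrix by degree n: each 1-entry (i, j) becomes
    a circulant with offset shifts[(i, j)]; each 0-entry an n x n zero block."""
    m, k = H_base.shape
    H = np.zeros((m * n, k * n), dtype=int)
    for i in range(m):
        for j in range(k):
            if H_base[i, j]:
                H[i*n:(i+1)*n, j*n:(j+1)*n] = circulant(n, shifts[(i, j)])
    return H

# Toy base graph: 2 checks x 2 variables, all edges present -> one 4-cycle.
H_base = np.ones((2, 2), dtype=int)
n = 5
# Alternating shift sum around the 4-cycle: 1 - 0 + 0 - 0 = 1 != 0 (mod 5),
# so the 4-cycle lifts only to longer cycles in the degree-5 lifted graph.
shifts = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0}
assert (shifts[(0, 0)] - shifts[(0, 1)] + shifts[(1, 1)] - shifts[(1, 0)]) % n != 0

H = cyclic_lift(H_base, shifts, n)
assert H.shape == (10, 10)
# Row/column degrees of the base code are preserved by the lifting.
assert (H.sum(axis=0) == 2).all() and (H.sum(axis=1) == 2).all()
```

Because every 1-entry is replaced by a permutation block, the lifted code keeps the base code's degree distribution, matching the preservation property stated in the abstract.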
Functional diagnosability and recovery from massive faults in digital systems. Quarterly progress report, 17 May - 16 Nov. 1970 (final)
Diagnosability and recovery from massive faults in digital systems.
Advanced channel coding for space mission telecommand links
We investigate and compare different options for updating the error
correcting code currently used in space mission telecommand links. Taking as a
reference the solutions recently emerged as the most promising ones, based on
Low-Density Parity-Check codes, we explore the behavior of alternative schemes,
based on parallel concatenated turbo codes and soft-decision decoded BCH codes.
Our analysis shows that these further options can offer similar or even better
performance.
Comment: 5 pages, 7 figures. Presented at IEEE VTC 2013 Fall, Las Vegas, USA,
Sep. 2013. Published in Proc. IEEE Vehicular Technology Conference (VTC 2013
Fall), ISBN 978-1-6185-9.
Near-Optimal Straggler Mitigation for Distributed Gradient Methods
Modern learning algorithms use gradient descent updates to train inferential
models that best explain data. Scaling these approaches to massive data sizes
requires proper distributed gradient descent schemes where distributed worker
nodes compute partial gradients based on their partial and local data sets, and
send the results to a master node where all the computations are aggregated
into a full gradient and the learning model is updated. However, a major
performance bottleneck that arises is that some of the worker nodes may run
slow. These nodes, a.k.a. stragglers, can significantly slow down computation, as
the slowest node may dictate the overall computation time. We propose a
distributed computing scheme, called the Batched Coupon's Collector (BCC), to
alleviate the effect of stragglers in gradient methods. We prove that our BCC
scheme is robust to a near-optimal number of random stragglers. We also
empirically demonstrate that our proposed BCC scheme reduces the run-time by up
to 85.4% over Amazon EC2 clusters when compared with other straggler mitigation
strategies. We also generalize the proposed BCC scheme to minimize the
completion time when implementing gradient descent-based algorithms over
heterogeneous worker nodes.
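The coupon-collecting idea behind BCC can be sketched in a small simulation. In the toy code below (the batch values, worker count, and random finish order are my own illustrative choices, not the paper's exact protocol), each worker holds one uniformly random data batch, and the master sums partial gradients in worker-finish order, stopping as soon as every batch has been received at least once, so the slowest stragglers are never waited on:

```python
import random

def bcc_aggregate(batch_grads, num_workers, seed=0):
    """Sketch of batched-coupon-collector aggregation: workers get random
    batches; the master stops once all batches are covered.
    `batch_grads[b]` is the (toy, scalar) partial gradient of batch b."""
    rng = random.Random(seed)
    b = len(batch_grads)
    # One uniformly random batch per worker (the "coupon" each worker holds).
    assignment = [rng.randrange(b) for _ in range(num_workers)]
    # A random finish order stands in for heterogeneous worker speeds.
    finish_order = list(range(num_workers))
    rng.shuffle(finish_order)
    seen, total, waited = set(), 0.0, 0
    for w in finish_order:
        waited += 1
        batch = assignment[w]
        if batch not in seen:        # first copy of this batch counts
            seen.add(batch)
            total += batch_grads[batch]
        if len(seen) == b:           # every batch covered: full gradient ready
            break
    return total, seen, waited

grads = [1.0, 2.0, 3.0]              # toy per-batch partial gradients
full, covered, waited = bcc_aggregate(grads, num_workers=20)
# Each covered batch contributes exactly once; with enough redundant workers,
# all batches are covered w.h.p. and `full` equals the full gradient sum(grads).
assert full == sum(grads[i] for i in covered)
assert waited <= 20
```

The point of the simulation is that redundancy across workers lets the master finish early: once coverage is reached, any workers still running are simply ignored, which is how the scheme tolerates stragglers.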