Improving Distributed Gradient Descent Using Reed-Solomon Codes
Today's massively-sized datasets have made it necessary to often perform
computations on them in a distributed manner. In principle, a computational
task is divided into subtasks which are distributed over a cluster operated by
a taskmaster. One issue faced in practice is the delay incurred due to the
presence of slow machines, known as \emph{stragglers}. Several schemes,
including those based on replication, have been proposed in the literature to
mitigate the effects of stragglers and more recently, those inspired by coding
theory have begun to gain traction. In this work, we consider a distributed
gradient descent setting suitable for a wide class of machine learning
problems. We adapt the framework of Tandon et al. (arXiv:1612.03301) and
present a deterministic scheme that, for a prescribed per-machine computational
effort, recovers the gradient from the least number of machines
theoretically permissible, via an efficient decoding algorithm. We also provide
a theoretical delay model which can be used to minimize the expected waiting
time per computation by optimally choosing the parameters of the scheme.
Finally, we supplement our theoretical findings with numerical results that
demonstrate the efficacy of the method and its advantages over competing
schemes.
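The coded-aggregation idea behind such schemes can be seen in the simplest member of the Tandon et al. framework, a fractional-repetition scheme. The Reed-Solomon construction of this paper is more flexible, so the sketch below (with illustrative sizes) only shows why any n - s worker responses suffice to rebuild the full gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, d = 4, 1, 3        # workers, stragglers tolerated, gradient dim
parts = rng.standard_normal((n, d))   # partial gradients g_1..g_n

# Fractional-repetition assignment: n/(s+1) groups; every worker in a
# group holds the same s+1 data parts and sends the sum of their
# partial gradients, so each part is replicated s+1 times.
groups = [list(range(g * (s + 1), (g + 1) * (s + 1)))
          for g in range(n // (s + 1))]
assignment = [grp for grp in groups for _ in range(s + 1)]

msgs = {w: parts[assignment[w]].sum(axis=0) for w in range(n)}

# Suppose worker 2 straggles: any n - s = 3 responses suffice.  Pick
# one surviving worker per group and add up their messages.
alive = [0, 1, 3]
chosen, seen = [], set()
for w in alive:
    gid = w // (s + 1)
    if gid not in seen:
        seen.add(gid)
        chosen.append(w)
full_gradient = sum(msgs[w] for w in chosen)

assert np.allclose(full_gradient, parts.sum(axis=0))
```

Reed-Solomon-based schemes reach the same straggler tolerance with finer control over the per-machine load; the point of the sketch is only that a coded aggregate from a subset of machines can reproduce the exact gradient.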
Iterative Algebraic Soft-Decision List Decoding of Reed-Solomon Codes
In this paper, we present an iterative soft-decision decoding algorithm for
Reed-Solomon codes offering both complexity and performance advantages over
previously known decoding algorithms. Our algorithm is a list decoding
algorithm which combines two powerful soft decision decoding techniques which
were previously regarded in the literature as competitive, namely, the
Koetter-Vardy algebraic soft-decision decoding algorithm and belief-propagation
based on adaptive parity check matrices, recently proposed by Jiang and
Narayanan. Building on the Jiang-Narayanan algorithm, we present a
belief-propagation based algorithm with a significant reduction in
computational complexity. We introduce the concept of using a
belief-propagation based decoder to enhance the soft-input information prior to
decoding with an algebraic soft-decision decoder. Our algorithm can also be
viewed as an interpolation multiplicity assignment scheme for algebraic
soft-decision decoding of Reed-Solomon codes.
Comment: Submitted to IEEE for publication in Jan 200
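A minimal sketch of the two ingredients, assuming binary parity checks and LLR inputs: the Jiang-Narayanan adaptation step (eliminate H so the least reliable positions carry unit columns) followed by one damped sum-product update. The `damping` value and the Hamming-code example are illustrative, not the authors' exact configuration:

```python
import numpy as np

def adapt_H(H, llr):
    """ABP adaptation step: Gaussian elimination (mod 2) so that the
    least-reliable positions carry unit columns in H."""
    H = H.copy() % 2
    m, _ = H.shape
    order = np.argsort(np.abs(llr))      # least reliable first
    pivots, r = [], 0
    for col in order:
        if r == m:
            break
        hit = np.nonzero(H[r:, col])[0]
        if hit.size == 0:
            continue
        H[[r, r + hit[0]]] = H[[r + hit[0], r]]
        for i in range(m):
            if i != r and H[i, col]:
                H[i] ^= H[r]
        pivots.append(col)
        r += 1
    return H, pivots

def bp_update(H, llr, damping=0.5):
    """One sum-product iteration on the adapted H; the extrinsic LLRs
    are damped into the channel LLRs (damping factor is illustrative)."""
    m, n = H.shape
    ext = np.zeros(n)
    for i in range(m):
        idx = np.nonzero(H[i])[0]
        t = np.tanh(llr[idx] / 2.0)
        for j, pos in enumerate(idx):
            prod = np.clip(np.prod(np.delete(t, j)), -0.999999, 0.999999)
            ext[pos] += 2.0 * np.arctanh(prod)
    return llr + damping * ext

# Hamming(7,4) parity checks and toy LLRs (sign = hard bit, magnitude
# = reliability); the enhanced LLRs would then feed an algebraic
# soft-decision (Koetter-Vardy style) decoder.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([0.2, -0.1, 2.0, -1.5, 0.3, 1.1, -0.8])
Ha, piv = adapt_H(H, llr)
enhanced = bp_update(Ha, llr)
```

After adaptation the unreliable positions sit on unit columns, so BP can update them from the reliable ones; iterating this pair of steps is what enhances the soft input before algebraic decoding.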
Decoding of Convolutional Codes over the Erasure Channel
In this paper we study the decoding capabilities of convolutional codes over
the erasure channel. Of special interest will be maximum distance profile (MDP)
convolutional codes. These are codes which have a maximum possible column
distance increase. We show how this strong minimum-distance condition of MDP
convolutional codes helps us to solve error situations that maximum distance
separable (MDS) block codes fail to solve. Towards this goal, we define two
subclasses of MDP codes: reverse-MDP convolutional codes and complete-MDP
convolutional codes. Reverse-MDP codes have the capability to recover a maximum
number of erasures using an algorithm which runs backward in time. Complete-MDP
convolutional codes are both MDP and reverse-MDP codes. They are capable of
recovering the state of the decoder under the mildest conditions. We show that
complete-MDP convolutional codes perform, in a certain sense, better than MDS
block codes of the same rate over the erasure channel.
Comment: 18 pages, 3 figures, to appear in IEEE Transactions on Information Theory
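For intuition on the block-code baseline: erasure decoding of any linear block code amounts to solving a linear system restricted to the unerased coordinates, and a code with minimum distance d guarantees solvability for any d - 1 erasures (MDS codes maximise d). A GF(2) sketch with a Hamming(7,4) code, illustrative only and not MDP decoding itself:

```python
import numpy as np

def gf2_solve(A, b):
    """One solution of A x = b over GF(2), or None if inconsistent."""
    A, b = A.copy() % 2, b.copy() % 2
    m, n = A.shape
    piv, r = [], 0
    for c in range(n):
        hit = np.nonzero(A[r:, c])[0]
        if hit.size == 0:
            continue
        A[[r, r + hit[0]]] = A[[r + hit[0], r]]
        b[[r, r + hit[0]]] = b[[r + hit[0], r]]
        for i in range(m):
            if i != r and A[i, c]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        piv.append(c)
        r += 1
        if r == m:
            break
    if np.any(b[r:]):
        return None                      # erasure pattern not solvable
    x = np.zeros(n, dtype=int)
    for j, c in enumerate(piv):
        x[c] = b[j]
    return x

# Hamming(7,4): minimum distance 3, so any 2 erasures are recoverable.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
msg = np.array([1, 0, 1, 1])
cw = msg @ G % 2

erased = {2, 5}                          # erased coordinate indices
known = [i for i in range(7) if i not in erased]
recovered = gf2_solve(G[:, known].T, cw[known])

assert np.array_equal(recovered, msg)
```

MDP convolutional codes add the time axis on top of this picture: the column-distance profile controls which sliding windows of erasures leave such systems solvable.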
New insights on neutral binary representations for evolutionary optimization
This paper studies a family of redundant binary representations NNg(l, k) which are based on the mathematical formulation of error control codes, in particular on linear block codes, which are used to add redundancy and neutrality to the representations. We analyse the uniformity, connectivity, synonymity, locality and topology properties of the NNg(l, k) representations, and show how a (1+1)-ES can be modeled using Markov chains and applied to NK fitness landscapes with adjacent neighborhood.

The results show that it is possible to design synonymously redundant representations that increase the connectivity between phenotypes. For easy problems, synonymously redundant NNg(l, k) representations with high locality, for which high connectivity is not necessary, are the most suitable for an efficient evolutionary search. On the contrary, for difficult problems, NNg(l, k) representations with low locality, intermediate-to-high connectivity and intermediate synonymity are the best ones.

These results allow us to conclude that the NNg(l, k) representations that perform best on NK fitness landscapes with adjacent neighborhood do not exhibit extreme values of any of the properties commonly considered in the evolutionary computation literature. This conclusion is contrary to what one would expect from the recommendations in that literature, and may help explain the current difficulty of formulating redundant representations that are provably successful in evolutionary computation.
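For readers unfamiliar with the setting, a minimal sketch of an NK fitness landscape with adjacent neighborhood and a (1+1)-ES on plain bitstrings, i.e. without the NNg(l, k) encoding layer; all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def nk_landscape(N, K):
    """Random NK landscape, adjacent neighborhood: the contribution of
    bit i depends on bits i..i+K taken cyclically."""
    tables = rng.random((N, 2 ** (K + 1)))
    def fitness(x):
        total = 0.0
        for i in range(N):
            idx = 0
            for j in range(K + 1):
                idx = (idx << 1) | int(x[(i + j) % N])
            total += tables[i, idx]
        return total / N          # normalised to [0, 1)
    return fitness

def one_plus_one_es(fitness, N, steps=2000):
    """(1+1)-ES: flip each bit with prob 1/N, keep the offspring iff
    it is at least as fit (elitist acceptance)."""
    x = rng.integers(0, 2, N)
    fx = fitness(x)
    for _ in range(steps):
        y = x ^ (rng.random(N) < 1.0 / N)
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

f = nk_landscape(12, 2)
best, val = one_plus_one_es(f, 12, steps=500)
```

A redundant representation would insert a genotype-to-phenotype decoding step before `fitness` is evaluated; the properties studied in the paper describe how that extra layer reshapes the search space the (1+1)-ES actually walks on.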
A STUDY OF ERASURE CORRECTING CODES
This work focuses on erasure codes, particularly high-performance ones,
and the related decoding algorithms, especially those with low
computational complexity. The work is composed of different pieces,
but the main components are developed within the following two main
themes.
Ideas of message passing are applied to solve the erasures after
transmission. An efficient matrix representation of the belief propagation
(BP) decoding algorithm on the binary erasure channel (BEC) is introduced
as the recovery algorithm. Gallager's bit-flipping algorithm is further
developed into the guess and multi-guess algorithms, especially for
recovering the erasures left unsolved by the recovery algorithm.
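The BEC recovery principle can be sketched in its classic peeling form: repeatedly solve any parity check that contains exactly one erasure. This illustrates the message-passing idea, not the thesis's matrix representation; the Hamming-code example is illustrative:

```python
import numpy as np

def peel_decode(H, y):
    """Peeling form of BP on the BEC: repeatedly find a parity check
    with exactly one erased position and solve it; -1 marks an erasure."""
    y = y.copy()
    progress = True
    while progress and (y == -1).any():
        progress = False
        for row in H:
            idx = np.nonzero(row)[0]
            erased = [i for i in idx if y[i] == -1]
            if len(erased) == 1:
                known = [i for i in idx if y[i] != -1]
                y[erased[0]] = int(np.sum(y[known])) % 2   # even parity
                progress = True
    return y

# Hamming(7,4) checks; erase two positions of a valid codeword.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
cw = np.array([1, 1, 0, 1, 0, 0, 1])
y = cw.copy()
y[[0, 3]] = -1
decoded = peel_decode(H, y)
```

Peeling stalls when every unsatisfied check covers two or more erasures, which is exactly the situation the guess and multi-guess algorithms are designed to break.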
A novel maximum-likelihood decoding algorithm, the In-place
algorithm, is proposed with a reduced computational complexity. A
further study of the marginal number of erasures correctable by the
In-place algorithm determines a lower bound on the average number
of correctable erasures. Following the spirit of searching for the most
likely codeword given the received vector, we propose a new
branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is
powerful enough to approach the ML performance for all linear block
codes.
To maximise the recovery capability of the In-place algorithm in
network transmissions, we propose the product packetisation structure,
which reduces the computational complexity of the In-place algorithm
below the quadratic complexity bound. We then extend this approach to
the Rayleigh fading channel to solve both errors and erasures. By
concatenating an outer code, such as a BCH code, the product-packetised
RS codes under the hard-decision In-place algorithm perform
significantly better than soft-decision iterative algorithms on
optimally designed LDPC codes.