Re-proving Channel Polarization Theorems: An Extremality and Robustness Analysis
The general subject considered in this thesis is a recently discovered coding
technique, polar coding, which is used to construct a class of error-correcting
codes with unique properties. In his ground-breaking work, Arıkan proved
that this class of codes, called polar codes, achieves the symmetric capacity
--- the mutual information evaluated at the uniform input distribution --- of
any stationary binary discrete memoryless channel, with low-complexity encoders
and decoders requiring on the order of $N \log N$ operations in the
block-length $N$. This discovery settled the long-standing open problem, left by
Shannon, of finding low-complexity codes achieving the channel capacity.
Polar coding settled an open problem in information theory, yet it opened many
challenging problems that remain to be addressed. A significant part of this
thesis is dedicated to advancing the knowledge about this technique in two
directions. The first one provides a better understanding of polar coding by
generalizing some of the existing results and discussing their implications,
and the second one studies the robustness of the theory over communication
models introducing various forms of uncertainty or variations into the
probabilistic model of the channel.
Comment: Preview of my PhD Thesis, EPFL, Lausanne, 2014. For the full version,
see http://people.epfl.ch/mine.alsan/publication
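The polarization phenomenon underlying these codes is simplest to see on the binary erasure channel: a BEC with erasure probability $\epsilon$ has Bhattacharyya parameter $Z = \epsilon$, and one Arıkan transform step splits a channel with parameter $Z$ into two synthetic channels with parameters $2Z - Z^2$ and $Z^2$. A minimal sketch of this recursion (function names and the threshold are illustrative, not taken from the thesis):

```python
# Channel polarization sketch for the binary erasure channel (BEC).
# For BEC(eps), the Bhattacharyya parameter Z equals eps, and one
# polarization step maps Z to the pair (2Z - Z^2, Z^2).

def polarize(eps: float, n: int) -> list[float]:
    """Return Bhattacharyya parameters of the 2**n synthetic channels."""
    zs = [eps]
    for _ in range(n):
        # each channel splits into a "minus" (worse) and "plus" (better) channel
        zs = [znew for z in zs for znew in (2 * z - z * z, z * z)]
    return zs

if __name__ == "__main__":
    zs = polarize(0.5, 10)  # 1024 synthetic channels from BEC(1/2)
    # the average Z is conserved, while the values drift toward 0 or 1;
    # the fraction of near-noiseless channels approaches capacity 1 - eps
    good = sum(1 for z in zs if z < 1e-3)
    print(good / len(zs))
```

Freezing the inputs of the bad synthetic channels and transmitting information only on the good ones is exactly the polar coding construction.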
On the similarities between generalized rank and Hamming weights and their applications to network coding
Rank weights and generalized rank weights have been proven to characterize
error and erasure correction, and information leakage in linear network coding,
in the same way as Hamming weights and generalized Hamming weights describe
classical error and erasure correction, and information leakage in wire-tap
channels of type II and code-based secret sharing. Although many similarities
between both cases have been established and proven in the literature, many
other known results in the Hamming case, such as bounds or characterizations of
weight-preserving maps, have not been translated to the rank case yet, or in
some cases have been proven after developing a different machinery. The aim of
this paper is to further relate both weights and generalized weights, to show
that the results and proofs in both cases are essentially the same, and to
examine the significance of these similarities in network coding. Some of the
new results in the rank case also have new consequences in the Hamming case.
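For concreteness, the $r$-th generalized Hamming weight of a linear code (in Wei's sense) is the minimum support size over all $r$-dimensional subcodes; the rank weights discussed above play the analogous role with rank-metric supports. A brute-force sketch for small binary codes (all names and the example generator matrix are illustrative, not from the paper):

```python
from itertools import combinations, product

def gf2_span(rows):
    """All F_2-linear combinations of the given rows (tuples of 0/1)."""
    n = len(rows[0])
    return {tuple(sum(c * r[i] for c, r in zip(coeffs, rows)) % 2
                  for i in range(n))
            for coeffs in product((0, 1), repeat=len(rows))}

def ghw(gen, r):
    """r-th generalized Hamming weight: minimum support size of an
    r-dimensional subcode, found by brute force over codeword subsets."""
    code = sorted(gf2_span(gen))
    best = None
    for rows in combinations([c for c in code if any(c)], r):
        sub = gf2_span(list(rows))
        if len(sub) != 2 ** r:  # chosen codewords were dependent; skip
            continue
        support = sum(1 for i in range(len(gen[0])) if any(c[i] for c in sub))
        best = support if best is None else min(best, support)
    return best

# Example: the [4, 2] binary code generated by 1100 and 0011
G = [(1, 1, 0, 0), (0, 0, 1, 1)]
print(ghw(G, 1), ghw(G, 2))  # the code's weight hierarchy
```

Here `ghw(G, 1)` recovers the ordinary minimum distance, and the full hierarchy `ghw(G, 1), ghw(G, 2), ...` is what characterizes information leakage in wire-tap channels of type II, as the abstract notes.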
Sequential Gradient Coding For Straggler Mitigation
In distributed computing, slower nodes (stragglers) usually become a
bottleneck. Gradient Coding (GC), introduced by Tandon et al., is an efficient
technique that uses principles of error-correcting codes to distribute gradient
computation in the presence of stragglers. In this paper, we consider the
distributed computation of a sequence of gradients $g_1, g_2, \ldots, g_J$,
where processing of each gradient $g_t$ starts in round-$t$ and finishes by
round-$(t + T)$. Here $T \ge 0$ denotes a delay parameter. For the GC scheme,
coding is only across computing nodes and this results in a solution where
$T = 0$. On the other hand, having $T > 0$ allows for designing schemes which
exploit the temporal dimension as well. In this work, we propose two schemes
that demonstrate improved performance compared to GC. Our first scheme combines
GC with selective repetition of previously unfinished tasks and achieves
improved straggler mitigation. In our second scheme, which constitutes our main
contribution, we apply GC to a subset of the tasks and repetition for the
remainder of the tasks. We then multiplex these two classes of tasks across
workers and rounds in an adaptive manner, based on past straggler patterns.
Using theoretical analysis, we demonstrate that our second scheme achieves
significant reduction in the computational load. In our experiments, we study a
practical setting of concurrently training multiple neural networks over an AWS
Lambda cluster involving 256 worker nodes, where our framework naturally
applies. We demonstrate that the latter scheme can yield a 16% improvement in
runtime over the baseline GC scheme, in the presence of naturally occurring,
non-simulated stragglers.
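As background, the baseline GC idea can be illustrated with the fractional-repetition variant from Tandon et al.: workers are split into groups of $s + 1$, every worker in a group computes the same partial-gradient sum, so any $s$ stragglers leave at least one survivor per group. The sequential and adaptive schemes above build on this baseline; the sketch below is a toy single-round illustration with made-up names, not the paper's scheme:

```python
# Toy fractional-repetition gradient coding (the baseline GC idea from
# Tandon et al.). n workers tolerate any s stragglers: groups of s + 1
# workers replicate the same partial-gradient sum, and the master only
# needs one surviving worker per group to recover the full gradient.

def gc_round(partial_grads, n, s, stragglers):
    """Recover the full gradient sum despite up to s straggling workers."""
    assert n % (s + 1) == 0 and len(partial_grads) == n
    group_size = s + 1  # replication factor
    groups = [range(i * group_size, (i + 1) * group_size)
              for i in range(n // group_size)]
    total = 0.0
    for grp in groups:
        alive = [w for w in grp if w not in stragglers]
        assert alive, "more than s stragglers hit a single group"
        # any surviving worker in the group returns this same partial sum
        total += sum(partial_grads[p] for p in grp)
    return total

if __name__ == "__main__":
    grads = [float(i) for i in range(12)]  # 12 partial gradients, 12 workers
    # s = 3: groups of 4 workers, so any 3 stragglers are tolerated
    print(gc_round(grads, n=12, s=3, stragglers={0, 5, 11}) == sum(grads))
```

The schemes in the paper improve on this by also coding across rounds (the temporal dimension, $T > 0$) and by adapting the task split to observed straggler patterns, which plain replication cannot do.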