Universal secure rank-metric coding schemes with optimal communication overheads
We study the problem of reducing the communication overhead from a noisy
wire-tap channel or storage system where data is encoded as a matrix, when more
columns (or their linear combinations) are available. We present applications to reducing communication overheads in universal secure linear network coding and in secure distributed storage with crisscross errors and erasures, in the presence of a wire-tapper. Our main contribution is a
method to transform coding schemes based on linear rank-metric codes, with
certain properties, to schemes with lower communication overheads. By applying
this method to pairs of Gabidulin codes, we obtain coding schemes with optimal
information rate with respect to their security and rank error correction
capability, and with universally optimal communication overheads, when $n \leq m$, where $n$ and $m$ denote the number of columns and the number of rows, respectively. Moreover, our method can be applied to other families of maximum rank distance codes when $n > m$. The downside of the method is that it generally expands the packet length, but some practical instances come at no cost.
Comment: 21 pages, LaTeX; parts of this paper have been accepted for presentation at the IEEE International Symposium on Information Theory, Aachen, Germany, June 2017
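The rank metric behind these schemes measures the distance between two $m \times n$ matrices as the rank of their difference, so a crisscross error confined to a few rows and columns has small rank weight no matter how many entries it corrupts. As a minimal illustrative sketch (not the paper's construction), the following Python computes rank distance over GF(2) by Gaussian elimination; the field, dimensions, and error pattern are arbitrary choices for the demo.

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), via Gaussian elimination mod 2."""
    M = np.array(M, dtype=np.uint8) % 2
    rows, cols = M.shape
    rank = 0
    for col in range(cols):
        # Find a pivot row with a 1 in this column, at or below `rank`.
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate (XOR = mod-2 addition)
        rank += 1
    return rank

def rank_distance(X, Y):
    """Rank distance d_R(X, Y) = rank(X - Y); over GF(2), X - Y is X XOR Y."""
    return rank_gf2(np.bitwise_xor(X, Y))

rng = np.random.default_rng(0)
m, n = 8, 6                                   # rows x columns, so n <= m
C = rng.integers(0, 2, size=(m, n), dtype=np.uint8)   # stand-in codeword matrix

# A crisscross error: corrupt one full row and one full column arbitrarily.
E = np.zeros((m, n), dtype=np.uint8)
E[3, :] = rng.integers(0, 2, size=n, dtype=np.uint8)
E[:, 1] = rng.integers(0, 2, size=m, dtype=np.uint8)

R = np.bitwise_xor(C, E)                      # received matrix
print("Hamming weight of the error:", int(E.sum()))        # can be up to m + n - 1
print("rank distance d_R(C, R):", rank_distance(C, R))     # at most 2
```

Since any pattern confined to $a$ rows and $b$ columns has rank at most $a + b$, a code that corrects rank errors of weight up to $t$ absorbs every crisscross pattern with $a + b \leq t$, which is what the distributed-storage application exploits.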
A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication
This paper has two contributions. First, we propose a novel coded matrix
multiplication technique called Generalized PolyDot codes that advances on
existing methods for coded matrix multiplication under storage and
communication constraints. This technique uses "garbage alignment," i.e.,
aligning computations in coded computing that are not a part of the desired
output. Generalized PolyDot codes bridge between Polynomial codes and MatDot
codes, trading off between recovery threshold and communication costs. Second,
we demonstrate that Generalized PolyDot can be used for training large Deep
Neural Networks (DNNs) on unreliable nodes prone to soft-errors. This requires
us to address three additional challenges: (i) prohibitively large overhead of
coding the weight matrices in each layer of the DNN at each iteration; (ii)
nonlinear operations during training, which are incompatible with linear
coding; and (iii) not assuming presence of an error-free master node, requiring
us to architect a fully decentralized implementation without any "single point
of failure." We allow all primary DNN training steps, namely, matrix
multiplication, nonlinear activation, Hadamard product, and update steps as
well as the encoding/decoding to be error-prone. We consider the case of mini-batch size $B = 1$, as well as $B > 1$, leveraging coded matrix-vector products and matrix-matrix products, respectively. The problem of DNN training
under soft-errors also motivates an interesting, probabilistic error model
under which a real number $(N, K)$ MDS code is shown to correct $N - K - 1$ errors with probability $1$, as compared to $\lfloor (N-K)/2 \rfloor$ errors for the more conventional, adversarial error model. We also demonstrate that our
proposed strategy can provide unbounded gains in error tolerance over a
competing replication strategy and a preliminary MDS-code-based strategy for
both these error models.
Comment: Presented in part at the IEEE International Symposium on Information Theory 2018 (Submission Date: Jan 12 2018); currently under review at the IEEE Transactions on Information Theory
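To make the bridged code families concrete, here is a minimal, illustrative NumPy sketch of MatDot-style coded matrix multiplication, the low-recovery-threshold endpoint of the Polynomial-to-MatDot trade-off that Generalized PolyDot spans; it is a sketch under assumed parameters (block count, worker count, and evaluation points are arbitrary), not the paper's implementation. $A$ is cut into $m$ column blocks and $B$ into $m$ row blocks, each worker multiplies its two encoded shares, and the master recovers $AB$ as the coefficient of $x^{m-1}$ after interpolating from any $2m - 1$ workers.

```python
import numpy as np

m = 2                  # blocks per matrix; recovery threshold is 2*m - 1 = 3
rng = np.random.default_rng(1)

A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 4))

A_blocks = np.hsplit(A, m)        # A = [A_0 | A_1], column blocks
B_blocks = np.vsplit(B, m)        # B = [B_0; B_1], row blocks

def encode(x):
    """Shares p_A(x) = sum_i A_i x^i and p_B(x) = sum_j B_j x^(m-1-j)."""
    pA = sum(Ai * x**i for i, Ai in enumerate(A_blocks))
    pB = sum(Bj * x**(m - 1 - j) for j, Bj in enumerate(B_blocks))
    return pA, pB

# Each worker multiplies its shares; the product is an evaluation of a
# matrix polynomial of degree 2m - 2 whose x^(m-1) coefficient equals
# sum_i A_i B_i = A @ B (the useful term; the rest is "garbage").
points = [1.0, 2.0, 3.0, 4.0, 5.0]
results = {}
for x in points:
    pA, pB = encode(x)
    results[x] = pA @ pB

# Suppose the workers at x = 2 and x = 4 fail: decode from any 2m - 1 = 3.
survivors = [1.0, 3.0, 5.0]
V = np.vander(np.array(survivors), 2 * m - 1, increasing=True)  # Vandermonde
evals = np.stack([results[x] for x in survivors])               # shape (3, 4, 4)

# Interpolate the coefficients entrywise, then keep the x^(m-1) coefficient.
coeffs = np.linalg.solve(V, evals.reshape(len(survivors), -1))
C = coeffs[m - 1].reshape(A.shape[0], B.shape[1])

print("max |C - A @ B| =", np.max(np.abs(C - A @ B)))  # floating-point round-off
```

MatDot minimizes the recovery threshold, but each worker returns a full-size product; Polynomial codes return smaller blocks at a higher threshold. Generalized PolyDot trades between the two, which is the recovery-threshold-versus-communication trade-off the abstract describes.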