Hierarchical Coding for Distributed Computing
Coding for distributed computing supports low-latency computation by
relieving the burden of straggling workers. While most existing works assume a
simple master-worker model, we consider a hierarchical computational structure
consisting of groups of workers, motivated by the need to reflect the
architectures of real-world distributed computing systems. In this work, we
propose a hierarchical coding scheme for this model and analyze its
decoding cost and expected computation time. Specifically, we first provide
upper and lower bounds on the expected computing time of the proposed scheme.
We also show that our scheme enables efficient parallel decoding, thus reducing
decoding costs by orders of magnitude over non-hierarchical schemes. When
considering both decoding cost and computing time, the proposed hierarchical
coding is shown to outperform existing schemes in many practical scenarios.
Comment: 7 pages; part of the paper is submitted to ISIT201
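The straggler-mitigation idea behind coded computing can be sketched with a plain MDS (Vandermonde) code; the parameters, the linear job A @ x, and the master-worker setup below are illustrative assumptions, not the paper's hierarchical scheme:

```python
import numpy as np

# Assumed toy parameters: n workers, any k of whose results suffice.
n, k = 5, 3
rng = np.random.default_rng(0)

# Split a data matrix row-wise into k blocks.
A = rng.standard_normal((6, 4))
blocks = np.split(A, k)                  # k blocks of shape (2, 4)

# (n x k) Vandermonde encoding matrix: any k rows are invertible
# because the evaluation points 1..n are distinct.
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)

# Worker i stores the coded block sum_j G[i, j] * blocks[j].
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker multiplies its coded block by the same input vector x.
x = rng.standard_normal(4)
results = [c @ x for c in coded]

# Suppose workers {0, 2, 4} finish first: decode from any k results,
# ignoring the two stragglers entirely.
fast = [0, 2, 4]
stacked = np.stack([results[i] for i in fast])   # shape (k, 2)
decoded = np.linalg.solve(G[fast, :], stacked)   # rows are blocks[j] @ x

assert np.allclose(decoded.ravel(), A @ x)
```

Decoding here is a single k x k solve at the master; the hierarchical scheme in the paper instead spreads such decoding across groups, which is the source of its claimed cost reduction.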
Network Coding for Computing: Cut-Set Bounds
The following \textit{network computing} problem is considered. Source nodes
in a directed acyclic network generate independent messages and a single
receiver node computes a target function of the messages. The objective is
to maximize the average number of times the target function can be computed per
network usage,
i.e., the ``computing capacity''. The \textit{network coding} problem for a
single-receiver network is a special case of the network computing problem in
which all of the source messages must be reproduced at the receiver. For
network coding with a single receiver, routing is known to achieve the capacity
by achieving the network \textit{min-cut} upper bound. We extend the definition
of min-cut to the network computing problem and show that the min-cut is still
an upper bound on the maximum achievable rate and is tight for computing (using
coding) any target function in multi-edge tree networks and for computing
linear target functions in any network. We also study the bound's tightness for
different classes of target functions. In particular, we give a lower bound on
the computing capacity in terms of the Steiner tree packing number and a
different bound for symmetric functions. We also show that for certain networks
and target functions, the computing capacity can be less than an arbitrarily
small fraction of the min-cut bound.
Comment: Submitted to the IEEE Transactions on Information Theory (Special
Issue on Facets of Coding Theory: from Algorithms to Networks); Revised on
Aug 9, 201
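The gap between computing a function and reproducing all messages can be seen in a minimal toy instance (my own illustration, not an example from the paper): two one-bit sources feed a relay whose single unit-capacity edge reaches the receiver, and the receiver only wants the XOR of the two bits.

```python
import itertools

def relay_compute(x1: int, x2: int) -> int:
    # Computing strategy: the relay applies the target function itself
    # and forwards a single bit -- 1 edge use per computation.
    return x1 ^ x2

def relay_route(x1: int, x2: int) -> tuple:
    # Routing strategy: reproduce both source messages at the receiver,
    # as in classical single-receiver network coding -- 2 edge uses.
    return (x1, x2)

# Both strategies let the receiver obtain x1 XOR x2 for every input pair.
for x1, x2 in itertools.product([0, 1], repeat=2):
    assert relay_compute(x1, x2) == x1 ^ x2
    a, b = relay_route(x1, x2)
    assert a ^ b == x1 ^ x2
```

Per use of the bottleneck edge, in-network computation achieves rate 1 while routing achieves only 1/2, which is the kind of separation the cut-set analysis in the paper makes precise.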
Leveraging Coding Techniques for Speeding up Distributed Computing
Large scale clusters leveraging distributed computing frameworks such as
MapReduce routinely process data on the order of petabytes or more.
The sheer size of the data precludes the processing of the data on a single
computer. The philosophy in these methods is to partition the overall job into
smaller tasks that are executed on different servers; this is called the map
phase. This is followed by a data shuffling phase where appropriate data is
exchanged between the servers. The final, so-called reduce phase completes the
computation.
One potential approach for reducing the overall execution time, explored in
prior work, is to exploit a natural tradeoff between computation and
communication. Specifically, the idea is to run redundant copies of map tasks
that are placed on judiciously chosen servers. The shuffle phase exploits the
location of the nodes and utilizes coded transmission. The main drawback of
this approach is that it requires the original job to be split into a number of
map tasks that grows exponentially in the system parameters. This is
problematic, as we demonstrate that splitting jobs too finely can in fact
adversely affect the overall execution time.
In this work we show that one can simultaneously obtain low communication
loads while ensuring that jobs do not need to be split too finely. Our approach
uncovers a deep relationship between this problem and a class of combinatorial
structures called resolvable designs. Appropriate interpretation of resolvable
designs can allow for the development of coded distributed computing schemes
where the splitting levels are exponentially lower than prior work. We present
experimental results obtained on Amazon EC2 clusters for a widely known
distributed algorithm, namely TeraSort. We obtain over a 4.69x improvement in
speedup over the baseline approach and more than 2.6x over the current state of
the art.
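The coded-shuffle gain that this line of work builds on can be demonstrated on a minimal assumed instance (3 servers, replication r = 2, one file batch per server pair), rather than the paper's resolvable-design construction:

```python
# Toy setup (my own assumption, not the paper's scheme): batch F_ij is
# mapped by servers i and j, server k reduces function q_k, and each
# server is missing exactly the intermediate value from the one batch
# it did not map. Packets are 8-bit integers.
p = {1: 0b10110110,   # v[q1][F23], needed by server 1, known to 2 and 3
     2: 0b01011100,   # v[q2][F13], needed by server 2, known to 1 and 3
     3: 0b11100011}   # v[q3][F12], needed by server 3, known to 1 and 2

def halves(x):        # split an 8-bit packet into two 4-bit halves
    return x >> 4, x & 0xF

h = {k: halves(v) for k, v in p.items()}

# Each server multicasts the XOR of one half-packet needed by each of
# the other two servers; each recipient cancels the half it already has.
tx = {1: h[2][0] ^ h[3][0],   # serves servers 2 and 3
      2: h[1][0] ^ h[3][1],   # serves servers 1 and 3
      3: h[1][1] ^ h[2][1]}   # serves servers 1 and 2

# Server 1 recovers its missing packet from the multicasts of 2 and 3.
p1_hi = tx[2] ^ h[3][1]       # server 1 mapped F12, so it knows p[3]
p1_lo = tx[3] ^ h[2][1]       # server 1 mapped F13, so it knows p[2]
assert (p1_hi << 4) | p1_lo == p[1]

# Shuffle load: 3 half-packet multicasts (1.5 packets total) versus
# 3 uncoded unicasts (3 packets) -- a 2x reduction on this instance.
```

The drawback the paper targets is that driving this gain up in prior schemes forces the map phase to be split into exponentially many tasks; the resolvable-design construction keeps the multicast gain while splitting far more coarsely.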