Block-Diagonal and LT Codes for Distributed Computing With Straggling Servers
We propose two coded schemes for the distributed computing problem of
multiplying a matrix by a set of vectors. The first scheme is based on
partitioning the matrix into submatrices and applying maximum distance
separable (MDS) codes to each submatrix. For this scheme, we prove that up to a
given number of partitions the communication load and the computational delay
(not including the encoding and decoding delay) are identical to those of the
scheme recently proposed by Li et al., based on a single, long MDS code.
However, due to the use of shorter MDS codes, our scheme yields a significantly
lower overall computational delay when the delay incurred by encoding and
decoding is also considered. We further propose a second coded scheme based on
Luby Transform (LT) codes under inactivation decoding. Interestingly, LT codes
may reduce the delay over the partitioned scheme at the expense of an increased
communication load. We also consider distributed computing under a deadline and
show numerically that the proposed schemes outperform other schemes in the
literature, with the LT code-based scheme yielding the best performance for the
scenarios considered. Comment: To appear in IEEE Transactions on Communications.
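The partitioned scheme's core mechanism, an MDS code applied per submatrix with decoding from any k finished workers, can be sketched over the reals with a Vandermonde generator standing in for the MDS code. All sizes, the code choice, and the straggler pattern below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: y = A @ x with n workers, tolerating n - k stragglers.
# Illustrative sizes; k must divide the number of rows of A.
m, d = 6, 4          # A is m x d
n, k = 5, 3          # n workers; any k finished workers suffice

A = rng.standard_normal((m, d))
x = rng.standard_normal(d)

# Partition A row-wise into k submatrices and encode them with a real
# Vandermonde generator (MDS for distinct evaluation points).
blocks = np.split(A, k)                          # k blocks, each (m//k) x d
points = np.arange(1, n + 1, dtype=float)
G = np.vander(points, k, increasing=True)        # n x k generator matrix
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Worker i computes coded[i] @ x; suppose workers 1 and 3 straggle.
finished = [0, 2, 4]
results = {i: coded[i] @ x for i in finished}

# Decode from any k results by inverting the corresponding k x k
# submatrix of G, which is invertible by the MDS property.
Y = np.stack([results[i] for i in finished])     # k x (m//k)
decoded = np.linalg.solve(G[finished, :], Y)     # row j equals blocks[j] @ x
y_hat = decoded.reshape(-1)

assert np.allclose(y_hat, A @ x)
```

Any k of the n coded results suffice, so the n - k slowest workers never delay completion; the abstract's point is that many short per-submatrix codes keep the encoding and decoding delay low compared to a single long MDS code.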
Straggler-Resilient Distributed Computing
The number and scale of distributed computing systems being built have increased significantly in recent years. Primarily, that is because: i) our computing needs are increasing at a much higher rate than computers are becoming faster, so we need to use more of them to meet demand, and ii) systems that are fundamentally distributed, e.g., because the components that make them up are geographically distributed, are becoming increasingly prevalent. This paradigm shift is the source of many engineering challenges. Among them is the straggler problem, caused by latency variations in distributed systems, where faster nodes are held up by slower ones.
The straggler problem can significantly impair the effectiveness of distributed systems—a single node experiencing a transient outage (e.g., due to being overloaded) can lock up an entire system.
In this thesis, we consider schemes for making a range of computations resilient against such stragglers, thus allowing a distributed system to proceed in spite of some nodes failing to respond on time. The schemes we propose are tailored to particular computations: distributed matrix-vector multiplication, which is a fundamental operation in many computing applications; distributed machine learning, in the form of a straggler-resilient first-order optimization method; and distributed tracking of a time-varying process (e.g., tracking the locations of a set of vehicles for a collision avoidance system). The proposed schemes exploit redundancy that is either introduced as part of the scheme or exists naturally in the underlying problem to compensate for missing results, i.e., they are a form of forward error correction for computations. Further, for one of the proposed schemes we exploit redundancy to also improve the effectiveness of multicasting, thus reducing the amount of data that needs to be communicated over the network. Such inter-node communication, like the straggler problem, can significantly limit the effectiveness of distributed systems. For the schemes we propose, we show significant improvements in latency and reliability compared to previous schemes. Doctoral thesis.
Coded Distributed Tracking
We consider the problem of tracking the state of a process that evolves over
time in a distributed setting, with multiple observers each observing parts of
the state. This is a fundamental information processing problem with a wide
range of applications. We propose a cloud-assisted scheme where the tracking is
performed over the cloud. In particular, to provide timely and accurate
updates, and alleviate the straggler problem of cloud computing, we propose a
coded distributed computing approach where coded observations are distributed
over multiple workers. The proposed scheme is based on a coded version of the
Kalman filter that operates on data encoded with an erasure correcting code,
such that the state can be estimated from partial updates computed by a subset
of the workers. We apply the proposed scheme to the problem of tracking
multiple vehicles. We show that replication achieves significantly higher
accuracy than the corresponding uncoded scheme. The use of maximum distance
separable (MDS) codes further improves accuracy for larger update intervals. In
both cases, the proposed scheme approaches the accuracy of an ideal centralized
scheme when the update interval is large enough. Finally, we observe a
trade-off between age-of-information and estimation accuracy for MDS codes. Comment: Accepted for publication at IEEE GLOBECOM 201
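For context, the uncoded building block is the standard Kalman filter predict/update cycle shown below; the abstract's scheme encodes the observations with an erasure correcting code so that this update can be completed from partial results computed by a subset of workers. The constant-velocity model and all parameters here are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 2                       # state: [position, velocity]
F = np.array([[1.0, 1.0],   # constant-velocity state transition
              [0.0, 1.0]])
H = np.eye(d)               # measurement matrix (full-state observations)
Q = 0.01 * np.eye(d)        # assumed process noise covariance
R = 0.1 * np.eye(d)         # assumed measurement noise covariance

def kalman_step(s, P, z):
    """One predict/update cycle of a standard Kalman filter."""
    s_pred = F @ s                       # predict state
    P_pred = F @ P @ F.T + Q             # predict covariance
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    s_new = s_pred + K @ (z - H @ s_pred)
    P_new = (np.eye(d) - K @ H) @ P_pred
    return s_new, P_new

# Track a constant-velocity target from noisy measurements.
s_true = np.array([0.0, 1.0])
s, P = np.zeros(d), np.eye(d)
for _ in range(50):
    s_true = F @ s_true
    z = s_true + 0.1 * rng.standard_normal(d)
    s, P = kalman_step(s, P, z)

assert np.linalg.norm(s - s_true) < 0.5  # the filter locks onto the target
```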
A Droplet Approach Based on Raptor Codes for Distributed Computing with Straggling Servers
We propose a coded distributed computing scheme based on Raptor codes to address the straggler problem. In particular, we consider a scheme where each server computes intermediate values, referred to as droplets, that are either stored locally or sent over the network. Once enough droplets are collected, the computation can be completed. Compared to previous schemes in the literature, our proposed scheme achieves lower computational delay when the decoding time is taken into account.
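The droplet mechanism can be sketched with random real-valued linear combinations standing in for actual Raptor coding: servers keep emitting coded droplets, and the computation completes as soon as any decodable set has been collected, regardless of which servers straggle. Everything below (sizes, Gaussian coefficients, least-squares decoding) is an illustrative stand-in for the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Goal: y = A @ x, assembled from rateless "droplets".
m, d, k = 6, 4, 3
A = rng.standard_normal((m, d))
x = rng.standard_normal(d)

blocks = np.split(A, k)                  # k row blocks of A
partials = [b @ x for b in blocks]       # the values droplets encode

# Each droplet is a random linear combination of the partial results
# (a stand-in for a Raptor-coded symbol). Collect until decodable.
coeffs, drops = [], []
while len(coeffs) < k or np.linalg.matrix_rank(np.array(coeffs)) < k:
    c = rng.standard_normal(k)           # droplet's combination coefficients
    coeffs.append(c)
    drops.append(sum(c[j] * partials[j] for j in range(k)))

# Decode by solving the linear system defined by the collected droplets.
C, P = np.array(coeffs), np.stack(drops)
decoded, *_ = np.linalg.lstsq(C, P, rcond=None)
y_hat = decoded.reshape(-1)

assert np.allclose(y_hat, A @ x)
```

It is irrelevant which servers produced the droplets; only the number (and decodability) of collected droplets matters, which is what makes the approach straggler-resilient.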
Private Edge Computing for Linear Inference Based on Secret Sharing
We consider an edge computing scenario where users want to perform a linear
computation on local, private data and a network-wide, public matrix. The users
offload computations to edge servers located at the edge of the network, but do
not want the servers, or any other party with access to the wireless links, to
gain any information about their data. We provide a scheme that guarantees
information-theoretic user data privacy against an eavesdropper with access to
a number of edge servers or their corresponding communication links. The
proposed scheme utilizes secret sharing and partial replication to provide
privacy, mitigate the effect of straggling servers, and allow for joint
beamforming opportunities in the download phase, in order to minimize the
overall latency, consisting of upload, computation, and download latencies. Comment: 6 pages, 4 figures, submitted to the 2020 IEEE Global Communications Conference (IEEE GLOBECOM).
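The privacy mechanism alone can be illustrated with plain additive secret sharing: the user splits its private vector into random shares that sum to the true value, each edge server applies the public matrix to one share, and the user adds the replies. The actual scheme additionally uses partial replication for straggler resilience and joint beamforming; the sketch below covers only the sharing step, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(3)

d, m, n = 4, 3, 3
W = rng.standard_normal((m, d))      # network-wide public matrix
x = rng.standard_normal(d)           # user's local, private data

# Split x into n additive shares that sum to x. Each of the first n - 1
# shares is pure noise, so a server holding one of them learns nothing
# about x from it. (Perfect information-theoretic secrecy requires
# working over a finite field, as in proper secret-sharing schemes.)
shares = [rng.standard_normal(d) for _ in range(n - 1)]
shares.append(x - sum(shares))

# Each edge server applies the public matrix to its share; the user adds
# the replies to recover the private linear computation W @ x.
replies = [W @ s for s in shares]
y = sum(replies)

assert np.allclose(y, W @ x)
```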
Randomized Polar Codes for Anytime Distributed Machine Learning
We present a novel distributed computing framework that is robust to slow
compute nodes, and is capable of both approximate and exact computation of
linear operations. The proposed mechanism integrates the concepts of randomized
sketching and polar codes in the context of coded computation. We propose a
sequential decoding algorithm designed to handle real-valued data while
maintaining low computational complexity for recovery. Additionally, we provide
an anytime estimator that can generate provably accurate estimates even when
the set of available node outputs is not decodable. We demonstrate the
potential applications of this framework in various contexts, such as
large-scale matrix multiplication and black-box optimization. We present the
implementation of these methods on a serverless cloud computing system and
provide numerical results to demonstrate their scalability in practice,
including ImageNet-scale computations.
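The randomized-sketching half of the framework can be shown in isolation: a Gaussian sketch gives an approximation of a matrix product whose error shrinks as the sketch width grows, which is what enables approximate (anytime) answers before an exact result is decodable. The polar-coded exact path is not shown, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

m, d, p = 20, 500, 15
A = rng.standard_normal((m, d))
B = rng.standard_normal((d, p))

def sketched_product(A, B, s, rng):
    """Approximate A @ B through a Gaussian sketch of width s."""
    S = rng.standard_normal((A.shape[1], s)) / np.sqrt(s)
    return (A @ S) @ (S.T @ B)       # unbiased since E[S @ S.T] = I

exact = A @ B
err_small = np.linalg.norm(sketched_product(A, B, 100, rng) - exact)
err_large = np.linalg.norm(sketched_product(A, B, 5000, rng) - exact)

# A wider sketch gives a more accurate approximation, so cheap partial
# answers can be returned at any time and refined as budget allows.
assert err_large < err_small
```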