Latency Analysis of Coded Computation Schemes over Wireless Networks
Large-scale distributed computing systems face two major bottlenecks that
limit their scalability: straggler delay caused by the variability of
computation times at different worker nodes and communication bottlenecks
caused by shuffling data across many nodes in the network. Recently, it has
been shown that codes can provide significant gains in overcoming these
bottlenecks. In particular, optimal coding schemes for minimizing latency in
distributed computation of linear functions and mitigating the effect of
stragglers were proposed for a wired network, where the workers can
simultaneously transmit messages to a master node without interference. In this
paper, we focus on the problem of coded computation over a wireless
master-worker setup with straggling workers, where only one worker can transmit
the result of its local computation back to the master at a time. We consider three
asymptotic regimes (determined by how the communication and computation times
are scaled with the number of workers) and precisely characterize the total
run-time of the distributed algorithm and the optimal coding strategy in each
regime. In particular, for the regime of practical interest where the
computation and communication times of the distributed computing algorithm are
comparable, we show that the total run-time approaches a simple lower bound
that decouples computation and communication, and demonstrate that coded
schemes are significantly faster than uncoded schemes.
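The straggler-mitigation idea behind such coded schemes can be sketched as follows: the master encodes row-blocks of the matrix with an (n, k) MDS-like code, so the result is recoverable from any k of the n workers. This is a minimal illustrative sketch, assuming a random real-valued generator matrix (whose k-row submatrices are invertible with high probability) rather than the paper's exact construction.

```python
# Sketch of (n, k) coded distributed matrix-vector multiplication for
# straggler mitigation. The random real-valued generator matrix G is an
# illustrative assumption, not the paper's specific code construction.
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3              # n workers; any k returned results suffice
rows, cols = 6, 4        # A is split into k row-blocks of 2 rows each
A = rng.standard_normal((rows, cols))
x = rng.standard_normal(cols)

blocks = np.split(A, k)                  # k uncoded row-blocks of A
G = rng.standard_normal((n, k))          # generator matrix: any k rows
                                         # are invertible w.h.p.
coded = [sum(G[j, i] * blocks[i] for i in range(k)) for j in range(n)]

# Each worker j would compute coded[j] @ x. Suppose workers 1 and 4
# straggle, so the master only receives these k results:
fast = [0, 2, 3]
results = np.stack([coded[j] @ x for j in fast])

# Decode: solve the k x k linear system to recover the block products,
# then concatenate them into the full product A @ x.
decoded = np.linalg.solve(G[fast], results).reshape(-1)
assert np.allclose(decoded, A @ x)
```

The point of the decode step is that the master never waits for the slowest n - k workers; any k responses determine the answer.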
The Green Choice: Learning and Influencing Human Decisions on Shared Roads
Autonomous vehicles have the potential to increase the capacity of roads via
platooning, even when human drivers and autonomous vehicles share roads.
However, when users of a road network choose their routes selfishly, the
resulting traffic configuration may be very inefficient. Because of this, we
consider how to influence human decisions so as to decrease congestion on these
roads. We consider a network of parallel roads with two modes of
transportation: (i) human drivers who will choose the quickest route available
to them, and (ii) ride hailing service which provides an array of autonomous
vehicle ride options, each with different prices, to users. In this work, we
seek to design these prices so that when autonomous service users choose from
these options and human drivers selfishly choose their resulting routes, road
usage is maximized and transit delay is minimized. To do so, we formalize a
model of how autonomous service users make choices between routes with
different price/delay values. Developing a preference-based algorithm to learn
the preferences of the users, and using a vehicle flow model related to the
Fundamental Diagram of Traffic, we formulate a planning optimization to
maximize a social objective and demonstrate the benefit of the proposed routing
and learning scheme. Comment: Submitted to CDC 201
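A model of how users trade off price against delay is central to the pricing design above. As a minimal sketch, one common choice-model form is a logit (softmax) rule over option utilities; the linear utility -alpha*price - beta*delay and the parameter values here are illustrative assumptions, not the paper's learned preference model.

```python
# Minimal logit-style choice model for users picking among ride options,
# each described by a (price, delay) pair. The utility form and the
# parameters alpha, beta are illustrative assumptions.
import math

def choice_probabilities(options, alpha=1.0, beta=0.5):
    """options: list of (price, delay) pairs.
    Returns softmax probabilities of utilities -alpha*price - beta*delay."""
    utils = [-alpha * p - beta * d for p, d in options]
    m = max(utils)                         # subtract max for stability
    exps = [math.exp(u - m) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

# Three hypothetical options: cheap/slow, mid, expensive/fast.
probs = choice_probabilities([(3.0, 10.0), (5.0, 4.0), (8.0, 2.0)])
assert abs(sum(probs) - 1.0) < 1e-12
```

Under these illustrative parameters the middle option has the highest utility (-7 versus -8 and -9), so it receives the largest choice probability; a planner could then set prices to steer this distribution toward a socially optimal flow.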
Online Coded Caching
We consider a basic content distribution scenario consisting of a single
origin server connected through a shared bottleneck link to a number of users
each equipped with a cache of finite memory. The users issue a sequence of
content requests from a set of popular files, and the goal is to operate the
caches as well as the server such that these requests are satisfied with the
minimum number of bits sent over the shared link. Assuming a basic Markov model
for renewing the set of popular files, we characterize approximately the
optimal long-term average rate of the shared link. We further prove that the
optimal online scheme has approximately the same performance as the optimal
offline scheme, in which the cache contents can be updated based on the entire
set of popular files before each new request. To support these theoretical
results, we propose an online coded caching scheme termed coded least-recently
sent (LRS) and simulate it for a demand time series derived from the dataset
made available by Netflix for the Netflix Prize. For this time series, we show
that the proposed coded LRS algorithm significantly outperforms the popular
least-recently used (LRU) caching algorithm. Comment: 15 page
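For concreteness, here is the LRU baseline that coded LRS is compared against: on a hit the requested file is marked most recently used, and on a miss it is inserted, evicting the least-recently-used file when the cache is full. This is the standard eviction policy, not the paper's coded scheme; the class and method names are illustrative.

```python
# Standard least-recently-used (LRU) cache, the baseline policy that the
# proposed coded LRS scheme is compared against. Names are illustrative.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, key):
        """Return True on a cache hit; on a miss, insert the key and
        evict the least-recently-used entry if over capacity."""
        if key in self.store:
            self.store.move_to_end(key)      # mark as most recently used
            return True
        self.store[key] = None
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return False

cache = LRUCache(2)
hits = [cache.request(k) for k in ["a", "b", "a", "c", "b"]]
# a: miss, b: miss, a: hit, c: miss (evicts b), b: miss
assert hits == [False, False, True, False, False]
```

The final request for "b" misses precisely because inserting "c" evicted it; coded caching schemes aim to beat this per-user eviction logic by exploiting coded multicast over the shared link.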