A Simple Boosting Framework for Transshipment
Transshipment, also known under the names of earth mover's distance,
uncapacitated min-cost flow, or the Wasserstein metric, is an important and
well-studied problem that asks to find a flow of minimum cost that routes a
general demand vector. Adding to its importance, recent advancements in our
understanding of algorithms for transshipment have led to breakthroughs for the
fundamental problem of computing shortest paths. Specifically, the recent
near-optimal $(1+\epsilon)$-approximate single-source shortest path
algorithms in the parallel and distributed settings crucially solve
transshipment as a central step of their approach.
The key property that differentiates transshipment from other similar
problems like shortest path is the so-called \emph{boosting}: one can boost a
(bad) approximate solution to a near-optimal $(1+\epsilon)$-approximate
solution. This conceptually reduces the problem to finding an approximate
solution. However, not all approximations can be boosted -- there have been
several proposed approaches that were shown to be susceptible to boosting, and
a few others where boosting was left as an open question.
The main takeaway of our paper is that any black-box $\alpha$-approximate
transshipment solver that computes a \emph{dual} solution can be boosted to an
$(1+\epsilon)$-approximate solver. Moreover, we significantly simplify and
decouple previous approaches to transshipment (in sequential, parallel, and
distributed settings) by showing all of them (implicitly) obtain approximate
dual solutions.
Our analysis is very simple and relies only on the well-known multiplicative
weights framework. Furthermore, to keep the paper completely self-contained, we
provide a new (and arguably much simpler) analysis of multiplicative weights
that leverages well-known optimization tools to bypass the ad-hoc calculations
used in the standard analyses.
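As a self-contained illustration of the multiplicative weights framework mentioned above (a generic textbook sketch of the Hedge variant, not the paper's analysis), the following keeps one weight per expert and down-weights each expert exponentially in its observed loss; the usual guarantee is that the algorithm's total loss exceeds the best expert's by at most eta*T + ln(n)/eta:

```python
import math

def hedge(loss_rounds, eta):
    """Multiplicative weights (Hedge): play the normalized weights as a
    distribution over experts, then multiply each weight by exp(-eta * loss)."""
    n = len(loss_rounds[0])
    w = [1.0] * n
    total_loss = 0.0
    for losses in loss_rounds:
        s = sum(w)
        p = [wi / s for wi in w]
        # expected loss of the algorithm this round
        total_loss += sum(pi * li for pi, li in zip(p, losses))
        # multiplicative update: w_i <- w_i * exp(-eta * loss_i)
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    best = min(sum(r[i] for r in loss_rounds) for i in range(n))
    return total_loss, best

# Toy run: 2 experts with losses in [0, 1]; the regret total_loss - best
# should stay below eta*T + ln(2)/eta.
rounds = [[0.0, 1.0]] * 50 + [[1.0, 0.0]] * 10
alg, best = hedge(rounds, eta=0.3)
print(alg - best)
```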
(Near) Optimal Adaptivity Gaps for Stochastic Multi-Value Probing
Consider a kidney-exchange application where we want to find a max-matching in a random graph. To find whether an edge e exists, we need to perform an expensive test, in which case the edge e appears independently with a known probability p_e. Given a budget on the total cost of the tests, our goal is to find a testing strategy that maximizes the expected maximum matching size.
The above application is an example of the stochastic probing problem. In general the optimal stochastic probing strategy is difficult to find because it is adaptive: it decides on the next edge to probe based on the outcomes of the previously probed edges. An alternate approach is to show the adaptivity gap is small, i.e., the best non-adaptive strategy always has a value close to the best adaptive strategy. This allows us to focus on designing non-adaptive strategies, which are much simpler. Previous works, however, have focused on Bernoulli random variables that can only capture whether an edge appears or not. In this work we introduce a multi-value stochastic probing problem, which can also model situations where the weight of an edge has a probability distribution over multiple values.
Our main technical contribution is to obtain (near) optimal bounds for the (worst-case) adaptivity gaps for multi-value stochastic probing over prefix-closed constraints. For a monotone submodular function, we show the adaptivity gap is at most 2 and provide a matching lower bound. For a weighted rank function of a k-extendible system (a generalization of the intersection of k matroids), we show the adaptivity gap is between O(k log k) and k. None of these results were known even in the Bernoulli case, where both our upper and lower bounds also apply, thereby resolving an open question of Gupta et al. [Gupta et al., 2017].
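To make the non-adaptive side of the comparison concrete, here is a minimal sketch (a toy Bernoulli instance, not the paper's construction): a non-adaptive strategy fixes a set of edges to probe within the budget up front, and its value is the expected maximum matching among the edges that turn out to be present, computed here by exhaustive enumeration on a 4-cycle:

```python
from itertools import combinations, product

# Toy instance: edges of a 4-cycle, each present independently w.p. p_e.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
p = {e: 0.5 for e in edges}

def matching_size(present):
    # brute-force maximum matching on the tiny realized graph
    for r in range(len(present), 0, -1):
        for sub in combinations(present, r):
            verts = [v for e in sub for v in e]
            if len(set(verts)) == len(verts):  # vertex-disjoint edges
                return r
    return 0

def nonadaptive_value(probed):
    """Exact expected max-matching when a fixed edge set is probed."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(probed)):
        prob, present = 1.0, []
        for e, bit in zip(probed, outcome):
            prob *= p[e] if bit else 1 - p[e]
            if bit:
                present.append(e)
        total += prob * matching_size(present)
    return total

# Best non-adaptive strategy with a budget of 3 probes:
best_order = max(nonadaptive_value(list(o)) for o in combinations(edges, 3))
print(best_order)
```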
Robust Algorithms for the Secretary Problem
In classical secretary problems, a sequence of n elements arrive in a uniformly random order, and we want to choose a single item, or a set of size K. The random order model allows us to escape from the strong lower bounds for the adversarial order setting, and excellent algorithms are known in this setting. However, one worrying aspect of these results is that the algorithms overfit to the model: they are not very robust. Indeed, if a few "outlier" arrivals are adversarially placed in the arrival sequence, the algorithms perform poorly. E.g., Dynkin’s popular 1/e-secretary algorithm is sensitive to even a single adversarial arrival: if the adversary gives one large bid at the beginning of the stream, the algorithm does not select any element at all.
We investigate a robust version of the secretary problem. In the Byzantine Secretary model, we have two kinds of elements: green (good) and red (rogue). The values of all elements are chosen by the adversary. The green elements arrive at times uniformly randomly drawn from [0,1]. The red elements, however, arrive at adversarially chosen times. Naturally, the algorithm does not see these colors: how well can it solve secretary problems?
We show that selecting the highest value red set, or the single largest green element is not possible with even a small fraction of red items. However, on the positive side, we show that these are the only bad cases, by giving algorithms which get value comparable to the value of the optimal green set minus the largest green item. (This benchmark reminds us of regret minimization and digital auctions, where we subtract an additive term depending on the "scale" of the problem.) Specifically, we give an algorithm to pick K elements, which gets within a (1-ε) factor of the above benchmark, as long as K ≥ poly(ε^{-1} log n). We extend this to the knapsack secretary problem, for large knapsack size K.
For the single-item case, an analogous benchmark is the value of the second-largest green item. For value-maximization, we give a poly(log^* n)-competitive algorithm, using a multi-layered bucketing scheme that adaptively refines our estimates of second-max over time. For probability-maximization, we show the existence of a good randomized algorithm, using the minimax principle. We hope that this work will spur further research on robust algorithms for the secretary problem, and for other problems in sequential decision-making, where the existing algorithms are not robust and often tend to overfit to the model.
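The fragility claimed above for Dynkin's 1/e rule is easy to reproduce. The following sketch (a standard textbook implementation, with the cutoff-then-threshold rule) wins roughly a 1/e fraction of random-order trials, yet a single huge adversarial value placed first raises the threshold so that nothing is ever selected:

```python
import random

def dynkin(values):
    """Classic 1/e rule: observe the first n/e arrivals, then accept the
    first later value exceeding everything seen so far.
    Returns the accepted index, or None if nothing is accepted."""
    n = len(values)
    cutoff = int(n / 2.718281828)
    threshold = max(values[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if values[i] > threshold:
            return i
    return None

random.seed(0)
n, trials, wins = 100, 2000, 0
for _ in range(trials):
    vals = random.sample(range(n), n)  # uniformly random arrival order
    i = dynkin(vals)
    if i is not None and vals[i] == n - 1:  # picked the maximum
        wins += 1
print(wins / trials)  # roughly 1/e on random orders

# One adversarial arrival breaks it: a huge first bid means the
# threshold is never exceeded and no element is selected at all.
assert dynkin([10**9] + list(range(99))) is None
```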
Erasure Correction for Noisy Radio Networks
The radio network model is a well-studied model of wireless, multi-hop networks. However, radio networks make the strong assumption that messages are delivered deterministically. The recently introduced noisy radio network model relaxes this assumption by dropping messages independently at random.
In this work we quantify the relative computational power of noisy radio networks and classic radio networks. In particular, given a non-adaptive protocol for a fixed radio network, we show how to reliably simulate this protocol if noise is introduced, with a multiplicative cost of poly(log Delta, log log n) rounds, where n is the number of nodes in the network and Delta is the max degree. Moreover, we demonstrate that, even if the simulated protocol is not non-adaptive, it can be simulated with a multiplicative O(Delta log^2 Delta) cost in the number of rounds. Lastly, we argue that simulations with a multiplicative overhead of o(log Delta) are unlikely to exist by proving that an Omega(log Delta) multiplicative round overhead is necessary under certain natural assumptions.
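A back-of-envelope calculation (not the paper's simulation, which must also handle collisions) shows why independent message drops cost only a logarithmic factor: repeating each transmission O(log n) times drives the per-message failure probability to 1/poly(n), so a union bound over n messages keeps the whole protocol reliable:

```python
import math

def repetitions_needed(p_drop, n, c=2):
    """Repeats k so that one message fails w.p. at most n^-c:
    p_drop^k <= n^-c  <=>  k >= c * ln(n) / ln(1/p_drop)."""
    return math.ceil(c * math.log(n) / math.log(1 / p_drop))

# Each transmission dropped independently w.p. 1/2; with k repeats per
# message, a union bound over n messages bounds total failure by n^(1-c).
n, p = 1024, 0.5
k = repetitions_needed(p, n, c=2)
per_message_failure = p ** k
print(k, per_message_failure * n)  # overhead, total failure bound
```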
Minor Excluded Network Families Admit Fast Distributed Algorithms
Distributed network optimization algorithms, such as minimum spanning tree,
minimum cut, and shortest path, are an active research area in distributed
computing. This paper presents a fast distributed algorithm for such problems
in the CONGEST model, on networks that exclude a fixed minor.
On general graphs, many optimization problems, including the ones mentioned
above, require $\tilde{\Omega}(\sqrt{n})$ rounds of communication in the CONGEST
model, even if the network graph has a much smaller diameter. Naturally, the
next step in algorithm design is to design efficient algorithms which bypass
this lower bound on a restricted class of graphs. Currently, the only known
method of doing so uses the low-congestion shortcut framework of Ghaffari and
Haeupler [SODA'16]. Building off of their work, this paper proves that excluded
minor graphs admit high-quality shortcuts, leading to an $\tilde{O}(D^2)$-round
algorithm for the aforementioned problems, where $D$ is the diameter of the
network graph. To work with excluded minor graph families, we utilize the Graph
Structure Theorem of Robertson and Seymour. To the best of our knowledge, this
is the first time the Graph Structure Theorem has been used for an algorithmic
result in the distributed setting.
Even though the proof is involved, merely showing the existence of good
shortcuts is sufficient to obtain simple, efficient distributed algorithms. In
particular, the shortcut framework can efficiently construct near-optimal
shortcuts and then use them to solve the optimization problems. This, combined
with the very general family of excluded minor graphs, which includes most
other important graph classes, makes this result of significant interest.
Hop-Constrained Oblivious Routing
We prove the existence of an oblivious routing scheme that is
$\mathrm{poly}(\log n)$-competitive in terms of $(\text{congestion} + \text{dilation})$,
thus resolving a well-known question in oblivious routing.
Concretely, consider an undirected network and a set of packets each with its
own source and destination. The objective is to choose a path for each packet,
from its source to its destination, so as to minimize
$(\text{congestion} + \text{dilation})$, defined as follows: The dilation is the maximum path hop-length,
and the congestion is the maximum number of paths that include any single edge.
The routing scheme obliviously and randomly selects a path for each packet
independent of (the existence of) the other packets. Despite this
obliviousness, the selected paths have $(\text{congestion} + \text{dilation})$ within a
$\mathrm{poly}(\log n)$ factor of the best possible value. More precisely, for
any integer hop-bound $h$, this oblivious routing scheme selects paths of
length at most $h \cdot \mathrm{poly}(\log n)$ and is $\mathrm{poly}(\log n)$-competitive
in terms of congestion in comparison to the best possible congestion
achievable via paths of length at most $h$ hops. These paths can
be sampled in polynomial time.
This result can be viewed as an analogue of the celebrated oblivious routing
results of R\"{a}cke [FOCS 2002, STOC 2008], which are $O(\log n)$-competitive
in terms of congestion, but are not competitive in terms of dilation.
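The two quantities being traded off are simple to compute for any fixed choice of paths; the following sketch (an illustrative helper, not part of the routing scheme) measures them for paths given as vertex sequences in an undirected network:

```python
from collections import Counter

def congestion_dilation(paths):
    """paths: list of vertex sequences, one per packet.
    Dilation is the maximum hop-length; congestion is the maximum
    number of paths that include any single (undirected) edge."""
    edge_load = Counter()
    dilation = 0
    for path in paths:
        dilation = max(dilation, len(path) - 1)
        for u, v in zip(path, path[1:]):
            edge_load[frozenset((u, v))] += 1
    congestion = max(edge_load.values(), default=0)
    return congestion, dilation

# Two packets sharing the edge (1, 2): congestion 2, dilation 3.
paths = [[0, 1, 2], [3, 1, 2, 4]]
print(congestion_dilation(paths))
```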
Adaptive-Adversary-Robust Algorithms via Small Copy Tree Embeddings
Embeddings of graphs into distributions of trees that preserve distances in expectation are a cornerstone of many optimization algorithms. Unfortunately, online or dynamic algorithms which use these embeddings seem inherently randomized and ill-suited against adaptive adversaries. In this paper we provide a new tree embedding which addresses these issues by deterministically embedding a graph into a single tree containing O(log n) copies of each vertex while preserving the connectivity structure of every subgraph and O(log^2 n)-approximating the cost of every subgraph. Using this embedding we obtain the first deterministic bicriteria approximation algorithm for the online covering Steiner problem as well as the first poly-log approximations for demand-robust Steiner forest, group Steiner tree and group Steiner forest.
Undirected $(1+\epsilon)$-Shortest Paths via Minor-Aggregates: Near-Optimal Deterministic Parallel & Distributed Algorithms
This paper presents near-optimal deterministic parallel and distributed
algorithms for computing $(1+\epsilon)$-approximate single-source shortest
paths in any undirected weighted graph.
On a high level, we deterministically reduce this and other shortest-path
problems to Minor-Aggregations. A Minor-Aggregation computes an
aggregate (e.g., max or sum) of node-values for every connected component of
some subgraph.
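A sequential sketch of this primitive (illustrative only; the point of the paper is implementing it efficiently in parallel and distributed models) contracts the subgraph's connected components and hands every node the aggregate of the node-values in its component:

```python
from collections import defaultdict

def minor_aggregate(nodes, subgraph_edges, values, agg=max):
    """One Minor-Aggregation: for each connected component of the given
    subgraph, compute agg over the component's node-values and return
    the result for every node."""
    adj = defaultdict(list)
    for u, v in subgraph_edges:
        adj[u].append(v)
        adj[v].append(u)
    result, seen = {}, set()
    for s in nodes:
        if s in seen:
            continue
        # collect one connected component by graph search
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        a = agg(values[u] for u in comp)
        for u in comp:
            result[u] = a
    return result

nodes = [0, 1, 2, 3]
values = {0: 5, 1: 2, 2: 7, 3: 1}
# components {0,1} and {2,3}: every node learns its component's max
print(minor_aggregate(nodes, [(0, 1), (2, 3)], values))
```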
Our reduction immediately implies:
Optimal deterministic parallel (PRAM) algorithms with $\tilde{O}(1)$ depth
and near-linear work.
Universally-optimal deterministic distributed (CONGEST) algorithms, whenever
deterministic Minor-Aggregate algorithms exist. For example, an optimal
$\tilde{O}(D)$-round deterministic CONGEST algorithm for
excluded-minor networks.
Several novel tools developed for the above results are interesting in their
own right:
A local iterative approach for reducing shortest path computations "up to
distance $D$" to computing low-diameter decompositions "up to distance
$D/2$". Compared to the recursive vertex-reduction approach of [Li20],
our approach is simpler, suitable for distributed algorithms, and eliminates
many derandomization barriers.
A simple graph-based $O(\log^2 n)$-competitive $\ell_1$-oblivious routing
based on low-diameter decompositions that can be evaluated in near-linear work.
The previous such routing [ZGY+20] was $n^{o(1)}$-competitive and required
more work.
A deterministic algorithm to round any fractional single-source transshipment
flow into an integral tree solution.
The first distributed algorithms for computing Eulerian orientations.