O(log^2 k / log log k)-Approximation Algorithm for Directed Steiner Tree: A Tight Quasi-Polynomial-Time Algorithm
In the Directed Steiner Tree (DST) problem we are given an n-vertex
directed edge-weighted graph, a root r, and a collection of k terminal
nodes. Our goal is to find a minimum-cost arborescence that contains a directed
path from r to every terminal. We present an O(log^2 k / log log k)-approximation
algorithm for DST that runs in quasi-polynomial time. By adjusting the
parameters in the hardness result of Halperin and Krauthgamer, we show the
matching lower bound of Omega(log^2 k / log log k) for the class of
quasi-polynomial-time algorithms. This is the first improvement on the DST
problem since the classical quasi-polynomial-time O(log^3 k)-approximation
algorithm by Charikar et al. (That paper erroneously claims an O(log^2 k)
approximation due to a mistake in prior work.)
Our approach is based on two main ingredients. First, we derive an
approximation preserving reduction to the Label-Consistent Subtree (LCST)
problem. The LCST instance has quasi-polynomial size and logarithmic height. We
remark that, in contrast, Zelikovsky's height-reduction theorem used in all
prior work on DST achieves a reduction to a tree instance of the related Group
Steiner Tree (GST) problem of similar height, however losing a logarithmic
factor in the approximation ratio. Our second ingredient is an LP-rounding
algorithm to approximately solve LCST instances, which is inspired by the
framework developed by Rothvo{\ss}. We consider a Sherali-Adams lifting of a
proper LP relaxation of LCST. Our rounding algorithm proceeds level by level
from the root to the leaves, rounding and conditioning each time on a proper
subset of label variables. A small enough (namely, polylogarithmic) number of
Sherali-Adams lifting levels is sufficient to condition all the way to the leaves.
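For reference, a minimal Python sketch of the naive feasible solution that DST algorithms must beat: take the union of shortest root-to-terminal paths. This is only a baseline (a k-approximation in the worst case), not the paper's LP-rounding algorithm, and the instance below is invented for illustration.

```python
import heapq

def dijkstra(n, adj, src):
    """Shortest-path distances and predecessors from src in a weighted digraph."""
    dist = [float("inf")] * n
    pred = [None] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                pred[v] = u
                heapq.heappush(pq, (d + w, v))
    return dist, pred

def shortest_path_dst(n, edges, root, terminals):
    """Naive DST heuristic: union of shortest root->terminal paths.

    Feasible (covers every reachable terminal) but only a k-approximation
    in the worst case -- far weaker than the paper's guarantee."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    _, pred = dijkstra(n, adj, root)
    tree_edges = set()
    for t in terminals:
        v = t
        while pred[v] is not None:  # walk back to the root
            tree_edges.add((pred[v], v))
            v = pred[v]
    cost = {(u, v): w for u, v, w in edges}
    return tree_edges, sum(cost[e] for e in tree_edges)

# Tiny made-up instance: root 0, terminals {2, 3}.
edges = [(0, 1, 1), (1, 2, 1), (1, 3, 1), (0, 2, 5), (0, 3, 5)]
tree, total = shortest_path_dst(4, edges, 0, [2, 3])
```

Here the shared edge (0, 1) is paid for once, so the union costs 3 rather than 4.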
Tighter Connections Between Formula-SAT and Shaving Logs
A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the O(n^2) dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to O(n^2/log^2 n) in several ways and using a variety of ingenious tricks. This line of research, also known as "the art of shaving log factors", lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time O(n^2/log^3 n)? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on Boolean formulas (Formula-SAT) faster than exhaustive search. They show that shaving a sufficiently large polylogarithmic factor from the running time of LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way of shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fréchet distance problem from Computational Geometry, we show that an O(n^2/log^{7+eps} n) runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size s over n variables to LCS on sequences of length 2^{n/2} * s^{1+o(1)}. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS, whose sequence length had the form 2^{n/2} * s^c for some large constant c.
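The baseline being improved is the textbook quadratic LCS dynamic program, sketched below for concreteness:

```python
def lcs_length(a, b):
    """Textbook O(n*m) dynamic program for the Longest Common Subsequence.

    dp[i][j] = length of an LCS of the prefixes a[:i] and b[:j]."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

The log-shaving results cited above speed up exactly this table computation, e.g. by packing several cells into one machine word.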
Fine-grained Complexity Meets IP = PSPACE
In this paper we study the fine-grained complexity of finding exact and
approximate solutions to problems in P. Our main contribution is showing
reductions from exact to approximate solution for a host of such problems.
As one (notable) example, we show that the Closest-LCS-Pair problem (given
two sets of strings A and B, compute exactly the maximum of LCS(a, b) over
a in A and b in B) is equivalent to its approximation version
(under near-linear time reductions, and with a constant approximation factor).
More generally, we identify a class of problems, which we call BP-Pair-Class,
comprising both exact and approximate solutions, and show that they are all
equivalent under near-linear time reductions.
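To make the problem statement concrete, here is a brute-force Closest-LCS-Pair on a tiny invented instance; the point of the results above is that substantially beating this kind of exhaustive pairing is hard under NC-SETH.

```python
def lcs_length(a, b):
    """Standard quadratic LCS dynamic program."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def closest_lcs_pair(A, B):
    """Exact Closest-LCS-Pair by brute force: max LCS(a, b) over a in A, b in B.

    With N strings of length N per set this costs roughly N^4 time."""
    return max(lcs_length(a, b) for a in A for b in B)

# Hypothetical instance: the best pair is ("abcde", "ace") with LCS "ace".
A = ["abcde", "xyz"]
B = ["ace", "yyz"]
best = closest_lcs_pair(A, B)
```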
Exploring this class and its properties, we also show:
Under the NC-SETH assumption (a significantly more relaxed
assumption than SETH), solving any of the problems in this class requires
essentially quadratic time.
Modest improvements on the running time of known algorithms
(shaving log factors) would imply that NEXP is not in non-uniform
NC^1.
Finally, we leverage our techniques to show new barriers for
deterministic approximation algorithms for LCS.
At the heart of these new results is a deep connection between interactive
proof systems for bounded-space computations and the fine-grained complexity of
exact and approximate solutions to problems in P. In particular, our results
build on the proof techniques from the classical IP = PSPACE result.
Liveness of Randomised Parameterised Systems under Arbitrary Schedulers (Technical Report)
We consider the problem of verifying liveness for systems with a finite, but
unbounded, number of processes, commonly known as parameterised systems.
Typical examples of such systems include distributed protocols (e.g. for the
dining philosophers problem). Unlike the case of verifying safety, proving
liveness is still considered extremely challenging, especially in the presence
of randomness in the system. In this paper we consider liveness under arbitrary
(including unfair) schedulers, which is often considered a desirable property
in the literature of self-stabilising systems. We introduce an automatic method
of proving liveness for randomised parameterised systems under arbitrary
schedulers. Viewing liveness as a two-player reachability game (between
Scheduler and Process), our method is a CEGAR approach that synthesises a
progress relation for Process that can be symbolically represented as a
finite-state automaton. The method is incremental and exploits both
Angluin-style L*-learning and SAT-solvers. Our experiments show that our
algorithm is able to prove liveness automatically for well-known randomised
distributed protocols, including Lehmann-Rabin Randomised Dining Philosopher
Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon
Protocol). To the best of our knowledge, this is the first fully-automatic
method that can prove liveness for randomised protocols.
Comment: Full version of CAV'16 paper.
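The game-theoretic view above can be made concrete on finite arenas: the standard attractor fixpoint below solves a two-player reachability game. This is only the textbook subroutine behind that view, not the paper's L*-based synthesis of progress relations, and the toy arena is invented for illustration.

```python
def reachability_winning_region(states, edges, player, target):
    """Attractor computation for a finite two-player reachability game.

    player[s] is 0 for the Reach player (Process) and 1 for its opponent
    (Scheduler). Returns the states from which the Reach player can force
    a visit to `target` no matter how the opponent schedules."""
    succ = {s: [] for s in states}
    for u, v in edges:
        succ[u].append(v)
    attr = set(target)
    changed = True
    while changed:  # iterate to a fixpoint
        changed = False
        for s in states:
            if s in attr or not succ[s]:
                continue
            if player[s] == 0 and any(v in attr for v in succ[s]):
                attr.add(s)  # Reach player picks a move into the attractor
                changed = True
            elif player[s] == 1 and all(v in attr for v in succ[s]):
                attr.add(s)  # every opponent move lands in the attractor
                changed = True
    return attr

# Toy game: state 3 is the goal; the Scheduler at 4 can escape to sink 5.
states = range(6)
edges = [(0, 1), (1, 2), (1, 3), (2, 3), (2, 0), (4, 3), (4, 5)]
player = {0: 0, 1: 1, 2: 0, 3: 0, 4: 1, 5: 0}
win = reachability_winning_region(states, edges, player, {3})
```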
Decision making under uncertainty
Almost all important decision problems are inevitably subject to some level of uncertainty either about data measurements, the parameters, or predictions describing future evolution. The significance of handling uncertainty is further amplified by the large volume of uncertain data automatically generated by modern data gathering or integration systems. Various types of problems of decision making under uncertainty have been subject to extensive research in computer science, economics and social science. In this dissertation, I study three major problems in this context, ranking, utility maximization, and matching, all involving uncertain datasets.
First, we consider the problem of ranking and top-k query processing over probabilistic datasets. By illustrating the diverse and conflicting behaviors of the prior proposals, we contend that a single, specific ranking function may not suffice for probabilistic datasets. Instead we propose the notion of parameterized ranking functions, which generalize or can approximate many of the previously proposed ranking functions. We present novel exact or approximate algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations or the probability distributions are continuous.
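As one illustration of why probabilistic tuples admit many reasonable ranking functions, the sketch below computes, for independent tuples, the probability of appearing in the top k. This is just one member of the kind of family being parameterized; the function and instance here are invented for illustration, not the dissertation's definitions.

```python
def topk_probability(tuples, k):
    """For each tuple, Pr[it exists and at most k-1 higher-scoring tuples exist].

    `tuples` is a list of (score, prob) pairs with distinct scores and
    independent existence events."""
    order = sorted(range(len(tuples)), key=lambda i: -tuples[i][0])
    result = {}
    # dp[j] = Pr[exactly j of the higher-scoring tuples seen so far exist]
    dp = [1.0]
    for i in order:
        _, p = tuples[i]
        result[i] = p * sum(dp[:k])  # tuple exists AND < k better tuples exist
        new_dp = [0.0] * (len(dp) + 1)
        for j, q in enumerate(dp):
            new_dp[j] += q * (1 - p)  # this tuple absent
            new_dp[j + 1] += q * p    # this tuple present
        dp = new_dp
    return result

# Two independent tuples: score 10 present w.p. 0.5, score 5 always present.
probs = topk_probability([(10, 0.5), (5, 1.0)], k=1)
```

Here both tuples are the top answer with probability 0.5, showing that existence probabilities and scores must be combined rather than ranked by either alone.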
The second problem concerns the stochastic versions of a broad class of combinatorial optimization problems. We observe that the expected value is inadequate in capturing different types of risk-averse or risk-prone behaviors; instead we consider a more general objective, namely to maximize the expected utility of the solution for some given utility function. We present a polynomial-time approximation algorithm with additive error ε for any ε > 0, under certain conditions. Our result generalizes and improves several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack.
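To make the expected-utility objective concrete, here is a small Monte Carlo sketch for a risk-averse 0/1 utility on a made-up instance. This only illustrates the quantity being optimized; the dissertation's contribution is a polynomial-time algorithm with additive error ε, not a sampler.

```python
import random

def expected_utility(weight_dists, utility, trials=200_000, seed=1):
    """Monte Carlo estimate of E[u(W)], where W is the sum of independent
    random item weights drawn by the callables in `weight_dists`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        w = sum(d(rng) for d in weight_dists)
        total += utility(w)
    return total / trials

# Two items whose weights are 1 with probability 0.5 and 0 otherwise; a
# risk-averse 0/1 utility that pays off only if the total fits capacity 1.
items = [lambda r: 1 if r.random() < 0.5 else 0,
         lambda r: 1 if r.random() < 0.5 else 0]
fit_prob = expected_utility(items, lambda w: 1.0 if w <= 1 else 0.0)
```

The true value is Pr[W <= 1] = 0.75; an expected-value objective would ignore this overflow risk entirely.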
The third is the stochastic matching problem, which finds interesting applications in online dating, kidney exchange, and online ad assignment. In this problem, the existence of each edge is uncertain and can only be discovered by probing the edge. The goal is to design a probing strategy that maximizes the expected weight of the resulting matching. We give linear-programming-based constant-factor approximation algorithms for weighted stochastic matching, answering an open question raised in prior work.
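A simple way to see what a probing strategy is: the greedy sketch below probes edges in decreasing weight order and keeps successes, on an invented instance. It is a naive baseline, not the LP-based constant-factor algorithms the dissertation analyzes.

```python
import random

def greedy_probe_matching(edges, rng):
    """Probe edges in decreasing weight order, keeping an edge when the probe
    succeeds and both endpoints are still unmatched.

    `edges` is a list of (u, v, weight, prob) with independent edges."""
    matched, value = set(), 0.0
    for u, v, w, p in sorted(edges, key=lambda e: -e[2]):
        if u in matched or v in matched:
            continue  # probing could no longer extend the matching
        if rng.random() < p:  # the probe reveals that the edge exists
            matched.update((u, v))
            value += w
    return value

# Hypothetical instance; estimate the strategy's expected matched weight.
rng = random.Random(7)
edges = [("a", "b", 3.0, 0.5), ("a", "c", 2.0, 1.0), ("b", "d", 1.0, 0.5)]
runs = 20_000
avg = sum(greedy_probe_matching(edges, rng) for _ in range(runs)) / runs
```

For this instance the strategy's expected weight is 0.5 * 3 + 0.5 * (2 + 0.5 * 1) = 2.75, which the simulation approaches.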