Dual Half-Integrality for Uncrossable Cut Cover and Its Application to Maximum Half-Integral Flow
Given an edge-weighted graph and a forest F, the 2-edge-connectivity augmentation problem is to pick a minimum-weight set of edges E′ such that every connected component of E′ ∪ F is 2-edge connected. Williamson et al. gave a 2-approximation algorithm (WGMV) for this problem using the primal-dual schema. We show that when edge weights are integral, the WGMV procedure can be modified to obtain a half-integral dual. The 2-edge-connectivity augmentation problem has an interesting connection to routing flow in graphs where the union of supply and demand is planar. The half-integrality of the dual leads to a tight 2-approximate max-half-integral-flow min-multicut theorem.
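The feasibility condition above, that every connected component of E′ ∪ F be 2-edge connected, amounts to the union containing no bridge. As a minimal illustration (not the WGMV algorithm itself, just the feasibility check), here is a bridge test via the standard DFS low-link method, assuming the graph is given as an edge list:

```python
from collections import defaultdict

def has_bridge(edges):
    """Return True if the multigraph given as a list of (u, v) edges
    contains a bridge, i.e. some component is not 2-edge-connected."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc, low = {}, {}
    timer = 0
    bridge = False

    def dfs(u, parent_edge):
        nonlocal timer, bridge
        disc[u] = low[u] = timer
        timer += 1
        for v, i in adj[u]:
            if i == parent_edge:          # skip only the edge we came in on
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:      # no back-edge over (u, v): a bridge
                    bridge = True

    for u in list(adj):
        if u not in disc:
            dfs(u, -1)
    return bridge

# A triangle is 2-edge-connected; a path is not.
assert not has_bridge([(0, 1), (1, 2), (2, 0)])
assert has_bridge([(0, 1), (1, 2)])
```

Indexing adjacency entries by edge id (rather than by endpoint) makes the check correct on multigraphs, which matters here since E′ ∪ F may contain parallel edges.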
When the Optimum is also Blind: a New Perspective on Universal Optimization
Consider the following variant of the set cover problem. We are given a universe U = {1,...,n} and a collection of subsets C = {S_1,...,S_m}, where each S_i is a subset of U. For every element u in U we need to choose a set phi(u) from the collection C such that u belongs to phi(u). Once we construct and fix the mapping phi from U to C, a subset X of the universe U is revealed, and we must cover all elements of X with exactly phi(X), that is, {phi(u)}_{u in X}. The goal is to find a mapping phi such that the cover phi(X) is as cheap as possible.
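To make the model concrete, here is a small Python sketch (all names are illustrative, and the cheapest-set rule below is just one naive universal mapping, not the paper's algorithm): each element is mapped to the cheapest set containing it, and a revealed X is priced by the distinct sets its elements map to.

```python
def universal_mapping(universe, sets, cost):
    """Map every element u to the cheapest set containing it.
    `sets` maps a set name to a frozenset of elements; `cost` maps a name to a weight."""
    phi = {}
    for u in universe:
        candidates = [s for s, members in sets.items() if u in members]
        phi[u] = min(candidates, key=lambda s: cost[s])
    return phi

def cover_cost(phi, X, cost):
    """Cost of the universal cover of X: pay once per distinct set phi uses on X."""
    return sum(cost[s] for s in {phi[u] for u in X})

sets = {"S1": frozenset({1, 2}), "S2": frozenset({2, 3}), "S3": frozenset({1, 2, 3})}
cost = {"S1": 1.0, "S2": 1.0, "S3": 2.5}
phi = universal_mapping({1, 2, 3}, sets, cost)
print(cover_cost(phi, {1, 3}, cost))  # pays for S1 and S2: 2.0
```

Note how the mapping is fixed before X is known: for X = {1, 3} it pays for both S1 and S2, even though S3 alone would cover X.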
This is an example of a universal problem, where the solution has to be created before the actual instance is revealed. Such problems appear naturally in settings where we need to optimize under uncertainty and it may be too expensive to start computing a good solution only once the input is revealed. A rich body of work has investigated such problems under worst-case analysis, i.e., measuring the quality of a solution by the worst-case ratio: universal solution for a given instance vs. optimum solution for the same instance.
As the universal solution is significantly more constrained, such a worst-case ratio is typically quite large. One way to make the problem less vulnerable to such extreme worst cases is to assume that the instance for which we will have to create a solution is drawn randomly from some probability distribution. In this case one wants to minimize the expected value of the ratio: universal solution vs. optimum solution. The bounds obtained here are indeed smaller than in the worst-case model.
But even in this case we still compare apples to oranges, as no universal solution can match the optimum solution on every possible instance. What if we instead compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do? We show that under this viewpoint, still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For example, for the set cover problem we obtain an approximation ratio matching that of the classic deterministic setting. Moreover, we show this for all probability distributions over subsets of U with polynomially large support, whereas all previous results pertained to a model in which elements were sampled independently. Our result is based on rounding a proper configuration IP that captures the optimal universal solution, and on tools from submodular optimization.
The same basic approach leads to improved approximation algorithms for other related problems, including Vertex Cover, Edge Cover, Directed Steiner Tree, Multicut, and Facility Location.
From valid inequalities to heuristics : a unified view of primal-dual approximation algortithms [sic] in covering problems
Includes bibliographical references (p. 26-27). Supported by a Presidential Young Investigator Award, DDM-9158118. Partially supported by Draper Laboratory and the National University of Singapore. Dimitris Bertsimas, Chung-Piaw Teo.
Analyzing Massive Graphs in the Semi-streaming Model
Massive graphs arise in many scenarios, for example,
traffic data analysis in large networks, large-scale scientific
experiments, and clustering of large data sets.
The semi-streaming model was proposed for processing massive graphs. In this model, we have randomly
accessible memory that is near-linear in the number of vertices.
The input graph (or, equivalently, the edges of the graph)
is presented as a sequential list of edges (insertion-only model)
or of edge insertions and deletions (dynamic model). The list
is read-only, but we may make multiple passes over it.
There have been a few results in the insertion-only model,
such as computing distance spanners and approximating
the maximum matching.
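As a concrete example of an insertion-only algorithm, a single greedy pass that keeps any edge whose endpoints are both still unmatched produces a maximal matching, which is a 2-approximation of the maximum matching and uses space linear in the number of vertices. A minimal sketch:

```python
def streaming_greedy_matching(edge_stream):
    """One-pass greedy matching over an insertion-only edge stream.
    State is O(n): the set of matched vertices and the chosen edges."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path 0-1-2-3 streamed in order, greedy picks (0, 1) and (2, 3).
print(streaming_greedy_matching([(0, 1), (1, 2), (2, 3)]))  # [(0, 1), (2, 3)]
```

The 2-approximation guarantee follows because every edge of an optimum matching shares an endpoint with some edge the greedy pass kept.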
In this thesis, we present algorithms and techniques
for (i) solving more complex problems in the semi-streaming model
(for example, problems in the dynamic model) and (ii) obtaining
better solutions for problems that have already been studied
(for example, the maximum matching problem). In the course of both,
we develop new techniques with broad applications and
explore the rich trade-offs between the complexity of models
(insertion-only streams vs. dynamic streams), the number
of passes, space, accuracy, and running time.
1. We initiate the study of dynamic graph streams.
We start with basic problems such as connectivity
and computing the minimum spanning tree.
These problems are trivial in the insertion-only model,
but they require non-trivial algorithms (and, for computing
the exact minimum spanning tree, multiple passes) in the
dynamic model.
2. We present a graph sparsification algorithm in the
semi-streaming model. A graph sparsification
is a sparse graph that approximately preserves
all the cut values of the original graph.
Such a graph acts as an oracle for solving cut-related problems,
for example, the minimum cut problem and the multicut problem.
Our algorithm produces a graph sparsification with high probability
in one pass.
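The "approximately preserves all cuts" guarantee can be illustrated with a toy stand-in: keeping each edge with probability p and reweighting survivors by 1/p preserves every cut value in expectation. (This uniform sampling is only an illustration under a unit-weight assumption; the actual semi-streaming algorithms sample edges non-uniformly, e.g. by connectivity.)

```python
import random

def cut_value(weighted_edges, S):
    """Total weight of edges crossing the vertex set S."""
    return sum(w for (u, v), w in weighted_edges.items()
               if (u in S) != (v in S))

def uniform_sparsify(edges, p, seed=0):
    """Toy sparsifier: keep each unit-weight edge with probability p,
    reweight kept edges by 1/p, so each cut is preserved in expectation."""
    rng = random.Random(seed)
    return {e: 1.0 / p for e in edges if rng.random() < p}

# Complete graph on 8 vertices; the cut ({0..3}, {4..7}) has value 16.
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)]
full = {e: 1.0 for e in edges}
S = {0, 1, 2, 3}
print(cut_value(full, S))                    # 16.0
sparse = uniform_sparsify(edges, p=0.5, seed=1)
print(cut_value(sparse, S))                  # close to 16 in expectation
```

The point of a true sparsifier is the much stronger guarantee that all 2^n cuts are within a (1 ± ε) factor simultaneously, with high probability.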
3. We use primal-dual algorithms
to develop semi-streaming algorithms.
Primal-dual algorithms are widely accepted
as a framework for solving linear programs
and semidefinite programs faster.
In contrast, we apply the method to reduce space and
the number of passes, in addition to the running time.
We also present some problems that arise in applications
and show how to apply the techniques:
the multicut problem, the correlation clustering problem,
and the maximum matching problem. As a consequence,
we also develop near-linear-time algorithms for b-matching
problems, which were not known before.
Half-integrality, LP-branching and FPT Algorithms
A recent trend in parameterized algorithms is the application of polytope
tools (specifically, LP-branching) to FPT algorithms (e.g., Cygan et al., 2011;
Narayanaswamy et al., 2012). However, although interesting results have been
achieved, the methods require the underlying polytope to have very restrictive
properties (half-integrality and persistence), which are known only for a few
problems (essentially Vertex Cover (Nemhauser and Trotter, 1975) and Node
Multiway Cut (Garg et al., 1994)). Taking a slightly different approach, we
view half-integrality as a \emph{discrete} relaxation of a problem, e.g., a
relaxation of the search space from $\{0,1\}^V$ to $\{0,\frac{1}{2},1\}^V$ such that
the new problem admits a polynomial-time exact solution. Using tools from CSP
(in particular Thapper and \v{Z}ivn\'y, 2012) to study the existence of such
relaxations, we provide a much broader class of half-integral polytopes with
the required properties, unifying and extending previously known cases.
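The canonical example of the required properties is the Vertex Cover LP, whose basic optimal solutions are half-integral (Nemhauser and Trotter): on an odd cycle the unique optimum assigns 1/2 to every vertex. A small sketch, assuming scipy is available (this only demonstrates half-integrality; it is not the paper's LP-branching algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def vertex_cover_lp(n, edges):
    """Solve min sum_v x_v  s.t.  x_u + x_v >= 1 per edge, 0 <= x <= 1.
    Basic optimal solutions are half-integral (Nemhauser-Trotter)."""
    A = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A[i, u] = A[i, v] = -1.0           # encode -(x_u + x_v) <= -1
    res = linprog(c=np.ones(n), A_ub=A, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x

# On a triangle the unique LP optimum is x = (1/2, 1/2, 1/2), value 3/2.
x = vertex_cover_lp(3, [(0, 1), (1, 2), (2, 0)])
print(x)
```

Uniqueness on the triangle follows by summing the three edge constraints: 2(x_0 + x_1 + x_2) >= 3, with equality only when all three constraints are tight, which forces every coordinate to 1/2.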
In addition to the insight into problems with half-integral relaxations, our
results yield a range of new and improved FPT algorithms, including an
$O^*(|\Sigma|^{2k})$-time algorithm for node-deletion Unique Label Cover with
label set $\Sigma$ and an $O^*(4^k)$-time algorithm for Group Feedback Vertex
Set, including the setting where the group is only given by oracle access. All
these significantly improve on previous results. The latter result also implies
the first single-exponential time FPT algorithm for Subset Feedback Vertex Set,
answering an open question of Cygan et al. (2012).
Additionally, we propose a network flow-based approach to solve some cases of
the relaxation problem. This gives the first linear-time FPT algorithm for
edge-deletion Unique Label Cover.
Comment: Added results on linear-time FPT algorithms (not present in the SODA paper).