Parameterized Complexity of Critical Node Cuts
We consider the following natural graph cut problem called Critical Node Cut
(CNC): Given a graph G on n vertices, and two positive integers k and x,
determine whether G has a set of k vertices whose removal leaves G with at
most x connected pairs of vertices. We analyze this problem in the framework
of parameterized complexity. That is, we are interested in whether or not
this problem is solvable in time f(κ) · n^{O(1)} (i.e., whether or not it is
fixed-parameter tractable), for various natural parameters κ. We consider
four such parameters:
- The size k of the required cut.
- The upper bound x on the number of remaining connected pairs.
- The lower bound on the number of connected pairs to be removed.
- The treewidth of G.
We determine whether or not CNC is fixed-parameter tractable for each of
these parameters. We determine this also for all possible aggregations of
these four parameters, apart from one. Moreover, we also determine whether or
not CNC admits a polynomial kernel for all these parameterizations. That is,
whether or not there is an algorithm that reduces each instance of CNC in
polynomial time to an equivalent instance of size κ^{O(1)}, where κ is the
given parameter.
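As a point of reference for the problem definition, here is a brute-force check of a CNC instance in Python. The function names and the dictionary-based graph encoding are mine; the enumeration is exponential and only meant to make the definition concrete, not to reflect the parameterized algorithms studied in the paper:

```python
from itertools import combinations

def connected_pairs(adj, removed):
    """Count pairs of vertices still connected after deleting `removed`."""
    seen, total = set(), 0
    for s in adj:
        if s in removed or s in seen:
            continue
        # BFS/DFS to find the component containing s
        comp, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in removed and v not in comp:
                    comp.add(v)
                    stack.append(v)
        seen |= comp
        c = len(comp)
        total += c * (c - 1) // 2  # each component contributes C(c, 2) pairs
    return total

def cnc_brute_force(adj, k, x):
    """Is there a set of at most k vertices whose removal leaves
    at most x connected pairs?  Exponential-time sketch."""
    vertices = list(adj)
    for r in range(k + 1):
        for cut in combinations(vertices, r):
            if connected_pairs(adj, set(cut)) <= x:
                return True
    return False
```

On the path 0-1-2-3, removing the single vertex 1 leaves components {0} and {2, 3}, i.e., one connected pair, so the instance (k = 1, x = 1) is a yes-instance.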
Taming Numbers and Durations in the Model Checking Integrated Planning System
The Model Checking Integrated Planning System (MIPS) is a temporal least
commitment heuristic search planner based on a flexible object-oriented
workbench architecture. Its design clearly separates explicit and symbolic
directed exploration algorithms from the set of on-line and off-line computed
estimates and associated data structures. MIPS has shown distinguished
performance in the last two international planning competitions. In the last
event the description language was extended from pure propositional planning to
include numerical state variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules. As a participant of the fully automated track of the competition,
MIPS has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This article
introduces and analyzes the most important algorithmic novelties that were
necessary to tackle the new layers of expressiveness in the benchmark problems
and to achieve a high level of performance. The extensions include critical
path analysis of sequentially generated plans to generate corresponding optimal
parallel plans. The linear time algorithm to compute the parallel plan bypasses
known NP hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations. The efficiency of
this algorithm also allows us to improve the exploration guidance: for each
encountered planning state the corresponding approximate sequential plan is
scheduled. One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that infers
knowledge to minimize the state description length, and that detects domain
object symmetries. The latter aspect is analyzed in detail. MIPS has been
developed to serve as a complete and optimal state space planner, with
admissible estimates, exploration engines and branching cuts. In the
competition version, however, certain performance compromises had to be made,
including floating point arithmetic, weighted heuristic search exploration
according to an inadmissible estimate, and parameterized optimization.
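The critical-path scheduling step described above can be sketched as follows. This is a minimal sketch under my own naming; it assumes the precedence relation is given explicitly and that the actions arrive in the order of the sequential plan, which is a topological order of the precedence DAG, giving the linear-time pass the abstract alludes to:

```python
def schedule(actions, durations, precedes):
    """Earliest-start schedule of a sequential plan under an explicit
    precedence relation.  `actions` is the sequential plan (a valid
    topological order), so one left-to-right pass suffices: linear in
    the number of actions plus precedence edges."""
    start = {}
    for a in actions:
        # an action may begin once all of its predecessors have finished
        start[a] = max((start[p] + durations[p] for p in precedes.get(a, [])),
                       default=0.0)
    return start
```

Actions with no precedence constraints between them receive overlapping time windows, turning the sequential plan into a parallel schedule whose makespan is the length of the critical path.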
Half-integrality, LP-branching and FPT Algorithms
A recent trend in parameterized algorithms is the application of polytope
tools (specifically, LP-branching) to FPT algorithms (e.g., Cygan et al., 2011;
Narayanaswamy et al., 2012). However, although interesting results have been
achieved, the methods require the underlying polytope to have very restrictive
properties (half-integrality and persistence), which are known only for few
problems (essentially Vertex Cover (Nemhauser and Trotter, 1975) and Node
Multiway Cut (Garg et al., 1994)). Taking a slightly different approach, we
view half-integrality as a \emph{discrete} relaxation of a problem, e.g., a
relaxation of the search space from {0,1}^V to {0,1/2,1}^V such that
the new problem admits a polynomial-time exact solution. Using tools from CSP
(in particular Thapper and \v{Z}ivn\'y, 2012) to study the existence of such
relaxations, we provide a much broader class of half-integral polytopes with
the required properties, unifying and extending previously known cases.
In addition to the insight into problems with half-integral relaxations, our
results yield a range of new and improved FPT algorithms, including an
O*(|Σ|^{2k})-time algorithm for node-deletion Unique Label Cover with
label set Σ and an O*(4^k)-time algorithm for Group Feedback Vertex
Set, including the setting where the group is only given by oracle access. All
these significantly improve on previous results. The latter result also implies
the first single-exponential time FPT algorithm for Subset Feedback Vertex Set,
answering an open question of Cygan et al. (2012).
Additionally, we propose a network flow-based approach to solve some cases of
the relaxation problem. This gives the first linear-time FPT algorithm to
edge-deletion Unique Label Cover.
Comment: Added results on linear-time FPT algorithms (not present in the SODA
paper).
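For intuition on half-integrality, the classical Vertex Cover case (Nemhauser and Trotter, 1975) already admits a polynomial-time exact solution of the relaxation: the LP optimum is always half-integral and equals half the maximum matching of the bipartite double cover of the graph. The sketch below (my own naming; a simple augmenting-path matching rather than a fast flow routine) illustrates that special case only, not the CSP-based machinery of the paper:

```python
def halfintegral_vc_lp(n, edges):
    """Optimal value of the Vertex Cover LP relaxation, which is always
    half-integral: it equals half the maximum matching of the bipartite
    double cover (left copy of u joined to right copy of v per edge uv)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    match = [-1] * n  # right copy v is matched to left copy match[v]

    def augment(u, seen):
        # Kuhn's augmenting-path search from left copy u
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match[v] == -1 or augment(match[v], seen):
                match[v] = u
                return True
        return False

    size = sum(augment(u, set()) for u in range(n))
    return size / 2
```

For a triangle the double cover is a 6-cycle with maximum matching 3, so the LP value is 1.5, realized by the half-integral solution assigning 1/2 to every vertex.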
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be
made bipartite by deleting at most k of its vertices. In a breakthrough
result Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an
O(4^k · kmn) time algorithm for it, the first algorithm with polynomial
runtime of uniform degree for every fixed k. It is known that this implies a
polynomial-time compression algorithm that turns OCT instances into equivalent
instances of size at most O(4^k), a so-called kernelization. Since then
the existence of a polynomial kernel for OCT, i.e., a kernelization with size
bounded polynomially in k, has turned into one of the main open questions in
the study of kernelization.
This work provides the first (randomized) polynomial kernelization for OCT.
We introduce a novel kernelization approach based on matroid theory, where we
encode all relevant information about a problem instance into a matroid with a
representation of size polynomial in k. For OCT, the matroid is built to
allow us to simulate the computation of the iterative compression step of the
algorithm of Reed, Smith, and Vetta, applied (for only one round) to an
approximate odd cycle transversal which it is aiming to shrink to size k. The
process is randomized with one-sided error exponentially small in k, where
the result can contain false positives but no false negatives, and the size
guarantee is cubic in the size of the approximate solution. Combined with an
O(√(log n))-approximation (Agarwal et al., STOC 2005), we get a
reduction of the instance to size O(k^{4.5}), implying a randomized
polynomial kernelization.
Comment: Minor changes to agree with the SODA 2012 version of the paper.
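For concreteness, the OCT decision problem itself can be stated as a brute-force check: delete every vertex subset of size at most k and test bipartiteness by 2-coloring. This exponential sketch (names mine) is only a baseline against which the O(4^k · kmn) algorithm and the kernelization should be understood:

```python
from itertools import combinations

def is_bipartite(adj, removed):
    """2-color the graph induced on the non-removed vertices."""
    color = {}
    for s in adj:
        if s in removed or s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in removed:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False  # odd cycle among surviving vertices
    return True

def oct_brute_force(adj, k):
    """Does deleting at most k vertices make the graph bipartite?"""
    vs = list(adj)
    return any(is_bipartite(adj, set(S))
               for r in range(k + 1)
               for S in combinations(vs, r))
```

A triangle needs one deletion, so it is a no-instance for k = 0 and a yes-instance for k = 1.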
Fast and Deterministic Approximations for k-Cut
In an undirected graph, a k-cut is a set of edges whose removal breaks the graph into at least k connected components. The minimum weight k-cut can be computed in n^O(k) time, but when k is treated as part of the input, computing the minimum weight k-cut is NP-hard [Goldschmidt and Hochbaum, 1994]. For poly(m,n,k)-time algorithms, the best possible approximation factor is essentially 2 under the small set expansion hypothesis [Manurangsi, 2017]. Saran and Vazirani [1995] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed via O(k) minimum cuts, which implies an O~(km) randomized running time via the nearly linear time randomized min-cut algorithm of Karger [2000]. Nagamochi and Kamidoi [2007] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed deterministically in O(mn + n^2 log n) time. These results prompt two basic questions. The first concerns the role of randomization. Is there a deterministic algorithm for 2-approximate k-cuts matching the randomized running time of O~(km)? The second question qualitatively compares minimum cut to 2-approximate minimum k-cut. Can 2-approximate k-cuts be computed as fast as the minimum cut, in O~(m) randomized time?
We give a deterministic approximation algorithm that computes (2 + eps)-approximate minimum k-cuts in O(m log^3 n / eps^2) time, via a (1 + eps)-approximation for an LP relaxation of k-cut.
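The greedy splitting idea of Saran and Vazirani can be sketched as follows: repeatedly apply the cheapest cut that splits some current component, until k components remain. In this toy version (names mine) the minimum cut is found by exponential brute force, standing in for the fast min-cut routines discussed above; the (2 - 2/k) guarantee is a property of the greedy scheme, not of the toy cut routine:

```python
from itertools import combinations

def cut_weight(edges, side):
    """Total weight of edges with exactly one endpoint in `side`."""
    return sum(w for u, v, w in edges if (u in side) != (v in side))

def min_cut_of_component(vertices, edges):
    """Brute-force minimum cut properly splitting `vertices` in two
    (exponential; stands in for Karger / Nagamochi-Kamidoi here)."""
    vs = sorted(vertices)
    anchor, rest = vs[0], vs[1:]
    best = (float("inf"), None)
    for r in range(len(rest)):  # proper splits: at least one vertex excluded
        for extra in combinations(rest, r):
            side = {anchor, *extra}  # fix the anchor to avoid duplicates
            w = cut_weight(edges, side)
            if w < best[0]:
                best = (w, side)
    return best

def greedy_kcut(vertices, edges, k):
    """Greedily split the cheapest component until k components remain."""
    comps, total = [set(vertices)], 0.0
    while len(comps) < k:
        candidates = []
        for i, c in enumerate(comps):
            if len(c) < 2:
                continue
            sub = [e for e in edges if e[0] in c and e[1] in c]
            w, side = min_cut_of_component(c, sub)
            candidates.append((w, i, side))
        w, i, side = min(candidates, key=lambda t: t[0])
        c = comps.pop(i)
        comps += [side, c - side]
        total += w
    return total, comps
```

On the weighted path 0-1-2-3 with edge weights 1, 10, 1, the greedy 3-cut correctly avoids the heavy middle edge and cuts the two unit edges for total weight 2.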
On the Linear MIM-width of Trees
Linear MIM-width, and its generalization MIM-width, is a graph width parameter that has become noted for having bounded value on several important graph classes, e.g. interval graphs and permutation graphs. The linear MIM-width of a graph G is a min-max measure: for each linear layout of the vertices of G, take the maximum, over all cuts of the layout, of the size of a maximum induced matching in the bipartite graph of edges crossing the cut; the parameter is the minimum of this quantity over all possible linear layouts. In this thesis we give an overview of some of the research that has been done on this parameter, and provide a new result, computing the linear MIM-width of trees in O(n log n) time.
Master's thesis in informatics (Masteroppgåve i informatikk).
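The min-max definition can be made concrete with a brute-force computation of linear MIM-width for tiny graphs (names mine; exponential in both the layout enumeration and the induced-matching search, in contrast to the O(n log n) algorithm for trees):

```python
from itertools import permutations, combinations

def max_induced_matching(edges, adj):
    """Largest subset of `edges` whose endpoints are pairwise distinct and
    pairwise non-adjacent in the graph -- brute force, largest size first."""
    for r in range(len(edges), 0, -1):
        for M in combinations(edges, r):
            ends = [u for e in M for u in e]
            if len(set(ends)) == 2 * r and all(
                    b not in adj[a]
                    for e, f in combinations(M, 2)
                    for a in e for b in f):
                return r
    return 0

def linear_mimw(adj):
    """Linear MIM-width: minimum over layouts of the maximum, over cuts,
    of the induced-matching size among crossing edges (tiny graphs only)."""
    vs = list(adj)
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    best = len(vs)
    for layout in permutations(vs):
        worst = 0
        for i in range(1, len(vs)):
            left = set(layout[:i])
            crossing = [e for e in edges if (e[0] in left) != (e[1] in left)]
            worst = max(worst, max_induced_matching(crossing, adj))
        best = min(best, worst)
    return best
```

For a path, every cut of the natural layout is crossed by a single edge, so its linear MIM-width is 1, matching the fact that trees have small linear MIM-width.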