On Polynomial Kernels for Integer Linear Programs: Covering, Packing and Feasibility
We study the existence of polynomial kernels for the problem of deciding
feasibility of integer linear programs (ILPs), and for finding good solutions
for covering and packing ILPs. Our main results are as follows: First, we show
that the ILP Feasibility problem admits no polynomial kernelization when
parameterized by both the number of variables and the number of constraints,
unless NP ⊆ coNP/poly. This extends to the restricted cases of bounded
variable degree and bounded number of variables per constraint, and to covering
and packing ILPs. Second, we give a polynomial kernelization for the Cover ILP
problem, asking for a solution to Ax >= b with c^Tx <= k, parameterized by k,
when A is row-sparse; this generalizes a known polynomial kernelization for the
special case with 0/1-variables and coefficients (d-Hitting Set).
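As a purely illustrative companion to the Cover ILP definition above (none of the names or data below come from the paper), the following Python sketch checks whether a candidate 0/1 assignment x satisfies Ax >= b and c^T x <= k; the instance shown is a toy d-Hitting Set input, i.e. the 0/1 special case mentioned above.

```python
# Illustrative sketch (not from the paper): verify a candidate solution x
# for a Cover ILP instance, i.e. check Ax >= b componentwise and c^T x <= k.
# In the d-Hitting Set special case, A is a 0/1 set-element incidence matrix
# whose rows contain at most d ones, and b and c are all-ones vectors.

def is_cover_solution(A, b, c, k, x):
    """Return True iff Ax >= b (componentwise) and c^T x <= k."""
    for row, demand in zip(A, b):            # every covering constraint
        if sum(a * xi for a, xi in zip(row, x)) < demand:
            return False
    return sum(ci * xi for ci, xi in zip(c, x)) <= k   # budget constraint

# Toy 2-Hitting Set instance: sets {0,1}, {1,2}, {2,3} over elements 0..3.
A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
b = [1, 1, 1]
c = [1, 1, 1, 1]

print(is_cover_solution(A, b, c, k=2, x=[0, 1, 0, 1]))  # True: {1, 3} hits every set
print(is_cover_solution(A, b, c, k=1, x=[0, 1, 0, 0]))  # False: the set {2, 3} is not hit
```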
Efficient Parameterized Algorithms for Computing All-Pairs Shortest Paths
Computing all-pairs shortest paths is a fundamental and much-studied problem
with many applications. Unfortunately, despite intense study, there are still
no significantly faster algorithms for it than the O(n^3)-time
algorithm due to Floyd and Warshall (1962). Somewhat faster algorithms exist
for the vertex-weighted version if fast matrix multiplication may be used;
Yuster (SODA 2009) gave a truly subcubic algorithm of this kind,
but no combinatorial, truly subcubic algorithm is known.
Motivated by the recent framework of efficient parameterized algorithms (or
"FPT in P"), we investigate the influence of the graph parameters clique-width
and modular-width on the running times of algorithms for solving
All-Pairs Shortest Paths. We obtain efficient (and combinatorial) parameterized
algorithms on non-negative vertex-weighted graphs, one for each of the two
parameters. If fast matrix multiplication is allowed, then the modular-width
algorithm can be improved further, using the algorithm of Yuster as a black box.
The algorithm relative to modular-width is adaptive, meaning that its running
time matches the best unparameterized algorithm when the modular-width is as
large as the number of vertices, and it already outperforms that algorithm when
the modular-width is polynomially smaller.
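For reference, here is the classical O(n^3) Floyd-Warshall dynamic program that serves as the baseline above, in its standard edge-weighted form; this is only the textbook algorithm, not the parameterized algorithms of the paper, and the example graph is made up.

```python
# Classical Floyd-Warshall all-pairs shortest paths: the O(n^3) baseline
# referenced above (textbook version, not the parameterized algorithms).
INF = float("inf")

def floyd_warshall(n, edges):
    """edges: iterable of directed (u, v, w) triples; returns an n x n distance matrix."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):                       # allow vertex k as an intermediate stop
        for i in range(n):
            dik = dist[i][k]
            if dik == INF:
                continue
            for j in range(n):
                if dik + dist[k][j] < dist[i][j]:
                    dist[i][j] = dik + dist[k][j]
    return dist

# Small example: the path 0 -> 1 -> 2 -> 3 (cost 8) beats the direct edge 0 -> 3 (cost 10).
print(floyd_warshall(4, [(0, 1, 2), (1, 2, 1), (2, 3, 5), (0, 3, 10)])[0][3])  # 8
```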
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be
made bipartite by deleting at most k of its vertices. In a breakthrough
result Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an
O(4^k kmn) time algorithm for it, the first algorithm with polynomial
runtime of uniform degree for every fixed k. It is known that this implies a
polynomial-time compression algorithm that turns OCT instances into equivalent
instances of size at most O(4^k), a so-called kernelization. Since then
the existence of a polynomial kernel for OCT, i.e., a kernelization with size
bounded polynomially in k, has turned into one of the main open questions in
the study of kernelization.
This work provides the first (randomized) polynomial kernelization for OCT.
We introduce a novel kernelization approach based on matroid theory, where we
encode all relevant information about a problem instance into a matroid with a
representation of size polynomial in k. For OCT, the matroid is built to
allow us to simulate the computation of the iterative compression step of the
algorithm of Reed, Smith, and Vetta, applied (for only one round) to an
approximate odd cycle transversal which it is aiming to shrink to size k. The
process is randomized with one-sided error exponentially small in k, where
the result can contain false positives but no false negatives, and the size
guarantee is cubic in the size of the approximate solution. Combined with an
O(√(log n))-approximation (Agarwal et al., STOC 2005), we get a
reduction of the instance to size O(k^{4.5}), implying a randomized
polynomial kernelization.
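As a small self-contained companion (this is only the trivial verification step, not the matroid-based kernelization itself; representation and names are our own), the sketch below checks that a vertex set S is an odd cycle transversal, i.e., that G - S is bipartite, via BFS 2-coloring.

```python
from collections import deque

def is_odd_cycle_transversal(adj, S):
    """Check that deleting S leaves a bipartite graph (adj: undirected adjacency dict)."""
    S = set(S)
    color = {}
    for start in adj:
        if start in S or start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in S:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]      # extend the 2-coloring
                    queue.append(v)
                elif color[v] == color[u]:
                    return False                 # an odd cycle survives in G - S
    return True

# Example: a triangle 0-1-2 with a pendant vertex 3; deleting vertex 2 makes it bipartite.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
print(is_odd_cycle_transversal(adj, {2}))    # True
print(is_odd_cycle_transversal(adj, set()))  # False: the triangle is an odd cycle
```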
Solving Connectivity Problems Parameterized by Treedepth in Single-Exponential Time and Polynomial Space
A breakthrough result of Cygan et al. (FOCS 2011) showed that connectivity problems parameterized by treewidth can be solved much faster than the previously best known time O*(2^{O(tw log tw)}). Using their Cut&Count technique, they obtained O*(α^tw) time algorithms (for problem-specific constants α) for many such problems. Moreover, they proved these running times to be optimal assuming the Strong Exponential-Time Hypothesis. Unfortunately, like other dynamic programming algorithms on tree decompositions, these algorithms also require exponential space, and this is widely believed to be unavoidable. In contrast, for the slightly larger parameter called treedepth, there are already several examples of matching the time bounds obtained for treewidth, but using only polynomial space. Nevertheless, this has remained open for connectivity problems.
In the present work, we close this knowledge gap by applying the Cut&Count technique to graphs of small treedepth. While the general idea is unchanged, we have to design novel procedures for counting consistently cut solution candidates using only polynomial space. Concretely, we obtain time O*(3^d) and polynomial space for Connected Vertex Cover, Feedback Vertex Set, and Steiner Tree on graphs of treedepth d. Similarly, we obtain time O*(4^d) and polynomial space for Connected Dominating Set and Connected Odd Cycle Transversal.
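The algorithms above recurse along a treedepth (elimination) decomposition of the input graph. As a hedged illustration of that structure only (Cut&Count itself is not reproduced here, and all names below are ours), the sketch checks that a rooted forest, given as a parent map, is a valid treedepth decomposition, i.e., every edge joins a vertex to one of its ancestors, and reports its depth.

```python
# Sketch (our own, not from the paper): validate a treedepth decomposition.
# A rooted forest on V(G), given by a parent map (roots map to None), is a
# treedepth decomposition of G if every edge of G connects a vertex to one of
# its ancestors; the treedepth of G is the minimum depth of such a forest.

def ancestors(parent, v):
    """Yield v and all of its ancestors in the rooted forest."""
    while v is not None:
        yield v
        v = parent[v]

def is_treedepth_decomposition(edges, parent):
    return all(u in ancestors(parent, v) or v in ancestors(parent, u)
               for u, v in edges)

def depth(parent):
    return max(sum(1 for _ in ancestors(parent, v)) for v in parent)

# Example: the path 0-1-2-3; rooting the forest at vertex 2 gives depth 3.
edges = [(0, 1), (1, 2), (2, 3)]
parent = {2: None, 1: 2, 0: 1, 3: 2}
print(is_treedepth_decomposition(edges, parent), depth(parent))  # True 3
```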
Space-Efficient Biconnected Components and Recognition of Outerplanar Graphs
We present space-efficient algorithms for computing cut vertices in a given
graph with n vertices and m edges in linear time, using considerably fewer
bits than standard linear-time algorithms. With the same running time and a
similar number of bits, we can compute the biconnected components of a graph.
We use this result to obtain a space-efficient algorithm for the recognition
of (maximal) outerplanar graphs.
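For comparison, here is the textbook recursive DFS computation of cut vertices (Hopcroft-Tarjan style); it stores several integer labels per vertex, i.e., on the order of n log n bits, which is exactly the kind of overhead the space-efficient algorithms above are designed to avoid. The code and example graph are illustrative only.

```python
# Textbook DFS-based cut-vertex computation (for comparison only; it keeps full
# integer arrays per vertex, unlike the space-efficient algorithms discussed above).

def cut_vertices(adj):
    """adj: dict mapping each vertex to a list of neighbours (simple undirected graph)."""
    disc, low, cuts, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])         # back edge to an ancestor
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)                       # some child cannot reach above u
        if parent is None and children >= 2:
            cuts.add(u)                               # root with two or more DFS children

    for s in adj:                                     # handle disconnected graphs
        if s not in disc:
            dfs(s, None)
    return cuts

# Example: two triangles sharing vertex 2, so vertex 2 is the only cut vertex.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
print(cut_vertices(adj))  # {2}
```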
Point Line Cover: The Easy Kernel is Essentially Tight
The input to the NP-hard Point Line Cover problem (PLC) consists of a set P
of n points on the plane and a positive integer k, and the question is
whether there exists a set of at most k lines which pass through all points
in P. A simple polynomial-time reduction reduces any input to one with at
most k^2 points. We show that this is essentially tight under standard
assumptions. More precisely, unless the polynomial hierarchy collapses to its
third level, there is no polynomial-time algorithm that reduces every instance
of PLC to an equivalent instance with O(k^{2-ε}) points, for
any ε > 0. This answers, in the negative, an open problem posed by
Lokshtanov (PhD Thesis, 2009).
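The "simple polynomial-time reduction" above rests on one rule: a line passing through more than k of the points must belong to every solution of size at most k, since otherwise those points alone would need more than k lines; applying the rule exhaustively leaves at most k^2 points or exposes a NO-instance. Below is a minimal sketch of this rule, assuming distinct points with integer coordinates and a naive collinearity test; the function names and the toy instance are ours, not the paper's.

```python
from itertools import combinations

def collinear(p, q, r):
    """True iff the integer points p, q, r lie on a common line."""
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def plc_easy_kernel(points, k):
    """Apply the high-coverage rule exhaustively to a Point Line Cover instance.

    Rule: a line covering more than k points must be picked, so remove its points
    and decrease the budget.  Returns (reduced_points, reduced_k), or None if more
    than k^2 points remain, in which case there is no solution with k lines.
    """
    points, changed = list(points), True
    while changed and k > 0:
        changed = False
        for p, q in combinations(points, 2):
            covered = {r for r in points if collinear(p, q, r)}
            if len(covered) > k:                      # the line through p and q is forced
                points = [r for r in points if r not in covered]
                k -= 1
                changed = True
                break
    if len(points) > k * k:
        return None                                   # k lines cover at most k^2 leftover points
    return points, k

# Toy instance: five points on the x-axis plus one point off the axis, budget k = 2.
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (0, 3)]
print(plc_easy_kernel(pts, 2))  # the x-axis line is forced: ([(0, 3)], 1)
```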
Our proof uses the machinery for deriving lower bounds on the size of kernels
developed by Dell and van Melkebeek (STOC 2010). It has two main ingredients:
We first show, by reduction from Vertex Cover, that PLC (conditionally) has
no kernel of total size O(k^{2-ε}) bits. This does not directly imply
the claimed lower bound on the number of points, since the best known
polynomial-time encodings of a PLC instance may require many more bits than
the instance has points. To get around this we build on work of Goodman et al.
(STOC 1989) and devise an oracle communication protocol for PLC whose cost is
bounded in terms of the number of points alone; its main building block is an
upper bound on the number of order types of n points that are not necessarily
in general position, together with an explicit algorithm that enumerates all
possible order types of n points. This protocol and the lower bound on total
size together yield the stated lower bound on the number of points.
While a number of essentially tight polynomial lower bounds on total sizes of
kernels are known, our result is, to the best of our knowledge, the first to
show a nontrivial lower bound for structural/secondary parameters.