Fractals for Kernelization Lower Bounds, With an Application to Length-Bounded Cut Problems
Bodlaender et al.'s [Bodlaender/Jansen/Kratsch, 2014] cross-composition technique is a popular method for excluding polynomial-size problem kernels for NP-hard parameterized problems. We present a new technique exploiting triangle-based fractal structures for extending the range of applicability of cross-compositions. Our technique makes it possible to prove new no-polynomial-kernel results for a number of problems dealing with length-bounded cuts. Roughly speaking, our new technique combines the advantages of serial and parallel composition. In particular, answering an open question of Golovach and Thilikos [Golovach/Thilikos, 2011], we show that, unless NP ⊆ coNP/poly, the NP-hard Length-Bounded Edge-Cut problem (delete at most k edges such that the resulting graph has no s-t path of length shorter than l) parameterized by the combination of k and l has no polynomial-size problem kernel. Our framework applies to planar as well as directed variants of the basic problems, and to both edge and vertex deletion problems.
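The parenthetical problem definition can be made concrete with a brute-force decision procedure. The sketch below is purely illustrative (exponential in the number of edges, and unrelated to the paper's kernelization machinery); it interprets the cut condition as requiring every remaining s-t path to have length greater than l, which is an assumption about the exact threshold convention.

```python
from itertools import combinations
from collections import deque

def shortest_st_distance(n, edges, s, t):
    """BFS shortest-path length from s to t in an undirected graph; inf if disconnected."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return float("inf")

def length_bounded_edge_cut(n, edges, s, t, k, l):
    """Can deleting at most k edges make the shortest s-t path longer than l?
    Brute force over all edge subsets of size at most k -- for illustration only."""
    for r in range(min(k, len(edges)) + 1):
        for removed in combinations(edges, r):
            rest = [e for e in edges if e not in removed]
            if shortest_st_distance(n, rest, s, t) > l:
                return True
    return False
```

On the triangle {0, 1, 2} with s = 0 and t = 2, deleting the direct edge (0, 2) forces the shortest s-t path up to length 2, so the instance (k = 1, l = 1) is a yes-instance, while (k = 1, l = 2) is not.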
A survey of parameterized algorithms and the complexity of edge modification
The survey is a comprehensive overview of the developing area of parameterized algorithms for graph modification problems. It describes the state of the art in kernelization, subexponential algorithms, and parameterized complexity of graph modification. The main focus is on edge modification problems, where the task is to change some adjacencies in a graph to satisfy some required properties. To facilitate further research, we list many open problems in the area.
Parameterized Complexity of Critical Node Cuts
We consider the following natural graph cut problem called Critical Node Cut
(CNC): Given a graph G on n vertices, and two positive integers k and x,
determine whether G has a set of k vertices whose removal leaves G
with at most x connected pairs of vertices. We analyze this problem in the
framework of parameterized complexity. That is, we are interested in whether or
not this problem is solvable in f(p) * n^{O(1)} time (i.e., whether
or not it is fixed-parameter tractable), for various natural parameters
p. We consider four such parameters:
- The size k of the required cut.
- The upper bound x on the number of remaining connected pairs.
- The lower bound on the number of connected pairs to be removed.
- The treewidth of G.
We determine whether or not CNC is fixed-parameter tractable for each of
these parameters. We determine this also for all possible aggregations of these
four parameters, apart from one combination. Moreover, we also determine whether or not
CNC admits a polynomial kernel for all these parameterizations. That is,
whether or not there is an algorithm that reduces each instance of CNC in
polynomial time to an equivalent instance of size p^{O(1)}, where p
is the given parameter.
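The CNC definition itself is easy to check by exhaustive search. The following sketch is a hypothetical brute-force decider (trying every k-subset of vertices, so exponential in k), intended only to pin down the problem statement, not to reflect any algorithm from the paper.

```python
from itertools import combinations

def connected_pairs(n, edges, removed):
    """Count unordered pairs of remaining vertices lying in the same
    connected component, via union-find with path compression."""
    parent = {v: v for v in range(n) if v not in removed}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        if u in parent and v in parent:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    sizes = {}
    for v in parent:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sum(s * (s - 1) // 2 for s in sizes.values())

def critical_node_cut(n, edges, k, x):
    """Is there a set of k vertices whose removal leaves at most x
    connected pairs?  Brute force -- for illustration only."""
    return any(connected_pairs(n, edges, set(c)) <= x
               for c in combinations(range(n), k))
```

For example, in a star on four vertices, removing the center (k = 1) leaves three isolated vertices and hence zero connected pairs, so the instance with x = 0 is a yes-instance.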
07411 Abstracts Collection -- Algebraic Methods in Computational Complexity
From 07.10. to 12.10., the Dagstuhl Seminar 07411 "Algebraic Methods in Computational Complexity" was held in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available
Efficient Classification for Metric Data
Recent advances in large-margin classification of data residing in general
metric spaces (rather than Hilbert spaces) enable classification under various
natural metrics, such as string edit and earthmover distance. A general
framework developed for this purpose by von Luxburg and Bousquet [JMLR, 2004]
left open the questions of computational efficiency and of providing direct
bounds on generalization error.
We design a new algorithm for classification in general metric spaces, whose
runtime and accuracy depend on the doubling dimension of the data points, and
can thus achieve superior classification performance in many common scenarios.
The algorithmic core of our approach is an approximate (rather than exact)
solution to the classical problems of Lipschitz extension and of Nearest
Neighbor Search. The algorithm's generalization performance is guaranteed via
the fat-shattering dimension of Lipschitz classifiers, and we present
experimental evidence of its superiority to some common kernel methods. As a
by-product, we offer a new perspective on the nearest neighbor classifier,
which yields significantly sharper risk asymptotics than the classic analysis
of Cover and Hart [IEEE Trans. Info. Theory, 1967]. (This is the full version of an extended abstract that appeared in the Proceedings of the 23rd COLT, 2010.)
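The starting point of this line of work, classification under a general metric such as string edit distance, can be illustrated with a plain exact 1-nearest-neighbor classifier. This is a minimal sketch of the baseline, not the paper's approximate, doubling-dimension-aware algorithm; the training-set format and function names are my own.

```python
def edit_distance(a, b):
    """Levenshtein distance by row-rolling dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def nn_classify(train, query, metric=edit_distance):
    """1-nearest-neighbor under an arbitrary metric: return the label of
    the training point closest to the query.  train is a list of
    (point, label) pairs."""
    return min(train, key=lambda item: metric(item[0], query))[1]
```

Any metric (earthmover distance, tree edit distance, ...) can be dropped in for `metric`; no inner product or Hilbert-space embedding is needed, which is exactly the setting the abstract describes.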