
    Polynomial Kernelizations for MIN F^+Pi_1 and MAX NP

    The relation of constant-factor approximability to fixed-parameter tractability and kernelization is a long-standing open question. We prove that two large classes of constant-factor approximable problems, namely MIN F^+Pi_1 and MAX NP, including the well-known subclass MAX SNP, admit polynomial kernelizations for their natural decision versions. This extends results of Cai and Chen (JCSS 1997), stating that the standard parameterizations of problems in MAX SNP and MIN F^+Pi_1 are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al., ICALP 2008).
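
    For reference, these classes are defined syntactically; the sketch below follows the standard Papadimitriou-Yannakakis and Kolaitis-Thakur formulations rather than the paper's own notation. A problem is in MAX NP if its optimum can be written as

        $\mathrm{opt}(G) = \max_{S} \left|\{\bar{x} : \exists\bar{y}\; \varphi(\bar{x},\bar{y},G,S)\}\right|$  with $\varphi$ quantifier-free,

    and in MAX SNP if the block $\exists\bar{y}$ can be dropped; it is in MIN F$^{+}\Pi_1$ if its optimum can be written as

        $\mathrm{opt}(G) = \min\{|S| : \forall\bar{x}\; \psi(\bar{x},G,S)\}$  with $\psi$ quantifier-free and $S$ occurring only positively.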

    Kernelization of generic problems: upper and lower bounds

    This thesis addresses the kernelization properties of generic problems, defined via syntactical restrictions or by a problem framework. Polynomial kernelization is a formalization of data reduction, aimed at combinatorially hard problems, which allows a rigorous study of this important and fundamental concept. The thesis is organized into two main parts. In the first part we prove that all problems from two syntactically defined classes of constant-factor approximable problems admit polynomial kernelizations. The problems must be expressible via optimization over first-order formulas with restricted quantification; when relaxing these restrictions we find problems that do not admit polynomial kernelizations. Next, we consider edge modification problems, and we show that they do not generally admit polynomial kernelizations. In the second part we consider three types of Boolean constraint satisfaction problems. We completely characterize whether these problems admit polynomial kernelizations, i.e., given such a problem, our results either provide a polynomial kernelization or show that the problem does not admit one. These dichotomies are characterized by properties of the permitted constraints.
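
    To make the notion of polynomial kernelization concrete, here is a minimal Python sketch of Buss's classic reduction rules for Vertex Cover, a textbook example of data reduction; it is not an algorithm from the thesis, and the function name and interface are ours.

        # Illustration only (not from the thesis): Buss's classic kernelization for
        # Vertex Cover, the textbook example of polynomial data reduction.
        def buss_kernel(edges, k):
            """Return an equivalent reduced instance (edges, k) with at most k^2 edges,
            or True/False if the instance is decided outright."""
            edges = {frozenset(e) for e in edges}
            while k >= 0:
                # Degree of every vertex in the remaining instance.
                degree = {}
                for e in edges:
                    for v in e:
                        degree[v] = degree.get(v, 0) + 1
                # Rule: a vertex of degree > k lies in every vertex cover of size <= k,
                # so take it, delete its incident edges and decrease the budget.
                high = next((v for v, d in degree.items() if d > k), None)
                if high is None:
                    break
                edges = {e for e in edges if high not in e}
                k -= 1
            if k < 0:
                return False              # budget exhausted by forced vertices
            if len(edges) > k * k:
                return False              # max degree <= k, so more than k^2 edges is a 'no'
            if not edges:
                return True
            return edges, k               # kernel with O(k^2) edges

        # A triangle plus a pendant edge has a vertex cover of size 2:
        print(buss_kernel([(1, 2), (2, 3), (1, 3), (3, 4)], 2))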

    A shortcut to (sun)flowers: Kernels in logarithmic space or linear time

    We investigate whether kernelization results can be obtained if we restrict kernelization algorithms to run in logarithmic space. This restriction for kernelization is motivated by the question of what results are attainable for preprocessing via simple and/or local reduction rules. We find kernelizations for d-Hitting Set(k), d-Set Packing(k), Edge Dominating Set(k) and a number of hitting and packing problems in graphs, each running in logspace. Additionally, we return to the question of linear-time kernelization. For d-Hitting Set(k) a linear-time kernelization was given by van Bevern [Algorithmica (2014)]. We give a simpler procedure and save a large constant factor in the size bound. Furthermore, we show that we can obtain a linear-time kernel for d-Set Packing(k) as well.
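
    For context, the (sun)flowers in the title refer to the Erdős-Rado sunflower lemma, which underlies the textbook kernelization for d-Hitting Set; the sketch below is that classical argument, not the paper's logspace or linear-time refinement. A sunflower with $p$ petals and core $C$ is a family of sets $S_1,\dots,S_p$ with $S_i \cap S_j = C$ for all $i \neq j$, and the lemma states that any family of more than $d!\,k^d$ sets, each of size $d$, contains a sunflower with $k+1$ petals. The reduction rule: if the instance contains a sunflower with $k+1$ petals and core $C$, then every hitting set of size at most $k$ must intersect $C$ (reject if $C = \emptyset$), so the sunflower's sets can be replaced by the single set $C$; exhaustive application leaves $O(k^d)$ sets for fixed $d$, i.e. a polynomial kernel.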

    On the complexity of finding and counting solution-free sets of integers

    Given a linear equation $\mathcal{L}$, a set $A$ of integers is $\mathcal{L}$-free if $A$ does not contain any `non-trivial' solutions to $\mathcal{L}$. This notion incorporates many central topics in combinatorial number theory such as sum-free and progression-free sets. In this paper we initiate the study of (parameterised) complexity questions involving $\mathcal{L}$-free sets of integers. The main questions we consider involve deciding whether a finite set of integers $A$ has an $\mathcal{L}$-free subset of a given size, and counting all such $\mathcal{L}$-free subsets. We also raise a number of open problems.
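
    As a concrete special case (our own illustration, using the simplest equation $\mathcal{L}: x + y = z$ rather than a general $\mathcal{L}$), a brute-force version of the decision problem might look as follows in Python:

        from itertools import combinations

        # Illustration only, for the special case L: x + y = z (sum-free sets);
        # "non-trivial" is read here as any x, y, z in the subset with x + y = z,
        # repetitions allowed.
        def is_sum_free(s):
            s = set(s)
            return all(x + y not in s for x in s for y in s)

        def has_sum_free_subset(a, size):
            """Brute-force the decision version: does the integer set a contain a
            sum-free subset of the given size?  Exponential in |a|, in contrast to
            the parameterized algorithms studied in the paper."""
            return any(is_sum_free(c) for c in combinations(set(a), size))

        # {1, 5, 10, 11} itself is not sum-free (1 + 10 = 11), but it contains the
        # sum-free subset {1, 5, 11}.
        print(has_sum_free_subset({1, 5, 10, 11}, 3))   # True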

    The Parameterized Complexity of Degree Constrained Editing Problems

    This thesis examines degree constrained editing problems within the framework of parameterized complexity. A degree constrained editing problem takes as input a graph and a set of constraints and asks whether the graph can be altered in at most k editing steps such that the degrees of the remaining vertices are within the given constraints. Parameterized complexity gives a framework for examining problems that are traditionally considered intractable and developing efficient exact algorithms for them, or showing that they are unlikely to have such algorithms, by introducing an additional component to the input, the parameter, which gives additional information about the structure of the problem. If the problem has an algorithm that is exponential in the parameter, but polynomial, with constant degree, in the size of the input, then it is considered to be fixed-parameter tractable. Parameterized complexity also provides an intractability framework for identifying problems that are unlikely to have such an algorithm. Degree constrained editing problems provide natural parameterizations in terms of the total cost k of vertex deletions, edge deletions and edge additions allowed, and the upper bound r on the degree of the vertices remaining after editing. We define a class of degree constrained editing problems, WDCE, which generalises several well-known problems, such as Degree r Deletion, Cubic Subgraph, r-Regular Subgraph, f-Factor and General Factor. We show that in general, if both k and r are part of the parameter, problems in the WDCE class are fixed-parameter tractable, and if parameterized by k or r alone, the problems are intractable in a parameterized sense. We further show cases of WDCE that have polynomial-time kernelizations; in particular, when all the degree constraints are a single number and the editing operations include vertex deletion and edge deletion, we show that there is a kernel with at most O(kr(k + r)) vertices. If we allow vertex deletion and edge addition, we show that despite remaining fixed-parameter tractable when parameterized by k and r together, the problems are unlikely to have polynomial-sized kernelizations, or polynomial-time kernelizations of a certain form, under certain complexity theoretic assumptions. We also examine a more general case where, given an input graph, the question is whether the graph can be made r-degenerate with at most k deletions. We show that in this case the problems are intractable, even when r is a constant.
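
    In the standard notation (not specific to this thesis), fixed-parameter tractability means an algorithm running in time $f(\kappa)\cdot n^{O(1)}$, where $n$ is the input size, $\kappa$ the parameter (here $\kappa = k$, $\kappa = r$ or $\kappa = k + r$) and $f$ any computable function; the kernelization mentioned above is a polynomial-time reduction of an instance to an equivalent instance of the same problem with at most $O(kr(k+r))$ vertices.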

    Graph editing problems with extended regularity constraints

    Graph editing problems offer an interesting perspective on sub- and supergraph identification problems for a large variety of target properties. They have also attracted significant attention in recent years, particularly in the area of parameterized complexity, as the problems have rich parameter ecologies. In this paper we examine generalisations of the notion of editing a graph to obtain a regular subgraph. In particular, we extend the notion of regularity to include two variants of edge-regularity, along with the unifying constraint of strong regularity. We present a number of results, with the central observation that these problems retain the general complexity profile of their regularity-based inspiration: when the number of edits k and the maximum degree r are taken together as a combined parameter, the problems are tractable (i.e. in FPT), but they are otherwise intractable. We also examine variants of the basic editing to obtain a regular subgraph problem from the perspective of parameterizing by the treewidth of the input graph. In this case the treewidth of the input graph essentially becomes a limiting parameter on the natural k+r parameterization.
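
    For reference, the standard regularity hierarchy reads as follows (the two edge-regularity variants examined in the paper may differ in detail): a graph is $r$-regular if every vertex has degree $r$; it is edge-regular with parameters $(n,r,\lambda)$ if, in addition, every pair of adjacent vertices has exactly $\lambda$ common neighbours; and it is strongly regular with parameters $(n,r,\lambda,\mu)$ if, further, every pair of distinct non-adjacent vertices has exactly $\mu$ common neighbours.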

    Assessing the Computational Complexity of Multi-Layer Subgraph Detection

    Multi-layer graphs consist of several graphs (layers) over the same vertex set. They are motivated by real-world problems where entities (vertices) are associated via multiple types of relationships (edges in different layers). We chart the border of computational (in)tractability for the class of subgraph detection problems on multi-layer graphs, including fundamental problems such as maximum matching, finding certain clique relaxations (motivated by community detection), or path problems. Mostly encountering hardness results, sometimes even for two or three layers, we can also spot some islands of tractability.