Parameterized Complexity and Kernelizability of Max Ones and Exact Ones Problems
For a finite set Gamma of Boolean relations, MAX ONES SAT(Gamma) and EXACT ONES SAT(Gamma) are generalized satisfiability problems in which every constraint relation is from Gamma, and the task is to find a satisfying assignment with at least/exactly k variables set to 1, respectively. We study the parameterized complexity of these problems, including the question of whether they admit polynomial kernels. For MAX ONES SAT(Gamma), we give a classification into five different complexity levels: polynomial-time solvable, admits a polynomial kernel, fixed-parameter tractable, solvable in polynomial time for fixed k, and NP-hard already for k = 1. For EXACT ONES SAT(Gamma), we refine the classification obtained earlier by taking a closer look at the fixed-parameter tractable cases and classifying the sets Gamma for which EXACT ONES SAT(Gamma) admits a polynomial kernel.
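To make the problem definition concrete, here is a minimal brute-force sketch of EXACT ONES SAT (illustrative only; the function name, the encoding of each relation from Gamma as a set of allowed Boolean tuples, and the 0-based variable indexing are our assumptions, not from the paper). It also illustrates the "solvable in polynomial time for fixed k" level of the classification: there are only C(n, k) candidate assignments to try. For MAX ONES SAT one would additionally try every target size k' >= k.

```python
from itertools import combinations

def exact_ones_sat(n, constraints, k):
    """Brute force for EXACT ONES SAT over n Boolean variables.

    Each constraint is a pair (variable_indices, relation), where
    relation is the set of Boolean tuples from Gamma that are allowed
    on those variables.  Trying all C(n, k) assignments with exactly
    k ones yields a polynomial-time algorithm for every fixed k.
    """
    for ones in combinations(range(n), k):
        assignment = [0] * n
        for i in ones:
            assignment[i] = 1
        # Check every constraint against the projected assignment.
        if all(tuple(assignment[i] for i in vs) in rel
               for vs, rel in constraints):
            return assignment  # satisfying assignment with exactly k ones
    return None  # no satisfying assignment with exactly k ones exists
```

For example, with Gamma containing only the binary OR relation {(0,1), (1,0), (1,1)}, the instance (x1 OR x2) AND (x2 OR x3) with k = 1 is satisfied by setting only x2 to 1.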
Kernelization of generic problems: upper and lower bounds
This thesis addresses the kernelization properties of generic problems, defined via syntactical restrictions or by a problem framework. Polynomial kernelization is a formalization of data reduction, aimed at combinatorially hard problems, which allows a rigorous study of this important and fundamental concept. The thesis is organized into two main parts. In the first part we prove that all problems from two syntactically defined classes of constant-factor approximable problems admit polynomial kernelizations. The problems must be expressible via optimization over first-order formulas with restricted quantification; when relaxing these restrictions we find problems that do not admit polynomial kernelizations. Next, we consider edge modification problems, and we show that they do not generally admit polynomial kernelizations. In the second part we consider three types of Boolean constraint satisfaction problems. We completely characterize whether these problems admit polynomial kernelizations, i.e., given such a problem, our results either provide a polynomial kernelization or show that the problem does not admit one. These dichotomies are characterized by properties of the permitted constraints.
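For context, the standard definition of (polynomial) kernelization from parameterized complexity, which the notion of data reduction above formalizes, can be stated as:

```latex
A \emph{kernelization} for a parameterized problem
$Q \subseteq \Sigma^* \times \mathbb{N}$ is a polynomial-time algorithm
that maps any instance $(x, k)$ to an equivalent instance $(x', k')$,
i.e.\ $(x, k) \in Q \iff (x', k') \in Q$, such that
$|x'| + k' \le f(k)$ for some computable function $f$.
The kernelization is \emph{polynomial} if $f$ is a polynomial.
```

Showing that a problem admits no polynomial kernelization (under standard complexity assumptions) is what the lower-bound parts of the dichotomies establish.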
Control of mixing in a Stokes' fluid flow
Unconditional Lower Bounds in Complexity Theory
This work investigates the hardness of solving natural computational problems according to different complexity measures. Our results and techniques span several areas in theoretical computer science and discrete mathematics. They have in common the following aspects: (i) the results are unconditional, i.e., they rely on no unproven hardness assumption from complexity theory; (ii) the corresponding lower bounds are essentially optimal. Among our contributions, we highlight the following results.
Constraint Satisfaction Problems and Monotone Complexity. We introduce a natural formulation of the satisfiability problem as a monotone function, and prove a near-optimal 2^{Ω (n/log n)} lower bound on the size of monotone formulas solving k-SAT on n-variable instances (for a large enough k ∈ ℕ). More generally, we investigate constraint satisfaction problems according to the geometry of their constraints, i.e., as a function of the hypergraph describing which variables appear in each constraint. Our results show in a certain technical sense that the monotone circuit depth complexity of the satisfiability problem is polynomially related to the tree-width of the corresponding graphs.
Interactive Protocols and Communication Complexity. We investigate interactive compression protocols, a hybrid model between computational complexity and communication complexity. We prove that the communication complexity of the Majority function on n-bit inputs with respect to Boolean circuits of size s and depth d extended with modulo p gates is precisely n/log^{Θ(d)} s, where p is a fixed prime number, and d ∈ ℕ. Further, we establish a strong round-separation theorem for bounded-depth circuits, showing that (r+1)-round protocols can be substantially more efficient than r-round protocols, for every r ∈ ℕ.
Negations in Computational Learning Theory. We study the learnability of circuits containing a given number of negation gates, a measure that interpolates between monotone functions and the class of all functions. Let C^t_n be the class of Boolean functions on n input variables that can be computed by Boolean circuits with at most t negations. We prove that any algorithm that learns every f ∈ C^t_n with membership queries according to the uniform distribution to accuracy ε has query complexity 2^{Ω (2^t sqrt(n)/ε)} (for a large range of these parameters). Moreover, we give an algorithm that learns C^t_n from random examples only, and with a running time that essentially matches this information-theoretic lower bound.
Negations in Theory of Cryptography. We investigate the power of negation gates in cryptography and related areas, and prove that many basic cryptographic primitives require essentially the maximum number of negations among all Boolean functions. In other words, cryptography is highly non-monotone. Our results rely on a variety of techniques, and give near-optimal lower bounds for pseudorandom functions, error-correcting codes, hardcore predicates, randomness extractors, and small-bias generators.
Algorithms versus Circuit Lower Bounds. We strengthen a few connections between algorithms and circuit lower bounds. We show that the design of faster algorithms in some widely investigated learning models would imply new unconditional lower bounds in complexity theory. In addition, we prove that the existence of non-trivial satisfiability algorithms for certain classes of Boolean circuits of depth d+2 leads to lower bounds for the corresponding class of circuits of depth d. These results show that either there are no faster algorithms for some computational tasks, or certain circuit lower bounds hold.