
    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms.
    Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
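
    To make the baseline concrete, the sketch below shows one of the standard fpt methods the abstract reviews, a depth-bounded search tree for k-Vertex Cover. It is a textbook sequential illustration, not the paper's algorithm; work efficiency in the paper's sense asks that a parallel version distribute these branches without substantially increasing the total work.

```python
# Textbook bounded search tree for k-Vertex Cover (illustration only).
def vertex_cover_branch(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover of size <= k."""
    uncovered = next(iter(edges), None)   # pick any edge that still needs covering
    if uncovered is None:                 # no edges left: the cover is complete
        return True
    if k == 0:                            # edges remain but the budget is exhausted
        return False
    u, v = uncovered
    for chosen in (u, v):                 # branch: one endpoint must be in the cover
        remaining = {e for e in edges if chosen not in e}
        if vertex_cover_branch(remaining, k - 1):
            return True
    return False

# A triangle has a vertex cover of size 2 but not of size 1.
triangle = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
print(vertex_cover_branch(triangle, 2), vertex_cover_branch(triangle, 1))  # True False
```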

    Dynamic Complexity Meets Parameterised Algorithms

    Dynamic Complexity studies the maintainability of queries with logical formulas in a setting where the underlying structure or database changes over time. Most often, these formulas are from first-order logic, giving rise to the dynamic complexity class DynFO. This paper investigates extensions of DynFO in the spirit of parameterised algorithms. In this setting, structures come with a parameter k, and the extensions allow additional "space" of size f(k) (in the form of an additional structure of this size), additional time f(k) (in the form of iterations of formulas), or both. The resulting classes are compared with their non-dynamic counterparts and other classes. The main part of the paper explores the applicability of methods for parameterised algorithms to this setting through case studies of various well-known parameterised problems.
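
    As a point of reference for what "maintaining a query with logical formulas" means, the sketch below simulates the classic DynFO textbook example: reachability under edge insertions is maintained by the first-order update formula T'(x,y) := T(x,y) OR (T(x,a) AND T(b,y)) when an edge (a,b) is inserted. This is only the plain dynamic setting, not the parameterised extensions with extra f(k) space or time studied in the paper.

```python
# Simulating a first-order update program for reachability (insertions only).
class DynReachability:
    def __init__(self, vertices):
        self.vertices = list(vertices)
        # T[x][y] is True iff y is reachable from x; initially only trivially so.
        self.T = {x: {y: x == y for y in self.vertices} for x in self.vertices}

    def insert_edge(self, a, b):
        # Apply the update formula T'(x,y) := T(x,y) or (T(x,a) and T(b,y)).
        old = self.T
        self.T = {x: {y: old[x][y] or (old[x][a] and old[b][y])
                      for y in self.vertices}
                  for x in self.vertices}

    def reachable(self, x, y):
        return self.T[x][y]

r = DynReachability([1, 2, 3])
r.insert_edge(1, 2)
r.insert_edge(2, 3)
print(r.reachable(1, 3))  # True
```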

    Fast Parallel Fixed-Parameter Algorithms via Color Coding

    Fixed-parameter algorithms have been successfully applied to solve numerous difficult problems within acceptable time bounds on large inputs. However, most fixed-parameter algorithms are inherently sequential and, thus, make no use of the parallel hardware present in modern computers. We show that parallel fixed-parameter algorithms not only exist for numerous parameterized problems from the literature -- including vertex cover, packing problems, cluster editing, cutting vertices, finding embeddings, and finding matchings -- but that there are parallel algorithms working in constant time or at least in time depending only on the parameter (and not on the size of the input) for these problems. Phrased in terms of complexity classes, we place numerous natural parameterized problems in parameterized versions of AC^0. On a more technical level, we show how the color coding method can be implemented in constant time and apply it to embedding problems for graphs of bounded tree-width or tree-depth and to model checking first-order formulas in graphs of bounded degree.
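
    The color coding method itself is easy to state; the sketch below uses it sequentially to decide whether a graph contains a simple path on k vertices, the classical application. The paper's point is that this method can be carried out by constant-depth circuits; the sketch only illustrates the underlying random-coloring idea and is not the parallel implementation.

```python
import random

# Color coding for "is there a simple path on k vertices?" (Monte Carlo sketch).
def has_k_path(adj, k, trials=200):
    vertices = list(adj)
    for _ in range(trials):
        color = {v: random.randrange(k) for v in vertices}   # random k-coloring
        # dp[v] = color sets of colorful paths ending at v (all sets of equal size).
        dp = {v: {frozenset({color[v]})} for v in vertices}
        for _ in range(k - 1):
            new_dp = {v: set() for v in vertices}
            for u in vertices:
                for v in adj[u]:
                    for used in dp[u]:
                        if color[v] not in used:              # keep the path colorful
                            new_dp[v].add(used | {color[v]})
            dp = new_dp
        if any(dp[v] for v in vertices):                      # some colorful k-path exists
            return True
    return False  # "probably no"; more trials shrink the error probability

path_graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(has_k_path(path_graph, 4))  # True (with high probability)
```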

    Speeding up Networks Mining via Neighborhood Diversity

    Parameterized complexity was classically used to efficiently solve NP-hard problems for small values of a fixed parameter. It has since also been used as a tool to speed up algorithms for tractable problems. Following this line of research, we design algorithms parameterized by neighborhood diversity (nd) for several graph-theoretic problems in P (e.g., Maximum Matching, Triangle Counting and Listing, Girth, and Global Minimum Vertex Cut). Such problems are known to admit algorithms parameterized by modular-width (mw) and consequently, since nd is a "special case" of mw, by nd. However, the proposed novel algorithms improve the computational complexity from a time O(f(mw) · n + m), where n and m denote, respectively, the number of vertices and edges in the input graph, which is multiplicative in n, to a time O(g(nd) + n + m), which is additive in the size of the input.
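
    For intuition about the parameter, the sketch below computes the neighborhood diversity partition directly from its definition: two vertices u and v have the same type if N(u) \ {v} = N(v) \ {u}, and nd is the number of types. Algorithms such as those summarised above compute this partition and then work on the small quotient graph; this naive quadratic version only pins down the definition and does not match the O(g(nd) + n + m) bounds.

```python
# Partition the vertices of a graph (adjacency sets) into neighborhood-diversity classes.
def neighborhood_diversity_classes(adj):
    classes = []                              # lists of vertices sharing a type
    for v in adj:
        for cls in classes:
            u = cls[0]
            if adj[u] - {v} == adj[v] - {u}:  # same neighborhood type
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes

# In a complete graph every pair of vertices has the same type, so nd = 1.
K4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(len(neighborhood_diversity_classes(K4)))  # 1
```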

    Computing Hitting Set Kernels By AC^0-Circuits

    Given a hypergraph H = (V,E), what is the smallest subset X of V such that e and X are not disjoint for all e in E? This problem, known as the hitting set problem, is a basic problem in parameterized complexity theory. There are well-known kernelization algorithms for it, which get a hypergraph H and a number k as input and output a hypergraph H' such that (1) H has a hitting set of size k if, and only if, H' has such a hitting set and (2) the size of H' depends only on k and on the maximum cardinality d of edges in H. The algorithms run in polynomial time, but are highly sequential. Recently, it has been shown that one of them can be parallelized to a certain degree: one can compute hitting set kernels in parallel time O(d) - but it was conjectured that this is the best parallel algorithm possible. We refute this conjecture and show how hitting set kernels can be computed in constant parallel time. For our proof, we introduce a new, generalized notion of hypergraph sunflowers and show how iterated applications of the color coding technique can sometimes be collapsed into a single application.
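
    Kernels of this kind are typically followed by an exact search on the shrunken instance. For reference, the sketch below is a brute-force decision procedure for exactly the hitting set question stated at the start of the abstract; it is exponential in k, which is acceptable once kernelization has reduced the instance to a size depending only on k and d. It is a generic illustration, not part of the paper's construction.

```python
from itertools import combinations

# Brute-force hitting set: is there an X of size at most k meeting every edge?
def has_hitting_set(vertices, edges, k):
    for size in range(k + 1):
        for X in combinations(vertices, size):
            if all(set(e) & set(X) for e in edges):   # X intersects every edge
                return True
    return False

edges = [{1, 2}, {2, 3}, {3, 4}]
print(has_hitting_set([1, 2, 3, 4], edges, 2))  # True, e.g. X = {2, 3}
print(has_hitting_set([1, 2, 3, 4], edges, 1))  # False, no single vertex hits all edges
```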

    Parallel Multivariate Meta-Theorems

    Fixed-parameter tractability is based on the observation that many hard problems become tractable even on large inputs as long as certain input parameters are small. Originally, "tractable" just meant "solvable in polynomial time," but especially modern hardware raises the question of whether we can also achieve "solvable in polylogarithmic parallel time." A framework for this study of parallel fixed-parameter tractability is available and a number of isolated algorithmic results have been obtained in recent years, but one of the unifying core tools of classical FPT theory has been missing: algorithmic meta-theorems. We establish two such theorems by giving new upper bounds on the circuit depth necessary to solve the model checking problem for monadic second-order logic, once parameterized by the tree width and the formula (this is a parallel version of Courcelle's Theorem) and once by the tree depth and the formula. For our proofs we refine the analysis of earlier algorithms, especially of Bodlaender's, but also need to add new ideas, especially in the context where the parallel runtime is bounded by a function of the parameter and does not depend on the length of the input.
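
    For a concrete sense of what such a meta-theorem covers, a standard example (chosen here purely for illustration, it is not singled out in the paper) is 3-colorability, which monadic second-order logic expresses by quantifying over three vertex sets:

    $$\exists R\,\exists G\,\exists B\;\Big[\forall x\,\big(R(x)\lor G(x)\lor B(x)\big)\;\land\;\forall x\,\forall y\,\Big(E(x,y)\rightarrow\neg\big((R(x)\land R(y))\lor(G(x)\land G(y))\lor(B(x)\land B(y))\big)\Big)\Big]$$

    A meta-theorem of the kind established in the paper then yields, for this fixed formula, a shallow circuit deciding 3-colorability on all graphs of bounded tree width.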

    Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time

    We analyze a reduction rule for computing kernels for the hitting set problem: In a hypergraph, the link of a set c of vertices consists of all edges that are supersets of c. We call such a set critical if its link has certain easy-to-check size properties. The rule states that the link of a critical c can be replaced by c. It is known that a simple linear-time algorithm for computing hitting set kernels of size (number of edges) at most k^d (k is the hitting set size, d is the maximum edge size) can be derived from this rule. We parallelize this algorithm and obtain the first AC^0 kernel algorithm that outputs polynomial-size kernels. Previously, such algorithms were not even known for artificial problems. An interesting application of our methods lies in traditional, non-parameterized approximation theory: Our results imply that uniform AC^0-circuits can compute a hitting set whose size is polynomial in the size of an optimal hitting set.
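
    Below is a minimal sketch of a single application of the link-replacement rule described above, under an assumed criticality test: it computes the link of a candidate set c (all edges that are supersets of c) and, if the link exceeds a placeholder threshold, replaces the link by c. The paper's actual easy-to-check criticality condition and its linear-time and constant-depth implementations are more refined; the threshold and names below are illustrative assumptions only.

```python
from itertools import combinations

# One pass of the link-replacement rule (illustrative threshold, see note above).
def reduce_once(edges, k, d):
    edges = {frozenset(e) for e in edges}
    vertices = sorted(set().union(*edges)) if edges else []
    for size in range(1, d):
        for c in map(frozenset, combinations(vertices, size)):
            link = {e for e in edges if c <= e}          # edges that are supersets of c
            if len(link) > k ** (d - size):              # placeholder "critical" test
                return (edges - link) | {c}              # replace the link by c
    return edges                                          # no critical set found

# With k = 1 and d = 2, the centre of a star is critical: any hitting set of
# size 1 must contain vertex 0, so the three edges collapse to the edge {0}.
star = [{0, 1}, {0, 2}, {0, 3}]
print(reduce_once(star, k=1, d=2))  # {frozenset({0})}
```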

    Parallelism with limited nondeterminism

    Computational complexity theory studies which computational problems can be solved with limited access to resources. The past fifty years have seen a focus on the relationship between intractable problems and efficient algorithms. However, the relationship between inherently sequential problems and highly parallel algorithms has not been as well studied. Are there efficient but inherently sequential problems that admit some relaxed form of highly parallel algorithm? In this dissertation, we develop the theory of structural complexity around this relationship for three common types of computational problems. Specifically, we show tradeoffs between time, nondeterminism, and parallelizability. By clearly defining the notions and complexity classes that capture our intuition for parallelizable and sequential problems, we create a comprehensive framework for rigorously proving parallelizability and non-parallelizability of computational problems. This framework provides the means to prove whether otherwise tractable problems can be effectively parallelized, a need highlighted by the current growth of multiprocessor systems. The views adopted by this dissertation (alternate approaches to solving sequential problems using approximation, limited nondeterminism, and parameterization) can be applied practically throughout computer science.