
    On space efficiency of algorithms working on structural decompositions of graphs

    Dynamic programming on path and tree decompositions of graphs is a technique that is ubiquitous in the field of parameterized and exponential-time algorithms. However, one of its drawbacks is that the space usage is exponential in the decomposition's width. Following the work of Allender et al. [Theory of Computing, '14], we investigate whether this space complexity explosion is unavoidable. Using the idea of reparameterization of Cai and Juedes [J. Comput. Syst. Sci., '03], we prove that the question is closely related to a conjecture that the Longest Common Subsequence problem, parameterized by the number of input strings, does not admit an algorithm that simultaneously uses XP time and FPT space. Moreover, we complete the complexity landscape sketched for pathwidth and treewidth by Allender et al. by considering the parameter tree-depth. We prove that computations on tree-depth decompositions correspond to a model of non-deterministic machines that work in polynomial time and logarithmic space, with access to an auxiliary stack of maximum height equal to the decomposition's depth. Together with the results of Allender et al., this describes a hierarchy of complexity classes for polynomial-time non-deterministic machines with different restrictions on access to working space, which mirrors the classic relations between treewidth, pathwidth, and tree-depth.
    Comment: An extended abstract appeared in the proceedings of STACS'16. The new version is augmented with a space-efficient algorithm for Dominating Set using the Chinese remainder theorem.
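
    To make the space issue concrete, here is a minimal Python sketch (not from the paper; all names are illustrative) of dynamic programming for Minimum Vertex Cover over a given path decomposition: the table kept at each bag has one entry per subset of the bag, i.e., up to 2^{width+1} entries, which is exactly the exponential space usage the abstract asks about.

        from itertools import combinations

        def min_vertex_cover(bags, edges):
            # Illustrative sketch (not the paper's code): DP for Minimum
            # Vertex Cover over a given path decomposition bags[0..r-1]
            # (sets of vertices).  The table dp keeps one entry per subset
            # of the current bag -- space exponential in the width.
            def subsets(vs):
                vs = sorted(vs)
                return (frozenset(c) for r in range(len(vs) + 1)
                        for c in combinations(vs, r))

            def covers(S, bag):  # S must hit every edge lying inside this bag
                return all(set(e) & S for e in edges if set(e) <= bag)

            INF = float("inf")
            dp = {S: (len(S) if covers(S, bags[0]) else INF)
                  for S in subsets(bags[0])}
            for prev, bag in zip(bags, bags[1:]):
                new = {S: INF for S in subsets(bag)}
                for S_old, cost in dp.items():
                    if cost == INF:
                        continue
                    for extra in subsets(bag - prev):  # decide fresh vertices
                        S = (S_old & bag) | extra
                        c = cost + len(extra)
                        if c < new[S] and covers(S, bag):
                            new[S] = c
                dp = new
            return min(dp.values())

        # Path 1-2-3-4 with bags {1,2}, {2,3}, {3,4}: the optimum cover is {2,3}.
        print(min_vertex_cover([{1, 2}, {2, 3}, {3, 4}],
                               [(1, 2), (2, 3), (3, 4)]))  # -> 2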

    Solving weighted and counting variants of connectivity problems parameterized by treewidth deterministically in single exponential time

    It is well known that many local graph problems, like Vertex Cover and Dominating Set, can be solved in 2^{O(tw)}|V|^{O(1)} time for graphs G=(V,E) with a given tree decomposition of width tw. However, for nonlocal problems, like the fundamental class of connectivity problems, for a long time no approach faster than tw^{O(tw)}|V|^{O(1)} was known. Recently, Cygan et al. (FOCS 2011) presented Monte Carlo algorithms for a wide range of connectivity problems running in time c^{tw}|V|^{O(1)} for a small constant c, e.g., for Hamiltonian Cycle and Steiner Tree. Naturally, this raises the question of whether randomization is necessary to achieve this runtime; furthermore, it is desirable to also solve counting and weighted versions (the latter without incurring a pseudo-polynomial cost in terms of the weights). We present two new approaches rooted in linear algebra, based on matrix rank and determinants, which provide deterministic c^{tw}|V|^{O(1)} time algorithms, also for weighted and counting versions. For example, in this time we can solve the traveling salesman problem or count the number of Hamiltonian cycles. The rank-based ideas provide a rather general approach for speeding up even straightforward dynamic programming formulations by identifying "small" sets of representative partial solutions; we focus on the case of expressing connectivity via sets of partitions, but the essential ideas should have further applications. The determinant-based approach uses the matrix tree theorem to derive closed formulas for counting versions of connectivity problems; we show how to evaluate those formulas via dynamic programming.
    Comment: 36 pages
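
    The pruning primitive behind the rank-based approach can be illustrated in a few lines. This is a sketch under the assumption that partial solutions are encoded as 0/1 rows of a compatibility matrix over GF(2); it is not the paper's code. Keeping only a row basis is safe: if a discarded row has a 1 in some column, it is a GF(2)-sum of kept rows, so an odd number -- in particular at least one -- of the kept rows has a 1 there too.

        def representative_rows(rows):
            # rows[i] is an int bitmask: bit j is 1 iff partial solution i
            # combines with extension j into a full solution.  Keep a maximal
            # GF(2)-independent subset (a row basis) via Gaussian elimination;
            # every column with a witness among the input rows keeps a witness.
            basis = {}          # pivot bit -> reduced row vector
            kept = []
            for idx, row in enumerate(rows):
                cur = row
                while cur:
                    pivot = cur.bit_length() - 1
                    if pivot not in basis:      # new pivot: row is independent
                        basis[pivot] = cur
                        kept.append(idx)
                        break
                    cur ^= basis[pivot]         # reduce and continue
                # cur == 0 means the row is a sum of kept rows: discard it
            return kept

        # Rows 0b110 and 0b011 span 0b101, so the third row is discarded.
        print(representative_rows([0b110, 0b011, 0b101]))  # -> [0, 1]

    In the paper's setting, rows and columns are indexed by partitions of a bag and the relevant matrix has rank single-exponential in tw, which is what caps the number of representative partial solutions; the sketch only shows the generic reduce step.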

    A structural approach to kernels for ILPs: Treewidth and Total Unimodularity

    Kernelization is a theoretical formalization of efficient preprocessing for NP-hard problems. Empirically, preprocessing is highly successful in practice, for example in state-of-the-art ILP solvers like CPLEX. Motivated by this, previous work studied the existence of kernelizations for ILP-related problems, e.g., for testing feasibility of Ax <= b. In contrast to the observed success of CPLEX, however, the results were largely negative. Intuitively, practical instances have far more useful structure than the worst-case instances used to prove these lower bounds. In the present paper, we study the effect that subsystems with bounded treewidth (of the Gaifman graph) or total unimodularity have on the kernelizability of the ILP feasibility problem. We show that, on the positive side, if these subsystems have a small number of variables on which they interact with the remaining instance, then we can efficiently replace them by smaller subsystems of size polynomial in the domain without changing feasibility. Thus, if large parts of an instance consist of such subsystems, then this yields a substantial size reduction. We complement this by proving that relaxations of the considered structures, e.g., larger boundaries of the subsystems, allow worst-case lower bounds against kernelization. Thus, these relaxed structures can be used to build instance families that cannot be efficiently reduced by any approach.
    Comment: Extended abstract in the Proceedings of the 23rd European Symposium on Algorithms (ESA 2015)
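
    For readers unfamiliar with the term: the Gaifman (or primal) graph of a system Ax <= b has the variables as vertices, with two variables adjacent whenever they occur together in some constraint; it is this graph whose treewidth is bounded for the subsystems above. A minimal sketch with illustrative names, not taken from the paper:

        from itertools import combinations

        def gaifman_graph(A):
            # Vertices are column indices (variables) of the constraint matrix A;
            # two variables are adjacent iff some row has nonzero entries in both.
            edges = set()
            for row in A:
                support = [j for j, a in enumerate(row) if a != 0]
                edges.update(combinations(support, 2))
            return edges

        # x0 + x1 <= 3 and x1 - x2 <= 0 yield the path 0 - 1 - 2:
        print(gaifman_graph([[1, 1, 0], [0, 1, -1]]))  # -> {(0, 1), (1, 2)}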

    Algorithms for detecting dependencies and rigid subsystems for CAD

    Geometric constraint systems underlie popular Computer Aided Design software. Automated approaches for detecting dependencies in a design are critical for developing robust solvers and providing informative user feedback, and we provide algorithms for two types of dependencies. First, we give a pebble game algorithm for detecting generic dependencies. Then, we focus on identifying the "special positions" of a design in which generically independent constraints become dependent. We present combinatorial algorithms for identifying subgraphs associated to factors of a particular polynomial, whose vanishing indicates a special position and resulting dependency. Further factoring in the Grassmann-Cayley algebra may allow a geometric interpretation giving conditions (e.g., "these two lines being parallel causes a dependency") determining the special position.
    Comment: 37 pages, 14 figures (v2 is an expanded version of an AGD'14 abstract based on v1)
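
    The generic-dependency test referred to above is in the spirit of the classic Lee-Streinu (k, l)-pebble game (with k = 2, l = 3 for bar-joint rigidity in the plane). Below is a hedged Python sketch of that game, not the paper's exact algorithm: an edge whose endpoints cannot gather l + 1 pebbles is generically dependent on the previously accepted edges.

        def pebble_game(n, edges, k=2, l=3):
            # (k, l)-pebble game sketch: vertices start with k pebbles; inserting
            # an edge (u, v) costs one pebble and requires l + 1 pebbles on
            # {u, v}, which may be gathered by reversing directed paths to
            # free pebbles.  Rejected edges are generically dependent.
            pebbles = [k] * n
            out = [set() for _ in range(n)]  # orientation of accepted edges

            def fetch(root, avoid):
                # DFS from root for a free pebble, never taking one from avoid;
                # on success, reverse the path and carry the pebble to root.
                seen, parent, stack = {root, avoid}, {}, [root]
                while stack:
                    v = stack.pop()
                    for w in out[v]:
                        if w in seen:
                            continue
                        seen.add(w)
                        parent[w] = v
                        if pebbles[w] > 0:
                            pebbles[w] -= 1
                            while w != root:     # reverse path edge by edge
                                p = parent[w]
                                out[p].discard(w)
                                out[w].add(p)
                                w = p
                            pebbles[root] += 1
                            return True
                        stack.append(w)
                return False

            independent = []
            for u, v in edges:
                while pebbles[u] + pebbles[v] < l + 1:
                    if not (fetch(u, v) or fetch(v, u)):
                        break                # cannot gather enough: dependent
                if pebbles[u] + pebbles[v] >= l + 1:
                    pebbles[u] -= 1
                    out[u].add(v)
                    independent.append((u, v))
            return independent

        # A triangle is generically independent; repeating an edge is not:
        print(pebble_game(3, [(0, 1), (1, 2), (0, 2), (0, 1)]))
        # -> [(0, 1), (1, 2), (0, 2)]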

    Approximate Inference in Graphical Models using Tensor Decompositions

    Contains full text: 35136.pdf (publisher's version, Open Access), 20 pages.

    Sublinear Time and Space Algorithms for Correlation Clustering via Sparse-Dense Decompositions

    We present a new approach for solving (minimum disagreement) correlation clustering that results in sublinear algorithms with highly efficient time and space complexity for this problem. In particular, we obtain the following algorithms for n-vertex (+/-)-labeled graphs G:
    -- A sublinear-time algorithm that with high probability returns a constant approximation clustering of G in O(n log^2 n) time assuming access to the adjacency list of the (+)-labeled edges of G (this is almost quadratically faster than even reading the input once). Previously, no sublinear-time algorithm was known for this problem with any multiplicative approximation guarantee.
    -- A semi-streaming algorithm that with high probability returns a constant approximation clustering of G in O(n log n) space and a single pass over the edges of the graph G (this memory is almost quadratically smaller than the input size). Previously, no single-pass algorithm with o(n^2) space was known for this problem with any approximation guarantee.
    The main ingredient of our approach is a novel connection to sparse-dense graph decompositions that are used extensively in the graph coloring literature. To our knowledge, this connection is the first application of these decompositions beyond graph coloring, and in particular to the correlation clustering problem, and it can be of independent interest.
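
    For context only -- this is not the sublinear algorithm of the abstract -- here is the classic randomized Pivot (KwikCluster) baseline of Ailon, Charikar, and Newman, which fixes the objects involved: vertices, (+)-labeled pairs, clusters, and the disagreement cost being approximated. It reads the whole input and gives an expected 3-approximation.

        import random

        def pivot_clustering(n, plus, seed=0):
            # Classic Pivot / KwikCluster, shown only to fix notation; plus is
            # a set of frozenset({u, v}) pairs labeled (+), and every other
            # pair is implicitly labeled (-).
            order = list(range(n))
            random.Random(seed).shuffle(order)
            alive, clusters = set(range(n)), []
            for p in order:
                if p not in alive:
                    continue
                # the pivot grabs all of its still-unclustered (+)-neighbors
                cluster = {p} | {v for v in alive if frozenset((p, v)) in plus}
                clusters.append(cluster)
                alive -= cluster
            return clusters

        def disagreements(n, plus, clusters):
            # (+) pairs split across clusters plus (-) pairs inside one cluster.
            where = {v: i for i, c in enumerate(clusters) for v in c}
            return sum((frozenset((u, v)) in plus) != (where[u] == where[v])
                       for u in range(n) for v in range(u + 1, n))

        plus = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2), (3, 4)]}
        cl = pivot_clustering(5, plus)
        # The (+) components here are cliques, so any pivot order is perfect:
        print(cl, disagreements(5, plus, cl))  # cost 0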

    Compositional Algorithms on Compositional Data: Deciding Sheaves on Presheaves

    Algorithmicists are well aware that fast dynamic programming algorithms are very often the correct choice when computing on compositional (or even recursive) graphs. Here we initiate the study of how to generalize this folklore intuition to mathematical structures writ large. We achieve this horizontal generality by adopting a categorical perspective which allows us to show that: (1) structured decompositions (a recent, abstract generalization of many graph decompositions) define Grothendieck topologies on categories of data (adhesive categories) and that (2) any computational problem which can be represented as a sheaf with respect to these topologies can be decided in linear time on classes of inputs which admit decompositions of bounded width and whose decomposition shapes have bounded feedback vertex number. This immediately leads to algorithms on objects of any C-set category; these include -- to name but a few examples -- structures such as symmetric graphs, directed graphs, directed multigraphs, hypergraphs, directed hypergraphs, databases, simplicial complexes, circular port graphs and half-edge graphs. Thus we initiate the bridging of tools from sheaf theory, structural graph theory and parameterized complexity theory; we believe this to be a very fruitful approach for a general, algebraic theory of dynamic programming algorithms. Finally, we pair our theoretical results with concrete implementations of our main algorithmic contribution in the AlgebraicJulia ecosystem.
    Comment: Revised and simplified notation and improved exposition. The companion code can be found here: https://github.com/AlgebraicJulia/StructuredDecompositions.jl
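
    A toy Python illustration of the "deciding sheaves" viewpoint (purely illustrative and far below the paper's categorical generality; the real implementation is the Julia package linked above): a decomposition assigns candidate local solutions to bags, and a global solution exists exactly when one local section per bag can be chosen so that all choices agree on the overlaps -- the gluing condition of a sheaf. The brute-force search below is what the paper's result replaces with linear-time dynamic programming on bounded-width decompositions.

        from itertools import product

        def glue(bags_sections):
            # bags_sections[i] is a list of candidate local sections for bag i,
            # each a dict assigning values to that bag's variables.  Return a
            # glued global section if some compatible choice exists, else None.
            for choice in product(*bags_sections):
                merged, ok = {}, True
                for section in choice:
                    for var, val in section.items():
                        if merged.setdefault(var, val) != val:
                            ok = False      # sections disagree on an overlap
                            break
                    if not ok:
                        break
                if ok:
                    return merged
            return None

        # Proper 2-colorings of the edges a-b and b-c glue along the shared b:
        edge_ab = [{"a": 0, "b": 1}, {"a": 1, "b": 0}]
        edge_bc = [{"b": 0, "c": 1}, {"b": 1, "c": 0}]
        print(glue([edge_ab, edge_bc]))  # -> {'a': 0, 'b': 1, 'c': 0}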