
    Graph Sparsification, Spectral Sketches, and Faster Resistance Computation, via Short Cycle Decompositions

    We develop a framework for graph sparsification and sketching, based on a new tool, short cycle decomposition -- a decomposition of an unweighted graph into an edge-disjoint collection of short cycles, plus a few extra edges. A simple observation shows that every graph $G$ on $n$ vertices with $m$ edges can be decomposed in $O(mn)$ time into cycles of length at most $2\log n$, and at most $2n$ extra edges. We give an $m^{1+o(1)}$ time algorithm for constructing a short cycle decomposition, with cycles of length $n^{o(1)}$, and $n^{1+o(1)}$ extra edges. These decompositions enable us to make progress on several open questions:

    * We give an algorithm to find $(1\pm\epsilon)$-approximations to effective resistances of all edges in time $m^{1+o(1)}\epsilon^{-1.5}$, improving over the previous best of $\tilde{O}(\min\{m\epsilon^{-2}, n^2\epsilon^{-1}\})$. This gives an algorithm to approximate the determinant of a Laplacian up to $(1\pm\epsilon)$ in $m^{1+o(1)} + n^{15/8+o(1)}\epsilon^{-7/4}$ time.

    * We show existence and efficient algorithms for constructing graphical spectral sketches -- a distribution over sparse graphs $H$ such that for a fixed vector $x$, we have w.h.p. $x'L_Hx = (1\pm\epsilon)\,x'L_Gx$ and $x'L_H^+x = (1\pm\epsilon)\,x'L_G^+x$. This implies the existence of resistance sparsifiers with about $n\epsilon^{-1}$ edges that preserve the effective resistances between every pair of vertices up to $(1\pm\epsilon)$.

    * By combining short cycle decompositions with known tools in graph sparsification, we show the existence of nearly-linear sized degree-preserving spectral sparsifiers, as well as significantly sparser approximations of directed graphs. The latter is critical to recent breakthroughs on faster algorithms for solving linear systems in directed Laplacians.

    Improved algorithms for constructing short cycle decompositions will lead to improvements for each of the above results.
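    The simple $O(mn)$ decomposition mentioned above is easy to sketch: repeatedly peel off vertices of degree at most 2 (their edges become the extra edges, at most $2n$ in total), then run BFS in the remaining min-degree-3 graph, where a non-tree edge must appear within roughly $\log n$ levels and closes a cycle of length at most about $2\log n$. Below is a minimal Python sketch, assuming a simple unweighted graph given as an edge list; the function and variable names are ours, not the paper's, and the paper's $m^{1+o(1)}$-time construction is far more involved.

```python
from collections import deque, defaultdict

def short_cycle_decomposition(edges):
    """Naive short cycle decomposition sketch (the simple O(mn) observation).

    Returns (cycles, extra): edge-disjoint cycles of length <= ~2 log n,
    plus at most ~2n leftover edges. Assumes a simple undirected graph.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cycles, extra = [], []

    def root_path(v, parent):
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path  # v, ..., BFS root

    def peel():
        # Strip degree-<=2 vertices; each contributes at most 2 extra edges.
        low = deque(v for v in list(adj) if len(adj[v]) <= 2)
        while low:
            v = low.popleft()
            if v not in adj or len(adj[v]) > 2:
                continue
            for u in list(adj[v]):
                adj[v].discard(u)
                adj[u].discard(v)
                extra.append((v, u))
                if len(adj[u]) <= 2:
                    low.append(u)
            del adj[v]

    def bfs_cycle(root):
        # Min degree >= 3 here, so BFS levels grow geometrically and a
        # non-tree edge (closing a short cycle) appears within ~log n levels.
        parent = {root: None}
        q = deque([root])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u == parent[v]:
                    continue  # tree edge back to the parent
                if u in parent:  # non-tree edge: splice the two root paths
                    pa, pb = root_path(v, parent), root_path(u, parent)
                    pos = {x: i for i, x in enumerate(pb)}
                    ia = next(i for i, x in enumerate(pa) if x in pos)
                    return pa[:ia + 1] + pb[:pos[pa[ia]]][::-1]
                parent[u] = v
                q.append(u)
        return None

    peel()
    while adj:
        cycle = bfs_cycle(next(iter(adj)))
        if cycle is None:  # cannot happen while min degree >= 3
            break
        cycles.append(cycle)
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            adj[a].discard(b)
            adj[b].discard(a)
        for x in cycle:
            if x in adj and not adj[x]:
                del adj[x]
        peel()
    return cycles, extra
```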

    Lower Bounds and Approximation Algorithms for Search Space Sizes in Contraction Hierarchies

    Contraction hierarchies (CH) is a prominent preprocessing-based technique that accelerates the computation of shortest paths in road networks by reducing the search space size of a bidirectional Dijkstra run. To explain the practical success of CH, several theoretical upper bounds for the maximum search space size were derived in previous work. For example, it was shown that in minor-closed graph families, search space sizes in $O(\sqrt{n})$ can be achieved (with $n$ denoting the number of nodes in the graph), and search space sizes in $O(h \log D)$ in graphs of highway dimension $h$ and diameter $D$. In this paper, we primarily focus on lower bounds. We prove that the average search space size in a so-called weak CH is in $\Omega(b_\alpha)$ for $\alpha \ge 2/3$, where $b_\alpha$ is the size of a smallest $\alpha$-balanced node separator. This discovery allows us to describe the first approximation algorithm for the average search space size. Our new lower bound also shows that the $O(\sqrt{n})$ bound for minor-closed graph families is tight. Furthermore, we investigate more deeply the relationship of CH to the highway dimension and skeleton dimension of the graph, and prove new lower bound and incomparability results. Finally, we discuss how lower bounds for strong CH can be obtained by solving a HittingSet problem defined on a set of carefully chosen subgraphs of the input network.
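    To make the measured quantity concrete: in a common formalization, each node receives a rank from the contraction order, and the (forward) search space of a node consists of the nodes reachable from it using only edges leading to higher-ranked endpoints, i.e., what an upward Dijkstra run may settle. A minimal Python sketch under that definition, with a hypothetical `up_adj` map from each node to its higher-ranked neighbors:

```python
def search_space(up_adj, source):
    """Nodes reachable from `source` in the upward graph of a CH."""
    seen, stack = {source}, [source]
    while stack:
        v = stack.pop()
        for u in up_adj.get(v, ()):
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def average_search_space_size(up_adj, nodes):
    """The average |SS(v)| that the Omega(b_alpha) lower bound concerns."""
    return sum(len(search_space(up_adj, v)) for v in nodes) / len(nodes)
```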

    An Algorithmic Theory of Integer Programming

    We study the general integer programming problem where the number of variables $n$ is a variable part of the input. We consider two natural parameters of the constraint matrix $A$: its numeric measure $a$ and its sparsity measure $d$. We show that integer programming can be solved in time $g(a,d)\,\textrm{poly}(n,L)$, where $g$ is some computable function of the parameters $a$ and $d$, and $L$ is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by $a$ and $d$, and is solvable in polynomial time for every fixed $a$ and $d$. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives, we derive a strongly-polynomial algorithm, that is, with running time $g(a,d)\,\textrm{poly}(n)$, independent of the rest of the input data. We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix $A$, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via the use of other techniques such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems and a new proximity-scaling algorithm, the notion of a reduced objective function, and others. As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for $n$-fold, tree-fold, and $2$-stage stochastic integer programs. We also discuss some of the many applications of these classes.
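    To illustrate the iterative-augmentation idea, here is a deliberately naive Python sketch for a linear objective: starting from a feasible point, it scans (an explicit enumeration of) the Graver basis for an improving step $\lambda g$ and applies the best one found, repeating until no step improves. The names `graver_basis`, `objective`, and `is_feasible` are placeholders of ours; the actual algorithm never enumerates the Graver basis explicitly, so this shows only the outer loop.

```python
def augment_to_optimum(x, graver_basis, objective, is_feasible, max_step=2**40):
    """Iterative augmentation sketch: apply improving Graver steps until none
    exists. Assumes a bounded feasible region and a linear objective, so along
    an improving direction, longer feasible steps are at least as good."""
    while True:
        best, best_val = None, objective(x)
        for g in graver_basis:
            lam = 1
            while lam <= max_step:  # geometric search over step lengths
                cand = [xi + lam * gi for xi, gi in zip(x, g)]
                if not is_feasible(cand):
                    break
                val = objective(cand)
                if val < best_val:
                    best, best_val = cand, val
                lam *= 2
            # Negated directions are covered: Graver bases are symmetric.
        if best is None:
            return x  # no Graver step improves, so x is optimal
        x = best
```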

    IST Austria Thesis

    This dissertation focuses on algorithmic aspects of program verification, and presents modeling and complexity advances on several problems related to the static analysis of programs, the stateless model checking of concurrent programs, and the competitive analysis of real-time scheduling algorithms. Our contributions can be broadly grouped into five categories.

    Our first contribution is a set of new algorithms and data structures for the quantitative and data-flow analysis of programs, based on the graph-theoretic notion of treewidth. It has been observed that the control-flow graphs of typical programs have special structure, and are characterized as graphs of small treewidth. We utilize this structural property to provide faster algorithms for the quantitative and data-flow analysis of recursive and concurrent programs. In most cases we give an algebraic treatment of the considered problem, where several interesting analyses, such as reachability, shortest path, and certain kinds of data-flow analysis problems, follow as special cases. We exploit the constant-treewidth property to obtain algorithmic improvements for on-demand versions of the problems, and provide data structures with various tradeoffs between the resources spent in the preprocessing and querying phases. We also improve on the algorithmic complexity of quantitative problems outside the algebraic path framework, namely of the minimum mean-payoff, minimum ratio, and minimum initial credit for energy problems.

    Our second contribution is a set of algorithms for Dyck reachability with applications to data-dependence analysis and alias analysis. In particular, we develop an optimal algorithm for Dyck reachability on bidirected graphs, which are ubiquitous in context-insensitive, field-sensitive points-to analysis (see the sketch after this abstract). Additionally, we develop an efficient algorithm for context-sensitive data-dependence analysis via Dyck reachability, where the task is to obtain analysis summaries of library code in the presence of callbacks. Our algorithm preprocesses libraries in almost linear time, after which the contribution of the library to the complexity of the client analysis is (i) linear in the number of call sites and (ii) only logarithmic in the size of the whole library, as opposed to linear in the size of the whole library. Finally, we prove that Dyck reachability is Boolean Matrix Multiplication-hard in general, and the hardness also holds for graphs of constant treewidth. This hardness result strongly indicates that there exist no combinatorial algorithms for Dyck reachability with truly subcubic complexity.

    Our third contribution is the formalization and algorithmic treatment of the Quantitative Interprocedural Analysis framework. In this framework, the transitions of a recursive program are annotated as good, bad or neutral, and receive a weight which measures the magnitude of their respective effect. The Quantitative Interprocedural Analysis problem asks to determine whether there exists an infinite run of the program where the long-run ratio of the bad weights over the good weights is above a given threshold. We illustrate how several quantitative problems related to static analysis of recursive programs can be instantiated in this framework, and present some case studies in this direction.

    Our fourth contribution is a new dynamic partial-order reduction for the stateless model checking of concurrent programs. Traditional approaches rely on the standard Mazurkiewicz equivalence between traces, by means of partitioning the trace space into equivalence classes, and attempting to explore a few representatives from each class. We present a new dynamic partial-order reduction method called Data-centric Partial Order Reduction (DC-DPOR). Our algorithm is based on a new equivalence between traces, called observation equivalence. DC-DPOR explores a coarser partitioning of the trace space than any exploration method based on the standard Mazurkiewicz equivalence. Depending on the program, the new partitioning can be even exponentially coarser. Additionally, DC-DPOR spends only polynomial time on each explored class.

    Our fifth contribution is the use of automata and game-theoretic verification techniques in the competitive analysis and synthesis of real-time scheduling algorithms for firm-deadline tasks. On the analysis side, we leverage automata on infinite words to compute the competitive ratio of real-time schedulers subject to various environmental constraints. On the synthesis side, we introduce a new instance of two-player mean-payoff partial-information games, and show how the synthesis of an optimal real-time scheduler can be reduced to computing winning strategies in this new type of game.
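    As a concrete taste of the second contribution, Dyck reachability on bidirected graphs can be computed with union-find: whenever two opening-parenthesis edges with the same label leave the same (merged) node class, their target classes are mutually Dyck-reachable and can be merged. A naive fixpoint sketch in Python; the names are ours, and the optimal algorithm organizes the same rule with a worklist instead of repeated passes:

```python
class DSU:
    """Union-find with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def bidirected_dyck_classes(n, open_edges):
    """open_edges: (u, v, label) triples for opening-parenthesis edges; the
    matching closing edge from v back to u is implicit by bidirectedness.
    Afterwards, u and v are Dyck-reachable iff dsu.find(u) == dsu.find(v)."""
    dsu = DSU(n)
    changed = True
    while changed:  # iterate the merge rule to a fixpoint
        changed = False
        target = {}  # (class of source, label) -> class of one known target
        for u, v, label in open_edges:
            key = (dsu.find(u), label)
            if key in target:
                changed |= dsu.union(target[key], dsu.find(v))
            else:
                target[key] = dsu.find(v)
    return dsu
```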

    The Complexity of Bisimulation and Simulation on Finite Systems

    In this paper, the computational complexity of the (bi)simulation problem over restricted graph classes is studied. For trees given as pointer structures or terms, the (bi)simulation problem is complete for logarithmic space or NC$^1$, respectively. This solves an open problem from Balcázar, Gabarró, and Sántha. Furthermore, if only one of the input graphs is required to be a tree, the bisimulation (simulation) problem is contained in AC$^1$ (LogCFL). In contrast, it is also shown that the simulation problem is P-complete already for graphs of bounded path-width.
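    For reference, on finite systems the coarsest bisimulation can be computed by partition refinement: repeatedly split blocks of states according to which blocks their successors reach, until the partition is stable. A naive Python sketch in the Kanellakis-Smolka style (not the faster Paige-Tarjan algorithm, and far from the logspace/NC$^1$ bounds discussed above):

```python
def coarsest_bisimulation(states, trans):
    """trans maps (state, action) -> iterable of successor states.
    Returns the coarsest partition whose blocks contain pairwise
    bisimilar states."""
    actions = {a for (_, a) in trans}
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        block_id = {s: i for i, blk in enumerate(partition) for s in blk}
        refined = []
        for blk in partition:
            groups = {}
            for s in blk:
                # Signature: per action, the set of successor blocks.
                sig = frozenset(
                    (a, frozenset(block_id[t] for t in trans.get((s, a), ())))
                    for a in actions
                )
                groups.setdefault(sig, set()).add(s)
            if len(groups) > 1:
                changed = True
            refined.extend(groups.values())
        partition = refined
    return partition
```

    Two states are bisimilar exactly when they end up in the same block; deciding simulation requires a different, preorder-based refinement.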