Algorithms for Algebraic Path Properties in Concurrent Systems of Constant Treewidth Components
We study algorithmic questions with respect to algebraic path properties in concurrent systems, where the transitions of the system are labeled from a complete, closed semiring. Algebraic path properties can model dataflow analysis problems, the shortest path problem, and many other natural problems that arise in program analysis. We consider concurrent systems in which each component is a graph of constant treewidth, a property satisfied by the control-flow graphs of most programs. We allow for multiple possible queries, which arise naturally in demand-driven dataflow analysis. The study of multiple queries allows us to consider the tradeoff between the resource usage of one-time preprocessing and that of each individual query. The traditional approach constructs the product graph of all components and applies the best-known graph algorithm on the product. In this approach, answering even a single query requires computing the transitive closure (i.e., the results of all possible queries), which leaves no room for a tradeoff between preprocessing and query time.
Our main contributions are algorithms that significantly improve the worst-case running time of the traditional approach, and provide various tradeoffs depending on the number of queries. For example, in a concurrent system of two components, the traditional approach requires hexic time in the worst case both for answering one query and for computing the transitive closure, whereas we show that with one-time preprocessing in almost cubic time, each subsequent query can be answered in at most linear time, and even the transitive closure can be computed in almost quartic time. Furthermore, we establish conditional optimality results showing that the worst-case running time of our algorithms cannot be improved without achieving major breakthroughs in graph algorithms (i.e., improving the worst-case bound for the shortest path problem in general graphs). Preliminary experimental results show that our algorithms perform favorably on several benchmarks.
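To make the setting concrete, the following sketch (illustrative, not the paper's algorithm) computes the algebraic transitive closure of a single small graph with a Floyd-Warshall-style recurrence over a user-supplied semiring; instantiating it with the tropical semiring (min, +) recovers all-pairs shortest paths. This is the naive cubic-time baseline that the paper's preprocessing/query tradeoffs improve upon; the function name and graph encoding are assumptions for the example.

```python
def semiring_closure(n, edges, plus, times, zero, one):
    """Naive cubic-time algebraic transitive closure over a semiring.
    edges: dict mapping (u, v) -> semiring weight. Valid for idempotent
    semirings without 'improving' cycles (e.g., the tropical semiring
    on graphs with no negative cycle)."""
    d = [[zero] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = one  # empty path
    for (u, v), w in edges.items():
        d[u][v] = plus(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # combine any path i->k with any path k->j
                d[i][j] = plus(d[i][j], times(d[i][k], d[k][j]))
    return d

# Tropical-semiring instance: all-pairs shortest paths.
INF = float("inf")
dist = semiring_closure(
    4,
    {(0, 1): 2.0, (1, 2): 3.0, (0, 2): 10.0, (2, 3): 1.0},
    plus=min, times=lambda a, b: a + b, zero=INF, one=0.0,
)
# dist[0][2] is 5.0 (via 0 -> 1 -> 2), dist[0][3] is 6.0
```

Swapping in other semirings (e.g., Boolean for plain reachability) changes the property computed without changing the algorithm, which is the generality the abstract refers to.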
An Efficient Algorithm for Computing Network Reliability in Small Treewidth
We consider the classic problem of Network Reliability. A network is given
together with a source vertex, one or more target vertices, and probabilities
assigned to each of the edges. Each edge appears in the network with its
associated probability and the problem is to determine the probability of
having at least one source-to-target path. This problem is known to be NP-hard.
We present a linear-time fixed-parameter algorithm based on a parameter
called treewidth, which is a measure of tree-likeness of graphs. Network
Reliability was already known to be solvable in polynomial time for bounded
treewidth, but there were no concrete algorithms and the known methods used
complicated structures and were not easy to implement. We provide a
significantly simpler and more intuitive algorithm that is much easier to
implement.
We also report on an implementation of our algorithm and establish the
applicability of our approach by providing experimental results on the graphs
of subway and transit systems of several major cities, such as London and
Tokyo. To the best of our knowledge, this is the first exact algorithm for
Network Reliability that can scale to handle real-world instances of the
problem.
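For intuition about the quantity being computed, here is a brute-force exact evaluation of network reliability that enumerates all edge subsets; it is exponential in the number of edges, which is precisely the blow-up a treewidth-based algorithm avoids. The graph encoding and function name are assumptions for the example.

```python
from itertools import product

def reliability(edges, probs, source, targets):
    """Exact source-to-target reliability by enumerating all 2^m edge
    subsets (undirected edges). Only feasible for tiny graphs; shown
    for illustration, not the paper's fixed-parameter algorithm."""
    m = len(edges)
    total = 0.0
    for present in product([False, True], repeat=m):
        # probability of this particular subset of surviving edges
        p = 1.0
        for keep, pr in zip(present, probs):
            p *= pr if keep else 1.0 - pr
        # DFS over the surviving edges
        adj = {}
        for keep, (u, v) in zip(present, edges):
            if keep:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, stack = {source}, [source]
        while stack:
            u = stack.pop()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if seen & targets:
            total += p
    return total
```

For example, two parallel source-target edges that each survive with probability 0.5 give reliability 1 - 0.5 * 0.5 = 0.75, while the same two edges in series give 0.25.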
Improved Algorithms for Parity and Streett objectives
The computation of the winning set for parity objectives and for Streett
objectives, in graphs as well as in game graphs, is a central problem in
computer-aided verification, with applications to the verification of closed
systems with strong fairness conditions, the verification of open systems,
checking interface compatibility, well-formedness of specifications, and the
synthesis of reactive systems. We show how to compute the winning set of
vertices for (1) parity-3 (a.k.a. one-pair Streett) objectives in game graphs
and (2) k-pair Streett objectives in graphs. For both problems, our algorithms
are faster for dense graphs and give the first improvement in asymptotic
running time in 15 years.
LNCS
Discrete-time Markov Chains (MCs) and Markov Decision Processes (MDPs) are two standard formalisms in system analysis. Their main associated quantitative objectives are hitting probabilities, discounted sum, and mean payoff. Although there are many techniques for computing these objectives in general MCs/MDPs, they have not been thoroughly studied in terms of parameterized algorithms, particularly when treewidth is used as the parameter. This is in sharp contrast to qualitative objectives for MCs, MDPs and graph games, for which treewidth-based algorithms yield significant complexity improvements. In this work, we show that treewidth can also be used to obtain faster algorithms for the quantitative problems. For an MC with n states and m transitions, we show that each of the classical quantitative objectives can be computed in O((n+m)⋅t²) time, given a tree decomposition of the MC with width t. Our results also imply a bound of O(κ⋅(n+m)⋅t²) for each objective on MDPs, where κ is the number of strategy-iteration refinements required for the given input and objective. Finally, we perform an experimental evaluation of our new algorithms on low-treewidth MCs and MDPs obtained from the DaCapo benchmark suite. Our experiments show that on low-treewidth MCs and MDPs, our algorithms outperform existing well-established methods by one or more orders of magnitude.
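As an illustration of one of the objectives involved, the sketch below computes hitting probabilities of a small MC by naive fixed-point iteration (value iteration). This is not the paper's algorithm, which instead exploits a low-width tree decomposition of the underlying linear system; the dict-based MC encoding is an assumption for the example.

```python
def hitting_probabilities(P, target, iters=10_000):
    """Hitting probabilities of `target` in a finite MC, by fixed-point
    iteration from below (converging to the least fixpoint).
    P[s] is a dict mapping successor state -> transition probability."""
    x = {s: (1.0 if s in target else 0.0) for s in P}
    for _ in range(iters):
        x = {s: 1.0 if s in target
             else sum(p * x[t] for t, p in P[s].items())
             for s in P}
    return x

# Small example: from s, reach t directly (prob 0.5) or via u (prob 0.5),
# where u in turn reaches t with probability 0.5 and otherwise gets stuck.
mc = {
    "s": {"t": 0.5, "u": 0.5},
    "u": {"t": 0.5, "d": 0.5},
    "t": {},            # absorbing target
    "d": {"d": 1.0},    # absorbing dead end
}
probs = hitting_probabilities(mc, {"t"})
# probs["s"] converges to 0.5 + 0.5 * 0.5 = 0.75
```

Iterating from below is what makes states that cannot reach the target (here "d") correctly receive probability 0.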
Conditionally Optimal Algorithms for Generalized Büchi Games
Games on graphs provide the appropriate framework to study several central
problems in computer science, such as the verification and synthesis of
reactive systems. One of the most basic objectives for games on graphs is the
liveness (or Büchi) objective, which, given a target set of vertices, requires
that some vertex in the target set be visited infinitely often. We study
generalized Büchi objectives (i.e., conjunctions of liveness objectives), and
implications between two generalized Büchi objectives (known as GR(1)
objectives), which arise in numerous applications in computer-aided
verification. We present improved algorithms and conditional super-linear lower
bounds based on widely believed assumptions about the complexity of (A1)
combinatorial Boolean matrix multiplication and (A2) CNF-SAT. We consider graph
games with n vertices, m edges, and generalized Büchi objectives with k
conjunctions. First, we present an algorithm with an improved worst-case
running time; our algorithm is optimal for dense graphs under (A1). Second, we
show that the basic algorithm for the problem is optimal for sparse graphs when
the target sets have constant size, under (A2). Finally, we consider GR(1)
objectives, with k₁ conjunctions in the antecedent and k₂ conjunctions in the
consequent, and present an algorithm that improves the previously known running
time.
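For plain graphs (a single player), the liveness winning set admits a simple naive fixpoint: repeatedly discard target vertices that have no path of length at least one back to the remaining targets, then take backward reachability to what survives. The sketch below shows this O(n·m) baseline for a single Büchi objective; it is the kind of basic algorithm whose bounds the paper improves for generalized objectives and games. The adjacency-dict encoding is an assumption for the example.

```python
def buchi_winning(adj, target):
    """Vertices of a plain graph from which some vertex in `target`
    can be visited infinitely often. adj: vertex -> list of successors.
    Naive fixpoint, shown for illustration only."""
    def can_reach(S):
        # backward reachability: all vertices with a path into S
        rev = {}
        for u, vs in adj.items():
            for v in vs:
                rev.setdefault(v, []).append(u)
        seen, stack = set(S), list(S)
        while stack:
            v = stack.pop()
            for u in rev.get(v, []):
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        return seen

    B = set(target)
    while True:
        R = can_reach(B)
        # keep only targets with a path of length >= 1 back to B
        B2 = {t for t in B if any(v in R for v in adj.get(t, []))}
        if B2 == B:
            break
        B = B2
    return can_reach(B) if B else set()

# Vertices 0 and 1 form a cycle through target 1; vertex 2 can enter it;
# vertex 3 only loops on itself and never sees the target.
win = buchi_winning({0: [1], 1: [0], 2: [1], 3: [3]}, {1})
# win is {0, 1, 2}
```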
Efficient parameterized algorithms for data packing
There is a huge gap between the speeds of modern caches and main memories, and therefore cache misses account for a considerable loss of efficiency in programs. The predominant technique to address this issue has been Data Packing: data elements that are frequently accessed within time proximity are packed into the same cache block, thereby minimizing accesses to the main memory. We consider the algorithmic problem of Data Packing on a two-level memory system. Given a reference sequence R of accesses to data elements, the task is to partition the elements into cache blocks such that the number of cache misses on R is minimized. The problem is notoriously difficult: it is NP-hard even when the cache has size 1, and is hard to approximate for any cache size larger than 4. Therefore, all existing techniques for Data Packing are based on heuristics and lack theoretical guarantees. In this work, we present the first positive theoretical results for Data Packing, along with new and stronger negative results. We consider the problem under the lens of the underlying access hypergraphs, which are hypergraphs of affinities between the data elements, where the order of an access hypergraph corresponds to the size of the affinity group. We study the problem parameterized by the treewidth of access hypergraphs, which is a standard notion in graph theory to measure the closeness of a graph to a tree. Our main results are as follows: We show there is a number q* depending on the cache parameters such that (a) if the access hypergraph of order q* has constant treewidth, then there is a linear-time algorithm for Data Packing; (b) the Data Packing problem remains NP-hard even if the access hypergraph of order q*-1 has constant treewidth. Thus, we establish a fine-grained dichotomy depending on a single parameter, namely, the highest order among access hypergraphs that have constant treewidth; and establish the optimal value q* of this parameter.
Finally, we present an experimental evaluation of a prototype implementation of our algorithm. Our results demonstrate that, in practice, access hypergraphs of many commonly used algorithms have small treewidth. We compare our approach with several state-of-the-art heuristic-based algorithms and show that our algorithm leads to significantly fewer cache misses.
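The objective being minimized can be made concrete with a small simulator: given a packing (a map from data elements to cache blocks), count the misses a block-level cache incurs on the reference sequence. LRU eviction is assumed here for simplicity, and the function name and encoding are illustrative.

```python
from collections import OrderedDict

def cache_misses(refs, block_of, cache_size):
    """Count cache misses on access sequence `refs` under the packing
    `block_of` (element -> block id), with an LRU cache that holds
    `cache_size` blocks. Illustrates the Data Packing objective; the
    packing itself is what the algorithm searches for."""
    cache = OrderedDict()  # block id -> None; insertion order = recency
    misses = 0
    for x in refs:
        b = block_of[x]
        if b in cache:
            cache.move_to_end(b)          # refresh recency on a hit
        else:
            misses += 1
            cache[b] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return misses
```

For the access sequence a, b, a, b, c, d, c, d with a cache of one block, packing {a, b} and {c, d} together yields 2 misses, while the interleaved packing {a, c} and {b, d} yields 8, showing how strongly the packing choice matters.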
Optimal Dyck Reachability for Data-Dependence and Alias Analysis
A fundamental algorithmic problem at the heart of static analysis is Dyck reachability. The input is a graph where the edges are labeled with different types of opening and closing parentheses, and the reachability information is computed via paths whose parentheses are properly matched. We present new results for Dyck reachability problems with applications to alias analysis and data-dependence analysis. Our main contributions, which include improved upper bounds as well as lower bounds that establish optimality guarantees, are as follows: First, we consider Dyck reachability on bidirected graphs, which is the standard way of performing field-sensitive points-to analysis. Given a bidirected graph with n nodes and m edges, we present: (i) an algorithm with worst-case running time O(m + n · α(n)), where α(n) is the inverse Ackermann function, improving the previously known O(n²) time bound; (ii) a matching lower bound that shows that our algorithm is optimal with respect to worst-case complexity; and (iii) an optimal average-case upper bound of O(m) time, improving the previously known O(m · log n) bound. Second, we consider the problem of context-sensitive data-dependence analysis, where the task is to obtain analysis summaries of library code in the presence of callbacks. Our algorithm preprocesses libraries in almost linear time, after which the contribution of the library in the complexity of the client analysis is only linear, and only with respect to the number of call sites. Third, we prove that combinatorial algorithms for Dyck reachability on general graphs with truly sub-cubic bounds cannot be obtained without obtaining sub-cubic combinatorial algorithms for Boolean Matrix Multiplication, which is a long-standing open problem. Thus we establish that the existing combinatorial algorithms for Dyck reachability are (conditionally) optimal for general graphs. We also show that the same hardness holds for graphs of constant treewidth.
Finally, we provide a prototype implementation of our algorithms for both alias analysis and data-dependence analysis. Our experimental evaluation demonstrates that the new algorithms significantly outperform all existing methods on the two problems, over real-world benchmarks.
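A simplified sketch of the union-find idea behind fast bidirected Dyck reachability: whenever some node has two outgoing edges carrying the same opening-parenthesis label, the two endpoints Dyck-reach each other (in a bidirected graph the matching closing edges run in reverse), so they can be merged; iterating this to a fixpoint groups mutually reachable nodes. The version below makes repeated passes for clarity and can be quadratic; the actual algorithm reaches O(m + n · α(n)) with more careful bookkeeping. The edge encoding is an assumption for the example.

```python
class DSU:
    """Union-find with path halving."""
    def __init__(self):
        self.p = {}
    def find(self, x):
        self.p.setdefault(x, x)
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]  # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.p[ra] = rb
            return True
        return False

def bidirected_dyck_components(nodes, open_edges):
    """Group nodes of a bidirected Dyck graph into mutually-reachable
    components. open_edges: list of (u, label, v), an edge u -> v
    labeled with opening parenthesis `label`; the matching closing
    edge v -> u is implicit. Illustrative fixpoint, not the optimal
    almost-linear-time algorithm."""
    dsu = DSU()
    for v in nodes:
        dsu.find(v)
    changed = True
    while changed:
        changed = False
        out = {}  # (source component, label) -> one known endpoint
        for u, lab, v in open_edges:
            key = (dsu.find(u), lab)
            rv = dsu.find(v)
            if key not in out:
                out[key] = rv
            elif dsu.union(out[key], rv):
                changed = True  # two same-label endpoints merged
    comps = {}
    for v in nodes:
        comps.setdefault(dsu.find(v), set()).add(v)
    return list(comps.values())
```

For instance, if node a has two edges labeled "(1" to b and to c, then b and c merge; if b and c then have same-label "(2" edges to d and e, those merge in the next pass, showing how merges cascade.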