7 research outputs found

    Weighted pushdown systems and their application to interprocedural dataflow analysis

    Recently, pushdown systems (PDSs) have been extended to weighted PDSs, in which each transition is labeled with a value, and the goal is to determine the meet-over-all-paths value (for paths that meet a certain criterion). This paper shows how weighted PDSs yield new algorithms for certain classes of interprocedural dataflow-analysis problems.
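    For concreteness, the following is a minimal sketch of the weight domain a weighted PDS assumes: a bounded idempotent semiring with "combine" (the meet across paths) and "extend" (composition along a path), instantiated here for shortest-path weights. The class name and helper function are illustrative, not from the paper or any WPDS library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MinPlus:
    """Shortest-path weights: combine = min, extend = +."""
    value: float

    def combine(self, other: "MinPlus") -> "MinPlus":  # meet of two path values
        return MinPlus(min(self.value, other.value))

    def extend(self, other: "MinPlus") -> "MinPlus":   # follow one rule, then another
        return MinPlus(self.value + other.value)

ZERO = MinPlus(float("inf"))  # annihilator: "no path"
ONE = MinPlus(0.0)            # identity: "empty path"

def meet_over_paths(paths):
    """Meet-over-all-paths value: extend along each path, combine across paths."""
    total = ZERO
    for path in paths:
        w = ONE
        for rule_weight in path:
            w = w.extend(rule_weight)
        total = total.combine(w)
    return total

# Two paths through a program fragment, each a sequence of rule weights:
print(meet_over_paths([[MinPlus(2), MinPlus(3)], [MinPlus(4)]]).value)  # 4.0
```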

    Faster Algorithms for Weighted Recursive State Machines

    Pushdown systems (PDSs) and recursive state machines (RSMs), which are linearly equivalent, are standard models for interprocedural analysis. Yet RSMs are more convenient, as they (a) explicitly model function calls and returns, and (b) specify many natural parameters for algorithmic analysis, e.g., the number of entries and exits. We consider a general framework where RSM transitions are labeled from a semiring and path properties are algebraic with semiring operations, which can model, e.g., interprocedural reachability and dataflow-analysis problems. Our main contributions are new algorithms for several fundamental problems. Compared to a direct translation of RSMs to PDSs and the best-known existing bounds for PDSs, our analysis algorithm improves the complexity for finite-height semirings (which subsume reachability and standard dataflow properties). We further consider the problem of extracting distance values from the representation structures computed by our algorithm, and give efficient algorithms that distinguish the complexity of a one-time preprocessing from the complexity of each individual query. Another advantage of our algorithm is that our improvements carry over to the concurrent setting, where we improve the best-known complexity for the context-bounded analysis of concurrent RSMs. Finally, we provide a prototype implementation that gives a significant speed-up on several benchmarks from the SLAM/SDV project.
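    The algebraic setting can be pictured in miniature. Below is a hedged sketch, not the paper's algorithm, of a Kleene-style fixpoint over RSM modules for the boolean reachability semiring (combine = or, extend = and), where a call edge contributes the callee's current entry-to-exit summary; for a finite-height semiring such an iteration terminates. The module encoding and all names are illustrative.

```python
def summaries(modules):
    """modules: name -> {'entry': n, 'exit': n, 'edges': [(u, v, w)]},
    where w is a boolean weight or ('call', callee_name)."""
    summ = {m: False for m in modules}        # entry-to-exit summary per module
    changed = True
    while changed:                            # iterate summaries to a fixpoint
        changed = False
        for name, mod in modules.items():
            # intra-module reachability from the entry, with calls expanded
            reach = {mod['entry']}
            frontier = [mod['entry']]
            while frontier:
                u = frontier.pop()
                for (a, b, w) in mod['edges']:
                    if a != u or b in reach:
                        continue
                    ok = summ[w[1]] if isinstance(w, tuple) else w
                    if ok:
                        reach.add(b)
                        frontier.append(b)
            new = mod['exit'] in reach
            if new != summ[name]:
                summ[name] = new
                changed = True
    return summ

mods = {
    'main': {'entry': 0, 'exit': 2, 'edges': [(0, 1, ('call', 'f')), (1, 2, True)]},
    'f':    {'entry': 0, 'exit': 1, 'edges': [(0, 1, True)]},
}
print(summaries(mods))  # {'main': True, 'f': True}
```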

    IST Austria Thesis

    This dissertation focuses on algorithmic aspects of program verification, and presents modeling and complexity advances on several problems related to the static analysis of programs, the stateless model checking of concurrent programs, and the competitive analysis of real-time scheduling algorithms. Our contributions can be broadly grouped into five categories.
    Our first contribution is a set of new algorithms and data structures for the quantitative and data-flow analysis of programs, based on the graph-theoretic notion of treewidth. It has been observed that the control-flow graphs of typical programs have special structure, and are characterized as graphs of small treewidth. We utilize this structural property to provide faster algorithms for the quantitative and data-flow analysis of recursive and concurrent programs. In most cases we make an algebraic treatment of the considered problem, where several interesting analyses, such as reachability, shortest path, and certain kinds of data-flow analysis problems, follow as special cases. We exploit the constant-treewidth property to obtain algorithmic improvements for on-demand versions of the problems, and provide data structures with various tradeoffs between the resources spent in the preprocessing and querying phases. We also improve the algorithmic complexity of quantitative problems outside the algebraic path framework, namely the minimum mean-payoff, minimum ratio, and minimum initial credit for energy problems.
    Our second contribution is a set of algorithms for Dyck reachability with applications to data-dependence analysis and alias analysis. In particular, we develop an optimal algorithm for Dyck reachability on bidirected graphs, which are ubiquitous in context-insensitive, field-sensitive points-to analysis. Additionally, we develop an efficient algorithm for context-sensitive data-dependence analysis via Dyck reachability, where the task is to obtain analysis summaries of library code in the presence of callbacks. Our algorithm preprocesses libraries in almost linear time, after which the contribution of the library to the complexity of the client analysis is (i) linear in the number of call sites and (ii) only logarithmic in the size of the whole library, as opposed to linear in the size of the whole library. Finally, we prove that Dyck reachability is Boolean Matrix Multiplication-hard in general, and the hardness also holds for graphs of constant treewidth. This hardness result strongly indicates that there exist no combinatorial algorithms for Dyck reachability with truly subcubic complexity.
    Our third contribution is the formalization and algorithmic treatment of the Quantitative Interprocedural Analysis framework. In this framework, the transitions of a recursive program are annotated as good, bad, or neutral, and receive a weight which measures the magnitude of their respective effect. The Quantitative Interprocedural Analysis problem asks whether there exists an infinite run of the program in which the long-run ratio of the bad weights over the good weights is above a given threshold. We illustrate how several quantitative problems related to the static analysis of recursive programs can be instantiated in this framework, and present some case studies in this direction.
    Our fourth contribution is a new dynamic partial-order reduction for the stateless model checking of concurrent programs. Traditional approaches rely on the standard Mazurkiewicz equivalence between traces, partitioning the trace space into equivalence classes and attempting to explore a few representatives from each class. We present a new dynamic partial-order reduction method called Data-centric Partial Order Reduction (DC-DPOR). Our algorithm is based on a new equivalence between traces, called observation equivalence. DC-DPOR explores a coarser partitioning of the trace space than any exploration method based on the standard Mazurkiewicz equivalence; depending on the program, the new partitioning can even be exponentially coarser. Additionally, DC-DPOR spends only polynomial time in each explored class.
    Our fifth contribution is the use of automata and game-theoretic verification techniques in the competitive analysis and synthesis of real-time scheduling algorithms for firm-deadline tasks. On the analysis side, we leverage automata on infinite words to compute the competitive ratio of real-time schedulers subject to various environmental constraints. On the synthesis side, we introduce a new instance of two-player mean-payoff partial-information games, and show how the synthesis of an optimal real-time scheduler can be reduced to computing winning strategies in this new kind of game.
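    The bidirected Dyck-reachability result rests on the fact that, in bidirected graphs, Dyck reachability is an equivalence relation that can be saturated with a disjoint-set structure: if two nodes have same-labeled open-parenthesis edges into the same equivalence class, the balanced string <i ... i> connects them, so they must be merged. Below is a simplified sketch of that merging idea as a naive fixpoint, not the thesis's optimal algorithm, whose bookkeeping achieves the tight complexity bound; the graph encoding is illustrative.

```python
class DSU:
    """Disjoint-set union with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def bidirected_dyck(n, open_edges):
    """open_edges: (u, label, v) means u --<label--> v; bidirectedness supplies
    the matching close-parenthesis edge v --label>--> u implicitly."""
    dsu = DSU(n)
    changed = True
    while changed:  # naive rescan to a fixpoint; a worklist yields near-linear time
        changed = False
        seen = {}
        for (u, lab, v) in open_edges:
            key = (dsu.find(v), lab)       # same open label into the same class...
            if key in seen and dsu.find(seen[key]) != dsu.find(u):
                dsu.union(seen[key], u)    # ...forces the two sources together
                changed = True
            seen.setdefault(key, u)
    return dsu

# 0 --<a--> 2 and 1 --<a--> 2: the path 0 <a 2 a> 1 is balanced, so 0 ~ 1.
d = bidirected_dyck(3, [(0, 'a', 2), (1, 'a', 2)])
print(d.find(0) == d.find(1))  # True
```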

    Ernst Denert Award for Software Engineering 2019

    This open access book provides an overview of the dissertations of the five nominees for the Ernst Denert Award for Software Engineering in 2019. The prize, kindly sponsored by the Gerlind & Ernst Denert Stiftung, is awarded for excellent work within the discipline of Software Engineering, which includes methods, tools, and procedures for better and more efficient development of high-quality software. An essential requirement for the nominated work is its applicability and usability in industrial practice. The book contains five papers describing the works by Sebastian Baltes (U Trier) on Software Developers’ Work Habits and Expertise, Timo Greifenberg’s thesis on Artefaktbasierte Analyse modellgetriebener Softwareentwicklungsprojekte, Marco Konersmann’s (U Duisburg-Essen) work on Explicitly Integrated Architecture, Marija Selakovic’s (TU Darmstadt) research about Actionable Program Analyses for Improving Software Performance, and Johannes Späth’s (Paderborn U) thesis on Synchronized Pushdown Systems for Pointer and Data-Flow Analysis, which actually won the award. The chapters describe key findings of the respective works, show their relevance and applicability to practice and industrial software engineering projects, and provide additional information and findings that have only been discovered afterwards, e.g. when applying the results in industry. This way, the book is not only interesting to other researchers, but also to industrial software professionals who would like to learn about the application of state-of-the-art methods in their daily work.

    Verification, slicing, and visualization of programs with contracts

    Doctoral thesis in Informatics (specialization in Computer Science). As a specification carries relevant information concerning the behaviour of a program, why not exploit this fact to slice the program in a semantic sense, aiming at optimizing it or easing its verification? It was this idea that Comuzzi introduced in 1996 with the notion of postcondition-based slicing: slice a program using the information contained in the postcondition (the condition Q that is guaranteed to hold at the exit of the program). Since then, several advances have been made and different extensions have been proposed, bridging the two areas of Program Verification and Program Slicing: specifically, precondition-based slicing and specification-based slicing. The work reported in this Ph.D. dissertation explores further relations between these two areas, aiming at discovering mutual benefits. A deep study of specification-based slicing has shown that the original algorithm is not efficient and does not produce minimal slices. In this dissertation, traditional specification-based slicing algorithms are revisited and improved (their formalization is proposed under the name of assertion-based slicing), in a new framework that is appropriate for reasoning about imperative programs annotated with contracts and loop invariants. In the same theoretical framework, the semantic slicing algorithms are extended to work at the program level through a new concept called contract-based slicing. Contract-based slicing, constituting another contribution of this work, allows for the study of a program at an interprocedural level, enabling optimizations in the context of code reuse. Motivated by the lack of tools to demonstrate that the proposed algorithms work in practice, a tool (GamaSlicer) was also developed; it implements all the existing semantic slicing algorithms, in addition to the ones introduced in this dissertation. This third contribution is based on generic graph visualization and animation algorithms that were adapted to work with verification and slice graphs, two specific cases of labelled control-flow graphs.
    Funded by Fundação para a Ciência e a Tecnologia (FCT) through doctoral grant SFRH/BD/33231/2007, project RESCUE (FCT contract PTDC/EIA/65862/2006), and project CROSS (FCT contract PTDC/EIACCO/108995/2008).
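    Assertion-based slicing proper discharges verification conditions with a prover to decide which statements are removable; as a much weaker, self-contained stand-in, the sketch below keeps only the assignments that can syntactically influence the variables of the postcondition Q in a straight-line program. This is ordinary backward slicing, not the dissertation's algorithm, and the statement encoding is illustrative.

```python
import re

def backward_slice(stmts, postcondition_vars):
    """stmts: list of 'x = expr' strings; returns the statements kept in the slice."""
    needed = set(postcondition_vars)   # variables Q depends on, walking backwards
    kept = []
    for stmt in reversed(stmts):
        lhs, rhs = (s.strip() for s in stmt.split('=', 1))
        if lhs in needed:
            kept.append(stmt)
            needed.discard(lhs)        # this definition is now accounted for...
            needed |= set(re.findall(r'[A-Za-z_]\w*', rhs))  # ...its inputs are needed
    return list(reversed(kept))

prog = ["a = x + 1", "b = 2 * y", "c = a + 3"]
# Postcondition Q mentions only c, so the assignment to b is sliced away:
print(backward_slice(prog, {"c"}))  # ['a = x + 1', 'c = a + 3']
```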

    Solving Multiple Dataflow Queries Using WPDSs

    A dataflow query asks for the set of reachable (abstract) states, given a starting set of states. In this paper, we show how to optimize multiple queries on the same program (each with a different starting set of states) for better overall running time. After a preprocessing phase, we obtain an asymptotic improvement in answering dataflow queries. We use weighted pushdown systems as the abstract model of a program. Our techniques are interprocedural and general, yet provide a substantial speedup in practice. We applied our algorithm to three very different applications, one based on finding affine relations using linear algebra and the others on model checking Boolean programs, and obtained 1.5-fold to 7-fold speedups.
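    The flavor of the optimization can be shown in miniature: pay a one-time preprocessing cost so that repeated queries stop re-deriving shared information. The sketch below caches boolean procedure summaries, standing in for the weighted-automaton structures the paper actually precomputes; the call graph and all names are illustrative, and the graph is assumed acyclic so the memoized recursion terminates.

```python
from functools import lru_cache

# Illustrative call graph: proc -> (callees, does some local path reach the exit?)
CALL_GRAPH = {
    'main': (['f', 'g'], False),
    'f':    (['g'], False),
    'g':    ([], True),
}

@lru_cache(maxsize=None)  # the one-time "preprocessing": each summary computed once
def summary(proc):
    callees, local_exit = CALL_GRAPH[proc]
    return local_exit or any(summary(c) for c in callees)

def query(start_procs):
    """One dataflow query: can any procedure in the starting set reach an exit?"""
    return any(summary(p) for p in start_procs)

print(query(['f']))     # True; computes the summaries of f and g
print(query(['main']))  # True; reuses the cached summaries instead of re-deriving
```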