35 research outputs found

    Parallel bug-finding in concurrent programs via reduced interleaving instances

    Concurrency poses a major challenge for program verification, but it also offers an opportunity to scale when subproblems can be analysed in parallel. We exploit this opportunity and use a parametrizable code-to-code translation to generate a set of simpler program instances, each capturing a reduced set of the original program’s interleavings. These instances can then be checked independently and in parallel. Our approach does not depend on the tool chosen for the final analysis, is compatible with weak memory models, and amplifies the effectiveness of existing tools, making them find bugs faster and with fewer resources. We use Lazy-CSeq as an off-the-shelf final verifier to demonstrate that, even with a small number of cores, our approach finds bugs in the hardest known concurrency benchmarks in a matter of minutes, whereas other dynamic and static tools fail to do so in hours.
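The key idea, partitioning the interleaving space into simpler instances that are checked independently, can be illustrated with a toy sketch. The paper realizes this via a code-to-code translation; the prefix-based partition below is only a hypothetical simplification of the same principle:

```python
from itertools import combinations

def interleavings(n_a, n_b):
    """Enumerate every interleaving of two threads with n_a and n_b steps,
    encoded as a tuple of thread ids ('A' or 'B')."""
    total = n_a + n_b
    for a_slots in combinations(range(total), n_a):
        sched = ['B'] * total
        for i in a_slots:
            sched[i] = 'A'
        yield tuple(sched)

def partition_by_prefix(scheds, k=1):
    """Split the schedule space into independent instances by fixing the
    first k scheduling choices; each instance can then be handed to a
    separate verifier process and checked in parallel."""
    parts = {}
    for s in scheds:
        parts.setdefault(s[:k], []).append(s)
    return parts

parts = partition_by_prefix(list(interleavings(2, 2)), k=1)
for prefix, group in sorted(parts.items()):
    print(prefix, len(group))  # two instances of 3 interleavings each
```

The union of the parts covers every interleaving exactly once, so a bug found in any instance is a bug of the original program.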

    A Light-Weight Approach for Verifying Multi-Threaded Programs with CPAchecker

    Verifying multi-threaded programs is becoming increasingly important because of the strong trend to increase the number of processing units per CPU socket. We introduce a new configurable program analysis for verifying multi-threaded programs with a bounded number of threads. We present a simple yet efficient implementation as a component of the existing program-verification framework CPACHECKER. While CPACHECKER is already competitive on a large benchmark set of sequential verification tasks, our extension enhances the overall applicability of the framework. Our handling of multiple threads is orthogonal to the abstract domain of the data-flow analysis and can thus be combined with several existing analyses in CPACHECKER, such as value analysis, interval analysis, and BDD analysis. The new analysis is modular and can be used, for example, to verify reachability properties as well as to detect deadlocks in the program. This paper includes an evaluation of the benefit of several optimization steps (e.g., changing the iteration order of the reachability algorithm or applying partial-order reduction) as well as a comparison with other state-of-the-art tools for verifying multi-threaded programs.
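One of the optimization knobs mentioned above, the iteration order of the reachability algorithm, can be sketched with a generic waitlist-based reachability loop. This is a simplified stand-in for illustration, not CPAchecker's actual CPA algorithm:

```python
from collections import deque

def reachability(initial, successors, order='bfs'):
    """Waitlist-based reachability; `order` picks the waitlist discipline.
    The iteration order changes how states are explored (and how quickly a
    property violation may be hit) but not the final set of reachable states."""
    waitlist, reached = deque([initial]), {initial}
    while waitlist:
        s = waitlist.popleft() if order == 'bfs' else waitlist.pop()
        for t in successors(s):
            if t not in reached:
                reached.add(t)
                waitlist.append(t)
    return reached

# Hypothetical successor function over integers, standing in for a
# program's transition relation.
succ = lambda n: [n + 1, n * 2] if n < 8 else []
print(sorted(reachability(1, succ, 'bfs')))
print(reachability(1, succ, 'bfs') == reachability(1, succ, 'dfs'))  # True
```

Both orders compute the same fixed point; the evaluation in the paper measures how much the order affects time and memory on real verification tasks.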

    Abstract Interpretation with Unfoldings

    We present and evaluate a technique for computing path-sensitive interference conditions during abstract interpretation of concurrent programs. In lieu of fixed point computation, we use prime event structures to compactly represent causal dependence and interference between sequences of transformers. Our main contribution is an unfolding algorithm that uses a new notion of independence to avoid redundant transformer application, thread-local fixed points to reduce the size of the unfolding, and a novel cutoff criterion based on subsumption to guarantee termination of the analysis. Our experiments show that the abstract unfolding produces an order of magnitude fewer false alarms than a mature abstract interpreter, while being several orders of magnitude faster than solver-based tools that have the same precision. Comment: Extended version of the paper (with the same title and authors) to appear at CAV 201
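The classical, commutation-based notion of independence that the paper's new notion relaxes can be sketched as follows. This is a toy check on explicit states; the paper works with abstract transformers and a weaker condition:

```python
def independent(t1, t2, states):
    """Classical independence: two transformers commute on every state,
    so the unfolding need not explore both orders of execution."""
    return all(t1(t2(s)) == t2(t1(s)) for s in states)

# Toy transformers over explicit program states (dicts of variables).
inc_x = lambda s: {**s, 'x': s['x'] + 1}
inc_y = lambda s: {**s, 'y': s['y'] + 1}
set_x = lambda s: {**s, 'x': 0}

states = [{'x': 0, 'y': 0}, {'x': 1, 'y': 2}]
print(independent(inc_x, inc_y, states))  # True: they touch disjoint variables
print(independent(inc_x, set_x, states))  # False: both write x
```

When two transformers are independent, one interleaving represents both, which is what lets the unfolding stay compact.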

    Learning-based inductive invariant synthesis

    The problem of synthesizing adequate inductive invariants to prove a program correct lies at the heart of automated program verification. Here we investigate learning approaches to synthesize inductive invariants of sequential programs towards automatically verifying them. To this end, we identify that prior learning approaches were unduly influenced by traditional machine learning models that learned concepts from positive and negative counterexamples. We argue that these models are not robust for invariant synthesis and, consequently, introduce ICE, a robust learning paradigm for synthesizing invariants that learns using positive, negative and implication counterexamples, and show that it admits honest teachers and strongly convergent mechanisms for invariant synthesis. We develop the first learning algorithms in this model with implication counterexamples for two domains, one for learning arbitrary Boolean combinations of numerical invariants over scalar variables and one for quantified invariants of linear data structures including arrays and dynamic lists. We implement the ICE learners and an appropriate teacher, and show that the resulting invariant synthesis is robust, practical, convergent, and efficient. In order to deductively verify shared-memory concurrent programs, we present a sequentialization result and show that synthesizing rely-guarantee annotations for them can be reduced to invariant synthesis for sequential programs. Further, for verifying asynchronous event-driven systems, we develop a new invariant synthesis technique that constructs almost-synchronous invariants over concrete system configurations. These invariants, for most systems, are finitely representable and can thereby be constructed, including for the USB driver that ships with Microsoft Windows Phone.
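A minimal sketch of the ICE loop: a teacher that returns positive, negative, and implication counterexamples, and a learner that enumerates a tiny hypothesis class. The toy program (`x = 0; while x < 10: x += 1; assert x == 10`) and the candidate class `x <= c` are illustrative assumptions, not the paper's actual domains:

```python
def teacher(c):
    """Check whether inv(x) := (x <= c) is an adequate inductive invariant
    for the toy loop: x = 0; while x < 10: x += 1; assert x == 10.
    Returns None if it is, else an ICE counterexample."""
    if not (0 <= c):                     # initiation: x = 0 must be in inv
        return ('pos', 0)
    for x in range(c + 1):               # consecution: inv closed under loop steps
        if x < 10 and not (x + 1 <= c):
            return ('imp', (x, x + 1))   # if x is in inv, x + 1 must be too
    for x in range(c + 1):               # safety: inv must imply the assertion
        if x >= 10 and x != 10:
            return ('neg', x)            # x must be excluded from inv
    return None

def ice_learner():
    """Simplistic enumerative learner over the class inv(x) := (x <= c),
    kept consistent with all positive, negative and implication samples."""
    pos, neg, imp = set(), set(), set()
    for c in range(32):
        consistent = (all(p <= c for p in pos)
                      and all(n > c for n in neg)
                      and all(b <= c for a, b in imp if a <= c))
        if not consistent:
            continue
        cex = teacher(c)
        if cex is None:
            return c                     # inductive and safe: done
        kind, data = cex
        {'pos': pos, 'neg': neg, 'imp': imp}[kind].add(data)
    return None

print(ice_learner())  # the only adequate bound in this class: 10
```

Note how implication samples drive the search: a purely positive/negative learner could not express "if x is invariant then x + 1 must be", which is exactly what consecution demands.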

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    RVSDG: An Intermediate Representation for Optimizing Compilers

    Intermediate Representations (IRs) are central to optimizing compilers, as the way the program is represented may enhance or limit analyses and transformations. Suitable IRs focus on exposing the most relevant information and establish invariants that different compiler passes can rely on. While control-flow-centric IRs appear to be a natural fit for imperative programming languages, analyses required by compilers have increasingly shifted to understanding data dependencies and working at multiple abstraction layers at the same time. This is partially evidenced in recent developments such as MLIR, proposed by Google. However, rigorous use of data-flow-centric IRs in general-purpose compilers has not been evaluated for feasibility and usability, as previous works provide no practical implementations. We present the Regionalized Value State Dependence Graph (RVSDG) IR for optimizing compilers. The RVSDG is a data-flow-centric IR where nodes represent computations, edges represent computational dependencies, and regions capture the hierarchical structure of programs. It represents programs in demand-dependence form, implicitly supports structured control flow, and models entire programs within a single IR. We provide a complete specification of the RVSDG, construction and destruction methods, as well as exemplify its utility by presenting Dead Node and Common Node Elimination optimizations. We implemented a prototype compiler and evaluate it in terms of performance, code size, compilation time, and representational overhead. Our results indicate that the RVSDG can serve as a competitive IR in optimizing compilers while reducing complexity.
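Dead Node Elimination, one of the optimizations mentioned above, becomes a simple backward reachability pass in a demand-dependence representation: a node is live only if a region result transitively demands its output. The sketch below uses a hypothetical, heavily simplified node structure, not the actual RVSDG with its regions and state edges:

```python
class Node:
    """A computation node in a toy data-flow region: it consumes the
    outputs of its `inputs` and produces one value."""
    def __init__(self, name, inputs=()):
        self.name, self.inputs = name, list(inputs)

def dead_node_elimination(all_nodes, results):
    """Keep only the nodes transitively demanded by the region's results,
    the demand-driven view that a data-flow IR makes natural."""
    live, stack = set(), list(results)
    while stack:
        n = stack.pop()
        if n in live:
            continue
        live.add(n)
        stack.extend(n.inputs)
    return [n for n in all_nodes if n in live]

a = Node('a'); b = Node('b', [a]); dead = Node('dead', [a])
r = Node('result', [b])
live = dead_node_elimination([a, b, dead, r], [r])
print(sorted(n.name for n in live))  # ['a', 'b', 'result']
```

In a control-flow-centric IR the same optimization needs a separate liveness analysis; here the dependency edges are the analysis.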

    Extending the specification of concurrent C programs and verifying them via a source-to-source transformation

    Critical computer systems are increasingly widely used, and their complexity keeps growing, which makes their correct behaviour essential. Software model checking can formally prove the absence of errors in a program, but it remains limited by two factors: combinatorial explosion and the ability to specify a program’s correct behaviour. Both problems are amplified for concurrent programs because of the many possible interleavings between threads. Assertions and LTL formulas are the two most widely used specification formalisms in software model checking, but both are restricted in a concurrent context: assertions cannot express temporal relations between threads or events of the program, while the LTL variants supported by current model checkers cannot refer to local variables or to locations in the program’s source code, even though locations are often a convenient way to check synchronization between threads. In this thesis, we define a specification formalism that overcomes these limitations. It subsumes both LTL and assertions, allowing temporal relations over propositions that involve a program’s local and global variables as well as source-code locations. We also present a tool that checks specifications written in this formalism against concurrent C programs. Our formalism is based on LTL and tackles two of the main limitations of the LTL variants used by software model checkers: manipulating code locations and using local variables in atomic propositions. The main difficulty is to give an atomic proposition a well-defined value throughout the program when it depends on a local variable, since that variable is only meaningful in a limited part of the program. We solve this problem with the concept of validity areas. A validity area is an interval of source-code locations within which the value of an atomic proposition is computed by its evaluation function; outside the validity area, a default value is used. This restricts the use of the evaluation function to contexts where all of its local parameters are defined.
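A minimal sketch of the validity-area mechanism described above. The names and the numeric line-interval encoding are illustrative assumptions, not the tool's actual implementation:

```python
class AtomicProp:
    """An atomic proposition over a local variable, guarded by a validity
    area: an interval of code locations where the local is in scope."""
    def __init__(self, area, evaluate, default=False):
        self.area, self.evaluate, self.default = area, evaluate, default

    def value(self, location, state):
        lo, hi = self.area
        if lo <= location <= hi:          # inside the validity area:
            return self.evaluate(state)   # use the evaluation function
        return self.default               # outside: fixed default value

# Hypothetical example: a local `n` that only exists between lines 10 and 20.
p = AtomicProp(area=(10, 20), evaluate=lambda s: s['n'] > 0)
print(p.value(15, {'n': 3}))   # True  (inside the area, n > 0)
print(p.value(25, {'n': 3}))   # False (outside the area: default)
```

The default value makes the proposition total over the whole program, so a temporal formula mentioning it stays well-defined at every step of every interleaving.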

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers, presented together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems; runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.