
    Faster linearizability checking via P-compositionality

    Linearizability is a well-established consistency and correctness criterion for concurrent data types. An important feature of linearizability is Herlihy and Wing's locality principle, which says that a concurrent system is linearizable if and only if all of its constituent parts (so-called objects) are linearizable. This paper presents P-compositionality, which generalizes the idea behind the locality principle to operations on the same concurrent data type. We implement P-compositionality in a novel linearizability checker. Our experiments with over nine implementations of concurrent sets, including Intel's TBB library, show that our linearizability checker is one order of magnitude faster and/or more space efficient than the state-of-the-art algorithm.
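    The locality principle suggests a direct checking strategy: split a concurrent history by object and check each subhistory on its own. The sketch below illustrates this in Python; it is our own toy illustration, not the paper's checker, using a naive permutation search in the style of Wing and Gong against an assumed read/write-register specification, with all names invented here.

```python
from itertools import permutations
from collections import defaultdict

# Each event is a tuple: (obj, op, arg, result, invoke_time, response_time).

def legal_register(seq):
    """Sequential spec of a read/write register: a read returns the last write."""
    value = None
    for (_, op, arg, res, _, _) in seq:
        if op == "write":
            value = arg
        elif op == "read" and res != value:
            return False
    return True

def respects_real_time(order):
    """An event that responded before another was invoked must come first."""
    return all(not (order[j][5] < order[i][4])
               for i in range(len(order)) for j in range(i + 1, len(order)))

def check_linearizable(history, spec=legal_register):
    """Brute force: search for a legal order consistent with real time."""
    return any(respects_real_time(order) and spec(order)
               for order in permutations(history))

def check_by_locality(history):
    """Locality: a history is linearizable iff each per-object subhistory is."""
    per_object = defaultdict(list)
    for event in history:
        per_object[event[0]].append(event)
    return all(check_linearizable(h) for h in per_object.values())

# Example: a write overlapping a read that already observes it.
history = [("r1", "write", 1, None, 0, 2), ("r1", "read", None, 1, 1, 3)]
assert check_by_locality(history)
```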

    Toward Linearizability Testing for Multi-Word Persistent Synchronization Primitives

    Persistent memory makes it possible to recover in-memory data structures following a failure instead of rebuilding them from state saved in slow secondary storage. Implementing such recoverable data structures correctly is challenging as their underlying algorithms must deal with both parallelism and failures, which makes them especially susceptible to programming errors. Traditional proofs of correctness should therefore be combined with other methods, such as model checking or software testing, to minimize the likelihood of uncaught defects. This research focuses specifically on the algorithmic principles of software testing, particularly linearizability analysis, for multi-word persistent synchronization primitives such as conditional swap operations. We describe an efficient decision procedure for linearizability in this context, and discuss its practical applications in detecting previously unknown bugs in implementations of multi-word persistent primitives.
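    To make the testing problem concrete, the sketch below shows a brute-force linearizability test for histories of k-word compare-and-swap (k-CAS) operations: every candidate order that respects real-time precedence is replayed against a sequential memory, and each operation's replayed outcome must match the outcome it actually reported. This is our own illustration of the problem statement, not the paper's efficient decision procedure, and all names are invented here.

```python
from itertools import permutations

# An operation is a tuple:
# (addresses, expected, new, reported_success, invoke_time, response_time).

def replay(order, initial_memory):
    """Replay a candidate sequential order; outcomes must match reports."""
    memory = dict(initial_memory)
    for (addrs, expected, new, reported, _, _) in order:
        success = all(memory[a] == e for a, e in zip(addrs, expected))
        if success != reported:
            return False
        if success:
            for a, n in zip(addrs, new):
                memory[a] = n
    return True

def respects_real_time(order):
    return all(not (order[j][5] < order[i][4])
               for i in range(len(order)) for j in range(i + 1, len(order)))

def linearizable_kcas(history, initial_memory):
    return any(respects_real_time(order) and replay(order, initial_memory)
               for order in permutations(history))

# Two overlapping 2-CAS operations on x and y; only one can have succeeded.
history = [(("x", "y"), (0, 0), (1, 1), True, 0, 2),
           (("x", "y"), (0, 0), (2, 2), False, 1, 3)]
assert linearizable_kcas(history, {"x": 0, "y": 0})
```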

    Local Linearizability for Concurrent Container-Type Data Structures

    The semantics of concurrent data structures is usually given by a sequential specification and a consistency condition. Linearizability is the most popular consistency condition due to its simplicity and general applicability. Nevertheless, for applications that do not require all guarantees offered by linearizability, recent research has focused on improving performance and scalability of concurrent data structures by relaxing their semantics. In this paper, we present local linearizability, a relaxed consistency condition that is applicable to container-type concurrent data structures like pools, queues, and stacks. While linearizability requires that the effect of each operation is observed by all threads at the same time, local linearizability only requires that for each thread T, the effects of its local insertion operations and the effects of those removal operations that remove values inserted by T are observed by all threads at the same time. We investigate theoretical and practical properties of local linearizability and its relationship to many existing consistency conditions. We present a generic implementation method for locally linearizable data structures that uses existing linearizable data structures as building blocks. Our implementations show performance and scalability improvements over the original building blocks and outperform the fastest existing container-type implementations.
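    The definition above translates almost directly into a checking procedure: for each thread T, project the history onto T's thread-induced subhistory (T's insertions plus all removals of values inserted by T) and check that each projection is linearizable on its own. The sketch below illustrates this for a pool (an unordered container); it is our own illustration with a naive permutation-based checker, not the paper's implementation, and all names are invented here.

```python
from itertools import permutations

# Each event is a tuple: (thread, op, value, invoke_time, response_time),
# with op being "insert" or "remove".

def thread_induced(history, t):
    """T's insertions plus every removal of a value that T inserted."""
    inserted_by_t = {e[2] for e in history if e[0] == t and e[1] == "insert"}
    return [e for e in history
            if (e[0] == t and e[1] == "insert")
            or (e[1] == "remove" and e[2] in inserted_by_t)]

def legal_pool(seq):
    """Sequential pool spec: only previously inserted values are removed."""
    pool = set()
    for (_, op, value, _, _) in seq:
        if op == "insert":
            pool.add(value)
        elif value in pool:
            pool.remove(value)
        else:
            return False
    return True

def linearizable(history):
    return any(all(not (order[j][4] < order[i][3])
                   for i in range(len(order)) for j in range(i + 1, len(order)))
               and legal_pool(order)
               for order in permutations(history))

def locally_linearizable(history):
    return all(linearizable(thread_induced(history, t))
               for t in {e[0] for e in history})
```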

    Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure

    We analyze correctness of implementations of the snapshot data structure in terms of linearizability. We show that such implementations can be verified in polynomial time. Additionally, we identify a set of representative executions for testing and show that the correctness of each of these executions can be validated in linear time. These results present a significant speedup considering that verifying linearizability of implementations of concurrent data structures, in general, is EXPSPACE-complete in the number of program-states, and testing linearizability is NP-complete in the length of the tested execution. The crux of our approach is identifying a class of executions, which we call simple, such that a snapshot implementation is linearizable if and only if all of its simple executions are linearizable. We then divide all possible non-linearizable simple executions into three categories and construct a small automaton that recognizes each category. We describe two implementations (one for verification and one for testing) of an automata-based approach that we develop based on this result and an evaluation that demonstrates significant improvements over existing tools. For verification, we show that restricting a state-of-the-art tool to analyzing only simple executions saves resources and allows the analysis of more complex cases. Specifically, restricting attention to simple executions finds bugs in 27 instances, whereas, without this restriction, we were only able to find 14 of the 30 bugs in the instances we examined. We also show that our technique accelerates testing performance significantly. Specifically, our implementation solves the complete set of 900 problems we generated, whereas the state-of-the-art linearizability testing tool solves only 554 problems.
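    As an illustration of why individual violation patterns admit fast recognition, the sketch below checks one plausible pattern in linear time: a later scan must never observe an older write to a component than an earlier scan did (writes are abstracted here as increasing version numbers). This is our own simplified example, not the paper's three categories or its automata, and all names are invented here.

```python
def scans_monotone(scan_views):
    """scan_views: the views returned by completed scans, in response order;
    each view maps a component index to the version number it read.
    Returns False if some later scan observed an older write."""
    newest = {}
    for view in scan_views:
        for component, version in view.items():
            if version < newest.get(component, version):
                return False  # a later scan went back in time on this component
            newest[component] = version
    return True

# The second scan sees an older version of component 0 than the first did.
assert not scans_monotone([{0: 2, 1: 1}, {0: 1, 1: 1}])
```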

    Verification of Non-Blocking Data Structures with Manual Memory Management

    Verification of concurrent data structures is one of the most challenging tasks in software verification. The topic has received considerable attention over the course of the last decade. Nevertheless, human-driven techniques remain cumbersome and notoriously difficult, while automated approaches suffer from limited applicability. This is particularly true in the absence of garbage collection. The intricacy of non-blocking manual memory management (manual memory reclamation) paired with the complexity of concurrent data structures has so far made automated verification prohibitive. We tackle the challenge of automated verification of non-blocking data structures which manually manage their memory. To that end, we contribute several insights that greatly simplify the verification task. The guiding theme of those simplifications is semantic reduction. We show that the verification of a data structure's complicated target semantics can be conducted in a simpler and smaller semantics which is more amenable to automatic techniques. Some of our reductions rely on good-conduct properties of the data structure. The properties we use are derived from practice, for instance, by exploiting common programming patterns. Furthermore, we also show how to automatically check for those properties under the smaller semantics. The main contributions are: (i) A compositional verification approach that verifies the memory management and the data structure separately. (ii) A notion of weak ownership that applies when memory is reclaimed and reused, bridging the gap between garbage collection and manual memory management. (iii) A notion of pointer races and harmful ABAs, the absence of which ensures that the memory management does not influence the data structure, i.e., it behaves as if executed under garbage collection. Notably, we show that a check for pointer races and harmful ABAs only needs to consider executions where at most a single address is reused. (iv) A notion of strong pointer races, the absence of which entails the absence of ordinary pointer races and harmful ABAs. We devise a highly efficient type check for strong pointer races. After a successful type check, the actual verification can be performed under garbage collection using an off-the-shelf verifier. (v) Experimental evaluations of the aforementioned contributions. We are the first to fully automatically verify practical non-blocking data structures with manual memory management.
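    To see why harmful ABAs matter, consider a Treiber-style stack whose pop compares and swaps the head pointer. The sketch below is our own illustration of the hazard, not code from the thesis; the simulated CAS compares addresses only, so a node that is freed and reused between the read of the head and the CAS makes the CAS succeed on a stale snapshot.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

class TreiberStack:
    def __init__(self):
        self.head = None

    def _cas_head(self, expected, new):
        # Simulated compare-and-swap on the head pointer. Like hardware CAS,
        # it compares only the pointer itself, which is exactly why ABA is
        # dangerous: a reclaimed-and-reused node compares equal to the old
        # one even though the stack underneath has changed.
        if self.head is expected:
            self.head = new
            return True
        return False

    def push(self, value):
        while True:
            top = self.head
            if self._cas_head(top, Node(value, top)):
                return

    def pop(self):
        while True:
            top = self.head
            if top is None:
                return None
            nxt = top.next  # read the successor before the CAS...
            # ...if another thread pops `top`, frees it, and the allocator
            # reuses the address for a new head, the CAS below still succeeds
            # and installs the stale `nxt`: the harmful ABA whose absence the
            # type check described above is designed to establish.
            if self._cas_head(top, nxt):
                return top.value
```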

    Compass: Strong and Compositional Library Specifications in Relaxed Memory Separation Logic


    Model checking concurrent and real-time systems: the PAT approach

    Ph.D. thesis.