
    Proving opacity of a pessimistic STM

    Transactional Memory (TM) is a high-level programming abstraction for concurrency control that provides programmers with the illusion of atomically executing blocks of code, called transactions. TMs come in two categories, optimistic and pessimistic; in the latter, transactions never abort. While this simplifies the programming model, high-performing pessimistic TMs can be complex. In this paper, we present the first formal verification of a pessimistic software TM algorithm, namely, an algorithm proposed by Matveev and Shavit. The correctness criterion used is opacity, formalising the transactional atomicity guarantees. We prove that this pessimistic TM is a refinement of an intermediate opaque I/O automaton, known as TMS2. To this end, we develop a rely-guarantee approach for reducing the complexity of the proof. Proofs are mechanised in the interactive prover Isabelle.
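
    As a rough, hedged illustration of the pessimistic programming model described above (transactions that always commit and never abort), the Java sketch below serialises every transaction under a single global lock. This is not the Matveev-Shavit algorithm verified in the paper, which permits far more parallelism; all names here are our own.

        import java.util.concurrent.locks.ReentrantLock;
        import java.util.function.Supplier;

        // Toy pessimistic TM: every transaction runs under one global lock, so it
        // always commits and never aborts. Illustration only; the verified
        // Matveev-Shavit algorithm is far more sophisticated.
        final class ToyPessimisticTM {
            private static final ReentrantLock GLOBAL = new ReentrantLock();

            static <T> T atomic(Supplier<T> transaction) {
                GLOBAL.lock();
                try {
                    return transaction.get();  // body appears atomic to other transactions
                } finally {
                    GLOBAL.unlock();           // commit is implicit: there is no abort path
                }
            }
        }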

    Safe abstractions of data encodings in formal security protocol models

    When using formal methods, security protocols are usually modeled at a high level of abstraction. In particular, data encoding and decoding transformations are often abstracted away. However, if no assumptions at all are made on the behavior of such transformations, they could trivially lead to security faults, for example leaking secrets or breaking freshness by collapsing nonces into constants. In order to address this issue, this paper formally states sufficient conditions, checkable on sequential code, such that if an abstract protocol model is secure under a Dolev-Yao adversary, then a refined model, which takes into account a wide class of possible implementations of the encoding/decoding operations, is also secure under the same adversary model. The paper also indicates possible exploitations of this result in the context of methods based on formal model extraction from implementation code and of methods based on automated code generation from formally verified models.
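
    Purely as a hedged illustration of the style of sequential property such conditions address (the paper states its sufficient conditions formally and more precisely), the Java sketch below shows an encoding/decoding pair that is injective and exactly inverted by its decoder, so it cannot collapse distinct nonces into constants, together with a round-trip check that can be run on sequential code. All names are our own.

        import java.util.Arrays;
        import java.util.Base64;

        // Illustration only: an encoding that is injective and exactly inverted by
        // its decoder, so it cannot collapse two distinct nonces into one value.
        final class CodecCheck {
            static String encode(byte[] payload) {
                return Base64.getEncoder().encodeToString(payload);
            }

            static byte[] decode(String wire) {
                return Base64.getDecoder().decode(wire);
            }

            // Round-trip property checkable on sequential code:
            // decode(encode(x)) equals x for every tested input.
            static boolean roundTrips(byte[] payload) {
                return Arrays.equals(decode(encode(payload)), payload);
            }
        }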

    Quiescent consistency: Defining and verifying relaxed linearizability

    Concurrent data structures like stacks, sets or queues need to be highly optimized to provide large degrees of parallelism with reduced contention. Linearizability, a key consistency condition for concurrent objects, sometimes limits the potential for optimization. Hence algorithm designers have started to build concurrent data structures that are not linearizable but only satisfy relaxed consistency requirements. In this paper, we study quiescent consistency as proposed by Shavit and Herlihy, which is one such relaxed condition. More precisely, we give the first formal definition of quiescent consistency, investigate its relationship with linearizability, and provide a proof technique for it based on (coupled) simulations. We demonstrate our proof technique by verifying quiescent consistency of a (non-linearizable) FIFO queue built using a diffraction tree. © 2014 Springer International Publishing Switzerland
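
    As an informal illustration only (not the paper's formal definition), the Java sketch below splits a complete history, listed in real-time order, into blocks separated by quiescent points, i.e. instants with no pending operations. Quiescent consistency permits operations within one block to take effect in any order, while operations in different blocks must respect the block order. All names are hypothetical.

        import java.util.ArrayList;
        import java.util.List;

        // An event is an invocation or a response; the history is assumed complete
        // and listed in real-time order. Each returned block contains the operations
        // invoked between two consecutive quiescent points.
        record Event(boolean isInvoke, String op) {}

        final class QuiescentBlocks {
            static List<List<String>> blocks(List<Event> history) {
                List<List<String>> result = new ArrayList<>();
                List<String> current = new ArrayList<>();
                int pending = 0;
                for (Event e : history) {
                    if (e.isInvoke()) {
                        pending++;
                        current.add(e.op());
                    } else if (--pending == 0) {       // a quiescent point: close the block
                        result.add(current);
                        current = new ArrayList<>();
                    }
                }
                if (!current.isEmpty()) result.add(current);  // defensive: incomplete tail
                return result;
            }
        }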

    Verifying linearizability on TSO architectures

    Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizability on TSO architectures have required a modification to the algorithm's natural abstract specification. Our proof method is the first, to our knowledge, to prove correctness without the need for such modification.
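
    The paper's case study is a spinlock verified at the x86-TSO level; as a high-level, hedged sketch (not the code verified in the paper), a test-and-set spinlock is shown below in Java. On TSO hardware the releasing store may linger in a core-local store buffer before becoming visible to other cores, which is what makes the correctness argument subtle; Java's atomics hide that detail here.

        import java.util.concurrent.atomic.AtomicBoolean;

        // High-level sketch of a test-and-set spinlock.
        final class SpinLock {
            private final AtomicBoolean held = new AtomicBoolean(false);

            void lock() {
                // Spin until we atomically flip the flag from false to true.
                while (!held.compareAndSet(false, true)) {
                    Thread.onSpinWait();   // hint that we are busy-waiting
                }
            }

            void unlock() {
                held.set(false);           // release the lock
            }
        }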

    Stationary phase expression of the arginine biosynthetic operon argCBH in Escherichia coli

    BACKGROUND: Arginine biosynthesis in Escherichia coli is elevated in response to nutrient limitation, stress or arginine restriction. Though control of the pathway in response to arginine limitation is largely modulated by the ArgR repressor, other factors may be involved in increased stationary phase and stress expression. RESULTS: In this study, we report that expression of the argCBH operon is induced in stationary phase cultures and is reduced in strains possessing a mutation in rpoS, which encodes an alternative sigma factor. Using strains carrying defined argR and rpoS mutations, we evaluated the relative contributions of these two regulators to the expression of argH using operon-lacZ fusions. While ArgR was the main factor responsible for modulating expression of argCBH, RpoS was also required for full expression of this biosynthetic operon at low arginine concentrations (below 60 μM L-arginine), a level at which growth of an arginine auxotroph was limited by arginine. When the argCBH operon was fully de-repressed (arginine limited), levels of expression were only one third of those observed in ΔargR mutants, indicating that the argCBH operon is partially repressed by ArgR even in the absence of arginine. In addition, argCBH expression was 30-fold higher in ΔargR mutants relative to levels found in wild-type, fully-repressed strains, and this expression was independent of RpoS. CONCLUSION: The results of this study indicate that both de-repression and positive control by RpoS are required for full control of arginine biosynthesis in stationary phase cultures of E. coli.

    TWAM: A Certifying Abstract Machine for Logic Programs

    Type-preserving (or typed) compilation uses typing derivations to certify correctness properties of compilation. We have designed and implemented a type-preserving compiler for a simply-typed dialect of Prolog we call T-Prolog. The crux of our approach is a new certifying abstract machine which we call the Typed Warren Abstract Machine (TWAM). The TWAM has a dependent type system strong enough to specify the semantics of a logic program in the logical framework LF. We present a soundness metatheorem which constitutes a partial correctness guarantee: well-typed programs implement the logic program specified by their type. This metatheorem justifies our design and implementation of a certifying compiler from T-Prolog to TWAM. Comment: 41 pages, under submission to ACM Transactions on Computational Logic.

    Revision of the Cretaceous shark Protoxynotus (Chondrichthyes, Squaliformes) and early evolution of somniosid sharks

    Due to the peculiar combination of dental features characteristic of different squaliform families, the position of the Late Cretaceous genera Protoxynotus and Paraphorosoides within Squaliformes has long been controversial. In this study, we revise these genera based on previously known fossil teeth and new dental material. The phylogenetic placement of Protoxynotus and Paraphorosoides among other extant and extinct squaliforms is discussed based on morphological characters combined with DNA sequence data of extant species. Our results suggest that Protoxynotus and Paraphorosoides should be included in the Somniosidae and that Paraphorosoides is a junior synonym of Protoxynotus. New dental material from the Campanian of Germany and the Maastrichtian of Austria enabled the description of a new species, Protoxynotus mayrmelnhofi sp. nov. In addition, the evolution and origin of the characteristic squaliform tooth morphology are discussed, indicating that the elongated lower jaw teeth with erected cusp and distinct dignathic heterodonty of Protoxynotus represent a novel functional adaptation in its cutting-clutching type dentition among early squaliform sharks. Furthermore, the depositional environment of the tooth-bearing horizons allows for an interpretation of the preferred habitat of this extinct dogfish shark, which exclusively occupied shelf environments of the Boreal and northern Tethyan realms during the Late Cretaceous.

    Verifying correctness of persistent concurrent data structures: a sound and complete method

    Non-volatile memory (NVM), aka persistent memory, is a new memory paradigm that preserves its contents even after power loss. The expected ubiquity of NVM has stimulated interest in the design of persistent concurrent data structures, together with associated notions of correctness. In this paper, we present a formal proof technique for durable linearizability, which is a correctness criterion that extends linearizability to handle crashes and recovery in the context of NVM. Our proofs are based on refinement of Input/Output automata (IOA) representations of concurrent data structures. To this end, we develop a generic procedure for transforming any standard sequential data structure into a durable specification and prove that this transformation is both sound and complete. Since the durable specification only exhibits durably linearizable behaviours, it serves as the abstract specification in our refinement proof. We exemplify our technique on a recently proposed persistent memory queue that builds on Michael and Scott's lock-free queue. To support the proofs, we describe an automated translation procedure from code to IOA and a thread-local proof technique for verifying correctness of invariants.
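
    As a rough, hedged illustration of the flush-and-fence discipline persistent structures of this kind rely on (this is not the verified queue from the paper, and persist(...) below is a hypothetical placeholder for an NVM flush/fence primitive such as CLWB followed by SFENCE, which Java does not expose), a Michael-Scott-style enqueue might persist the new node before linking it and persist the link before advancing the tail:

        import java.util.concurrent.atomic.AtomicReference;

        // Illustration only: a Michael-Scott-style enqueue annotated with persistence
        // barriers. persist(...) is a hypothetical stand-in for flushing the relevant
        // cache lines to NVM and fencing; it is not a real Java API.
        final class DurableQueueSketch<T> {
            static final class Node<V> {
                final V value;
                final AtomicReference<Node<V>> next = new AtomicReference<>(null);
                Node(V value) { this.value = value; }
            }

            private final Node<T> sentinel = new Node<>(null);
            private final AtomicReference<Node<T>> tail = new AtomicReference<>(sentinel);

            private static void persist(Object location) {
                // placeholder: flush the cache line(s) holding `location`, then fence
            }

            void enqueue(T value) {
                Node<T> node = new Node<>(value);
                persist(node);                                   // node contents durable first
                while (true) {
                    Node<T> last = tail.get();
                    if (last.next.compareAndSet(null, node)) {   // link the new node
                        persist(last.next);                      // link durable before tail moves
                        tail.compareAndSet(last, node);          // advance tail (others may help)
                        return;
                    } else {
                        Node<T> next = last.next.get();          // another enqueuer linked a node
                        persist(last.next);                      // help make that link durable
                        tail.compareAndSet(last, next);          // help advance tail, then retry
                    }
                }
            }
        }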

    Brief announcement: On strong observational refinement and forward simulation

    Hyperproperties are correctness conditions for labelled transition systems that are more expressive than traditional trace properties, with particular relevance to security. Recently, Attiya and Enea studied a notion of strong observational refinement that preserves all hyperproperties. They analyse the correspondence between forward simulation and strong observational refinement in a setting with finite traces only. We study this correspondence in a setting with both finite and infinite traces. In particular, we show that forward simulation does not preserve hyperliveness properties in this setting. We extend the forward simulation proof obligation with a progress condition, and prove that this progressive forward simulation does imply strong observational refinement.
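
    For readers unfamiliar with the terminology, a textbook-style sketch of the forward simulation obligations (not the paper's exact formulation) for a relation R between concrete and abstract states is given below in LaTeX; the progress condition added for progressive forward simulation is phrased here in one common way and is only an approximation of the paper's definition.

        % Sketch of standard forward-simulation obligations, plus a progress condition.
        \begin{itemize}
          \item Initialisation: $\forall c_0 \in \mathit{init}_C.\ \exists a_0 \in \mathit{init}_A.\ (c_0, a_0) \in R$.
          \item Step correspondence: if $(c, a) \in R$ and $c \xrightarrow{\,e\,} c'$, then there is
                a finite abstract execution from $a$ to some $a'$ with the same external label $e$
                (internal steps allowed, and possibly empty when $e$ is internal) such that $(c', a') \in R$.
          \item Progress: concrete steps matched by an empty abstract execution cannot repeat forever,
                e.g.\ a well-founded measure on concrete states strictly decreases on every such step.
        \end{itemize}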