132 research outputs found

    Memory Subsystems for Security, Consistency, and Scalability

    In response to the continuous demand for the ability to process ever larger datasets, as well as discoveries in next-generation memory technologies, researchers have been vigorously studying memory-driven computing architectures that will allow data-intensive applications to access enormous amounts of pooled non-volatile memory. As applications continue to interact with increasing numbers of components and datasets, existing systems struggle to efficiently enforce the principle of least privilege for security. While non-volatile memory can retain data even after a power loss and allows for large main memory capacity, programmers have to bear the burdens of maintaining the consistency of program memory for fault tolerance, as well as handling huge datasets with traditional yet expensive memory management interfaces for scalability. Today's computer systems have become too sophisticated for existing memory subsystems to handle many design requirements. In this dissertation, we introduce three memory subsystems to address challenges in terms of security, consistency, and scalability. Specifically, we propose SMVs to provide threads with fine-grained control over access privileges for a partially shared address space for security, NVthreads to allow programmers to easily leverage non-volatile memory with automatic persistence for consistency, and PetaMem to enable memory-centric applications to freely access memory beyond the traditional process boundary with support for memory isolation and crash recovery for security, consistency, and scalability.
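
    A minimal sketch, not the dissertation's API, of the manual consistency management that NVthreads is said to automate: the programmer maps a file standing in for persistent memory and must order flushes by hand so that a crash between related writes cannot leave the durable state inconsistent. The file name and record layout are illustrative assumptions.

        import java.io.IOException;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class ManualPersistence {
            public static void main(String[] args) throws IOException {
                try (FileChannel ch = FileChannel.open(Path.of("pmem.img"),
                        StandardOpenOption.CREATE, StandardOpenOption.READ,
                        StandardOpenOption.WRITE)) {
                    MappedByteBuffer heap = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

                    // A "record" consisting of a payload and a validity flag: the payload
                    // must be durable before the flag that publishes it, so each write is
                    // followed by an explicit flush, and the order must never be swapped.
                    heap.putLong(8, 42L);   // write the payload
                    heap.force();           // flush the payload first
                    heap.putInt(0, 1);      // then set the valid flag
                    heap.force();           // flush the flag; only now is the record durable
                }
            }
        }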

    NuMDG: A New Tool for Multiway Decision Graphs Construction

    Multiway Decision Graphs (MDGs) are a canonical representation of a subset of many-sorted first-order logic. This subset generalizes the logic of equality with abstract types and uninterpreted function symbols. The distinction between abstract and concrete sorts mirrors the hardware distinction between data path and control. Here we consider ways to improve MDG construction. Efficiency is achieved through the use of the Generalized-If-Then-Else (GITE) operator, commonly found in Binary Decision Diagram packages. Consequently, we review the main algorithms used in MDG verification techniques. In particular, Relational Product and Pruning by Subsumption are defined uniformly through this single GITE operator, which leads to a more efficient implementation. Moreover, we provide their correctness proofs. This work can be viewed as a way to accommodate the ROBDD algorithms to the realm of abstract sorts and uninterpreted functions. The new tool, called NuMDG, accepts an extended SMV language supporting abstract data sorts. Finally, we present experimental results demonstrating the efficiency of the NuMDG tool and evaluating its performance using a set of benchmarks from the SMV package.
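
    For context, the sketch below shows the classic ROBDD If-Then-Else (ITE) operator that, per the abstract, the GITE operator generalizes to abstract sorts and uninterpreted functions. The node representation and the omission of a computed-table cache are simplifying assumptions, not details of NuMDG.

        import java.util.HashMap;
        import java.util.Map;

        public class Ite {
            // A node is either one of the two terminals or an internal (var, low, high) node.
            static final class Node {
                final int var; final Node low, high;
                Node(int var, Node low, Node high) { this.var = var; this.low = low; this.high = high; }
            }
            static final Node TRUE = new Node(Integer.MAX_VALUE, null, null);
            static final Node FALSE = new Node(Integer.MAX_VALUE, null, null);

            // Hash-consing table: children are compared by identity because they are
            // themselves hash-consed, so structurally equal diagrams share one object.
            private record Key(int var, Node low, Node high) {}
            private final Map<Key, Node> unique = new HashMap<>();

            private Node mk(int var, Node low, Node high) {
                if (low == high) return low;                       // eliminate redundant tests
                return unique.computeIfAbsent(new Key(var, low, high),
                        k -> new Node(var, low, high));
            }

            Node ite(Node f, Node g, Node h) {
                if (f == TRUE) return g;
                if (f == FALSE) return h;
                if (g == h) return g;
                int v = Math.min(f.var, Math.min(g.var, h.var));   // topmost variable of f, g, h
                Node t = ite(cof(f, v, true), cof(g, v, true), cof(h, v, true));
                Node e = ite(cof(f, v, false), cof(g, v, false), cof(h, v, false));
                return mk(v, e, t);
            }

            // Cofactor of n with respect to variable v (unchanged if v is not its top variable).
            private static Node cof(Node n, int v, boolean positive) {
                if (n == TRUE || n == FALSE || n.var != v) return n;
                return positive ? n.high : n.low;
            }
        }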

    Design, Implementation, and Verification of the Reliable Multicast Protocol

    This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority-resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this belief. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently, which has allowed the verification to maintain high fidelity between the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches, and the protocol as a whole has matured more smoothly through the inclusion of several different perspectives in the product development.
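
    The abstract does not describe RMP's ordering mechanism, so the sketch below only illustrates the delivery-side invariant behind "totally ordered delivery" using a generic global sequence number; RMP's actual design is more elaborate.

        import java.util.PriorityQueue;
        import java.util.function.Consumer;

        public class TotalOrderDelivery {
            record Msg(long seq, String payload) {}

            private final PriorityQueue<Msg> pending =
                    new PriorityQueue<>((a, b) -> Long.compare(a.seq(), b.seq()));
            private long nextToDeliver = 0;

            // Called whenever a message arrives from the unordered, unreliable network.
            public synchronized void onReceive(Msg m, Consumer<Msg> deliver) {
                pending.add(m);
                // Deliver only when every earlier message has already been delivered,
                // so all receivers that follow this rule deliver in the same order.
                while (!pending.isEmpty() && pending.peek().seq() == nextToDeliver) {
                    deliver.accept(pending.poll());
                    nextToDeliver++;
                }
            }
        }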

    Maintaining the correctness of transactional memory programs

    Dissertation for the degree of Doctor in Informatics Engineering. This dissertation addresses the challenge of maintaining the correctness of transactional memory programs while improving their parallelism with small transactions and relaxed isolation levels. The efficiency of transactional memory systems depends directly on the level of parallelism, which in turn depends on the conflict rate. A high conflict rate between memory transactions can be addressed by reducing the scope of transactions, but this approach may make the application prone to atomicity violations. Another way to address this issue is to ignore some of the conflicts by using a relaxed isolation level, such as snapshot isolation, at the cost of introducing write-skew serialization anomalies that break the consistency guarantees provided by a stronger consistency property, such as opacity. In order to tackle the correctness issues raised by atomicity violations and write-skew anomalies, we propose two static analysis techniques: one based on a novel static analysis algorithm that works on a dependency graph of program variables and detects atomicity violations; and a second based on a shape analysis technique supported by separation logic augmented with heap path expressions, a novel representation based on sequences of heap dereferences that certifies whether a transactional memory program executing under snapshot isolation is free from write-skew anomalies. Evaluating the runtime execution of a transactional memory algorithm using snapshot isolation requires a framework that allows an efficient implementation of a multi-version algorithm and, at the same time, enables its comparison with other existing transactional memory algorithms. In the Java programming language there was no framework satisfying both these requirements. Hence, we extended an existing software transactional memory framework that already supported efficient implementations of some transactional memory algorithms to also support the efficient implementation of multi-version algorithms. The key insight for this extension is support for storing the transactional metadata adjacent to memory locations. We illustrate the benefits of our approach by analyzing its impact with both single- and multi-version transactional memory algorithms using several transactional workloads. This work was supported by Fundação para a Ciência e Tecnologia through PhD research grant SFRH/BD/41765/2007 and the research projects Synergy-VM (PTDC/EIA-EIA/113613/2009) and RepComp (PTDC/EIA-EIA/108963/2008).
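
    As a concrete illustration of the write-skew anomaly mentioned above (a generic example, not the dissertation's framework), the sketch below shows two transactions that read overlapping data from their snapshots, write disjoint locations, and therefore both commit under snapshot isolation, even though their combined effect violates an invariant (x + y > 0) that each preserves in isolation.

        import java.util.HashMap;
        import java.util.Map;

        public class WriteSkew {
            public static void main(String[] args) {
                Map<String, Integer> store = new HashMap<>(Map.of("x", 1, "y", 1));

                // Both transactions take their snapshot before either one commits.
                Map<String, Integer> snapshotA = new HashMap<>(store);
                Map<String, Integer> snapshotB = new HashMap<>(store);

                // T_A: if x + y >= 2, decrement x. Seen in isolation this keeps x + y > 0.
                if (snapshotA.get("x") + snapshotA.get("y") >= 2) {
                    store.put("x", snapshotA.get("x") - 1);   // write set: {x}
                }
                // T_B: if x + y >= 2, decrement y, unaware of T_A's concurrent write.
                if (snapshotB.get("x") + snapshotB.get("y") >= 2) {
                    store.put("y", snapshotB.get("y") - 1);   // write set: {y}
                }

                // The write sets {x} and {y} are disjoint, so snapshot isolation detects
                // no conflict and both transactions commit; the invariant is now broken.
                System.out.println("x + y = " + (store.get("x") + store.get("y")));   // prints 0
            }
        }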

    A Reduction from Smart Contract Verification to Model Checking

    We present a reduction from verification of smart contracts to model checking. A smart contract is a computer program written in a language with constructs that correspond to real-world contracts, such as verified sending and accepting of digital cash. Model checking is an approach to verification of state-transition systems in which a state is the valuation of a set of variables. A reduction, in our context, is a polynomial-time computable function which guarantees that an input smart contract possesses a property if and only if the output instance of model checking possesses the property to which the former is mapped. Our focus is smart contracts written to run on the Ethereum blockchain in a language compiled to Ethereum Virtual Machine (EVM) code. Our work is motivated by the importance of checking smart contracts for properties of interest, and also by recent empirical work establishing that existing verification tools are deficient. Our approach has some distinguishing characteristics from prior approaches, which we discuss in this thesis. We have implemented and carried out a limited empirical assessment of our reduction. We used a dataset of 69 curated smart contracts that contains 115 instances of security vulnerabilities from 10 different classes of such vulnerabilities. Our empirical work suggests that our approach can scale to real-world smart contracts.
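
    The sketch below is a generic illustration, not the thesis's reduction, of the target formalism: a state is a valuation of a set of variables, transitions are functions on states, and a safety property is checked by exploring the reachable state space. The contract-like operations and the bound on the toy state space are illustrative assumptions.

        import java.util.ArrayDeque;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;
        import java.util.function.Predicate;
        import java.util.function.UnaryOperator;

        public class TinyModelChecker {
            record State(int balance, boolean locked) {}

            public static void main(String[] args) {
                // Transitions loosely modelled on contract operations: deposit, withdraw, toggle lock.
                List<UnaryOperator<State>> transitions = List.of(
                        s -> new State(s.balance() + 1, s.locked()),
                        s -> s.locked() ? s : new State(s.balance() - 1, s.locked()),
                        s -> new State(s.balance(), !s.locked()));

                // Safety property: the balance never goes negative.
                Predicate<State> safe = s -> s.balance() >= 0;

                Set<State> seen = new HashSet<>();
                ArrayDeque<State> frontier = new ArrayDeque<>();
                State init = new State(0, false);
                frontier.add(init);
                seen.add(init);

                while (!frontier.isEmpty()) {
                    State s = frontier.poll();
                    if (!safe.test(s)) {
                        System.out.println("Counterexample state reached: " + s);
                        return;
                    }
                    for (UnaryOperator<State> t : transitions) {
                        State next = t.apply(s);
                        if (next.balance() <= 2 && seen.add(next)) {   // bound the toy state space
                            frontier.add(next);
                        }
                    }
                }
                System.out.println("Explored " + seen.size() + " states; property holds on the bounded model.");
            }
        }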

    Exploring Domain Specific Approaches to Software Model Checking

    Model checking has proven to be an effective technology for verification and debugging in hardware domains and, more recently, in software domains. The major challenges in the application of model checking to software systems are the mapping of software executables to the model checker's input language and the intrinsic complexity of ever-growing software systems. This thesis explores domain-specific model checking approaches to large systems in order to optimize state space storage for specific domains. Bogor [Bogor 2003] is an extensible, customizable, and highly modular model checking framework that supports general as well as domain-specific software model checking. As part of the thesis, domain-specific extensions to Bogor's input language, the Bandera Intermediate Representation (BIR), were implemented by providing a plugin for Eclipse [Eclipse 2004]. Eclipse is a universal platform for tool integration, and its plugin development environment facilitates the addition of new plugins alongside existing ones. Bogor exploits Eclipse's extension mechanism: Bogor was installed as an Eclipse plugin, and with the help of Eclipse's Plugin Development Environment (PDE), new data types were integrated into the existing Bogor framework. Two case studies (a postfix calculator using the stack extension and resource allocation using the multiset extension) were investigated. Metrics such as the number of states, the number of transitions, and the maximum depth were analyzed, and the complexity of the test cases was increased gradually to test the extensions for feasibility and scalability. The thesis also includes a comprehensive study of some well-known model checkers and their features, degree of automation, and input languages. It was observed that customizing the model checker to domain specifications helped achieve space reduction. The space reduction is prominent especially in large domains, where it contributes toward mitigating the state space explosion problem. Although developing extensions is achievable, it requires a working knowledge of Eclipse and specific knowledge of model checking. In conclusion, a domain-specific approach to software model checking was demonstrated to be a promising technology, and language extensions to BIR were successfully built and tested for accuracy and scalability. Computer Science Department
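
    A small sketch of why a domain-specific datatype such as the multiset extension can shrink the state space (the encoding is illustrative and is not Bogor's extension API): if a resource pool is a multiset, states that differ only in the order in which resources were added are the same state, which canonicalization makes explicit before states are hashed.

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        public class CanonicalMultiset {
            // Canonical form of a multiset of resource ids: a sorted copy of the elements.
            static int[] canonical(int[] elements) {
                int[] copy = elements.clone();
                Arrays.sort(copy);
                return copy;
            }

            public static void main(String[] args) {
                Set<String> storedStates = new HashSet<>();
                // Two interleavings that allocate the same resources in different orders.
                storedStates.add(Arrays.toString(canonical(new int[]{3, 1, 2})));
                storedStates.add(Arrays.toString(canonical(new int[]{2, 3, 1})));
                // Only one state is stored instead of two.
                System.out.println("distinct states stored: " + storedStates.size());
            }
        }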

    Verification of Branching-Time and Alternating-Time Properties for Exogenous Coordination Models

    Information and communication systems enter an increasing number of areas of daily life. Our reliance and dependence on the functioning of such systems is growing rapidly, together with the costs and the impact of system failures. At the same time, the complexity of hardware and software systems extends to new limits as modern hardware architectures become more and more parallel, dynamic, and heterogeneous. These trends demand a closer integration of formal methods and systems engineering to show the correctness of complex systems within the design phase of large projects. The goal of this thesis is to introduce a formal holistic approach for modeling, analysis, and synthesis of parallel systems that potentially addresses complex system behavior at any layer of the hardware/software stack. Due to the complexity of modern hardware and software systems, we aim for a hierarchical modeling framework that allows the behavior of a parallel system to be specified at various levels of abstraction and that facilitates designing complex systems in an iterative refinement procedure, in which more detailed behavior is added successively to the system description. In this context, the major challenge is to provide modeling formalisms that are expressive enough to address all of the above issues and are at the same time amenable to the application of formal methods for proving that the system behavior conforms to its specification. In particular, we are interested in specification formalisms for which formal verification techniques can be applied such that the underlying model checking problems remain decidable within reasonable time and space bounds. The presented work relies on an exogenous modeling approach that allows a clear separation of coordination and computation and provides an operational semantic model to which formal methods such as model checking are well suited and applicable. The channel-based exogenous coordination language Reo is used as the modeling formalism, as it supports hierarchical modeling in an iterative top-down refinement procedure. It facilitates reusability, exchangeability, and heterogeneity of components and forms the basis for applying formal verification methods. At the same time, Reo has a clear formal semantics based on automata, which serves as the foundation for applying formal methods such as model checking. In this thesis, new modeling languages are presented that allow specifying complex systems in terms of Reo and automata models, which form the basis for a holistic approach to modeling, verification, and synthesis of parallel systems. The second main contribution of this thesis consists of tailored branching-time and alternating-time temporal logics, together with corresponding model checking algorithms. The thesis includes results on the theoretical complexity of the underlying model checking problems as well as practical results; for the latter, the presented approach has been implemented in the symbolic verification tool set Vereofy. The implementation within Vereofy and the evaluation of the branching-time and alternating-time model checker constitute the third main contribution of this thesis.
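
    As an illustration of the automata-based semantics mentioned above, the sketch below encodes a constraint-automaton-style model of a FIFO1 channel with source port A and sink port B; the encoding and the omission of data constraints are simplifying assumptions, not Vereofy's input format.

        import java.util.List;
        import java.util.Set;

        public class Fifo1Automaton {
            // A transition fires a set of ports synchronously while moving between states.
            record Transition(String from, Set<String> firingPorts, String to) {}

            public static void main(String[] args) {
                List<Transition> fifo1 = List.of(
                        new Transition("empty", Set.of("A"), "full"),   // accept a datum on A
                        new Transition("full", Set.of("B"), "empty"));  // emit the datum on B

                // In the empty state only A can fire; in the full state only B can fire,
                // so A and B are never synchronous: exactly the asynchronous FIFO1 behaviour.
                fifo1.forEach(t -> System.out.println(
                        t.from() + " --" + t.firingPorts() + "--> " + t.to()));
            }
        }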

    Model Checking of Distributed Multi-Threaded Java Applications

    In this dissertation, we focus on the verification of distributed Java applications composed of communicating multithreaded processes, using model checking as the verification technique. We propose an instance of the so-called centralization approach, which allows model checking multiple communicating processes. The main challenge of applying centralization is keeping data separated between different processes; in our approach, this issue is addressed through a new class-loading model. As one of our contributions, we implement our approach within an existing model checker, Java PathFinder (JPF). To account for interactions between processes, our approach provides the model checker with a model of interprocess communication. Moreover, our model allows for systematically exploring potential exceptional control flows caused by network failures. We also apply a partial order reduction (POR) algorithm to reduce the state space of distributed applications, and we prove that our POR algorithm preserves deadlocks. Furthermore, we propose an automatic approach to capture interactions between the system being verified and external resources, such as cloud computing services. The dissertation also discusses how our approach improves on existing approaches: it exhibits better performance, mainly due to the POR technique, it allows for verifying a considerably larger class of applications without the need for any manual modeling, and it has been successfully used to detect bugs that cannot be found using previous work.
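
    A conceptual sketch of the class-loading idea behind centralization (not JPF's implementation; the class names and classpath are hypothetical placeholders): each process of the distributed application runs as a thread of a single JVM, and its classes are loaded through a dedicated class loader so that static fields, which represent per-process state, are not shared between processes.

        import java.net.URL;
        import java.net.URLClassLoader;

        public class Centralizer {
            // Placeholder: must point at the application's classes for the sketch to run.
            static final URL[] APP_CLASSPATH = new URL[0];

            public static void main(String[] args) throws Exception {
                Thread p0 = startProcess("com.example.ServerMain", new String[]{"--port", "9000"});
                Thread p1 = startProcess("com.example.ClientMain", new String[]{"localhost", "9000"});
                p0.join();
                p1.join();
            }

            static Thread startProcess(String mainClass, String[] processArgs) {
                // A fresh, non-delegating loader gives this "process" its own copy of every
                // application class, and hence its own copy of every static field.
                ClassLoader perProcessLoader = new URLClassLoader(APP_CLASSPATH, null);
                Thread t = new Thread(() -> {
                    try {
                        Class<?> c = Class.forName(mainClass, true, perProcessLoader);
                        c.getMethod("main", String[].class).invoke(null, (Object) processArgs);
                    } catch (ReflectiveOperationException e) {
                        throw new RuntimeException(e);
                    }
                });
                t.start();
                return t;
            }
        }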

    Proceedings of Monterey Workshop 2001 Engineering Automation for Software Intensive System Integration

    The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office, and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisory board and sponsors for their vision of a principled engineering solution for software and for their tireless, multi-year effort in supporting a series of workshops to bring everyone together. This workshop is the 8th in a series of international workshops. The workshop was held at the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the workshop has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops have focused on issues including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software", and "Modeling Software System Structures in a fastly moving scenario". Office of Naval Research; Air Force Office of Scientific Research; Army Research Office; Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited.