    Testing, runtime verification, and analysis of concurrent programs

    With the spread of multi-core processors, concurrent programs are becoming increasingly common. Among several models, the multithreaded shared-memory model is the predominant programming paradigm for developing concurrent programs. However, because of non-deterministic scheduling, multithreaded code is hard to develop and test. Concurrency bugs, such as data races, atomicity violations, and deadlocks, are hard to detect and fix in multithreaded programs. Testing and verifying multithreaded programs requires two sets of techniques. The first enforces thread schedules and runtime properties efficiently: being able to enforce desired schedules and properties greatly helps developers write reliable multithreaded code. The second explores the state space of multithreaded programs efficiently: systematic state-space exploration can guarantee correctness for multithreaded code, but it is usually so time-consuming as to be infeasible in most cases. This dissertation presents several techniques that address challenges arising in testing and runtime verification of multithreaded programs. The first two are the IMUnit framework for enforcing testing schedules and the EnforceMOP system for enforcing runtime properties in multithreaded programs. An experimental evaluation shows that these techniques enforce thread schedules and runtime properties effectively and efficiently, and that each has its own advantages over existing techniques. The remaining techniques are the RV-Causal framework and the CAPP technique in the ReEx framework, both for efficient state-space exploration of multithreaded code. RV-Causal employs the maximal causal model in a novel way to reduce exploration cost without losing the ability to detect certain types of concurrency bugs. The results show that RV-Causal outperforms existing techniques, finding concurrency bugs and exploring the entire state space far more efficiently.
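
    The core difficulty targeted here is easy to reproduce in a few lines. As a minimal illustration (our own Java sketch, not code from IMUnit or EnforceMOP; the class name and constants are invented for the demo), the outcome of the following program depends entirely on how the scheduler interleaves two unsynchronized increments, which is exactly why the ability to enforce a particular schedule is so valuable when testing:

```java
// RacyCounter: a minimal data race. Two threads increment a shared,
// unsynchronized counter; counter++ is a read-modify-write, so interleaved
// updates can overwrite each other and the final value varies from run to run.
public class RacyCounter {
    static int counter = 0; // shared, intentionally unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) counter++; });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) counter++; });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Expected 200000 under a "nice" schedule; often less in practice.
        System.out.println("counter = " + counter);
    }
}
```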

    Combining Model Checking and Testing

    Model checking and testing have a lot in common. Over the last two decades, significant progress has been made on broadening the scope of model checking from finite-state abstractions to actual software implementations. One way to do this is to adapt model checking into a form of systematic testing that is applicable to industrial-size software. This chapter presents an overview of this strand of software model checking.
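
    To make the connection concrete, the toy sketch below (our own construction; the class, steps, and property are illustrative, not the chapter's tooling) recasts model checking as systematic testing: it enumerates every interleaving of two threads' atomic steps, replays each one against the real code from a fresh state, and checks a property on every run:

```java
import java.util.ArrayList;
import java.util.List;

// SystematicTester: enumerate all interleavings of two threads' steps that
// respect each thread's program order, then replay and check each one.
public class SystematicTester {
    static int x; // shared variable under test

    static void step1(int k) { if (k == 0) x = 1; else x = x - 1; }
    static void step2(int k) { x = 2; }

    public static void main(String[] args) {
        List<List<Integer>> schedules = new ArrayList<>();
        interleave(2, 1, new ArrayList<>(), schedules); // thread 1: 2 steps, thread 2: 1 step
        for (List<Integer> schedule : schedules) {
            x = 0;                       // fresh state for each replay
            int i = 0, j = 0;
            for (int tid : schedule) {   // execute steps in schedule order
                if (tid == 1) step1(i++); else step2(j++);
            }
            // The property: x must not end up 0. It fails on one schedule.
            System.out.println(schedule + " -> x = " + x + (x == 0 ? "  BUG" : ""));
        }
    }

    // Depth-first enumeration of interleavings, preserving per-thread order.
    static void interleave(int n1, int n2, List<Integer> prefix, List<List<Integer>> out) {
        if (n1 == 0 && n2 == 0) { out.add(new ArrayList<>(prefix)); return; }
        if (n1 > 0) { prefix.add(1); interleave(n1 - 1, n2, prefix, out); prefix.remove(prefix.size() - 1); }
        if (n2 > 0) { prefix.add(2); interleave(n1, n2 - 1, prefix, out); prefix.remove(prefix.size() - 1); }
    }
}
```

    Even this toy case hints at the cost: the number of interleavings grows exponentially with the number of steps, and keeping such exploration tractable on industrial-size software is the central challenge of this line of work.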

    Large Scale Distributed Testing for Fault Classification and Isolation

    Developing confidence in the quality of software is an increasingly difficult problem. As the complexity and integration of software systems increase, the tools and techniques used to perform quality assurance (QA) tasks must evolve with them. To date, several quality assurance tools have been developed to help ensure the quality of modern software, but several limitations remain to be overcome. Among the challenges faced by current QA tools are (1) increased use of distributed software solutions, (2) limited test resources and constrained time schedules, and (3) failures that are difficult to replicate and may occur only rarely. While existing distributed continuous quality assurance (DCQA) tools and techniques, including our own Skoll project, begin to address these issues, new and novel approaches are needed. This dissertation explores three strategies. First, I present an improved version of our Skoll distributed quality assurance system. Skoll provides a platform for executing sophisticated, long-running QA processes across a large number of distributed, heterogeneous computing nodes. This dissertation details changes to Skoll that result in a more robust, configurable, and user-friendly implementation of both the client and server components. It also details infrastructure developed to support the evaluation of DCQA processes using Skoll, specifically the design and deployment of a dedicated 120-node computing cluster for evaluating DCQA practices. The techniques and case studies presented in the later parts of this work used the improved Skoll as their testbed. Second, I present techniques for automatically classifying test execution outcomes based on an adaptive-sampling classification technique, along with a case study on the Java Architecture for Bytecode Analysis (JABA) system. One common need for these techniques is the ability to distinguish test execution outcomes (e.g., to collect only data corresponding to some behavior, or to determine how often and under which conditions a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions: they either focus on easily observable behaviors (e.g., crashes) or assume that outcome classifications are externally provided (e.g., by users). In this work, I present an empirical study on JABA in which we automatically classified execution data into passing and failing behaviors using adaptive association trees. Finally, I present a long-term case study of the highly configurable MySQL open-source project. Real-world software systems can have configuration spaces that are too large to test exhaustively but that nonetheless contain subtle interactions leading to failure-inducing faults. In the literature, covering arrays, in combination with classification techniques, have been used to sample these large configuration spaces effectively and to detect problematic configuration dependencies. Applying this approach in practice, however, is tricky because testing time and resource availability are unpredictable. We therefore developed and evaluated an alternative approach that incrementally builds covering array schedules: it begins at low strength and then iteratively increases strength as resources allow, reusing previous test results to avoid duplicated effort. The result is test schedules that allow successful classification with fewer test executions and that require less test-subject-specific information to develop.
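
    As a rough illustration of the incremental-strength idea (a deliberate simplification, not the dissertation's algorithm; the option names and coverage check are invented), the sketch below takes configurations that have already been tested and reports which t-way option-value combinations remain uncovered as the strength t grows, so new runs can target only the gaps left by previous results:

```java
import java.util.Arrays;
import java.util.List;

// IncrementalCovering: list the t-way option-value combinations that a set of
// already-tested configurations does not cover, for increasing strength t.
public class IncrementalCovering {
    static final String[] OPTIONS = {"ssl", "cache", "logging"}; // three on/off options
    static final int[] VALUES = {0, 1};

    public static void main(String[] args) {
        // Configurations already executed in earlier, lower-strength rounds.
        List<int[]> tested = List.of(new int[]{0, 0, 0}, new int[]{1, 1, 1});
        for (int t = 1; t <= OPTIONS.length; t++) {
            System.out.println("strength " + t + ", uncovered combinations:");
            int[] combo = new int[OPTIONS.length];
            Arrays.fill(combo, -1); // -1 marks an option left free ("don't care")
            enumerate(tested, combo, 0, t);
        }
    }

    // Fix `remaining` more options (from index `from` on) to each value and
    // report the combinations that no tested configuration matches.
    static void enumerate(List<int[]> tested, int[] combo, int from, int remaining) {
        if (remaining == 0) {
            if (!covered(tested, combo)) System.out.println("  " + Arrays.toString(combo));
            return;
        }
        for (int i = from; i <= combo.length - remaining; i++) {
            for (int v : VALUES) {
                combo[i] = v;
                enumerate(tested, combo, i + 1, remaining - 1);
                combo[i] = -1;
            }
        }
    }

    // A combination is covered if some tested config agrees on every fixed option.
    static boolean covered(List<int[]> tested, int[] combo) {
        for (int[] cfg : tested) {
            boolean match = true;
            for (int i = 0; i < combo.length; i++)
                if (combo[i] != -1 && combo[i] != cfg[i]) { match = false; break; }
            if (match) return true;
        }
        return false;
    }
}
```

    Here the two tested configurations already cover every 1-way combination, so a strength-2 round only needs to exercise the mixed settings they miss, mirroring how the schedule reuses earlier results instead of restarting at full strength.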

    Software redundancy: what, where, how

    Software systems have become pervasive in everyday life and are the core component of many crucial activities. An inadequate level of reliability can lead to the commercial failure of a software product. Still, despite the commitment and the rigorous verification processes employed by developers, software is deployed with faults. To increase the reliability of software systems, researchers have investigated the use of various forms of redundancy. Informally, a software system is redundant when it performs the same functionality through the execution of different elements. Redundancy has been extensively exploited in many software engineering techniques, for example in fault-tolerance and reliability engineering and in self-adaptive and self-healing programs. Despite these many uses, though, there is no formalization or study of software redundancy to support a proper and effective design of software. Our intuition is that a systematic and formal investigation of software redundancy will lead to more, and more effective, uses of redundancy. This thesis develops this intuition and proposes a set of ways to characterize redundancy qualitatively as well as quantitatively. We first formalize the intuitive notion of redundancy whereby two code fragments are considered redundant when they perform the same functionality through different executions. On the basis of this abstract and general notion, we then develop a practical method to obtain a measure of software redundancy. We prove the effectiveness of our measure by showing that it distinguishes between shallow differences, where apparently different code fragments reduce to the same underlying code, and deep code differences, where the algorithmic nature of the computations differs. We also demonstrate that our measure is useful for developers, since it is a good predictor of the effectiveness of techniques that exploit redundancy. Besides formalizing the notion of redundancy, we investigate the pervasiveness of redundancy intrinsically found in modern software systems. Intrinsic redundancy is a form of redundancy that occurs as a by-product of modern design and development practices. We have observed that intrinsic redundancy is indeed present in software systems and that it can be successfully exploited for good purposes. This thesis proposes a technique to automatically identify equivalent method sequences in software systems to help developers assess the presence of intrinsic redundancy. We demonstrate the effectiveness of the technique by showing that it identifies the majority of equivalent method sequences in a system with good precision and performance.
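
    A small example makes intrinsic redundancy concrete (our illustration, not one drawn from the thesis): the two fragments below leave a java.util.List in the same observable state through different internal executions, which is exactly the kind of equivalent method sequence the proposed identification technique targets:

```java
import java.util.ArrayList;
import java.util.List;

// IntrinsicRedundancy: two externally equivalent method sequences.
public class IntrinsicRedundancy {
    public static void main(String[] args) {
        List<String> a = new ArrayList<>(List.of("x", "y"));
        List<String> b = new ArrayList<>(List.of("x", "y"));

        a.addAll(List.of("z", "w"));              // fragment 1: bulk append

        for (String s : List.of("z", "w")) {      // fragment 2: element-wise append
            b.add(s);
        }

        // Different code paths internally (one bulk array copy vs. repeated
        // single appends), same final state: the sequences are redundant.
        System.out.println(a.equals(b)); // true
    }
}
```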

    Machine-Checked Formalisation and Verification of Cryptographic Protocols

    Aiming for strong security assurance, researchers in academia and industry have focused on the formal verification of cryptographic constructions. Automatising formal verification has proved to be a very difficult task, where the main challenges are supporting generic constructions and theorems and carrying out the mathematical proofs. This work focuses on machine-checked formalisation and automatic verification of cryptographic protocols. One aspect we cover is novel support for generic schemes and real-world constructions across both established and recent protocols: key exchange schemes (Simple Password Exponential Key Exchange, SPEKE), commitment schemes (including the popular Pedersen scheme), sigma protocols (including Schnorr's zero-knowledge proof-of-knowledge protocol), and searchable encryption protocols (Sophos). We also investigated the reasoning behind simulation-based proofs, where the indistinguishability of two different algorithms to any adversary is the crucial step in proving privacy-related properties. We embedded information-flow techniques into the EasyCrypt core language and show that this effort not only makes some proofs easier (and sometimes fewer in number), but is also more powerful than other existing techniques in particular situations.
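
    Since the Pedersen scheme is among the formalised constructions, a plain Java rendering of its commit and open steps may help fix intuition (a toy sketch, not EasyCrypt code; the demo parameters are tiny, whereas real parameters must be large primes with the discrete log of h base g unknown):

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// PedersenToy: commitment c = g^m * h^r mod p over a subgroup of prime
// order q. Hiding comes from the random r; binding from the unknown
// discrete log of h with respect to g.
public class PedersenToy {
    // Demo-only parameters: subgroup of order q = 11 inside Z*_23.
    static final BigInteger p = BigInteger.valueOf(23);
    static final BigInteger q = BigInteger.valueOf(11);
    static final BigInteger g = BigInteger.valueOf(4);
    static final BigInteger h = BigInteger.valueOf(9);

    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger m = BigInteger.valueOf(7);                     // message in Z_q
        BigInteger r = new BigInteger(q.bitLength(), rnd).mod(q); // blinding factor

        // Commit: publish c, keep (m, r) secret until opening.
        BigInteger c = g.modPow(m, p).multiply(h.modPow(r, p)).mod(p);
        System.out.println("commitment = " + c);

        // Open: reveal (m, r); the verifier recomputes and compares.
        boolean ok = c.equals(g.modPow(m, p).multiply(h.modPow(r, p)).mod(p));
        System.out.println("opens correctly: " + ok);
    }
}
```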

    Logs and Models in Engineering Complex Embedded Production Software Systems


    Integrity and access control in untrusted content distribution networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. By Kevin E. Fu. A content distribution network (CDN) makes a publisher's content highly available to readers through replication on remote computers. Content stored on untrusted servers is susceptible to attack, but a reader should have confidence that content originated from the publisher and that the content is unmodified. This thesis presents the SFS read-only file system (SFSRO) and key regression in the Chefs file system for secure, efficient content distribution using untrusted servers, for public and private content respectively. SFSRO ensures integrity, authenticity, and freshness of single-writer, many-reader content. A publisher creates a digitally signed database representing the contents of a source file system; untrusted servers replicate the database for high availability. Chefs extends SFSRO with key regression to support decentralized access control of private content protected by encryption. Key regression allows a client to derive past versions of a key, reducing the number of keys a client must fetch from the publisher. Thus, key regression reduces the bandwidth a publisher needs to make keys available to many clients. Contributions of this thesis include the design and implementation of SFSRO and Chefs; a concrete definition of security, provably secure constructions, and an implementation of key regression; and a performance evaluation of SFSRO and Chefs confirming that latency for individual clients remains low and that a single server can support many simultaneous clients.
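
    A hash-chain construction conveys the flavor of key regression (a simplified Java sketch, not Chefs source code; the seed and version count are invented for the demo): each key is the hash of its successor, so a client holding the current key can recompute any past key locally instead of fetching it from the publisher:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// KeyRegressionSketch: key_i = SHA-256(key_{i+1}). Later keys derive earlier
// ones, but deriving a future key would require inverting the hash.
public class KeyRegressionSketch {
    static byte[] sha256(byte[] in) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(in);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        int versions = 5;
        byte[][] key = new byte[versions][];
        // Publisher: choose the final key (fixed here for the demo), then
        // hash backwards so each earlier key is derivable from its successor.
        key[versions - 1] = sha256("publisher-secret-seed".getBytes(StandardCharsets.UTF_8));
        for (int i = versions - 2; i >= 0; i--) key[i] = sha256(key[i + 1]);

        // Client: holding only the version-3 key, regress to version 1.
        byte[] derived = sha256(sha256(key[3])); // 3 -> 2 -> 1
        System.out.println("derived matches key[1]: " + MessageDigest.isEqual(derived, key[1]));
    }
}
```

    Because only the newest key ever needs to be distributed, the publisher never has to push every historical key to every client, which is the bandwidth saving described above.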