
    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized model in TLA+ enables semi-automatic model checking of different implementation alternatives for transactional operations and allows checking conformance to isolation levels. We reproduce examples from the original paper and confirm the isolation guarantees of the combination of the well-known 2-phase locking and 2-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations, and it provides an environment for experimenting with new designs.
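    In the spirit of that state-based view, the following minimal Python sketch (an illustration only, not the paper's TLA+ specification; the history encoding is an assumption) checks a small transaction history for serializability by searching for a commit order in which every read observes the state produced by the preceding transactions:

```python
from itertools import permutations

# A transaction is a list of operations: ("r", key, value) or ("w", key, value).
# In the state-based, client-centric view, a history is serializable iff some
# total order of transactions exists in which every read returns the value
# held by the key in the state produced by all preceding transactions.

def executes_against(txn, state):
    """Replay txn against `state`; return the resulting state, or None on a
    read that does not match the current state."""
    state = dict(state)
    for op, key, value in txn:
        if op == "r" and state.get(key) != value:
            return None          # read observes a stale or impossible value
        if op == "w":
            state[key] = value   # writes install a new version
    return state

def serializable(txns, initial=None):
    """Brute-force search over commit orders (fine for small examples)."""
    for order in permutations(txns):
        state = dict(initial or {})
        for txn in order:
            state = executes_against(txn, state)
            if state is None:
                break
        else:
            return True
    return False

# A lost update: both transactions read x=0, then write a new value.
t1 = [("r", "x", 0), ("w", "x", 1)]
t2 = [("r", "x", 0), ("w", "x", 2)]
print(serializable([t1, t2], initial={"x": 0}))  # False: no serial order fits
```

    The factorial search is only for exposition; the point of the TLA+ formalization is precisely to hand this kind of exhaustive checking to a model checker.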

    Choice and chance: model-based testing of stochastic behaviour

    Probability plays an important role in many computer applications. A vast number of algorithms, protocols and computation methods use randomisation to achieve their goals. A crucial question then becomes whether such probabilistic systems work as intended. To investigate this, such systems are often subjected to a large number of well-designed test cases that compare observed behaviour to a requirements specification. Model-based testing is an innovative testing technique rooted in formal methods that aims at automating this labour-intensive and often error-prone manual task. By providing faster and more thorough testing at lower cost, it has rapidly gained popularity in industry and academia alike. However, classic model-based testing methods are insufficient when dealing with inherently stochastic systems. This thesis introduces a rigorous model-based testing framework that is capable of automatically testing such systems. The presented methods can judge functional correctness, discrete probability choices, and hard and soft real-time constraints. The framework is constructed in a clear step-by-step approach. First, the model-based testing landscape is laid out and related work is discussed. Next, we instantiate a model-based testing framework to highlight the purpose of individual theoretical components such as a conformance relation, test cases, and practical test generation algorithms. This framework is then conservatively extended by introducing discrete probability choices into the specification language. A last step further extends this probabilistic framework by adding hard and soft real-time constraints. Classical functional correctness verdicts are thus extended with goodness-of-fit methods known from statistics. Proofs of the framework's correctness are presented before its capabilities are exemplified in smaller-scale case studies known from the literature. The framework reconciles non-deterministic and probabilistic choices in a fully fledged way via the use of schedulers. Schedulers then become a subject worthy of study in their own right. This is done in the second part of this thesis; we introduce a natural equivalence relation based on schedulers for Markov automata, and compare its distinguishing power to notions of trace distributions and bisimulation relations. Lastly, the power of different scheduler classes of stochastic automata is investigated. We compare reachability probabilities of different schedulers by altering the information available to them. A hierarchy of scheduler classes is established, with the intent of reducing the complexity of related problems by obtaining near-optimal results for smaller scheduler classes.
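    To give a flavour of how classical pass/fail verdicts are extended with goodness-of-fit methods, here is a minimal Python sketch (not the thesis's framework; the critical value and the coin example are illustrative assumptions) that compares observed outcome frequencies of a probabilistic choice against the specified distribution with a Pearson chi-squared test:

```python
import random
from collections import Counter

def chi_squared_verdict(spec, observations, critical):
    """Pearson chi-squared goodness of fit of observations against spec.

    spec:     dict mapping outcome -> specified probability.
    critical: rejection threshold for the chosen significance level and
              len(spec) - 1 degrees of freedom (3.841 for alpha=0.05, df=1).
    """
    n = len(observations)
    counts = Counter(observations)
    stat = sum((counts.get(o, 0) - n * p) ** 2 / (n * p) for o, p in spec.items())
    return "pass" if stat < critical else "fail"

# The specification demands a fair coin; the implementation under test is biased.
spec = {"heads": 0.5, "tails": 0.5}
observed = random.choices(["heads", "tails"], weights=[0.8, 0.2], k=1000)
print(chi_squared_verdict(spec, observed, critical=3.841))  # almost surely "fail"
```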

    Computer Aided Verification

    This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Open Dynamic Interaction Network: a cell-phone based platform for responsive EMA

    The study of social networks is central to advancing our understanding of a wide range of phenomena in human societies. Social networks co-evolve concurrently alongside the individuals within them. Selection processes cause network structure to change in response to emerging similarities and differences between individuals. At the same time, diffusion processes occur as individuals influence one another when they interact across network links. Indeed, each network link is a logical abstraction that aggregates many short-lived pairwise interactions of interest to the study. Traditionally, network co-evolution is studied by periodically taking static snapshots of social networks using surveys. Unfortunately, participation incentives make surveys costly to deliver, which makes it impractical to collect snapshots at fine temporal resolution. On the other hand, collecting data at wider time intervals requires participants to perform error-prone recall about long periods of time. This creates a difficult research tradeoff between data cost and data quality. More recently, techniques of Ecological Momentary Assessment (EMA) have been developed, involving repeated sampling of subjects' current behaviors and experiences in real time, in subjects' natural environments. This thesis project describes the design, implementation, and validation of a new platform for responsive EMA. The Open Dynamic Interaction Network (ODIN) platform is a cost-effective and flexible cell-phone-based platform to collect continuous-time sensor data and deliver contextual surveys to a study population. ODIN allows social and behavioral health researchers to instrument study protocols by specifying both the questions to be asked and the rules governing when questions should be asked over the duration of the study. Researcher-specified rules can reference sensor data (e.g. time, GPS, accelerometer-based activity, Bluetooth-based proximity to other participants) as well as the subject's previous answers. ODIN is composed of four backend services, two web user interfaces, and an Android application. A pilot study was conducted over the course of 30 days with 16 participants to evaluate the system. The results obtained from the pilot show that the system successfully collects the data relevant to the study and triggers questions according to the study's needs. Advisers: Jitender Deogun and Bilal Khan
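    The triggering mechanism can be pictured roughly as follows; this is a hypothetical Python sketch, not ODIN's actual rule language, and the names (`Rule`, `fire_due_surveys`) and context fields are invented for illustration:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

# Hypothetical ODIN-style triggering: a rule pairs a researcher-specified
# predicate over the current sensor context (and previous answers) with a
# survey to deliver when the predicate holds.

@dataclass
class Rule:
    survey_id: str
    condition: Callable[[Dict[str, Any]], bool]

def fire_due_surveys(rules: List[Rule], context: Dict[str, Any]) -> List[str]:
    """Return the surveys whose condition holds in the current context."""
    return [r.survey_id for r in rules if r.condition(context)]

rules = [
    # Ask about social context when another participant is nearby after 6pm.
    Rule("social_context",
         lambda c: c["hour"] >= 18 and c["bluetooth_peers"] > 0),
    # Follow up on activity only if the subject reported stress last time.
    Rule("activity_followup",
         lambda c: c["activity"] == "walking"
                   and c["answers"].get("stressed") == "yes"),
]

context = {"hour": 19, "bluetooth_peers": 2, "activity": "idle",
           "answers": {"stressed": "yes"}}
print(fire_due_surveys(rules, context))  # ['social_context']
```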

    Decompose and Conquer: Addressing Evasive Errors in Systems on Chip

    Modern computer chips comprise many components, including microprocessor cores, memory modules, on-chip networks, and accelerators. Such system-on-chip (SoC) designs are deployed in a variety of computing devices: from internet-of-things devices, to smartphones, to personal computers, to data centers. In this dissertation, we discuss evasive errors in SoC designs and how these errors can be addressed efficiently. In particular, we focus on two types of errors: design bugs and permanent faults. Design bugs originate from the limited amount of time allowed for design verification and validation. Thus, they are often found in functional features that are rarely activated. Complete functional verification, which can eliminate design bugs, is extremely time-consuming, thus impractical in modern complex SoC designs. Permanent faults are caused by failures of fragile transistors in nano-scale semiconductor manufacturing processes. Indeed, weak transistors may wear out unexpectedly within the lifespan of the design. Hardware structures that reduce the occurrence of permanent faults incur significant silicon area or performance overheads, thus they are infeasible for most cost-sensitive SoC designs. To tackle and overcome these evasive errors efficiently, we propose to leverage the principle of decomposition to lower the complexity of the software analysis or the hardware structures involved. To this end, we present several decomposition techniques, specific to major SoC components. We first focus on microprocessor cores, presenting a lightweight bug-masking analysis that decomposes a program into individual instructions to identify whether a design bug would be masked by the program's execution. We then move to memory subsystems: there, we offer an efficient memory consistency testing framework to detect buggy memory-ordering behaviors, which decomposes the memory-ordering graph into small components based on incremental differences. We also propose a microarchitectural patching solution for memory subsystem bugs, which augments each core node with a small piece of distributed programmable logic, instead of including a global patching module. In the context of on-chip networks, we propose two routing reconfiguration algorithms that bypass faulty network resources. The first computes short-term routes in a distributed fashion, localized to the fault region. The second decomposes application-aware routing computation into simple routing rules so as to quickly find deadlock-free, application-optimized routes in a fault-ridden network. Finally, we consider general accelerator modules in SoC designs. When a system includes many accelerators, the variety of interactions among them must be verified to catch buggy behavior. To this end, we decompose such inter-module communication into basic interaction elements, which can be reassembled into new, interesting tests. Overall, we show that the decomposition of complex software algorithms and hardware structures can significantly reduce overheads: up to three orders of magnitude in the bug-masking analysis and the application-aware routing, approximately 50 times in the routing reconfiguration latency, and 5 times on average in the memory-ordering graph checking. These overhead reductions come with losses in error coverage: 23% undetected bug-masking incidents, 39% non-patchable memory bugs, and occasionally we overlook rare patterns of multiple faults.
In this dissertation, we discuss the ideas and their trade-offs, and present future research directions. (PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies; https://deepblue.lib.umich.edu/bitstream/2027.42/147637/1/doowon_1.pd)
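    One of these decompositions can be shown in miniature. Checking memory-ordering behaviors commonly reduces to searching a happens-before graph for cycles; the Python sketch below rests on that textbook reduction and is not the dissertation's incremental framework:

```python
# A memory-ordering check as cycle detection in a happens-before graph whose
# nodes are memory operations. A cycle means the observed orderings cannot be
# reconciled into any total order -- a consistency violation.

def has_cycle(graph):
    """graph: dict node -> iterable of successor nodes (ordering edges)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:
                return True                    # back edge: ordering cycle
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# w1 -> r2 (reads-from), r2 -> w3 (program order), w3 -> w1 (coherence):
# the three observed orderings form a cycle, so the execution is buggy.
print(has_cycle({"w1": ["r2"], "r2": ["w3"], "w3": ["w1"]}))  # True
```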

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practised for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, as opposed to other artifacts (i.e. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of an AD-based test suite. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral type of UML model, and since the major revision in UML version 2.x it has a new Petri-net-like semantics. It has a wide application scope, including embedded, workflow and web-service systems. For this reason this thesis concentrates on AD models. The informal semantics of UML in general, and of ADs in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is transforming a UML model into an executable formal model. In the thesis, a three-step transformation methodology is proposed for resolving ambiguities in an AD model and then transforming it into a CPN representation, CPNs being a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in testing processes. The flow-oriented semantics of ADs suits modeling both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from ADs using a stochastic algorithm. In order to determine whether the generated test suite is adequate, two test suite adequacy analysis techniques, based on structural coverage and mutation, are proposed. In terms of structural coverage, two separate coverage criteria are also proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspective. Mutation analysis is a fault-based technique to determine whether the test suite is adequate for detecting particular types of faults. Four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is improving test suite efficiency without compromising effectiveness. One way of achieving this is identifying and removing redundant test cases. It has been shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary-computation-based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is extended to multi-objective minimization.
As redundancy is contextual, different criteria and their combinations can significantly change the solution test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models. Empirical results have shown that the techniques developed within the framework are effective in model-based test suite generation and optimization.
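    To ground the minimization problem, the sketch below casts it as set cover and solves it with the classic greedy heuristic; this is a baseline for exposition, not the thesis's evolutionary algorithm, and the test names and coverage sets are illustrative:

```python
# Test suite minimization as set cover: keep a subset of tests whose combined
# coverage equals that of the full suite, dropping redundant test cases.

def minimize(suite):
    """suite: dict test_name -> set of covered requirements/branches."""
    goal = set().union(*suite.values())
    covered, kept = set(), []
    while covered < goal:
        # Greedily pick the test contributing the most uncovered items.
        best = max(suite, key=lambda t: len(suite[t] - covered))
        kept.append(best)
        covered |= suite[best]
    return kept

suite = {
    "t1": {"a", "b"},
    "t2": {"b", "c"},
    "t3": {"a", "b", "c"},   # t3 alone covers everything t1 and t2 cover
    "t4": {"d"},
}
print(minimize(suite))  # ['t3', 't4'] -- t1 and t2 are redundant
```

    Greedy already hints at why the problem invites metaheuristics: once multiple criteria matter at once (coverage, execution cost, fault-detection ability), no single "best next test" rule exists, which is where the evolutionary and multi-objective formulations come in.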