34 research outputs found

    Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure

    We analyze the correctness of implementations of the snapshot data structure in terms of linearizability. We show that such implementations can be verified in polynomial time. Additionally, we identify a set of representative executions for testing and show that the correctness of each of these executions can be validated in linear time. These results represent a significant speedup, considering that verifying linearizability of implementations of concurrent data structures in general is EXPSPACE-complete in the number of program states, and testing linearizability is NP-complete in the length of the tested execution. The crux of our approach is identifying a class of executions, which we call simple, such that a snapshot implementation is linearizable if and only if all of its simple executions are linearizable. We then divide all possible non-linearizable simple executions into three categories and construct a small automaton that recognizes each category. Based on this result, we develop an automata-based approach and describe two implementations (one for verification and one for testing), along with an evaluation that demonstrates significant improvements over existing tools. For verification, we show that restricting a state-of-the-art tool to analyzing only simple executions saves resources and allows the analysis of more complex cases. Specifically, restricting attention to simple executions finds bugs in 27 instances, whereas without this restriction we were able to find only 14 of the 30 bugs in the instances we examined. We also show that our technique accelerates testing performance significantly: our implementation solves the complete set of 900 problems we generated, whereas the state-of-the-art linearizability testing tool solves only 554.
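
    To make the complexity gap concrete, the sketch below (an illustration only, not the paper's automaton-based method; the history encoding and names are assumptions) checks linearizability of a tiny snapshot history by brute force, enumerating every real-time-consistent total order — exactly the exponential search that restricting attention to simple executions avoids.

        from itertools import permutations

        # Each operation is (start, end, kind, arg, result): kind "update"
        # carries arg=(component, value); kind "scan" carries a result tuple
        # with one value per component. Times give the real-time order.

        def respects_real_time(order):
            # a may not be ordered before b if b responded before a was invoked
            for i, a in enumerate(order):
                for b in order[i + 1:]:
                    if b[1] < a[0]:
                        return False
            return True

        def legal_sequential(order, n_components):
            memory = [0] * n_components
            for _start, _end, kind, arg, result in order:
                if kind == "update":
                    component, value = arg
                    memory[component] = value
                elif tuple(memory) != result:   # a scan must see the current memory
                    return False
            return True

        def linearizable(history, n_components):
            return any(respects_real_time(order) and legal_sequential(order, n_components)
                       for order in permutations(history))

        # A scan overlapping two updates, having observed only the first one:
        history = [(0, 3, "update", (0, 1), None),
                   (1, 4, "update", (1, 2), None),
                   (2, 5, "scan", None, (1, 0))]
        print(linearizable(history, 2))  # True: order the scan between the updates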

    Characterizing Faults on Real-Time Systems Based on Grid Automata

    Real-time systems are, in general, critical systems that interact with the environment through input and output events regulated by time constraints. Because of their criticality, testing such systems requires rigorous approaches. Model-based testing approaches rely on formalisms that make testing activities more reliable. However, a model-based testing approach for real-time systems depends on techniques that can deal appropriately with the continuous evolution of time. Several testing approaches apply discretization techniques to represent the continuous behavior of timed models. Test suites can then be extracted from the discretized models to support conformance testing between specifications and their respective implementations. Evaluating test suites with respect to fault coverage is therefore an important task, but one rarely addressed by model-based testing approaches for real-time systems. In this work we propose a systematic strategy to identify faults in timed input/output automaton (TIOA) models based on their corresponding discretized models. We precisely define a fault model to support model-based testing activities such as coverage analysis and test case generation.
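
    As a toy illustration of the kind of discretization such approaches rest on (not the paper's grid automaton construction; the interval encoding is assumed), the sketch below samples a clock guard at a fixed granularity, turning one continuous timed transition into finitely many grid transitions.

        from fractions import Fraction

        # Sample the clock guard low <= x <= high at granularity g, so the
        # continuous transition yields one grid transition per sample point.

        def grid_points(low, high, g):
            low, high, g = Fraction(low), Fraction(high), Fraction(g)
            k = -(-low // g)        # ceil(low / g): first grid index inside the guard
            points = []
            while k * g <= high:
                points.append(k * g)
                k += 1
            return points

        print([str(p) for p in grid_points(1, 2, "1/2")])  # ['1', '3/2', '2']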

    A Mechanized Proof of Kleene’s Theorem in Why3

    In this dissertation we present a mathematically minded development of the correctness proof for the conversion of regular expressions into finite automata given by Kleene's theorem, on the basis of their equivalent expressive power. We formalise a functional implementation of the algorithm and prove, in full detail, the soundness of its mathematical definition, working within the Why3 framework to develop a mechanically verified implementation of the conversion algorithm. The motivation for this work is to test the feasibility of the deductive approach to software verification and to pave the way for similar proofs in the context of a static analysis approach to (object-oriented) programming, in particular on the subject of behavioural types in typestate settings, whose expressiveness stands between regular and context-free languages and which can therefore greatly benefit from mechanically certified implementations.
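
    For context, one direction of Kleene's theorem is commonly proved via Thompson's construction. The Python sketch below is an unverified illustration of that construction, not the Why3 development described above; the tuple encoding of regular expressions is our own assumption.

        from itertools import count

        _state_ids = count()

        def thompson(r):
            """Return (start, accept, trans); trans maps (state, symbol) to a
            set of successors, with symbol None standing for an epsilon move.
            Accept states of sub-NFAs never have outgoing moves, so merging
            dictionaries below is safe."""
            kind = r[0]
            if kind == "char":
                s, f = next(_state_ids), next(_state_ids)
                return s, f, {(s, r[1]): {f}}
            if kind == "cat":
                s1, f1, t1 = thompson(r[1])
                s2, f2, t2 = thompson(r[2])
                return s1, f2, {**t1, **t2, (f1, None): {s2}}
            if kind == "alt":
                s1, f1, t1 = thompson(r[1])
                s2, f2, t2 = thompson(r[2])
                s, f = next(_state_ids), next(_state_ids)
                return s, f, {**t1, **t2, (s, None): {s1, s2},
                              (f1, None): {f}, (f2, None): {f}}
            if kind == "star":
                s1, f1, t1 = thompson(r[1])
                s, f = next(_state_ids), next(_state_ids)
                return s, f, {**t1, (s, None): {s1, f}, (f1, None): {s1, f}}

        def eps_closure(states, trans):
            stack, seen = list(states), set(states)
            while stack:
                q = stack.pop()
                for q2 in trans.get((q, None), ()):
                    if q2 not in seen:
                        seen.add(q2)
                        stack.append(q2)
            return seen

        def accepts(nfa, word):
            start, accept, trans = nfa
            current = eps_closure({start}, trans)
            for c in word:
                step = {q2 for q in current for q2 in trans.get((q, c), ())}
                current = eps_closure(step, trans)
            return accept in current

        # (a|b)* a  -- accepts words over {a, b} ending in a
        regex = ("cat", ("star", ("alt", ("char", "a"), ("char", "b"))), ("char", "a"))
        nfa = thompson(regex)
        print(accepts(nfa, "abba"), accepts(nfa, "abb"))  # True False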

    A Constructive Formalisation of Semi-algebraic Sets and Functions

    Semi-algebraic sets and semi-algebraic functions are essential to specify and certify cylindrical algebraic decomposition algorithms. We formally define in Coq the base operations on semi-algebraic sets and functions using embedded first-order formulae over the language of real closed fields, and we prove the correctness of their geometrical interpretation. In doing so, we exploit a previous formalisation of quantifier elimination on such embedded formulae to guarantee the decidability of several first-order properties and keep our development constructive. We also exploit it to formalise formula substitution without having to handle bound variables.
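
    For reference, the standard notions such formalisations capture can be stated as follows (informally, not as the Coq development's exact encoding): a semi-algebraic subset of R^n over a real closed field R is a finite union of solution sets of polynomial sign conditions, and a function is semi-algebraic when its graph is.

        % S is semi-algebraic iff it is a finite union of basic sets cut out
        % by one polynomial equation and finitely many strict inequalities:
        S \;=\; \bigcup_{i=1}^{m}
          \bigl\{\, x \in R^n \;\bigm|\; p_i(x) = 0,\;
            q_{i,1}(x) > 0,\ \dots,\ q_{i,k_i}(x) > 0 \,\bigr\},
          \qquad p_i,\, q_{i,j} \in R[X_1, \dots, X_n].
        % A function f : A \to R^m with semi-algebraic domain A \subseteq R^n
        % is semi-algebraic iff its graph is semi-algebraic in R^{n+m}:
        f \text{ semi-algebraic} \iff
          \{\, (x, f(x)) \mid x \in A \,\} \text{ semi-algebraic in } R^{n+m}.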

    RitHM: A Modular Software Framework for Runtime Monitoring Supporting Complete and Lossy Traces

    Runtime verification (RV) is an effective and automated method for specification-based offline testing as well as online monitoring of complex real-world systems. Firstly, a software framework for RV needs to exhibit certain design features to support usability, modifiability, and efficiency. While usability and modifiability are important for supporting expressive logical formalisms, efficiency is required to reduce runtime overhead. Secondly, most existing techniques assume the existence of a complete execution trace for RV. However, real-world systems often produce incomplete execution traces due to reasons such as network issues and logging failures. A few verification techniques have recently emerged for verifying incomplete execution traces; while some of these techniques sacrifice soundness, others are too restrictive in their tolerance for incompleteness. To address the first problem, we introduce RitHM, a comprehensive framework that enables the development and integration of efficient verification techniques. RitHM's design takes into account various state-of-the-art techniques that have been developed to optimize RV with respect to monitor efficiency and the expressivity of logical formalisms. RitHM's design supports modifiability by allowing the reuse of efficient monitoring algorithms in the form of plugins, which can utilize heterogeneous back-ends, and it supports extensions of logical formalisms through logic plugins. It also facilitates interoperability between implementations of monitoring algorithms, which allows different efficient algorithms to be used for monitoring different sub-parts of a specification. We evaluate RitHM's architecture, and those of a few other tools, using the Architecture Tradeoff Analysis Method (ATAM). We also report empirical results where RitHM is used for monitoring real-world systems; the results underscore the importance of RitHM's various design features. To address the second problem, we identify a fragment of LTL specifications that can be soundly monitored in the presence of transient loss events in an execution trace. We present an offline algorithm that identifies whether an LTL formula is monitorable in the presence of a transient loss of events and constructs a loss-tolerant monitor depending on the monitorability of the formula. Our experimental results demonstrate that our method increases the applicability of RV for monitoring various real-world applications that produce lossy traces. The extra overhead caused by our constructed monitors is minimal, as demonstrated by applying our method to commonly used patterns of LTL formulas.
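
    The loss-tolerance idea can be seen on the simplest safety formula. The sketch below is a minimal illustration, not RitHM's actual plugin API; the event names and the LOSS gap marker are assumptions. It monitors the invariant G(not error) over a possibly lossy trace: observed violations are always real, while gaps only degrade the verdict to "unknown" — the soundness property the thesis targets.

        from enum import Enum

        class Verdict(Enum):
            TRUE_SO_FAR = "no violation observed"
            UNKNOWN = "no violation observed, but a gap may hide one"
            FALSE = "violation observed"

        def monitor_invariant(trace):
            saw_gap = False
            for event in trace:
                if event == "LOSS":
                    saw_gap = True           # transient loss of events
                elif event == "error":
                    return Verdict.FALSE     # sound: this violation really happened
            return Verdict.UNKNOWN if saw_gap else Verdict.TRUE_SO_FAR

        print(monitor_invariant(["ok", "ok", "ok"]))       # Verdict.TRUE_SO_FAR
        print(monitor_invariant(["ok", "LOSS", "ok"]))     # Verdict.UNKNOWN
        print(monitor_invariant(["ok", "LOSS", "error"]))  # Verdict.FALSE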

    Verifying algorithms and data structures in Dafny

    Bachelor's thesis (Trabajo de Fin de Grado) in Computer Science and Mathematics (Universidad Complutense, Facultad de Informática, academic year 2015/2016). The formal verification of a program is a proof that it works according to a description of its expected behaviour in every possible execution. The specification of the desired behaviour can use various techniques and go into more or less detail, but to earn the title of formal it must be mathematically rigorous. The study and manual exercise of some of these techniques is part of the common curriculum of the degree programmes at the School of Computer Science and of the Computer Science itinerary at the School of Mathematics of the Universidad Complutense de Madrid, as is the case for verification with pre- and postconditions and Hoare logic. In the present work, the automation of these methods is explored through the Dafny language and verifier, which has been used to specify and verify algorithms and data structures of varying complexity. Dafny is a programming language designed to integrate specification and allow automatic verification of its programs, with the help of the programmer and of a theorem prover behind the scenes. Dafny is a project in active development, although mature enough, and it generates executable programs.
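
    Dafny expresses contracts with statically checked requires/ensures clauses and loop invariants. As a rough runtime-checked analogue only (not Dafny syntax), the Python sketch below annotates binary search with its pre- and postconditions and its loop invariant as assertions.

        def binary_search(a, key):
            # requires: a is sorted in nondecreasing order
            assert all(a[i] <= a[i + 1] for i in range(len(a) - 1))
            lo, hi = 0, len(a)
            while lo < hi:
                # loop invariant (runtime-checked): the window stays in bounds;
                # informally, key can only occur inside a[lo:hi]
                assert 0 <= lo <= hi <= len(a)
                mid = (lo + hi) // 2
                if a[mid] < key:
                    lo = mid + 1
                elif a[mid] > key:
                    hi = mid
                else:
                    return mid            # ensures (found case): a[result] == key
            assert key not in a           # ensures (not-found case)
            return -1

        print(binary_search([1, 3, 5, 7], 5), binary_search([1, 3, 5, 7], 4))  # 2 -1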

    On the connection of probabilistic model checking, planning, and learning for system verification

    This thesis presents approaches using techniques from the model checking, planning, and learning communities to make systems more reliable and perspicuous. First, two heuristic search and dynamic programming algorithms are adapted to check extremal reachability probabilities, expected accumulated rewards, and their bounded versions on general Markov decision processes (MDPs), considerably enlarging the problem space originally solvable by these algorithms. Correctness and optimality proofs for the adapted algorithms are given, and a comprehensive case study on established benchmarks shows that the implementation, called Modysh, is competitive with state-of-the-art model checkers and even outperforms them on very large state spaces. Second, Deep Statistical Model Checking (DSMC) is introduced, usable for quality assessment and learning-pipeline analysis of systems incorporating trained decision-making agents such as neural networks (NNs). The idea of DSMC is to use statistical model checking to assess NNs that resolve nondeterminism in systems modeled as MDPs. The versatility of DSMC is exemplified in a number of case studies on Racetrack, an MDP benchmark designed for this purpose that flexibly models the autonomous driving challenge. A comprehensive scalability study demonstrates that DSMC is a lightweight technique that tackles the complexity of NN analysis in combination with the state space explosion problem.
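
    For orientation, the basic fixed point behind maximal reachability probabilities on an MDP can be sketched as plain value iteration; the thesis' heuristic-search algorithms refine this kind of scheme, and the dictionary encoding below is assumed purely for illustration.

        def max_reach_prob(mdp, goal, eps=1e-9):
            # mdp maps state -> {action: [(prob, successor), ...]}; goal states
            # are treated as absorbing. v[s] converges to the maximal
            # probability of reaching goal from s.
            v = {s: (1.0 if s in goal else 0.0) for s in mdp}
            while True:
                delta = 0.0
                for s in mdp:
                    if s in goal or not mdp[s]:
                        continue
                    best = max(sum(p * v[t] for p, t in succ)
                               for succ in mdp[s].values())
                    delta = max(delta, abs(best - v[s]))
                    v[s] = best
                if delta < eps:
                    return v

        # Tiny example: from s0, action "a" reaches goal g with prob 0.5 or
        # loops; action "b" leads to a dead end.
        mdp = {"s0": {"a": [(0.5, "g"), (0.5, "s0")],
                      "b": [(1.0, "dead")]},
               "g": {},
               "dead": {"stay": [(1.0, "dead")]}}
        print(max_reach_prob(mdp, {"g"})["s0"])  # ~1.0: always choose action "a"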

    Catalog 2010-2011


    Catalog 2012-2013
