348 research outputs found

    From Input to Failure: Explaining Program Behavior via Cause-Effect Chains

    Debugging a fault in a program is an error-prone and resource-intensive process. My doctoral research aims to support developers in this process by integrating test generation as a feedback loop into a novel fault diagnosis, narrowing down the causality by validating or refuting suggested hypotheses. I will combine input, output, and state to detect relevant relations for an immersive fault diagnosis. Further, I want to introduce an approach for targeted test generation that leverages statistical fault localization to extract oracles based on execution features and identify failing tests.
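
    The statistical fault localization step mentioned above can be illustrated with a standard suspiciousness metric. The sketch below is a generic illustration of spectrum-based fault localization using the Ochiai score, not the author's tool; the coverage numbers and element names are hypothetical.

    # A minimal sketch of spectrum-based (statistical) fault localization
    # using the Ochiai metric; coverage data below is hypothetical.
    from math import sqrt

    def ochiai(failed_cov, passed_cov, total_failed):
        """Suspiciousness of one program element, given how many failing
        and passing tests cover it."""
        denom = sqrt(total_failed * (failed_cov + passed_cov))
        return failed_cov / denom if denom else 0.0

    # coverage[element] = (covered by failing tests, covered by passing tests)
    coverage = {"line 3": (2, 0), "line 7": (2, 5), "line 9": (0, 5)}
    TOTAL_FAILED = 2

    ranking = sorted(coverage,
                     key=lambda e: ochiai(*coverage[e], TOTAL_FAILED),
                     reverse=True)
    print(ranking)  # elements covered mostly by failing tests rank first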

    On the connection of probabilistic model checking, planning, and learning for system verification

    This thesis presents approaches that use techniques from the model checking, planning, and learning communities to make systems more reliable and perspicuous. First, two heuristic search and dynamic programming algorithms are adapted to check extremal reachability probabilities, expected accumulated rewards, and their bounded versions on general Markov decision processes (MDPs), considerably enlarging the problem space these algorithms can solve. Correctness and optimality proofs for the adapted algorithms are given, and a comprehensive case study on established benchmarks shows that the implementation, called Modysh, is competitive with state-of-the-art model checkers and even outperforms them on very large state spaces. Second, Deep Statistical Model Checking (DSMC) is introduced for quality assessment and learning-pipeline analysis of systems incorporating trained decision-making agents such as neural networks (NNs). The idea of DSMC is to use statistical model checking to assess NNs that resolve nondeterminism in systems modeled as MDPs. The versatility of DSMC is exemplified in a number of case studies on Racetrack, an MDP benchmark designed for this purpose that flexibly models the autonomous driving challenge. A comprehensive scalability study demonstrates that DSMC is a lightweight technique that tackles the complexity of NN analysis in combination with the state-space explosion problem.
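
    For context, the extremal reachability probabilities mentioned above are classically computed by value iteration on the MDP. The sketch below shows that baseline on a hypothetical toy MDP; it is not the Modysh implementation.

    # A minimal sketch of maximal reachability probabilities on an MDP via
    # value iteration, the classical baseline for the problem described
    # above. The toy MDP below is hypothetical.
    def max_reachability(states, actions, P, goal, eps=1e-8):
        """P[(s, a)] = list of (successor, probability) pairs."""
        v = {s: 1.0 if s in goal else 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                if s in goal:
                    continue
                best = max(sum(p * v[t] for t, p in P[(s, a)])
                           for a in actions(s))
                delta = max(delta, abs(best - v[s]))
                v[s] = best
            if delta < eps:
                return v

    states = ["s0", "s1", "goal", "sink"]
    actions = lambda s: ["a", "b"] if s == "s0" else ["a"]
    P = {("s0", "a"): [("s1", 1.0)],
         ("s0", "b"): [("goal", 0.5), ("sink", 0.5)],
         ("s1", "a"): [("goal", 0.9), ("sink", 0.1)],
         ("sink", "a"): [("sink", 1.0)]}
    print(max_reachability(states, actions, P, goal={"goal"})["s0"])  # 0.9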

    Verification of Concurrent Systems: Optimality, Scalability and Applicability

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, defended 14-10-2020. Both verification and testing of concurrent systems require exploring all possible non-deterministic interleavings that a concurrent execution may have, as any of these interleavings may reveal an erroneous behavior of the system. This introduces a combinatorial explosion in the number of program states that must be considered, which often leads to a computationally intractable problem. The overall goal of this thesis is to investigate novel techniques for testing and verification of concurrent programs that reduce this combinatorial explosion...
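
    The combinatorial explosion of interleavings can be made concrete with a brute-force enumeration: two threads with n1 and n2 steps already admit (n1+n2)! / (n1! * n2!) schedules. The sketch below is a generic illustration of the problem, not a technique from the thesis.

    # Brute-force enumeration of thread interleavings, feasible only for
    # tiny examples; the step names are hypothetical.
    from itertools import permutations

    def interleavings(*threads):
        """Yield every schedule preserving each thread's internal order."""
        tagged = [(i, step) for i, t in enumerate(threads) for step in t]
        for perm in permutations(tagged):
            if all([s for i, s in perm if i == tid] == list(t)
                   for tid, t in enumerate(threads)):
                yield [s for _, s in perm]

    schedules = list(interleavings(["w1", "w2"], ["r1", "r2"]))
    print(len(schedules))  # 6 = 4! / (2! * 2!); grows factorially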

    Computational modelling of diffusion magnetic resonance imaging based on cardiac histology

    The exact relationship between changes in myocardial microstructure as a result of heart disease and the signal measured using diffusion tensor cardiovascular magnetic resonance (DT-CMR) is currently not well understood. Computational modelling of diffusion in combination with realistic numerical phantoms offers a unique opportunity to study the effects of pathologies, or the efficacy of improvements to acquisition protocols, in a controlled in-silico environment. In this work, Monte Carlo random walk (MCRW) methods are used to simulate diffusion in a histology-based 3D model of the myocardium, and the sensitivity of typical DT-CMR sequences to changes in tissue properties is assessed. First, myocardial tissue is analysed to identify important geometric features and diffusion parameters. A two-compartment model is considered in which intra-cellular compartments with a reduced bulk diffusion coefficient are separated from extra-cellular space by permeable membranes. Secondary structures such as groups of cardiomyocytes (sheetlets) must also be included, and different methods are developed to automatically generate realistic histology-based substrates. Next, in-silico simulation of DT-CMR is reviewed and a tool to generate idealised versions of common pulse sequences is discussed. An efficient GPU-based numerical scheme for obtaining a continuum solution to the Bloch-Torrey equations is presented and applied to domains directly extracted from histology images. To verify the numerical methods used throughout this work, an analytical solution to the diffusion equation in 1D is described. It relies on spectral analysis of the diffusion operator and requires that all roots of a complex transcendental equation are found; to facilitate a fast and reliable solution, a novel root-finding algorithm based on Chebyshev polynomial interpolation is proposed. To simulate realistic 3D geometries, MCRW methods are employed, and a parallel simulator for both grid-based and surface-mesh-based geometries is presented. The presence of permeable membranes requires special treatment; for this, a commonly used transit model is analysed. Finally, the methods above are applied to study the effect of various model and sequence parameters on DT-CMR results. Simulations with impermeable membranes reveal sequence-specific sensitivity to extra-cellular volume fraction and diffusion coefficients; by including membrane permeability, DT-CMR results further approach values expected in vivo.
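
    As a rough illustration of the MCRW approach, the sketch below simulates 1D random walkers with a single permeable membrane crossed with a fixed transit probability. All parameter values, including the transit probability, are assumed for demonstration and are not taken from the thesis.

    # A 1D Monte Carlo random walk with one permeable membrane at x = 0;
    # parameters are illustrative, not calibrated.
    import random

    D = 2.0e-3                    # bulk diffusion coefficient, mm^2/s (assumed)
    DT = 1.0e-5                   # time step, s (assumed)
    STEP = (2 * D * DT) ** 0.5    # 1D step length implied by D and DT
    P_TRANSIT = 0.05              # membrane crossing probability (assumed)

    def walk(x0, n_steps):
        x = x0
        for _ in range(n_steps):
            dx = STEP if random.random() < 0.5 else -STEP
            if (x < 0.0) != (x + dx < 0.0) and random.random() >= P_TRANSIT:
                dx = -dx          # walker reflects off the membrane
            x += dx
        return x

    random.seed(1)
    final = [walk(-0.01, 10_000) for _ in range(1_000)]
    print(sum(x >= 0.0 for x in final) / len(final))  # fraction that crossed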

    Review and classification of trajectory summarisation algorithms: From compression to segmentation

    With the continuous development and cost reduction of positioning and tracking technologies, large numbers of trajectories are being exploited in multiple domains for knowledge extraction. A trajectory is formed by a large number of measurements, many of which are unnecessary to describe the actual trajectory of the vehicle, or even harmful due to sensor noise. This not only consumes large amounts of memory but also makes the knowledge-extraction process more difficult. Trajectory summarisation techniques can solve this problem, generating a smaller, more manageable representation and even semantic segments. In this comprehensive review, we explain and classify techniques for the summarisation of trajectories according to their search strategy and point-evaluation criteria, describing connections with the line simplification problem. We also explain several concepts specific to the trajectory summarisation problem. Finally, we outline recent trends and best practices for continuing research on summarisation algorithms. The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: this work was funded by public research projects of the Spanish Ministry of Economy and Competitiveness (MINECO), reference TEC2017-88048-C2-2-
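
    The connection to line simplification mentioned above can be illustrated with Douglas-Peucker, the classical algorithm in that family. The sketch below is a generic implementation with an illustrative tolerance and track, not one of the reviewed methods.

    # Douglas-Peucker line simplification on 2D points; tol and track are
    # illustrative.
    def point_line_dist(p, a, b):
        """Perpendicular distance from p to the infinite line through a, b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        norm = (dx * dx + dy * dy) ** 0.5
        if norm == 0.0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dx * (ay - py) - dy * (ax - px)) / norm

    def douglas_peucker(points, tol):
        """Keep endpoints; recurse on the farthest point if beyond tol."""
        if len(points) < 3:
            return list(points)
        idx, dmax = 0, 0.0
        for i in range(1, len(points) - 1):
            d = point_line_dist(points[i], points[0], points[-1])
            if d > dmax:
                idx, dmax = i, d
        if dmax <= tol:                   # whole span close to the chord
            return [points[0], points[-1]]
        left = douglas_peucker(points[:idx + 1], tol)
        right = douglas_peucker(points[idx:], tol)
        return left[:-1] + right          # drop the duplicated split point

    track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
    print(douglas_peucker(track, tol=1.0))
    # [(0, 0), (2, -0.1), (3, 5), (5, 7)]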

    Understanding Concurrency Vulnerabilities in Linux Kernel

    While there is a large body of work on analyzing concurrency-related software bugs and developing techniques for detecting and patching them, little attention has been given to concurrency-related security vulnerabilities. The two are different in that not all bugs are vulnerabilities: for a bug to be exploitable, there needs to be a way for attackers to trigger its execution and cause damage, e.g., by revealing sensitive data or running malicious code. To fill the gap, we conduct the first empirical study of concurrency vulnerabilities reported in the Linux operating system in the past ten years. We focus on analyzing the confirmed vulnerabilities archived in the Common Vulnerabilities and Exposures (CVE) database, which are then categorized into different groups based on bug types, exploit patterns, and patch strategies adopted by developers. We use code snippets to illustrate individual vulnerability types and patch strategies, and statistics to illustrate the entire landscape, including the percentage of each vulnerability type. We hope to shed some light on the problem: concurrency vulnerabilities continue to pose a serious threat to system security, and it is difficult even for kernel developers to analyze and patch them. Therefore, more efforts are needed to develop tools and techniques for analyzing and patching these vulnerabilities.
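
    The check-then-act pattern behind many such vulnerabilities can be sketched outside the kernel. The illustration below uses plain Python threads and a toy account, not kernel code; the sleep only widens the race window for demonstration.

    # A minimal check-then-act (TOCTOU-style) race: both threads pass the
    # check before either acts on it.
    import threading
    import time

    balance = 100

    def withdraw(amount):
        global balance
        if balance >= amount:     # check ...
            time.sleep(0.01)      # both threads can pass the check here
            balance -= amount     # ... then act on a now-stale check

    threads = [threading.Thread(target=withdraw, args=(100,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(balance)  # typically -100: both withdrawals passed one check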