
    Model-Checking Speculation-Dependent Security Properties: Abstracting and Reducing Processor Models for Sound and Complete Verification

    Spectre and Meltdown attacks in modern microprocessors represent a new class of attacks that has proven difficult to deal with. They expose vulnerabilities in hardware design that had gone unnoticed for years, revealing weaknesses in state-of-the-art verification processes and design practices. These attacks are OS-independent and do not exploit any software vulnerabilities. Moreover, they violate the security assumptions ensured by standard security procedures (e.g., address space isolation) and, as a result, every security mechanism built upon those guarantees. These vulnerabilities allow an attacker to retrieve leaked data without accessing the secret directly. Indeed, they make use of covert channels: mechanisms of hidden communication that convey sensitive information without any visible information flow between the malicious party and the victim. The root cause of this type of side-channel attack lies in the speculative and out-of-order execution of modern high-performance microarchitectures. Since modern processors are hard to verify with standard formal verification techniques, we present a methodology that shows how to transform a realistic model of a speculative and out-of-order processor into an abstract one. Following related formal verification approaches, we simplify the model under consideration through abstraction and refinement steps. We also present an approach to formally verify the abstract model using a standard model checker. The theoretical flow, which relies on established formal verification results, is introduced, and a proof sketch of its soundness and correctness is provided. Finally, we demonstrate the feasibility of our approach by applying it to a pipelined DLX RISC-inspired processor architecture. We show preliminary experimental results to support our claim, performing Bounded Model Checking with a state-of-the-art model checker.
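
    As a rough illustration of the bounded model-checking step (not the authors' actual tool flow), the sketch below unrolls a toy transition system up to a given bound with the Z3 SMT solver and asks whether a bad state is reachable; the transition relation and the property are placeholders, not the DLX pipeline model.

        # Minimal bounded model checking sketch using Z3 (illustrative only;
        # the transition system is a toy placeholder, not the DLX model).
        from z3 import Ints, Solver, Or, sat

        def bmc(k):
            # State: one counter. Safety property: the counter never reaches 4.
            xs = Ints(" ".join(f"x{i}" for i in range(k + 1)))
            s = Solver()
            s.add(xs[0] == 0)                        # initial state
            for i in range(k):
                s.add(Or(xs[i + 1] == xs[i] + 1,     # take a step
                         xs[i + 1] == xs[i]))        # or stutter
            s.add(Or(*[x == 4 for x in xs]))         # negation of the property
            return "counterexample" if s.check() == sat else "safe up to bound"

        for bound in range(1, 6):
            print(bound, bmc(bound))                 # a violation first appears at bound 4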

    Cyber-security for embedded systems: methodologies, techniques and tools

    The abstract is in the attachment.

    Model Checking at Scale: Automated Air Traffic Control Design Space Exploration

    Many possible solutions, differing in the assumptions and implementations of the components in use, are usually in competition during early design stages. Deciding which solution to adopt requires considering several trade-offs. Model checking is one way of comparing such designs; however, when the number of designs is large, building and validating so many models may be intractable. During our collaboration with NASA, we faced the challenge of considering a design space with more than 20,000 designs for the NextGen air traffic control system. To deal with this problem, we introduce a compositional, modular, parameterized approach that combines model checking with contract-based design to automatically generate large numbers of models from a set of possible components and their implementations. Our approach is fully automated, enabling the generation and validation of all target designs. The 1,620 designs that were most relevant to NASA were analyzed exhaustively. To deal with the massive amount of data generated, we apply novel data-analysis techniques that enable a rich comparison of the designs, including safety aspects. Our results were validated by NASA system designers and helped to identify both novel and known problematic configurations.
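
    The combinatorial blow-up that makes exhaustive manual modelling intractable is easy to picture: a concrete design is one implementation choice per component, so the design space is the Cartesian product of the per-component options. A minimal sketch of such parameterized design-space generation follows; the component and implementation names are invented placeholders, not NASA's actual design space.

        # Sketch of parameterized design-space enumeration: a design is one
        # implementation choice per component. Names are illustrative only.
        from itertools import product

        components = {
            "surveillance": ["radar_only", "adsb_only", "radar_plus_adsb"],
            "separation":   ["ground_managed", "self_separation"],
            "comm_link":    ["voice", "datalink", "hybrid"],
        }

        designs = [dict(zip(components, choice))
                   for choice in product(*components.values())]

        print(len(designs), "candidate designs")    # 3 * 2 * 3 = 18
        print(designs[0])
        # Each design would then be instantiated as a model (e.g., by composing
        # per-component contracts) and passed to the model checker.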

    IST Austria Thesis

    Designing and verifying concurrent programs is a notoriously challenging, time-consuming, and error-prone task, even for experts. This is due to the sheer number of possible interleavings of a concurrent program, all of which have to be tracked and accounted for in a formal proof. Inventing an inductive invariant that captures all interleavings of a low-level implementation is theoretically possible, but practically intractable. We develop a refinement-based verification framework that provides mechanisms to simplify proof construction by decomposing the verification task into smaller subtasks. In a first line of work, we present a foundation for refinement reasoning over structured concurrent programs. We introduce layered concurrent programs as a compact notation for multi-layer refinement proofs. A layered concurrent program specifies a sequence of connected concurrent programs, from most concrete to most abstract, such that common parts of different programs are written exactly once. Each program in this sequence is expressed as a structured concurrent program, i.e., a program over (potentially recursive) procedures, imperative control flow, gated atomic actions, structured parallelism, and asynchronous concurrency. This is in contrast to existing refinement-based verifiers, which represent concurrent systems as flat transition relations. We present a powerful refinement proof rule that decomposes refinement checking over structured programs into modular verification conditions. Refinement checking is supported by a new form of modular, parameterized invariants, called yield invariants, and by a linear permission system that enhances local reasoning. In a second line of work, we present two new reduction-based program transformations that target asynchronous programs. These transformations reduce the number of interleavings that need to be considered, thus reducing the complexity of invariants. Synchronization simplifies the verification of asynchronous programs by introducing the fiction, for proof purposes, that asynchronous operations complete synchronously; it summarizes an asynchronous computation as an immediate atomic effect. Inductive sequentialization establishes sequential reductions that capture every behavior of the original program up to reordering of coarse-grained commutative actions. A sequential reduction of a concurrent program is easy to reason about since it corresponds to a simple execution of the program in an idealized synchronous environment, where processes act in a fixed order and at the same speed. Our approach is implemented in the CIVL verifier, which has been successfully used for the verification of several complex concurrent programs. In our methodology, the overall correctness of a program is established piecemeal by focusing on the invariant required for each refinement step separately. While the programmer does the creative work of specifying the chain of programs and the inductive invariant justifying each link in the chain, the tool automatically constructs the verification conditions underlying each refinement step.
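
    To make the interleaving blow-up and the appeal of reduction concrete, the sketch below (an illustration, not CIVL itself) enumerates every interleaving of two small threads and checks whether they all reach the same final state; when they do, a reduction may soundly treat each thread's block of actions as a single atomic step, which is the intuition behind shrinking the proof burden as described above.

        # Illustrative sketch (not CIVL): enumerate all interleavings of two
        # threads and check whether every interleaving reaches the same state.
        from itertools import permutations

        def interleavings(a, b):
            """All merges of action lists a and b preserving per-thread order."""
            tags = ["A"] * len(a) + ["B"] * len(b)
            for order in set(permutations(tags)):
                ia = ib = 0
                run = []
                for t in order:
                    if t == "A":
                        run.append(a[ia]); ia += 1
                    else:
                        run.append(b[ib]); ib += 1
                yield run

        # Two threads incrementing a shared counter by different constants.
        thread_a = [lambda s: s + 1, lambda s: s + 2]
        thread_b = [lambda s: s + 10]

        finals = set()
        for run in interleavings(thread_a, thread_b):
            state = 0
            for step in run:
                state = step(state)
            finals.add(state)

        print(finals)   # {13}: all interleavings agree, so these blocks commute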

    Enabling computation of correlation bounds for finite-dimensional quantum systems via symmetrisation

    We present a technique for reducing, by several orders of magnitude, the computational requirements of evaluating semidefinite relaxations that bound the set of quantum correlations arising from finite-dimensional Hilbert spaces. The technique, which we make publicly available through a user-friendly software package, relies on exploiting symmetries present in the optimisation problem to reduce the number of variables and the block sizes in the semidefinite relaxations. It is widely applicable to problems encountered in quantum information theory and enables computations that were previously too demanding. We demonstrate its advantages and general applicability in several physical problems. In particular, we use it to robustly certify the non-projectiveness of high-dimensional measurements in a black-box scenario based on self-tests of d-dimensional symmetric informationally complete POVMs.
    Comment: A. T. and D. R. contributed equally to this project.
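
    The core idea can be sketched in a couple of equations (our notation, not necessarily the package's): when the relaxation is invariant under a finite symmetry group acting by permutations on the moment-matrix indices, the search can be restricted to group-invariant matrices, and an invariant matrix block-diagonalises in a symmetry-adapted basis, which is what cuts both the number of variables and the block sizes.

        % Sketch of the symmetry reduction (notation ours). Let the finite group
        % $G$ act on the moment-matrix indices through permutation matrices $P_g$,
        % leaving the objective and constraints invariant. Any feasible moment
        % matrix $\Gamma \succeq 0$ can be replaced by its group average
        \[
            \bar{\Gamma} \;=\; \frac{1}{|G|} \sum_{g \in G} P_g \, \Gamma \, P_g^{\top} \;\succeq\; 0,
        \]
        % which is feasible with the same objective value. Since $\bar{\Gamma}$
        % commutes with every $P_g$, a symmetry-adapted basis change $T$ brings it
        % to block-diagonal form,
        \[
            T^{\top} \bar{\Gamma}\, T \;=\; \bigoplus_{i} \bigl( B_i \otimes I_{d_i} \bigr),
        \]
        % so the semidefinite program only involves the much smaller blocks $B_i$.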

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Seventh Biennial Report: June 2003 - March 2005


    Artificial intelligence and model checking methods for in silico clinical trials

    Model-based approaches to the safety and efficacy assessment of pharmacological treatments (In Silico Clinical Trials, ISCT) hold the promise of decreasing the time and cost of the needed experimentation, reducing the need for animal and human testing, and enabling personalised medicine, where treatments tailored to each individual patient can be designed before actually being administered. Research in the Virtual Physiological Human (VPH) is pursuing this promise by developing quantitative mechanistic models of patient physiology and drugs. Through their many parameters, such models define physiological differences among individuals and different reactions to drug administration. Value assignments to model parameters can thus be regarded as Virtual Patients (VPs). Just as in vivo clinical trials test relevant drugs on suitable candidate patients, ISCT simulate the effect of relevant drugs on VPs covering the behaviours that might occur in vivo. Having a population of VPs representative of the whole spectrum of human patient behaviours is a key enabler of ISCT. However, VPH models of practical relevance are typically too complex to be solved analytically or to be formally analysed, so they are usually solved numerically within simulators. In this setting, Artificial Intelligence and Model Checking methods are typically employed. Indeed, a VP coupled with a pharmacological treatment is a closed-loop model in which the VP plays the role of the physical subsystem and the treatment strategy plays the role of the control software. Systems with this structure are known as Cyber-Physical Systems (CPSs), so simulation-based methodologies for CPSs can be employed within personalised medicine to compute representative VP populations and to conduct ISCT. In this thesis, we advance the state of the art of simulation-based Artificial Intelligence and Model Checking methods for ISCT in the following directions. First, we present a Statistical Model Checking (SMC) methodology based on hypothesis testing that, given a VPH model as input, computes a population of VPs which is representative (i.e., large enough to represent all relevant phenotypes, with a given degree of statistical confidence) and stratified (i.e., organised as a multi-layer hierarchy of homogeneous sub-groups). Stratification allows ISCT to adaptively focus on specific phenotypes and also supports prioritisation of patient sub-groups in follow-up in vivo clinical trials. Second, resting on a representative VP population, we design an ISCT aiming at optimising a complex treatment for a patient digital twin, that is, the virtual counterpart of that patient's physiology, defined by means of a set of VPs. Our ISCT employs an intelligent search that drives a VPH model simulator to seek the lightest but still effective treatment for the input patient digital twin. Third, to enable interoperability among VPH models defined with different modelling and simulation environments, and to increase the efficiency of our ISCT, we also design an optimised simulator driver that speeds up backtracking-based search algorithms driving simulators. Finally, we evaluate the effectiveness of the presented methodologies on state-of-the-art use cases and validate our results on retrospective clinical data.
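
    As a rough sketch of the kind of hypothesis-testing stopping rule on which the first contribution rests (the thesis' exact statistical procedure may differ in detail), one can compute how many consecutive random VP samples must fail to reveal a new phenotype before accepting, with confidence 1 - delta, that the probability mass of uncovered phenotypes is below eps:

        # Sketch of a coverage stopping rule for virtual-patient sampling
        # (illustrative; the thesis' actual SMC procedure may differ).
        # If the uncovered phenotype mass were at least eps, then N consecutive
        # samples all falling into already-seen phenotypes has probability at
        # most (1 - eps)**N. Requiring (1 - eps)**N <= delta yields the bound.
        import math

        def samples_needed(eps, delta):
            """Consecutive 'nothing new' samples needed to accept coverage."""
            return math.ceil(math.log(delta) / math.log(1.0 - eps))

        print(samples_needed(0.01, 0.05))   # about 299 samples for eps=1%, delta=5%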