
    Extending Hybrid CSP with Probability and Stochasticity

    Probabilistic and stochastic behavior is omnipresent in computer-controlled systems, in particular in so-called safety-critical hybrid systems, because of fundamental properties of nature, uncertain environments, or simplifications introduced to overcome complexity. The tight intertwining of discrete, continuous and stochastic dynamics complicates the modelling, analysis and verification of stochastic hybrid systems (SHSs). This issue has been investigated extensively in the literature, but it remains challenging, as no promising general solutions are available yet. In this paper, we contribute a general compositional approach for modelling and verifying SHSs. First, we extend Hybrid CSP (HCSP), a very expressive, process-algebra-like formal modelling language for hybrid systems, with probability and stochasticity in order to model SHSs; the extension is called stochastic HCSP (SHCSP). To this end, ordinary differential equations (ODEs) are generalized to stochastic differential equations (SDEs) and non-deterministic choice is replaced by probabilistic choice. Then, we extend Hybrid Hoare Logic (HHL) to specify and reason about SHCSP processes. We demonstrate our approach on a real-world example.
    Comment: The conference version of this paper has been accepted at SETTA 201
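
    A minimal numerical sketch (plain Python, not SHCSP syntax) of the two extensions the abstract names: an ODE generalized to an SDE, simulated with the Euler-Maruyama scheme, and a non-deterministic choice replaced by a probabilistic one. The dynamics, rates and the safety query below are invented for illustration only.

```python
# Sketch only: an ODE dx/dt = -a*x generalised to the SDE
# dX = -a*X dt + sigma*X dW (Euler-Maruyama), plus a probabilistic
# (rather than non-deterministic) choice between two controller gains.
import math
import random

def euler_maruyama(x0, a, sigma, dt, steps, rng):
    """Simulate dX = -a*X dt + sigma*X dW on a fixed time grid."""
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x += -a * x * dt + sigma * x * dw
    return x

def run_once(rng):
    # Probabilistic choice: 'slow' gain with probability 0.3, 'fast' with 0.7.
    a = 0.5 if rng.random() < 0.3 else 2.0
    return euler_maruyama(x0=1.0, a=a, sigma=0.2, dt=0.01, steps=500, rng=rng)

rng = random.Random(0)
samples = [run_once(rng) for _ in range(10_000)]
# Empirical estimate of a simple safety property: P(X(5) < 0.8)
print(sum(x < 0.8 for x in samples) / len(samples))
```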

    Probabilistic Reachability Analysis for Large Scale Stochastic Hybrid Systems

    This paper studies probabilistic reachability analysis for large-scale stochastic hybrid systems (SHS) as a problem of rare-event estimation. In the literature, advanced rare-event estimation theory has recently been embedded within a stochastic analysis framework, which has led to significant novel results in rare-event estimation for a diffusion process using sequential Monte Carlo simulation. This paper presents that rare-event estimation theory directly in terms of probabilistic reachability analysis of an SHS, and develops novel theory that extends these results to large-scale SHS in which a very large number of rare discrete modes may contribute significantly to the reach probability. Essentially, the approach taken is to introduce an aggregation of the discrete modes, and to develop importance sampling relative to the rare switching between the aggregation modes. The practical working of this approach is demonstrated for the safety verification of an advanced air traffic control example.
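
    The rare-event flavour of the problem can be illustrated with a generic importance-sampling estimator (an exponential tilt on a Gaussian random walk). This is only a sketch of the underlying idea, not the paper's sequential Monte Carlo scheme or its mode-aggregation-based importance sampling, and all parameters are made up.

```python
# Estimate the tiny probability p = P(S_N >= b), where S_N is a sum of N
# i.i.d. N(0,1) increments, by sampling under a tilted law N(theta, 1) and
# reweighting with the likelihood ratio exp(-theta*S_N + N*theta^2/2).
import math
import random

def is_estimate(n_steps, b, n_samples, rng):
    theta = b / n_steps                      # tilt so the walk drifts toward b
    total = 0.0
    for _ in range(n_samples):
        s = 0.0
        for _ in range(n_steps):
            s += rng.gauss(theta, 1.0)       # sample under the tilted law
        if s >= b:
            total += math.exp(-theta * s + n_steps * theta * theta / 2.0)
    return total / n_samples

rng = random.Random(1)
print(is_estimate(n_steps=50, b=35.0, n_samples=20_000, rng=rng))
# Exact value for comparison: P(N(0,50) >= 35) = 0.5*erfc(35/sqrt(2*50))
print(0.5 * math.erfc(35.0 / math.sqrt(2 * 50)))
```

    Plain Monte Carlo with the same number of samples would almost never hit the event, which is why reach-probability estimation for rare events relies on importance sampling or splitting-style sequential schemes.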

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but who lack expertise in formal verification or modelling.
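
    The sensor-failure property quoted above can be made concrete with a toy example. The sketch below, in plain Python rather than a model checker's input language, computes a bounded reachability probability on a small discrete-time Markov chain by fixed-point (value) iteration, which is the core computation behind this kind of quantitative verification. The chain and all of its numbers are invented.

```python
# States: 0 = both sensors OK, 1 = one sensor failed, 2 = both failed (target).
# All transition probabilities are illustrative only.
transitions = {
    0: {0: 0.98, 1: 0.02},
    1: {0: 0.5, 1: 0.4999, 2: 0.0001},   # repair, stay degraded, second failure
    2: {2: 1.0},                         # absorbing failure state
}
target = {2}

def bounded_reach_probability(transitions, target, steps):
    """P(reach target within `steps` transitions), computed per state."""
    p = {s: (1.0 if s in target else 0.0) for s in transitions}
    for _ in range(steps):
        p = {s: (1.0 if s in target else
                 sum(pr * p[t] for t, pr in transitions[s].items()))
             for s in transitions}
    return p

# Bounded-time analogue of the quoted property:
# "the probability of both sensors failing within 100 steps is below 0.001".
probs = bounded_reach_probability(transitions, target, steps=100)
print(probs[0], probs[0] < 0.001)
```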

    Analysis of Non-Linear Probabilistic Hybrid Systems

    This paper shows how to compute, for probabilistic hybrid systems, the clock approximation and the linear phase-portrait approximation that were proposed for non-probabilistic processes by Henzinger et al. The techniques make it possible to define a rectangular probabilistic process from a non-rectangular one, hence allowing the model-checking of any class of systems. Clock approximation, which applies under some restrictions, aims at replacing a non-rectangular variable by a clock variable. Linear phase-portrait approximation applies without restriction and yields an approximation that simulates the original process. The conditions that we need for probabilistic processes are the same as those for the classical case.
    Comment: In Proceedings QAPL 2011, arXiv:1107.074
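
    A toy, one-dimensional illustration of the over-approximation idea behind rectangular (phase-portrait style) abstraction: while the state remains in a region, a non-rectangular flow is bounded by a constant rate interval, so exact reach sets are contained in the rectangular ones. The flow and numbers below are invented and do not reproduce the constructions of Henzinger et al. or their probabilistic counterparts.

```python
# Rectangular over-approximation of a non-rectangular flow on one region.
def rectangular_reach(x_lo, x_hi, c_min, c_max, t):
    """Reach interval at time t for initial set [x_lo, x_hi], dx/dt in [c_min, c_max]."""
    return (x_lo + c_min * t, x_hi + c_max * t)

# Non-rectangular flow dx/dt = x**2 on the region x in [1, 2]: the derivative
# there lies in [1, 4]. Initial set [1.0, 1.2], horizon t = 0.1 (small enough
# that every trajectory stays inside the region).
lo, hi = rectangular_reach(1.0, 1.2, c_min=1.0, c_max=4.0, t=0.1)

# Exact solution of dx/dt = x**2 is x(t) = x0 / (1 - x0*t); check containment.
exact_lo = 1.0 / (1 - 1.0 * 0.1)
exact_hi = 1.2 / (1 - 1.2 * 0.1)
print((lo, hi), (exact_lo, exact_hi), lo <= exact_lo and exact_hi <= hi)
```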

    Safety Verification of Fault Tolerant Goal-based Control Programs with Estimation Uncertainty

    Fault tolerance and safety verification of control systems that have state-variable estimation uncertainty are essential for the success of autonomous robotic systems. A software control architecture called the Mission Data System, developed at the Jet Propulsion Laboratory, uses goal networks as the control program for autonomous systems. Certain types of goal networks can be converted into linear hybrid systems and verified for safety using existing symbolic model checking software. A process for calculating the probability of failure of certain classes of verifiable goal networks due to state estimation uncertainty is presented. A verifiable example task is presented, and the failure probability of the control program due to estimation uncertainty is found.
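
    The role of estimation uncertainty can be sketched as follows, assuming (purely for illustration) a scalar safety threshold and Gaussian estimation error; this is not the paper's goal-network procedure, and all thresholds and standard deviations are invented.

```python
# Probability that a constraint verified on the *estimated* state is violated
# by the *true* state, when the true state is N(x_est, sigma^2).
import math

def violation_probability(x_est, sigma, x_max):
    """P(true state > x_max) for true state ~ N(x_est, sigma^2)."""
    z = (x_max - x_est) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Constraint x <= 10.0 holds for the estimate x_est = 9.2, but the estimator
# has standard deviation 0.5, so a residual failure probability remains:
p_single = violation_probability(x_est=9.2, sigma=0.5, x_max=10.0)

# For a task with several independent constraint checks, the overall failure
# probability is 1 - prod(1 - p_i):
p_task = 1.0 - (1.0 - p_single) ** 3
print(p_single, p_task)
```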

    StocHy: automated verification and synthesis of stochastic processes

    StocHy is a software tool for the quantitative analysis of discrete-time stochastic hybrid systems (SHS). StocHy accepts a high-level description of stochastic models and constructs an equivalent SHS model. The tool can (i) simulate the SHS evolution over a given time horizon, and it can automatically construct formal abstractions of the SHS, which are then employed for (ii) formal verification or (iii) control (policy, strategy) synthesis. StocHy allows for modular modelling and has separate simulation, verification and synthesis engines, implemented as independent libraries; this makes the libraries easy to use and extensions easy to build. The tool is implemented in C++ and employs manipulations based on vector calculus, the use of sparse matrices, the symbolic construction of probabilistic kernels, and multi-threading. Experiments show StocHy's markedly improved performance compared to existing abstraction-based approaches: in particular, StocHy beats state-of-the-art tools in terms of precision (abstraction error) and computational effort, and attains scalability to large models (12 continuous dimensions). StocHy is available at www.gitlab.com/natchi92/StocHy.
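
    As a rough illustration of the kind of model such a tool targets and of feature (i), the sketch below simulates a small discrete-time SHS with mode-dependent linear-Gaussian dynamics and probabilistic mode switching. It is plain Python, not StocHy's input format or API, and every coefficient and probability is invented.

```python
# Simulate a two-mode discrete-time stochastic hybrid system over a horizon.
import random

# Each mode has scalar dynamics x' = a*x + w, with w ~ N(0, sigma^2).
modes = {
    "nominal":  {"a": 0.9, "sigma": 0.1},
    "degraded": {"a": 1.1, "sigma": 0.3},
}
# Probabilistic mode switching (each row sums to 1).
switch = {
    "nominal":  {"nominal": 0.95, "degraded": 0.05},
    "degraded": {"nominal": 0.20, "degraded": 0.80},
}

def simulate(x0, q0, horizon, rng):
    x, q, trace = x0, q0, []
    for _ in range(horizon):
        m = modes[q]
        x = m["a"] * x + rng.gauss(0.0, m["sigma"])     # continuous update
        r, acc = rng.random(), 0.0
        for q_next, p in switch[q].items():             # discrete update
            acc += p
            if r <= acc:
                q = q_next
                break
        trace.append((q, x))
    return trace

rng = random.Random(42)
print(simulate(x0=1.0, q0="nominal", horizon=10, rng=rng))
```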

    Quantitative Approximation of the Probability Distribution of a Markov Process by Formal Abstractions

    The goal of this work is to formally abstract a Markov process evolving in discrete time over a general state space as a finite-state Markov chain, with the objective of precisely approximating its state probability distribution over time; this allows the distribution to be computed approximately, and faster, via the Markov chain. The approach is based on formal abstractions and employs an arbitrary finite partition of the state space of the Markov process and the computation of average transition probabilities between partition sets. The abstraction technique is formal in that it comes with guarantees on the introduced approximation error that depend on the diameters of the partition sets: as such, they can be tuned at will. Further, in the case of Markov processes with unbounded state spaces, a procedure for precisely truncating the state space within a compact set is provided, together with an error bound that depends on the asymptotic properties of the transition kernel of the original process. The overall abstraction algorithm, which practically hinges on piecewise-constant approximations of the density functions of the Markov process, is extended to higher-order function approximations: these can lead to improved error bounds and lower associated computational requirements. The approach is tested in practice by computing probabilistic invariance of the Markov process under study, and is compared to a known alternative approach from the literature.
    Comment: 29 pages, Journal of Logical Methods in Computer Science
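
    A hedged sketch of the abstraction idea for a one-dimensional example: a Markov process with a Gaussian transition kernel is abstracted into a finite Markov chain over a uniform partition, and the state distribution is then propagated through the chain. For simplicity each partition set is represented by its centre point (rather than by the averaged kernel of the paper) and the state space is truncated to [-1, 1]; all dynamics parameters are invented.

```python
# Abstract the process X_{k+1} = 0.8*X_k + W, W ~ N(0, 0.1^2), truncated to
# [-1, 1], as a finite Markov chain over a uniform partition, then propagate
# an initial distribution through the chain.
import math

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def build_abstraction(n_bins, a=0.8, sigma=0.1, lo=-1.0, hi=1.0):
    """Finite Markov chain over a uniform partition of [lo, hi]."""
    width = (hi - lo) / n_bins
    centres = [lo + (i + 0.5) * width for i in range(n_bins)]
    edges = [lo + i * width for i in range(n_bins + 1)]
    P = []
    for c in centres:
        mu = a * c                                        # kernel N(a*c, sigma^2)
        row = [gauss_cdf(edges[j + 1], mu, sigma) - gauss_cdf(edges[j], mu, sigma)
               for j in range(n_bins)]
        row[0] += gauss_cdf(edges[0], mu, sigma)          # mass escaping below lo
        row[-1] += 1.0 - gauss_cdf(edges[-1], mu, sigma)  # mass escaping above hi
        P.append(row)
    return centres, P

def propagate(p0, P, steps):
    p = p0[:]
    for _ in range(steps):
        p = [sum(p[i] * P[i][j] for i in range(len(p))) for j in range(len(p))]
    return p

centres, P = build_abstraction(n_bins=40)
p0 = [1.0 if i == 30 else 0.0 for i in range(40)]   # all mass near x = 0.525
p5 = propagate(p0, P, steps=5)
mode_idx = max(range(len(p5)), key=lambda i: p5[i])
print(sum(p5), centres[mode_idx])   # total mass ~1, location of the mode
```

    Refining the partition (a larger n_bins) shrinks the abstraction error at the cost of a larger chain, which is the trade-off quantified by the diameter-dependent error bounds described in the abstract.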