
    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
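    The kind of property quoted above reduces, at its core, to computing reachability probabilities in a Markov-chain model. The sketch below is not taken from the report; the model and numbers are invented for illustration. It computes the probability of eventually reaching a fail state in a small discrete-time Markov chain by iterating the standard fixed-point equations.

```python
# Illustrative only: a toy discrete-time Markov chain and a fixed-point
# computation of reachability probabilities, the core calculation behind
# properties like "the probability of failure is less than 0.001".

def reach_probability(transitions, target, iterations=1000):
    """Iterate x_s = sum_t P(s, t) * x_t with x_target pinned to 1."""
    states = list(transitions)
    x = {s: 1.0 if s == target else 0.0 for s in states}
    for _ in range(iterations):
        x = {s: 1.0 if s == target else
                sum(p * x[t] for t, p in transitions[s].items())
             for s in states}
    return x

# Hypothetical sensor that degrades and is then either replaced or fails.
dtmc = {
    "ok":       {"ok": 0.98, "degraded": 0.02},
    "degraded": {"degraded": 0.95, "replaced": 0.04, "fail": 0.01},
    "replaced": {"replaced": 1.0},
    "fail":     {"fail": 1.0},
}
probs = reach_probability(dtmc, "fail")
# From "degraded", failure wins the race against replacement with
# probability 0.01 / (0.01 + 0.04) = 0.2.
```

    Production tools such as PRISM solve these equations with far more sophisticated numerical and symbolic methods; the iteration above only illustrates the principle.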

    Verification and control of partially observable probabilistic systems

    We present automated techniques for the verification and control of partially observable, probabilistic systems for both discrete and dense models of time. For the discrete-time case, we formally model these systems using partially observable Markov decision processes; for dense time, we propose an extension of probabilistic timed automata in which local states are partially visible to an observer or controller. We give probabilistic temporal logics that can express a range of quantitative properties of these models, relating to the probability of an event's occurrence or the expected value of a reward measure. We then propose techniques to either verify that such a property holds or synthesise a controller for the model which makes it true. Our approach is based on a grid-based abstraction of the uncountable belief space induced by partial observability and, for dense-time models, an integer discretisation of real-time behaviour. The former is necessarily approximate, since the underlying problem is undecidable; however, we show how both lower and upper bounds on numerical results can be generated. We illustrate the effectiveness of the approach by implementing it in the PRISM model checker and applying it to several case studies from the domains of task and network scheduling, computer security and planning.
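    A minimal illustration of the belief-space machinery referred to above (the model, state names and grid resolution are invented, and the paper's actual construction is considerably more involved): a Bayesian belief update for a two-state POMDP, followed by snapping the updated belief onto a finite grid, which is what turns the uncountable belief space into a finite abstraction.

```python
# Illustrative sketch: exact belief update for a tiny POMDP, then
# projection of the belief onto a uniform grid over the simplex.

def belief_update(belief, trans, obs_model, obs):
    """b'(t) is proportional to O(obs | t) * sum_s b(s) * T(s, t)."""
    predicted = {t: sum(belief[s] * trans[s][t] for s in belief) for t in trans}
    unnorm = {t: obs_model[t][obs] * predicted[t] for t in predicted}
    z = sum(unnorm.values())
    return {t: w / z for t, w in unnorm.items()}

def snap_to_grid(belief, n):
    """Round each entry to a multiple of 1/n, repairing rounding drift."""
    states = list(belief)
    scaled = {s: round(belief[s] * n) for s in states}
    scaled[states[0]] += n - sum(scaled.values())  # keep it a distribution
    return {s: v / n for s, v in scaled.items()}

# Invented two-state model of an environment that is calm or hostile.
trans = {"calm": {"calm": 0.9, "hostile": 0.1},
         "hostile": {"calm": 0.2, "hostile": 0.8}}
obs_model = {"calm": {"quiet": 0.8, "alarm": 0.2},
             "hostile": {"quiet": 0.3, "alarm": 0.7}}

b = belief_update({"calm": 0.5, "hostile": 0.5}, trans, obs_model, "alarm")
g = snap_to_grid(b, 10)  # nearest point on a grid of resolution 1/10
```

    Bounding the value function from above and below at the grid points is what yields the lower and upper bounds mentioned in the abstract.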

    Cooked up in the Dinner Hour? Sir Arthur Wilson's War Plan, Reconsidered


    Stress Whitening in Polyester Melamine Coatings

    Stress whitening is a long-standing problem, and scientific work has focused on evaluating its causes in bulk polymer systems. In this paper we focus on this optical defect as exhibited by a complex thermosetting polyester melamine coating system used extensively in the pre-coated metal industry. Several mechanisms have been proposed for how stress whitening occurs, and hence there is uncertainty over its causes in such systems. The most likely explanation given to date is that a number of distinct micro-mechanisms exist; which one occurs depends entirely on the system being investigated. The work presented shows that the presence of dissimilar particles is the cause of the stress whitening. The proposed mechanism for whitening and its disappearance in this case is a time- and temperature-dependent change in density, i.e. cracking or voiding, where, with an increase in temperature, the cracks fall outside the size range that scatters light.

    Automatic Verification of Concurrent Stochastic Systems

    Automated verification techniques for stochastic games allow formal reasoning about systems that feature competitive or collaborative behaviour among rational agents in uncertain or probabilistic settings. Existing tools and techniques focus on turn-based games, where each state of the game is controlled by a single player, and on zero-sum properties, where two players or coalitions have directly opposing objectives. In this paper, we present automated verification techniques for concurrent stochastic games (CSGs), which provide a more natural model of concurrent decision making and interaction. We also consider (social welfare) Nash equilibria, to formally identify scenarios where two players or coalitions with distinct goals can collaborate to optimise their joint performance. We propose an extension of the temporal logic rPATL for specifying quantitative properties in this setting and present corresponding algorithms for verification and strategy synthesis for a variant of stopping games. For finite-horizon properties the computation is exact, while for infinite-horizon properties it is approximated using value iteration. For zero-sum properties it requires solving matrix games via linear programming, and for equilibria-based properties we find social welfare or social cost Nash equilibria of bimatrix games via the method of labelled polytopes through an SMT encoding. We implement this approach in PRISM-games, which required extending the tool's modelling language for CSGs, and apply it to case studies from domains including robotics, computer security and computer networks, explicitly demonstrating the benefits of both CSGs and equilibria-based properties.
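    The inner step of value iteration for zero-sum CSG properties is solving a matrix game at each state. In general this requires linear programming, as the abstract notes, but for a 2x2 game the value has a well-known closed form, which the following sketch (not PRISM-games code) illustrates.

```python
# Illustrative only: value of a 2x2 zero-sum matrix game for the row
# (maximising) player -- the local computation inside CSG value iteration.

def matrix_game_value(m):
    (a, b), (c, d) = m
    maximin = max(min(a, b), min(c, d))  # best guaranteed pure row payoff
    minimax = min(max(a, c), max(b, d))  # best pure column counter-play
    if maximin == minimax:               # a pure saddle point exists
        return maximin
    # Otherwise both optimal strategies are fully mixed; standard
    # closed form for 2x2 games.
    return (a * d - b * c) / (a + d - b - c)

v = matrix_game_value([[3, 0], [1, 2]])  # mixed equilibrium, value 1.5
```

    Matching pennies, [[1, -1], [-1, 1]], has value 0 under this function, while a game with a saddle point, such as [[2, 1], [0, -1]], returns its pure value directly.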

    Deposition of High Conductivity Low Silver Content Materials by Screen Printing

    A comprehensive experimental investigation has been carried out into the role of film thickness variation and silver material formulation on printing capability in the screen printing process. A full factorial experiment was carried out where two formulations of silver materials were printed through a range of screens onto a polyester substrate under a set of standard conditions. The materials represented a novel low silver content (45%–49%) polymer material and a traditional high silver content (65%–69%) paste. The resultant prints were characterised topologically and electrically. The study shows that more cost-effective use of the silver in the ink was obtained with the low silver polymer materials, but that the electrical performance was more strongly affected by the mesh being used (and hence film thickness). Thus, while optimum silver use could be obtained using materials with a lower silver content, this came with the consequence of reduced process robustness.

    On the use of MTBDDs for performability analysis and verification of stochastic systems

    This paper describes how to employ multi-terminal binary decision diagrams (MTBDDs) for the construction and analysis of a general class of models that exhibit stochastic, probabilistic and non-deterministic behaviour. It is shown how the notorious problem of state space explosion can be circumvented by compositionally constructing symbolic (i.e. MTBDD-based) representations of complex systems from small-scale components. We emphasise, however, that compactness of the representation can only be achieved if heuristics are applied with insight into the structure of the system under investigation. We report on our experiences concerning compact representation, performance analysis and verification of performability properties.
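    A toy illustration of the data structure itself (vastly simpler than a real MTBDD package, with node sharing via hash-consing but none of the variable-ordering machinery): structurally identical subgraphs are stored once, which is the source of the compactness the paper discusses.

```python
# Illustrative MTBDD-style nodes: tuples (var, low, high) with real-valued
# terminals, deduplicated so equal subgraphs share one representative.

_unique = {}

def mk(var, low, high):
    if low == high:          # reduction rule: drop a redundant test
        return low
    key = (var, low, high)
    return _unique.setdefault(key, key)  # hash-consing: reuse equal nodes

def evaluate(node, assignment):
    """Follow the high branch where the variable is true, low otherwise."""
    while isinstance(node, tuple):
        var, low, high = node
        node = high if assignment[var] else low
    return node

# f(x, y) = 0.5 if x else (0.5 if y else 0.0), e.g. a transition probability
f = mk("x", mk("y", 0.0, 0.5), 0.5)
```

    Because `mk` returns the stored representative, two structurally equal calls yield the very same object, so a large transition matrix with repeated blocks costs only one copy of each distinct subgraph.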

    The effect of plasma functionalization on the print performance and time stability of graphite nanoplatelet electrically conducting inks

    Carbon-based pastes and inks are used extensively in a wide range of printed electronics because of their widespread availability, electrical conductivity and low cost. Overcoming the inherent tendency of the nano-carbon to agglomerate to form a stable dispersion is necessary if these inks are to be taken from the lab scale to industrial production. Plasma functionalization of graphite nanoplatelets (GNP) adds functional groups to their surface to improve their interaction with the polymer resin. This offers an attractive method to overcome these problems when creating next-generation inks. Both dynamic and oscillatory rheology were used to evaluate the stability of inks made with different loadings of functionalized and unfunctionalized GNP in a thin resin, typical of a production ink. The rheology and printability tests showed that the same level of dispersion and electrical performance had been achieved with both functionalized and unfunctionalized GNPs. The unfunctionalized GNPs agglomerate to form larger, lower aspect ratio particles, reducing interparticle interactions and particle–medium interactions. Over a 12-week period, the viscosity, shear thinning behavior and viscoelastic properties of the unfunctionalized GNP inks fell, with decreases in viscosity at 1.17 s⁻¹ of 24%, 30% and 39% for the ϕ = 0.071, 0.098 and 0.127 GNP suspensions, respectively. However, the rheological properties of the functionalized GNP suspensions remained stable, as the GNPs interacted better with the polymer in the resin to create a steric barrier which prevented the GNPs from approaching close enough for van der Waals forces to be effective.

    Partially Observable Stochastic Games with Neural Perception Mechanisms

    Stochastic games are a well-established model for multi-agent sequential decision making under uncertainty. In reality, though, agents have only partial observability of their environment, which makes the problem computationally challenging, even in the single-agent setting of partially observable Markov decision processes. Furthermore, in practice, agents increasingly perceive their environment using data-driven approaches such as neural networks trained on continuous data. To tackle this problem, we propose the model of neuro-symbolic partially-observable stochastic games (NS-POSGs), a variant of continuous-space concurrent stochastic games that explicitly incorporates perception mechanisms. We focus on a one-sided setting, comprising a partially-informed agent with discrete, data-driven observations and a fully-informed agent with continuous observations. We present a new point-based method, called one-sided NS-HSVI, for approximating values of one-sided NS-POSGs and implement it based on popular particle-based belief representations, showing that it has closed forms for computing values of interest. We provide experimental results to demonstrate the practical applicability of our method for neural networks whose preimage is in polyhedral form.
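    One small ingredient of such a method, sketched very loosely here (this is not the paper's algorithm; `classify` merely stands in for a trained perception network, and the states and weights are invented): a particle-based belief over a continuous state, filtered by a discrete perceived observation.

```python
# Illustrative only: weighted particles approximate a belief over a
# continuous state; an observation from a (stand-in) perception function
# discards inconsistent particles and renormalises the rest.

def classify(state):
    # hypothetical perception mechanism: continuous position -> label
    return "near" if abs(state) < 1.0 else "far"

def update_belief(particles, weights, observation):
    kept = [(s, w) for s, w in zip(particles, weights)
            if classify(s) == observation]
    total = sum(w for _, w in kept)
    return [s for s, _ in kept], [w / total for _, w in kept]

ps, ws = update_belief([-2.0, -0.5, 0.3, 1.5],
                       [0.25, 0.25, 0.25, 0.25], "near")
```

    Because the perception function partitions the continuous state space into finitely many preimages (polyhedra, in the paper's setting), this kind of filtering step stays tractable.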