
    Statistical Analysis of Structural Transitions in Small Systems

    We discuss general thermodynamic properties of molecular structure formation processes like protein folding by means of simplified, coarse-grained models. The conformational transitions accompanying these processes exhibit similarities to thermodynamic phase transitions, but also significant differences as the systems that we investigate here are very small. The usefulness of a microcanonical statistical analysis of these transitions in comparison with a canonical interpretation is emphasized. The results are obtained by employing sophisticated generalized-ensemble Markov-chain Monte Carlo methodologies. (9 pages, 5 figures; Proceedings of the 22nd Workshop on Recent Developments in Computer Simulation Studies in Condensed Matter Physics, Feb 23-27, 2009, Athens, Georgia, US)
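
    The abstract does not spell out the methodology, but one widely used generalized-ensemble Monte Carlo scheme that directly supports a microcanonical analysis is Wang-Landau sampling, which estimates the density of states g(E) and hence the microcanonical entropy S(E) = ln g(E). The Python sketch below applies it to a 1D Ising chain as a stand-in for the coarse-grained models in the paper; the method choice, model and all parameters are illustrative assumptions, not taken from the paper.

    import math
    import random

    def wang_landau(L=12, flatness=0.8, ln_f_final=1e-4):
        # Random initial spin configuration and its energy (periodic chain)
        spins = [random.choice([-1, 1]) for _ in range(L)]
        E = -sum(spins[i] * spins[(i + 1) % L] for i in range(L))
        log_g = {}   # running estimate of ln g(E)
        hist = {}    # visit histogram for the flatness check
        ln_f = 1.0   # modification factor, halved when the histogram is flat
        while ln_f > ln_f_final:
            for _ in range(10000):
                i = random.randrange(L)
                # energy change from flipping spin i
                dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % L])
                delta = log_g.get(E, 0.0) - log_g.get(E + dE, 0.0)
                # accept with probability min(1, g(E) / g(E'))
                if delta >= 0 or random.random() < math.exp(delta):
                    spins[i] *= -1
                    E += dE
                log_g[E] = log_g.get(E, 0.0) + ln_f
                hist[E] = hist.get(E, 0) + 1
            if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
                hist, ln_f = {}, ln_f / 2.0   # flat: refine and restart stage
        return log_g  # S(E) = ln g(E) up to an additive constant

    if __name__ == "__main__":
        log_g = wang_landau(L=10)
        g0 = min(log_g.values())
        for E in sorted(log_g):
            print(E, round(log_g[E] - g0, 2))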

    Verification and control of partially observable probabilistic systems

    We present automated techniques for the verification and control of partially observable, probabilistic systems for both discrete and dense models of time. For the discrete-time case, we formally model these systems using partially observable Markov decision processes; for dense time, we propose an extension of probabilistic timed automata in which local states are partially visible to an observer or controller. We give probabilistic temporal logics that can express a range of quantitative properties of these models, relating to the probability of an event’s occurrence or the expected value of a reward measure. We then propose techniques to either verify that such a property holds or synthesise a controller for the model which makes it true. Our approach is based on a grid-based abstraction of the uncountable belief space induced by partial observability and, for dense-time models, an integer discretisation of real-time behaviour. The former is necessarily approximate since the underlying problem is undecidable; however, we show how both lower and upper bounds on numerical results can be generated. We illustrate the effectiveness of the approach by implementing it in the PRISM model checker and applying it to several case studies from the domains of task and network scheduling, computer security and planning.
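
    To make the belief-space construction concrete: the core of such approaches is the Bayes update of a belief state after an action and observation, followed by a projection onto a finite grid over the belief simplex. The Python sketch below shows both steps for a toy model; it is a simplified illustration (nearest-point snapping rather than the interpolation a real grid-based abstraction uses), and all matrices and names are hypothetical rather than drawn from the paper or the PRISM implementation.

    import numpy as np

    def belief_update(b, a, o, T, O):
        # Bayes rule: b'(s') ∝ O[a][s', o] · Σ_s b(s) · T[a][s, s']
        b_new = O[a][:, o] * (b @ T[a])
        z = b_new.sum()
        return b_new / z if z > 0 else b_new  # z == 0: observation impossible

    def snap_to_grid(b, m):
        # Round each coordinate down to a multiple of 1/m, then return the
        # leftover probability mass to the coordinate that lost the most.
        g = np.floor(b * m) / m
        g[np.argmax(b - g)] += 1.0 - g.sum()
        return g

    if __name__ == "__main__":
        # two states, one action, two observations (numbers illustrative)
        T = {0: np.array([[0.9, 0.1],
                          [0.2, 0.8]])}   # T[a][s, s']
        O = {0: np.array([[0.7, 0.3],
                          [0.1, 0.9]])}   # O[a][s', o]
        b = belief_update(np.array([0.5, 0.5]), a=0, o=1, T=T, O=O)
        print(b, "->", snap_to_grid(b, m=8))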

    How do diabetes models measure up? A review of diabetes economic models and ADA guidelines

    Introduction: Economic models and computer simulation models have been used for assessing the short-term cost-effectiveness of interventions and modelling long-term outcomes and costs. Several guidelines and checklists have been published to improve the methods and reporting. This article presents an overview of published diabetes models with a focus on how well the models are described in relation to the considerations set out in the American Diabetes Association (ADA) guidelines. Methods: Relevant electronic databases and National Institute for Health and Care Excellence (NICE) guidelines were searched in December 2012. Studies were included in the review if they estimated lifetime outcomes for patients with type 1 or type 2 diabetes. Only unique models, and only the original papers, were included in the review. If additional information was reported in subsequent or paired articles, then additional citations were included. References and forward citations of relevant articles, including the previous systematic reviews, were searched using a method similar to pearl growing. Four principal areas were covered by the ADA guidance on reporting for models: transparency, validation, uncertainty, and diabetes-specific criteria. Results: A total of 19 models were included. Twelve models investigated type 2 diabetes, two developed type 1 models, two created separate models for type 1 and type 2, and three developed joint type 1 and type 2 models. Most models were developed in the United States, United Kingdom, Europe or Canada. Later models use data or methods from earlier models for development or validation. There are four main types of models: Markov-based cohort models, Markov-based microsimulations, discrete-time microsimulations, and continuous-time differential equation models. All models were long-term diabetes models incorporating a wide range of complications from various organ systems. In early diabetes modelling, before the ADA guidelines were published, most models did not include descriptions of all the diabetes-specific components of the ADA guidelines, but this improved significantly by 2004. Conclusion: A clear, descriptive short summary of the model was often lacking. Descriptions of model validation and uncertainty were the most poorly reported of the four main areas, although there are now conferences focussing specifically on the issue of validation. Interdependence between the complications was the least well incorporated or reported of the diabetes-specific criteria.
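
    Of the four model types identified, the Markov-based cohort model is the simplest to illustrate. The Python sketch below is a generic example with invented states, transition probabilities, costs and utilities, not a reconstruction of any of the 19 reviewed models: the cohort occupancy vector is pushed through the transition matrix once per cycle while discounted costs and QALYs accumulate.

    import numpy as np

    def markov_cohort(P, costs, qalys, n_cycles, discount=0.03):
        # Propagate the cohort through the transition matrix, accumulating
        # discounted per-cycle costs and quality-adjusted life years.
        occupancy = np.zeros(len(costs))
        occupancy[0] = 1.0                     # cohort starts in state 0
        total_cost = total_qaly = 0.0
        for t in range(n_cycles):
            d = 1.0 / (1.0 + discount) ** t    # discount factor for cycle t
            total_cost += d * (occupancy @ costs)
            total_qaly += d * (occupancy @ qalys)
            occupancy = occupancy @ P          # one cycle transition
        return total_cost, total_qaly

    if __name__ == "__main__":
        # states: no complication, complication, dead (numbers illustrative)
        P = np.array([[0.90, 0.08, 0.02],
                      [0.00, 0.85, 0.15],
                      [0.00, 0.00, 1.00]])
        print(markov_cohort(P, costs=np.array([500.0, 4000.0, 0.0]),
                            qalys=np.array([0.85, 0.60, 0.0]), n_cycles=40))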

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
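
    The core idea is easy to show on a toy model. In the Python sketch below (a generic illustration, not any of the reviewed techniques in detail), component failures form a birth-death chain and the rare event is reaching n simultaneous failures before full repair. Sampling failures with an inflated probability q ("failure biasing") and weighting each path by its likelihood ratio keeps the estimator unbiased while making the rare event common; the model and parameters are invented.

    import random

    def failure_prob(p, n, trials, q=None):
        # Estimate Pr(reach n failures before full repair | one failure),
        # where each step is a failure w.p. p and a repair otherwise.
        # With q set, failures are sampled w.p. q and paths are reweighted.
        q = p if q is None else q
        total = 0.0
        for _ in range(trials):
            k, w = 1, 1.0                     # failed components, path weight
            while 0 < k < n:
                if random.random() < q:       # failure under the biased law
                    w *= p / q                # likelihood-ratio correction
                    k += 1
                else:
                    w *= (1 - p) / (1 - q)
                    k -= 1
            total += w if k == n else 0.0
        return total / trials

    if __name__ == "__main__":
        p, n, N = 1e-3, 4, 100000             # true answer is about 1e-9
        print("standard simulation:", failure_prob(p, n, N))  # almost surely 0
        print("importance sampling:", failure_prob(p, n, N, q=0.5))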

    Construction and Verification of Performance and Reliability Models

    Over the last two decades, formal methods have been extended towards performance and reliability evaluation. This paper tries to provide a rather intuitive explanation of the basic concepts and features in this area. Instead of striving for mathematical rigour, the intention is to give an illustrative introduction to the basics of stochastic models, to stochastic modelling using process algebra, and to model checking as a technique to analyse stochastic models.
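
    As one concrete example of analysing such a stochastic model: the semantics of most stochastic process algebras is a continuous-time Markov chain, whose transient distribution can be computed by uniformisation. The Python sketch below does this for an invented two-state failure/repair chain; the method is standard, but the model and rates are illustrative assumptions.

    import math
    import numpy as np

    def ctmc_transient(Q, pi0, t, eps=1e-12, k_max=10000):
        # Uniformisation: pi(t) = Σ_k e^{-qt} (qt)^k / k! · pi0 P^k,
        # with P = I + Q/q and q at least the largest exit rate.
        q = max(-Q[i, i] for i in range(len(Q))) * 1.0001
        P = np.eye(len(Q)) + Q / q
        w = math.exp(-q * t)                  # Poisson weight for k = 0
        v, acc, mass, k = pi0.copy(), w * pi0, w, 0
        while mass < 1.0 - eps and k < k_max:
            k += 1
            v = v @ P
            w *= q * t / k                    # next Poisson weight
            acc = acc + w * v
            mass += w
        return acc

    if __name__ == "__main__":
        # up/down machine: failure rate 0.1, repair rate 1.0 (illustrative)
        Q = np.array([[-0.1,  0.1],
                      [ 1.0, -1.0]])
        print(ctmc_transient(Q, np.array([1.0, 0.0]), t=5.0))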

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
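
    To give a flavour of what happens under the hood when a property like "the probability of both sensors failing simultaneously is less than 0.001" is checked: for a discrete-time Markov chain, the model checker reduces the question to a reachability probability and solves a linear-equation system. The Python sketch below does this for a hand-made four-state chain; the states, numbers and threshold check are invented for illustration, and a real tool such as PRISM builds the chain from a higher-level model description.

    import numpy as np

    def reach_prob(P, targets):
        # Pr(eventually reach a target state) in a DTMC: x_s = 1 on targets,
        # x_s = 0 where no target is reachable, x_s = Σ_t P[s, t]·x_t else.
        n = len(P)
        can = set(targets)                 # states that can reach a target
        grew = True
        while grew:                        # backward graph reachability
            grew = False
            for s in range(n):
                if s not in can and any(P[s, t] > 0 for t in can):
                    can.add(s)
                    grew = True
        A, b = np.eye(n), np.zeros(n)
        for s in range(n):
            if s in targets:
                b[s] = 1.0                 # x_s = 1
            elif s in can:
                A[s] = A[s] - P[s]         # x_s - Σ_t P[s, t]·x_t = 0
        return np.linalg.solve(A, b)

    if __name__ == "__main__":
        # states: 0 = both sensors up, 1 = one failed, 2 = both failed,
        # 3 = mission completed (all probabilities illustrative)
        P = np.array([[0.77, 0.02, 0.01, 0.20],
                      [0.30, 0.45, 0.05, 0.20],
                      [0.00, 0.00, 1.00, 0.00],
                      [0.00, 0.00, 0.00, 1.00]])
        p_both_fail = reach_prob(P, targets={2})[0]
        print(p_both_fail, "<" if p_both_fail < 0.001 else ">=", 0.001)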