    Comparative Analysis of Statistical Model Checking Tools

    Statistical model checking is a powerful and flexible approach to the formal verification of computational models, such as P systems, which can have very large search spaces. Various statistical model checking tools have been developed, but choosing the most appropriate one requires a significant degree of experience: different tools have different modelling and property specification languages, may be designed to support only a certain subset of property types, and their performance can vary depending on the property types and membrane systems being verified. In this paper we evaluate the performance of several common statistical model checkers against a pool of biological models. Our aim is to help users select the most suitable SMC tool from among the available options by comparing their modelling and property specification languages, capabilities, and performance.
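The core loop shared by the tools compared above can be sketched simply: estimate the probability that a property holds by checking a monitor on independently sampled runs, with the number of runs sized by a Chernoff-Hoeffding bound. The following is a minimal illustrative sketch, not any tool's actual API; the model and monitor here are toy stand-ins.

```python
import math
import random

def chernoff_sample_size(epsilon, delta):
    """Runs needed so the estimate is within epsilon of the true
    probability with confidence 1 - delta (Okamoto/Chernoff bound)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def estimate_probability(simulate_run, holds, epsilon=0.05, delta=0.01, seed=42):
    """Estimate P(property) by checking `holds` on independent runs."""
    rng = random.Random(seed)
    n = chernoff_sample_size(epsilon, delta)
    successes = sum(holds(simulate_run(rng)) for _ in range(n))
    return successes / n

# Toy model: a biased coin standing in for one stochastic trajectory.
prob = estimate_probability(
    simulate_run=lambda rng: rng.random() < 0.3,   # "trajectory"
    holds=lambda run: run,                         # "property monitor"
)
```

Real tools differ mainly in how the trajectory is generated (the modelling language) and how the monitor is specified (the property language), which is exactly what makes tool selection non-trivial.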

    Multiple verification in computational modeling of bone pathologies

    We introduce a model checking approach for diagnosing emerging bone pathologies. The implementation of a new model of bone remodeling in PRISM has led to an interesting characterization of osteoporosis as defective bone remodeling dynamics relative to other bone pathologies. Our approach allows us to derive three types of model checking-based diagnostic estimators. The first diagnostic measure focuses on the level of bone mineral density, which is currently used in medical practice. In addition, we have introduced a novel diagnostic estimator that uses the full patient clinical record, here simulated using the modeling framework. This estimator detects rapid (on the scale of months) negative changes in bone mineral density. Independently of the actual bone mineral density, when the decrease occurs rapidly it is important to alert the patient and monitor him/her more closely to detect the onset of other bone co-morbidities. A third estimator takes into account the variance of the bone density, which could support the investigation of metabolic syndromes, diabetes and cancer. Our implementation could make use of different logical combinations of these statistical estimators and could incorporate other biomarkers for other systemic co-morbidities (for example diabetes and thalassemia). The combination of stochastic modeling with formal methods motivates a new diagnostic framework for complex pathologies. In particular, our approach takes into consideration important properties of biosystems, such as multiscale organisation and self-adaptiveness. The multi-diagnosis could be further expanded, inching towards the complexity of human diseases. Finally, we briefly introduce self-adaptiveness in formal methods, a key property in the regulative mechanisms of biological systems that is well known in other mathematical and engineering areas.
    In Proceedings CompMod 2011, arXiv:1109.104
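The second estimator above (flagging rapid drops in bone mineral density regardless of the absolute level) can be illustrated with a short sketch. The function name, window length, and threshold below are illustrative assumptions for exposition, not the paper's PRISM implementation.

```python
def rapid_decline_alarm(bmd_record, window=6, threshold=-0.05):
    """Flag any `window`-month span in which bone mineral density (BMD)
    drops by more than `threshold` (a relative change), independently of
    the absolute level. `bmd_record` is a list of monthly BMD values."""
    for i in range(len(bmd_record) - window):
        start, end = bmd_record[i], bmd_record[i + window]
        if (end - start) / start < threshold:
            return True
    return False

# Two simulated 24-month clinical records (illustrative values):
stable = [1.00 - 0.001 * m for m in range(24)]  # slow age-related loss
rapid  = [1.00 - 0.012 * m for m in range(24)]  # fast pathological loss
```

The stable record never triggers the alarm, while the rapid one does, even though both start at the same absolute density.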

    APPLICATIONS OF STATISTICAL DATA MINING METHODS

    Data mining is a collection of analytical techniques for uncovering new trends and patterns in large databases. These data mining techniques stress visualization to thoroughly study the structure of data and to check the validity of the statistical model fit to the data, leading to knowledge discovery. Data mining is an interdisciplinary research area spanning several disciplines, such as database management, machine learning, statistical computing, and expert systems. Although data mining is a relatively new term, the technology is not. Data mining allows users to analyze data from many different dimensions or angles, explore and categorize it, and summarize the relationships identified. Large investments in technology and data collection are currently being made in the areas of precision agriculture, remote sensing, and bioinformatics. Experiments conducted in these disciplines are generating mountains of data at a rapid rate. Analyzing such massive data sets, combined with biological and environmental information, would not be possible without automated and efficient data mining techniques. Effective statistical and graphical data mining tools can enable agricultural researchers to perform quicker and more cost-effective experiments. Commonly used statistical and graphical data mining techniques in data exploration and visualization, model selection, model development, checking for violations of statistical assumptions, and model validation are presented here.

    Runtime Verification of Biological Systems

    Complex computational systems are ubiquitous and their study increasingly important. Given the ease with which it is possible to construct large systems with heterogeneous technology, there is strong motivation to provide automated means to verify their safety, efficiency and reliability. In another context, biological systems are supreme examples of complex systems for which there are no design specifications. In both cases it is usually difficult to reason at the level of the description of the systems, and much more convenient to investigate properties of their executions. To demonstrate runtime verification of complex systems we apply statistical model checking techniques to a model of robust biological oscillations taken from the literature. The model demonstrates some of the mechanisms used by biological systems to maintain reliable performance in the face of inherent stochasticity and is therefore instructive. To perform our investigation we use two recently developed SMC platforms: the one incorporated in Uppaal, and Plasma. Uppaal SMC offers a generic modeling language based on stochastic hybrid automata, while Plasma aims at domain-specific support with the facility to accept biological models represented in chemical syntax.
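Checking an oscillation property over a sampled execution amounts to running a small monitor along the trace. The sketch below is a toy stand-in for such a monitor, not the Uppaal SMC or Plasma query language: it accepts a trace if the signal alternately crosses a high and a low threshold a minimum number of times, a bounded proxy for an oscillation (liveness) property. Thresholds and the cycle count are illustrative assumptions.

```python
import math
import random

def oscillates(trace, low, high, min_cycles):
    """Toy runtime monitor: the trace satisfies the oscillation property
    if it alternately crosses the `high` and `low` thresholds at least
    `min_cycles` times."""
    cycles, expect_high = 0, True
    for x in trace:
        if expect_high and x >= high:
            expect_high = False          # saw a peak; now wait for a trough
        elif not expect_high and x <= low:
            expect_high = True           # saw a trough; one full cycle done
            cycles += 1
    return cycles >= min_cycles

# Two sampled "executions": a noisy oscillator and pure noise.
rng = random.Random(0)
noisy_sine = [math.sin(t / 5.0) + rng.gauss(0, 0.1) for t in range(200)]
flat_noise = [rng.gauss(0, 0.1) for _ in range(200)]
```

Feeding such a monitor with independently sampled traces, and estimating the fraction accepted, is the essence of the statistical model checking applied in the paper.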

    Perils and pitfalls of mixed-effects regression models in biology

    This is the final version, available on open access from PeerJ via the DOI in this record.
    Data Availability: The R code used to conduct all simulations in the paper is available in the Supplemental Files.
    Biological systems, at all scales of organisation from nucleic acids to ecosystems, are inherently complex and variable. Biologists therefore use statistical analyses to detect signal among this systemic noise. Statistical models infer trends, find functional relationships and detect differences that exist among groups or are caused by experimental manipulations. They also use statistical relationships to help predict uncertain futures. All branches of the biological sciences now embrace the possibilities of mixed-effects modelling and its flexible toolkit for partitioning noise and signal. The mixed-effects model is not, however, a panacea for poor experimental design, and should be used with caution when inferring or deducing the importance of both fixed and random effects. Here we describe a selection of the perils and pitfalls that are widespread in the biological literature, but can be avoided by careful reflection, modelling and model-checking. We focus on situations where incautious modelling risks exposure to these pitfalls and the drawing of incorrect conclusions. Our stance is that statements of significance, information content or credibility all have their place in biological research, as long as these statements are cautious and well-informed by checks on the validity of assumptions. Our intention is to reveal potential perils and pitfalls in mixed model estimation so that researchers can use these powerful approaches with greater awareness and confidence. Our examples are ecological, but translate easily to all branches of biology.
    University of Exeter
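One of the classic pitfalls alluded to above, ignoring group structure (pseudoreplication), is easy to demonstrate by simulation. The sketch below is an illustrative toy, not the paper's R code: it simulates grouped data with no true treatment effect and compares the naive standard error (which treats all observations as independent) with the actual sampling variability of the estimated difference.

```python
import random
import statistics

def simulate_difference(rng, n_groups=6, n_per_group=20,
                        group_sd=1.0, noise_sd=0.5):
    """Two treatment arms, each observed in `n_groups` random groups
    (e.g. individuals); there is NO true treatment effect. Returns the
    naive mean difference and its naive SE, which ignores the shared
    group-level variation."""
    def arm():
        obs = []
        for _ in range(n_groups):
            g = rng.gauss(0.0, group_sd)            # shared group effect
            obs += [g + rng.gauss(0.0, noise_sd) for _ in range(n_per_group)]
        return obs
    a, b = arm(), arm()
    diff = statistics.mean(a) - statistics.mean(b)
    naive_se = (statistics.pvariance(a) / len(a)
                + statistics.pvariance(b) / len(b)) ** 0.5
    return diff, naive_se

rng = random.Random(1)
diffs, ses = zip(*(simulate_difference(rng) for _ in range(500)))
empirical_sd = statistics.stdev(diffs)  # true sampling variability
mean_naive_se = statistics.mean(ses)    # what the naive analysis reports
```

The naive standard error badly understates the real variability of the estimate, which is why a mixed-effects model with a group-level random effect (or an analysis of group means) is the appropriate tool here.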

    Efficient Parallel Statistical Model Checking of Biochemical Networks

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve efficiency. First, sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
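The paper's interval estimation builds on the standard Wilson score interval for a binomial proportion, which behaves much better than the normal (Wald) approximation near 0 and 1, the regime typical of rare-event properties. A minimal version of the textbook formula follows; the counts are illustrative, and this is the base method, not the paper's specific efficient variant.

```python
def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion
    (95% confidence for z = 1.96)."""
    p_hat = successes / n
    denom = 1.0 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * ((p_hat * (1 - p_hat) / n
                           + z * z / (4 * n * n)) ** 0.5)
    return centre - half, centre + half

# 8 satisfying runs out of 50 sampled executions (illustrative counts):
lo, hi = wilson_interval(8, 50)
```

Sampling continues until the interval is narrow enough for the verification verdict, so a faster-converging interval directly reduces the number of simulations needed.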

    Statistical Model Checking for Stochastic Hybrid Systems

    This paper presents novel extensions and applications of the UPPAAL-SMC model checker. The extensions allow for statistical model checking of stochastic hybrid systems. We show how our race-based stochastic semantics extends to networks of hybrid systems, and describe the integration technique applied to implement this semantics in the UPPAAL-SMC simulation engine. We report on two applications of the resulting tool-set, coming from systems biology and energy-aware buildings.
    In Proceedings HSB 2012, arXiv:1208.315