
    Statistical Model Checking: An Overview

    Quantitative properties of stochastic systems are usually specified in logics that allow one to compare the measure of executions satisfying certain temporal properties with thresholds. The model checking problem for stochastic systems with respect to such logics is typically solved by a numerical approach that iteratively computes (or approximates) the exact measure of paths satisfying relevant subformulas; the algorithms themselves depend on the class of systems being analyzed as well as the logic used for specifying the properties. Another approach to solving the model checking problem is to simulate the system for finitely many runs, and to use hypothesis testing to infer whether the samples provide statistical evidence for the satisfaction or violation of the specification. In this short paper, we survey the statistical approach and outline its main advantages in terms of efficiency, uniformity, and simplicity.
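The simulation-plus-hypothesis-testing idea surveyed above can be sketched in a few lines. Everything below is illustrative, not from the paper: a toy Bernoulli "system" stands in for the stochastic model, and the number of runs comes from the standard Chernoff-Hoeffding sample-size bound.

```python
import math
import random

def simulate_run(p_success=0.7):
    # Toy stochastic system: a run satisfies the temporal property
    # with some probability (0.7 here, hidden from the checker).
    return random.random() < p_success

def smc_decide(threshold, epsilon=0.05, delta=0.01,
               simulate=simulate_run, seed=0):
    # Decide "is P(property) >= threshold?" from simulations alone.
    # The Chernoff-Hoeffding bound gives the number of runs needed
    # for an estimate within +/- epsilon of the true probability
    # with confidence at least 1 - delta:
    #   n >= ln(2 / delta) / (2 * epsilon^2)
    random.seed(seed)
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    estimate = sum(simulate() for _ in range(n)) / n
    return estimate >= threshold
```

With the toy system, `smc_decide(0.5)` accepts and `smc_decide(0.9)` rejects; no transition structure of the system is ever inspected, which is the source of the uniformity and simplicity the paper highlights.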

    Multi-color detection of gravitational arcs

    Strong gravitational lensing provides fundamental insights into the dark matter distribution in massive galaxies and galaxy clusters and into the background cosmology. Despite their importance, the number of gravitational arcs discovered so far is small, and the need for larger, more complete samples and unbiased candidate-selection methods is growing. A number of methods for the automatic detection of arcs have been proposed in the literature, but the large number of spurious detections they return forces observers to visually inspect thousands of candidates per square degree in order to clean the samples. This approach is largely subjective and requires a huge amount of eye-ball checking, especially considering current and upcoming wide-field surveys, which will cover thousands of square degrees. In this paper we study the statistical properties of the colours of gravitational arcs detected in the 37 deg^2 of the CARS survey. We find that most of them lie in a relatively small region of the (g'-r', r'-i') colour-colour diagram. To explain this property, we provide a model that includes the lensing optical depth expected in a LCDM cosmology which, in combination with the source redshift distribution of a given survey (in our case CARS), peaks for sources at redshift z~1. By further modelling the colours derived from the SEDs of the galaxies dominating the population at that redshift, the model reproduces the observed colours well. Taking advantage of the colour selection suggested by both the data and the model, we show that this multi-band filtering returns a sample that is 83% complete, with contamination reduced by a factor of ~6.5 with respect to the single-band arcfinder sample. New arc candidates are also proposed.
    Comment: 13 pages, 7 figures, 4 tables; title modified, text extended, figures improved, error estimate improved
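The multi-band filtering described above amounts to a simple cut in the colour-colour plane. The sketch below is hypothetical: the box limits and function names are placeholders for illustration, not the region actually measured from the CARS arcs.

```python
def in_colour_box(g_minus_r, r_minus_i,
                  box=((0.0, 1.2), (-0.2, 0.6))):
    # Keep a candidate only if it falls inside a rectangular region
    # of the (g'-r', r'-i') colour-colour diagram. These box limits
    # are placeholders, not the region derived in the paper.
    (g_lo, g_hi), (r_lo, r_hi) = box
    return g_lo <= g_minus_r <= g_hi and r_lo <= r_minus_i <= r_hi

def colour_select(candidates):
    # candidates: list of (g'-r', r'-i') colour pairs produced by a
    # single-band arcfinder; the colour cut prunes spurious detections
    # before any visual inspection.
    return [c for c in candidates if in_colour_box(*c)]
```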

    Performance Evaluation of Complex Systems Using the SBIP Framework

    In this paper we survey the main experiments performed using the SBIP framework. The framework consists of a stochastic component-based modeling formalism and a probabilistic model checking engine for verification. The modeling formalism is built as an extension of BIP and makes it possible to build complex systems in a compositional way, while the verification engine implements a set of statistical algorithms for the verification of qualitative and quantitative properties. The SBIP framework has been used to model and verify a large set of real-life systems, including various network protocols and multimedia applications.

    Discussion of "Impact of Frequentist and Bayesian Methods on Survey Sampling Practice: A Selective Appraisal" by J. N. K. Rao

    Discussion of "Impact of Frequentist and Bayesian Methods on Survey Sampling Practice: A Selective Appraisal" by J. N. K. Rao [arXiv:1108.2356].
    Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/11-STS346C

    The National Superficial Deposit Thickness Model (Version 5)

    The Superficial Deposits Thickness Model (SDTM) is a raster-based dataset designed to demonstrate the variation in thickness of Quaternary-age superficial deposits across Great Britain. Quaternary deposits (all unconsolidated material deposited in the last 2.6 million years) are of particular importance to environmental scientists and consultants concerned with our landscape, environment and habitats. The BGS has been generating national models of the thickness of Quaternary-age deposits since 2001, and this latest version of the model is based upon DiGMapGB-50 Version 5 geological mapping and borehole records registered with BGS before August 2008.

    Philosophy and the practice of Bayesian statistics

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
    Comment: 36 pages, 5 figures. v2: Fixed typo in caption of figure 1. v3: Further typo fixes. v4: Revised in response to referee
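The model checking emphasized above is, in applied work, often a posterior predictive check: replicate data under the fitted model and ask whether a chosen test statistic of the observed data looks typical among the replications. A minimal sketch, assuming a toy Beta-Binomial coin-flip model (all names and the choice of test statistic are illustrative, not from the paper):

```python
import random

def longest_run(bits):
    # Length of the longest run of 1s; our test statistic.
    best = cur = 0
    for b in bits:
        cur = cur + 1 if b else 0
        best = max(best, cur)
    return best

def posterior_predictive_check(data, num_rep=2000, seed=0):
    # Posterior predictive check for 0/1 data under a Beta(1, 1)
    # prior: draw theta from the Beta posterior, replicate a dataset
    # of the same size, and record how often the replicated statistic
    # is at least as extreme as the observed one.
    random.seed(seed)
    n, heads = len(data), sum(data)
    observed = longest_run(data)
    extreme = 0
    for _ in range(num_rep):
        theta = random.betavariate(1 + heads, 1 + n - heads)
        replicate = [random.random() < theta for _ in range(n)]
        if longest_run(replicate) >= observed:
            extreme += 1
    return extreme / num_rep  # posterior predictive p-value
```

A tiny p-value flags data the fitted model rarely reproduces (e.g. ten heads in a row), which is exactly the kind of misfit Bayesian confirmation theory alone would never surface.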

    Discussion of "Impact of Frequentist and Bayesian Methods on Survey Sampling Practice: A Selective Appraisal" by J. N. K. Rao

    This comment emphasizes the importance of model checking and model fitting when making inferences about finite population quantities. It also suggests the value of using unit-level models when making inferences for small subpopulations, that is, "small area" analyses [arXiv:1108.2356].
    Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/11-STS346B

    Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLan with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLan) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLan semantics based on discrete-time Markov chains. The Maude implementation of PFLan is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.
    Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
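The kind of quantitative query mentioned above (e.g. the likelihood of product malfunctioning within a horizon) can be illustrated by statistical estimation over a discrete-time Markov chain. The three-state chain and its rates below are invented for illustration; this is not PFLan or MultiVeStA code.

```python
import random

# Hypothetical 3-state DTMC: 0 = ok, 1 = degraded, 2 = malfunction.
TRANSITIONS = {
    0: [(0, 0.90), (1, 0.08), (2, 0.02)],
    1: [(0, 0.30), (1, 0.60), (2, 0.10)],
    2: [(2, 1.00)],  # malfunction is absorbing
}

def step(state):
    # Sample the successor state from the transition distribution.
    r, acc = random.random(), 0.0
    for successor, prob in TRANSITIONS[state]:
        acc += prob
        if r < acc:
            return successor
    return state

def reaches_malfunction(horizon):
    # One simulated path: does it hit state 2 within `horizon` steps?
    state = 0
    for _ in range(horizon):
        state = step(state)
        if state == 2:
            return True
    return False

def estimate(horizon, runs=5000, seed=1):
    # Monte Carlo estimate of P(reach malfunction within horizon).
    random.seed(seed)
    return sum(reaches_malfunction(horizon) for _ in range(runs)) / runs
```

The estimate grows with the horizon, as expected for an absorbing failure state; a statistical model checker automates exactly this sampling loop, plus the confidence analysis, over models far too large for numerical methods.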