2,594 research outputs found

    Empirical interpretation of imprecise probabilities

    This paper investigates the possibility of a frequentist interpretation of imprecise probabilities by generalizing the approach of Bernoulli’s Ars Conjectandi: that is, by studying, in the case of games of chance, under which assumptions imprecise probabilities can be satisfactorily estimated from data. Estimability on the basis of finite amounts of data is a necessary condition for imprecise probabilities to have a clear empirical meaning. Unfortunately, imprecise probabilities can be estimated arbitrarily well from data only in very limited settings.

    Web apps and imprecise probabilities

    We propose a model for the behaviour of Web apps in the unreliable WWW. Web apps are described by orchestrations. An orchestration mimics the personal use of the Web by defining the way in which Web services are invoked. The WWW is unreliable, as poorly maintained Web sites are prone to fail. We model this source of unreliability through a probabilistic approach: we assume that each site has some probability of failing. Another source of uncertainty is traffic congestion, which can be observed as non-deterministic behaviour induced by variability in response times. We model this non-determinism by imprecise probabilities. We develop an ex-ante semantics to characterize the behaviour of finite orchestrations in the unreliable Web, and we show the existence of a normal form under this semantics for orchestrations using asymmetric parallelism.
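    The abstract's idea of sites with imprecisely known failure probabilities can be illustrated with simple interval arithmetic. The sketch below is hypothetical (it is not the paper's orchestration semantics, and it assumes the sites fail independently): each service carries a lower and an upper bound on its success probability, and sequential and parallel composition propagate those bounds.

```python
# Hypothetical sketch, NOT the paper's semantics: interval bounds on the
# success probability of composed Web-service invocations, assuming
# independent failures.

def seq(*services):
    """Sequential orchestration: succeeds only if every service succeeds.
    Each service is an interval (lo, hi) of success probability."""
    lo = hi = 1.0
    for (l, h) in services:
        lo *= l
        hi *= h
    return (lo, hi)

def par_any(*services):
    """Parallel orchestration: succeeds if at least one branch succeeds."""
    fail_lo = fail_hi = 1.0
    for (l, h) in services:
        fail_lo *= (1 - h)   # least chance that every branch fails
        fail_hi *= (1 - l)   # greatest chance that every branch fails
    return (1 - fail_hi, 1 - fail_lo)

# Two sites whose reliability is only known up to an interval:
a = (0.90, 0.95)   # site A succeeds with probability in [0.90, 0.95]
b = (0.80, 0.99)
print(seq(a, b))       # roughly (0.72, 0.9405)
print(par_any(a, b))   # bounds on "A or B responds"
```

    Note how the parallel composition is computed on failure probabilities and then complemented; redundancy tightens the lower bound even when the individual sites are poorly characterized.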

    Imprecise Probabilities


    Propagating imprecise probabilities through event trees

    Event trees are a graphical model of a set of possible situations and the possible paths through them, from the initial situation to the terminal situations. Each situation has an associated local uncertainty model that represents beliefs about the next situation. These uncertainty models can be classical, precise probabilities; they can also be of a more general, imprecise-probabilistic type, in which case they can be seen as sets of classical probabilities (yielding probability intervals). To work with such event trees, we must combine these local uncertainty models. We show that this can be done efficiently by back-propagation through the tree, both for precise and imprecise probabilistic models, and we illustrate this using an imprecise-probabilistic counterpart of the classical Markov chain, which allows us to perform a robustness analysis for Markov chains very efficiently.
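    The back-propagation idea can be sketched in a few lines. This is an illustrative toy (the tree layout and the finite credal sets are assumptions of the sketch, not the paper's formalism): each non-terminal node carries a finite set of candidate probability mass functions over its children, values are back-propagated from the leaves, and the lower envelope is taken locally at each node.

```python
# Illustrative sketch (assumed structure, not the paper's algorithm):
# compute a lower expectation by backward recursion through an event tree
# whose nodes each carry a finite set of candidate pmfs over their children.

def lower_exp(node):
    """node is either a terminal payoff (a number) or a pair
    (credal_set, children), where credal_set is a list of pmfs."""
    if isinstance(node, (int, float)):
        return node
    credal_set, children = node
    vals = [lower_exp(c) for c in children]           # back-propagate first,
    return min(sum(p * v for p, v in zip(pmf, vals))  # then take the lower
               for pmf in credal_set)                 # envelope locally

# A two-step tree: in every situation the chance of "up" is only known to
# lie in [0.4, 0.6], represented here by the two extreme pmfs.
extremes  = [(0.4, 0.6), (0.6, 0.4)]
leaf_up   = (extremes, [1.0, 0.0])
leaf_down = (extremes, [1.0, -1.0])
root      = (extremes, [leaf_up, leaf_down])
print(lower_exp(root))   # close to 0.04
```

    A single backward sweep suffices: the minimization is local to each node, instead of searching over all combinations of pmfs along every path.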

    Variable Selection Bias in Classification Trees Based on Imprecise Probabilities

    Classification trees based on imprecise probabilities are an advancement of classical classification trees. The Gini index is the default splitting criterion in classical classification trees, while in classification trees based on imprecise probabilities an extension of the Shannon entropy has been introduced as the splitting criterion. However, using these empirical entropy measures as split selection criteria can bias variable selection, so that variables are preferred for features other than their information content. This bias is not eliminated by the imprecise probability approach. The source of variable selection bias for the estimated Shannon entropy, as well as possible corrections, are outlined. The variable selection performance of the biased and corrected estimators is evaluated in a simulation study. Additional results from research on variable selection bias in classical classification trees are incorporated, suggesting further investigation of alternative split selection criteria in classification trees based on imprecise probabilities.
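    To make the "extension of the Shannon entropy" concrete: under the imprecise Dirichlet model the class counts determine a credal set rather than a single distribution, and one common splitting criterion scores the maximum-entropy member of that set. The sketch below is an assumption-laden illustration (the greedy mass-spreading step is in the spirit of Abellán and Moral's algorithm, not taken from this paper):

```python
import math

def shannon(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def upper_entropy_idm(counts, s=1.0):
    """Entropy of the maximum-entropy distribution in the IDM credal set
    for observed class counts: the s pseudo-mass is spread greedily over
    the smallest counts, which makes the result as uniform as possible."""
    c = sorted(float(x) for x in counts)
    mass = float(s)
    while mass > 1e-12:
        m = c[0]
        k = sum(1 for x in c if x == m)                  # ties at the minimum
        nxt = min((x for x in c if x > m), default=math.inf)
        lift = min(mass, (nxt - m) * k) / k              # raise ties together
        c = [x + lift if x == m else x for x in c]
        mass -= lift * k
    total = sum(c)
    return shannon([x / total for x in c])

counts = [8, 1, 1]
print(shannon([x / 10 for x in counts]))   # plain plug-in entropy
print(upper_entropy_idm(counts, s=1.0))    # larger: accounts for imprecision
```

    The upper entropy always dominates the plug-in estimate, which is exactly why its small-sample behaviour as a split score deserves the bias analysis the abstract describes.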

    Command line completion: an illustration of learning and decision making using the imprecise Dirichlet model

    A method of command line completion based on probabilistic models is described. The method supplements the existing deterministic ones. The probabilistic models are developed within the context of imprecise probabilities. An imprecise Dirichlet model is used to represent the assessments about all possible completions and to allow for learning by observing the commands typed previously. Due to the use of imprecise probabilities, a partial (instead of a linear) ordering of the possible completion actions is constructed during decision making. Markov models can additionally be incorporated to take recurring sequences of commands into account.
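    The partial ordering the abstract mentions can be illustrated with the standard IDM predictive intervals. This is a minimal sketch under assumed details (the command history, the hyperparameter s = 2, and interval dominance as the comparison rule are all choices of the illustration, not of the paper):

```python
def idm_intervals(counts, s=2.0):
    """Lower/upper predictive probabilities under the imprecise Dirichlet
    model: candidate i with count n_i out of N observations gets the
    interval [n_i / (N + s), (n_i + s) / (N + s)]."""
    n = sum(counts.values())
    return {cmd: (k / (n + s), (k + s) / (n + s)) for cmd, k in counts.items()}

def dominates(a, b):
    """Interval dominance: a is surely more probable than b."""
    return a[0] > b[1]

history = {"git status": 6, "git stash": 5, "git stage": 1}
iv = idm_intervals(history)
for cmd, (lo, hi) in iv.items():
    print(f"{cmd}: [{lo:.3f}, {hi:.3f}]")

# Only a partial order results: the two frequent completions remain
# incomparable, while both dominate the rarely used one.
print(dominates(iv["git status"], iv["git stash"]))  # False
print(dominates(iv["git status"], iv["git stage"]))  # True
```

    With a precise Dirichlet model the point estimates would always yield a total order; the intervals withhold a ranking precisely where the data do not support one.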

    Reasoning with imprecise probabilities

    This special issue of the International Journal of Approximate Reasoning (IJAR) grew out of the 4th International Symposium on Imprecise Probabilities and Their Applications (ISIPTA’05), held in Pittsburgh, USA, in July 2005 (http://www.sipta.org/isipta05). The symposium was organized by Teddy Seidenfeld, Robert Nau, and Fabio G. Cozman, and brought together researchers from various branches interested in imprecision in probabilities. Research in artificial intelligence, economics, engineering, psychology, philosophy, statistics, and other fields was presented at the meeting, in a lively atmosphere that fostered communication and debate. Invited talks by Isaac Levi and Arthur Dempster enlightened the attendees, while tutorials by Gert de Cooman, Paolo Vicig, and Kurt Weichselberger introduced basic (and advanced) concepts; finally, the symposium ended with a workshop on financial risk assessment, organized by Teddy Seidenfeld.

    Monte Carlo Estimation for Imprecise Probabilities: Basic Properties

    We describe Monte Carlo methods for estimating lower envelopes of expectations of real random variables. We prove that the estimation bias is negative and that its absolute value shrinks with increasing sample size. We discuss fairly practical techniques for proving strong consistency of the estimators and use these to prove the consistency of an example in the literature. We also provide an example where consistency fails.
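    The negative bias of such estimators is easy to see in a toy version. The sketch below is an assumption of this illustration (a finite set of candidate distributions, with the lower envelope estimated by the minimum of per-distribution sample means; the paper's setting is more general): by Jensen's inequality the expected minimum of sample means lies below the minimum of the true means, and the gap shrinks with the sample size.

```python
import random

def mc_lower_envelope(samplers, f, n):
    """Estimate min_P E_P[f(X)] over a finite set of distributions by the
    minimum of per-distribution Monte Carlo sample means."""
    means = [sum(f(draw()) for _ in range(n)) / n for draw in samplers]
    return min(means)

random.seed(0)
# Two candidate distributions, both uniform on [0, 1], so the true lower
# envelope of E[X] is exactly 0.5; the estimator's mean sits below it.
samplers = [random.random, random.random]
for n in (10, 1000):
    avg = sum(mc_lower_envelope(samplers, lambda x: x, n)
              for _ in range(200)) / 200
    print(n, round(avg, 3))   # below 0.5, closer to it for larger n
```

    Running this shows the average estimate creeping up toward 0.5 as n grows, matching the abstract's claim that the absolute bias shrinks with increasing sample size.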