Arithmetic computation with probability words and numbers
Probability information is regularly communicated to experts who must fuse multiple estimates to support decision-making. Such information is often communicated verbally (e.g., “likely”) rather than with precise numeric (point) values (e.g., “.75”), yet people are not taught to perform arithmetic on verbal probabilities. We hypothesized that the accuracy and logical coherence of averaging and multiplying probabilities would be poorer when individuals receive probability information in verbal rather than numeric point format. In four experiments (N = 213, 201, 26, and 343, respectively), we manipulated probability communication format between subjects. Participants averaged and multiplied sets of four probabilities. Across experiments, arithmetic accuracy and coherence were significantly better with point than with verbal probabilities. These findings generalized between expert (intelligence analysts) and non-expert samples and held when controlling for calculator use. Experiment 4 revealed an important qualification: whereas accuracy and coherence were better among participants presented with point probabilities than with verbal probabilities, imprecise numeric probability ranges (e.g., “.70 to .80”) afforded no computational advantage over verbal probabilities. Experiment 4 also revealed that the advantage of the point format over the verbal format is partially mediated by strategy use: participants presented with point estimates were more likely to use mental computation than guesswork, and mental computation was associated with better accuracy. Our findings suggest that where computation is important, probability information should be communicated to end users with precise numeric probabilities.
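As a minimal illustration of the two arithmetic tasks described in this abstract, the following sketch averages and multiplies a set of four point probabilities; the function names and the example estimates are hypothetical, not taken from the experiments:

```python
import math

def average(probs):
    """Arithmetic mean of a set of probability estimates."""
    return sum(probs) / len(probs)

def product(probs):
    """Joint probability of independent events (multiplication task)."""
    return math.prod(probs)

estimates = [0.75, 0.80, 0.60, 0.90]
avg = average(estimates)    # ≈ 0.7625
joint = product(estimates)  # ≈ 0.324
# A basic coherence check: the product of probabilities can never
# exceed their average.
assert joint <= avg
```

With verbal inputs such as “likely”, neither operation is defined without first mapping words to numbers, which is one way to see why the point format supports mental computation.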
Efficient computation of updated lower expectations for imprecise continuous-time hidden Markov chains
We consider the problem of performing inference with imprecise continuous-time hidden Markov chains, that is, imprecise continuous-time Markov chains that are augmented with random output variables whose distribution depends on the hidden state of the chain. The prefix 'imprecise' refers to the fact that we do not consider a classical continuous-time Markov chain, but replace it with a robust extension that allows us to represent various types of model uncertainty, using the theory of imprecise probabilities. The inference problem amounts to computing lower expectations of functions on the state-space of the chain, given observations of the output variables. We develop and investigate this problem with very few assumptions on the output variables; in particular, they can be chosen to be either discrete or continuous random variables. Our main result is a polynomial runtime algorithm to compute the lower expectation of functions on the state-space at any given time-point, given a collection of observations of the output variables.
Variable Selection Bias in Classification Trees Based on Imprecise Probabilities
Classification trees based on imprecise probabilities provide an advancement of classical classification trees. The Gini Index is the default splitting criterion in classical classification trees, while in classification trees based on imprecise probabilities, an extension of the Shannon entropy has been introduced as the splitting criterion. However, the use of these empirical entropy measures as split selection criteria can lead to a bias in variable selection, such that variables are preferred for features other than their information content. This bias is not eliminated by the imprecise probability approach. The source of variable selection bias for the estimated Shannon entropy, as well as possible corrections, are outlined. The variable selection performance of the biased and corrected estimators is evaluated in a simulation study. Additional results from research on variable selection bias in classical classification trees are incorporated, implying further investigation of alternative split selection criteria in classification trees based on imprecise probabilities.
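For reference, the two split-selection criteria this abstract contrasts can be computed from empirical class frequencies as below. This is a plain sketch of the classical estimators only, not the imprecise-probability extension the paper studies; it is the plug-in nature of such estimators that gives rise to the selection bias discussed:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Empirical (plug-in) Shannon entropy of a label sample, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini_index(labels):
    """Empirical Gini impurity of a label sample."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

# A pure node has zero impurity under both criteria; a balanced binary
# node has entropy 1 bit and Gini impurity 0.5.
```

Because both are estimated from finite samples, candidate variables with many possible split points get more chances to look informative by accident, which is the bias the paper analyses.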
Credal Networks under Epistemic Irrelevance
A credal network under epistemic irrelevance is a generalised type of Bayesian network that relaxes its two main building blocks. On the one hand, the local probabilities are allowed to be partially specified. On the other hand, the assessments of independence do not have to hold exactly. Conceptually, these two features turn credal networks under epistemic irrelevance into a powerful alternative to Bayesian networks, offering a more flexible approach to graph-based multivariate uncertainty modelling. However, in practice, they have long been perceived as very hard to work with, both theoretically and computationally.

The aim of this paper is to demonstrate that this perception is no longer justified. We provide a general introduction to credal networks under epistemic irrelevance, give an overview of the state of the art, and present several new theoretical results. Most importantly, we explain how these results can be combined to allow for the design of recursive inference methods. We provide numerous concrete examples of how this can be achieved, and use these to demonstrate that computing with credal networks under epistemic irrelevance is most definitely feasible, and in some cases even highly efficient. We also discuss several philosophical aspects, including the lack of symmetry, how to deal with probability zero, the interpretation of lower expectations, the axiomatic status of graphoid properties, and the difference between updating and conditioning.
Web apps and imprecise probabilities
We propose a model for the behaviour of Web apps in the unreliable WWW. Web apps are described by orchestrations. An orchestration mimics the personal use of the Web by defining the way in which Web services are invoked. The WWW is unreliable, as poorly maintained Web sites are prone to fail. We model this source of unreliability through a probabilistic approach: we assume that each site has a probability of failure. Another source of uncertainty is traffic congestion, which can be observed as non-deterministic behaviour induced by the variability in response times. We model this non-determinism by imprecise probabilities. We develop an ex-ante normal form semantics to characterize the behaviour of finite orchestrations in the unreliable Web, and we show the existence of a normal form under this semantics for orchestrations using asymmetric parallelism.
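Under the abstract's assumption that each site fails independently with a known probability, the reliability of the two simplest orchestration patterns can be sketched as follows. The function names are hypothetical, and the paper's orchestration model, with its imprecise treatment of response times, is considerably richer than this:

```python
import math

def seq_success(fail_probs):
    """Sequential composition: the orchestration succeeds only if every
    invoked site responds, assuming independent failures."""
    return math.prod(1.0 - p for p in fail_probs)

def par_success(fail_probs):
    """Parallel composition: the orchestration succeeds if at least one
    of the sites responds."""
    return 1.0 - math.prod(fail_probs)

sites = [0.1, 0.2]   # hypothetical per-site failure probabilities
seq_success(sites)   # ≈ 0.9 * 0.8  = 0.72
par_success(sites)   # ≈ 1 - 0.02   = 0.98
```

The imprecise-probability step would replace each point failure probability with an interval, yielding lower and upper bounds on these success probabilities instead of single numbers.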
A Recursive Algorithm for Computing Inferences in Imprecise Markov Chains
We present an algorithm that can efficiently compute a broad class of inferences for discrete-time imprecise Markov chains, a generalised type of Markov chains that allows one to take into account partially specified probabilities and other types of model uncertainty. The class of inferences that we consider contains, as special cases, tight lower and upper bounds on expected hitting times, on hitting probabilities and on expectations of functions that are a sum or product of simpler ones. Our algorithm exploits the specific structure that is inherent in all these inferences: they admit a general recursive decomposition. This allows us to achieve a computational complexity that scales linearly in the number of time points on which the inference depends, instead of the exponential scaling that is typical for a naive approach.
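The recursive decomposition this abstract alludes to can be illustrated, under simplifying assumptions, for the special case of a lower expectation in an imprecise Markov chain whose transition rows are probability intervals. The backward recursion below applies a lower transition operator once per time step, so the cost is indeed linear in the number of time points; all names are hypothetical and the interval model is an assumption, as the paper treats a more general class of inferences:

```python
def lower_row_expectation(lo, hi, f):
    """Minimise sum_y p[y]*f[y] over one row's probability interval
    [lo, hi], assuming sum(lo) <= 1 <= sum(hi)."""
    p = list(lo)
    slack = 1.0 - sum(lo)
    # Greedily push the remaining probability mass onto the states with
    # the smallest values of f, respecting the upper bounds.
    for y in sorted(range(len(f)), key=lambda y: f[y]):
        add = min(hi[y] - lo[y], slack)
        p[y] += add
        slack -= add
    return sum(p[y] * f[y] for y in range(len(f)))

def lower_expectation(lo, hi, f, n_steps):
    """Backward recursion: apply the lower transition operator
    n_steps times to the function f on the state space."""
    g = list(f)
    for _ in range(n_steps):
        g = [lower_row_expectation(lo[x], hi[x], g) for x in range(len(g))]
    return g  # g[x] = lower expectation of f after n_steps, starting in x
```

For instance, with two states, interval rows `lo = [[0.3, 0.4], [0.2, 0.5]]` and `hi = [[0.6, 0.7], [0.5, 0.8]]`, and indicator `f = [0.0, 1.0]`, one step of the recursion yields the lower probabilities of ending in state 1 from each starting state.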
A statistical inference method for the stochastic reachability analysis
The main contribution of this paper is the characterization of the reachability problem associated with stochastic hybrid systems in terms of imprecise probabilities. This provides the connection between the reachability problem and Bayesian statistics. Using generalised Bayesian statistical inference, a new concept of conditional reach set probabilities is defined. Possible algorithms to compute the reach set probabilities are then derived.