Numerical and experimental study of agglomerate dispersion in polymer extrusion
A model for agglomerate dispersion in screw extruders was developed and superimposed on the flow
patterns simulated with the FIDAP software. A particle tracking algorithm with an adaptive time step was used to
follow the agglomerates' trajectories. Along this flow path, the breakup probability was estimated with a Monte Carlo
method in conjunction with the local fragmentation number. Particle size distributions and the Shannon entropy were
computed along the screw channel. The results show good qualitative agreement between model predictions and
experimental data.
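The abstract does not spell out the breakup law; purely as a hedged sketch of the general scheme it describes (the threshold form, the binary rupture rule, and all names below are assumptions, not the authors' model), a Monte Carlo breakup test driven by a local fragmentation number along a precomputed trajectory might look like:

    import math
    import random

    def breakup_probability(fa, k=1.0):
        # Hypothetical breakup law (assumption): no breakup below the
        # critical fragmentation number Fa = 1, with the probability
        # saturating toward 1 as Fa grows.
        return 0.0 if fa <= 1.0 else 1.0 - math.exp(-k * (fa - 1.0))

    def track_agglomerate(trajectory, rng=None):
        # trajectory: sequence of (hydrodynamic_stress, cohesive_strength)
        # pairs sampled along the simulated flow path.
        rng = rng or random.Random(0)
        sizes = [1.0]  # one intact agglomerate of unit size
        for stress, strength in trajectory:
            fa = stress / strength  # local fragmentation number
            next_sizes = []
            for s in sizes:
                if rng.random() < breakup_probability(fa):
                    next_sizes += [s / 2.0, s / 2.0]  # binary rupture (assumed)
                else:
                    next_sizes.append(s)
            sizes = next_sizes
        return sizes  # particle size distribution at the channel exit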
An Ordinal View of Independence with Application to Plausible Reasoning
An ordinal view of independence is studied in the framework of possibility
theory. We investigate three possible definitions of independence, of
increasing strength. One of them is the counterpart to the multiplication law
in probability theory, and the other two are based on the notion of conditional
possibility. These two have enough expressive power to support the whole of
possibility theory, and a complete axiomatization is provided for the strongest
one. Moreover, we show that weak independence is well suited to the problems of
belief change and plausible reasoning, especially to addressing the problem of
blocking of property inheritance in exception-tolerant taxonomic reasoning.
Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in
Artificial Intelligence (UAI 1994)
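For orientation, a standard min-based formulation from possibility theory (not necessarily the exact definitions adopted in the paper) contrasts the probabilistic multiplication law with its possibilistic counterpart, and defines conditional possibility à la Hisdal:

    % multiplication law in probability vs. its min-based possibilistic counterpart
    P(A \wedge B) = P(A)\,P(B)
    \quad\longleftrightarrow\quad
    \Pi(A \wedge B) = \min\bigl(\Pi(A), \Pi(B)\bigr)

    % min-based conditioning (Hisdal): \Pi(B \mid A) is the greatest solution of
    \Pi(A \wedge B) = \min\bigl(\Pi(B \mid A), \Pi(A)\bigr)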
A quantum theoretical explanation for probability judgment errors
A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction, disjunction, inverse, and conditional fallacies, as well as unpacking effects and partitioning effects. Quantum probability theory is a general and coherent theory based on a set of (von Neumann) axioms which relax some of the constraints underlying classic (Kolmogorov) probability theory. The quantum model is compared and contrasted with other competing explanations for these judgment errors, including the representativeness heuristic, the averaging model, and a memory retrieval model for probability judgments. The quantum model also provides ways to extend Bayesian, fuzzy set, and fuzzy trace theories. We conclude that quantum information processing principles provide a viable and promising new way to understand human judgment and reasoning.
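As a concrete illustration of the quantum mechanism (a minimal numerical sketch, not the authors' fitted model; the belief state, the event labels, and the 45/80-degree subspace angles are all assumptions), sequential projection onto non-commuting subspaces can make a judged conjunction exceed one of its conjuncts, which is exactly the structure of the conjunction fallacy:

    import numpy as np

    def projector(theta):
        # Rank-1 projector onto the unit vector at angle theta (radians).
        v = np.array([np.cos(theta), np.sin(theta)])
        return np.outer(v, v)

    psi = np.array([1.0, 0.0])            # illustrative belief state
    P_feminist = projector(np.pi / 4)     # hypothetical event subspace (45 deg)
    P_teller = projector(4 * np.pi / 9)   # hypothetical event subspace (80 deg)

    p_teller = np.linalg.norm(P_teller @ psi) ** 2
    p_conjunction = np.linalg.norm(P_teller @ P_feminist @ psi) ** 2

    print(round(p_teller, 2))        # ~0.03
    print(round(p_conjunction, 2))   # ~0.34: conjunction judged more likely

Because the projectors do not commute, the order of evaluation matters, which is also how models of this kind accommodate question-order effects.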
Probably Safe or Live
This paper presents a formal characterisation of safety and liveness
properties à la Alpern and Schneider for fully probabilistic systems. As in
the classical setting, it is established that any (probabilistic tree) property
is equivalent to the conjunction of a safety property and a liveness property.
A simple algorithm is provided to obtain such a property decomposition for flat
probabilistic CTL (PCTL). A safe fragment of PCTL is identified that provides a
sound and complete characterisation of safety properties. For liveness
properties, we provide two PCTL fragments, a sound one and a complete one. We
show that safety properties only have finite counterexamples, whereas liveness
properties have none. We compare our characterisation for qualitative
properties with the one for branching-time properties by Manolios and Trefler,
and present sound and complete PCTL fragments characterising the notions of
strong safety and absolute liveness coined by Sistla.
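For reference, the classical Alpern–Schneider decomposition that the probabilistic result mirrors can be stated as follows (standard material; the closure is taken in the usual topology on infinite words over the alphabet \Sigma):

    \mathit{safe}(P) = \overline{P}, \qquad
    \mathit{live}(P) = P \cup \bigl(\Sigma^\omega \setminus \overline{P}\bigr),
    \qquad
    P = \mathit{safe}(P) \cap \mathit{live}(P)

Here \mathit{safe}(P) is always a safety property and \mathit{live}(P) a liveness property, so every property is the conjunction of one of each.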
The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments
Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. But perhaps the brain does not represent probabilities explicitly, but approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of "noise" in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017; Costello & Watts, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in 2 new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample.
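Under one common reading of the model (a minimal sketch; the function name, the sample size N = 10, and the symmetric Beta prior parameter β are assumptions), the reported judgment is the posterior mean of a Beta–Bernoulli model applied to a small mental sample rather than the raw relative frequency:

    import random

    def bayesian_sampler_estimate(p_true, n_samples=10, beta=1.0, rng=None):
        # Draw a small 'mental sample' of Bernoulli(p_true) outcomes, then
        # report the posterior mean under a symmetric Beta(beta, beta) prior
        # (the 'generic prior') instead of the raw relative frequency.
        rng = rng or random.Random(0)
        successes = sum(rng.random() < p_true for _ in range(n_samples))
        return (successes + beta) / (n_samples + 2 * beta)

With β > 0 every estimate is pulled toward 1/2, producing conservatism for extreme probabilities, and on individual trials sampling variability can make a conjunction's estimate exceed a conjunct's.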
A Stronger Bell Argument for (Some Kind of) Parameter Dependence
It is widely accepted that the violation of Bell inequalities excludes local
theories of the quantum realm. This paper presents a new derivation of the
inequalities from non-trivial non-local theories and formulates a stronger
Bell argument that excludes these non-local theories as well. Taking into
account all possible theories, the conclusion of this stronger argument is
provably the strongest possible consequence of the violation of Bell
inequalities on a qualitative probabilistic level (given the usual background
assumptions). Among the forbidden theories is a subset of outcome-dependent
theories, showing that outcome dependence is not sufficient to explain a
violation of Bell inequalities. Non-local theories which can violate Bell
inequalities (among them quantum theory) are instead characterised by the fact
that at least one of the measurement outcomes in some sense (which is made
precise) probabilistically depends both on its local and on its distant
measurement setting ('parameter'). When Bell inequalities are found to be
violated, the true choice is not between 'outcome dependence' and 'parameter
dependence' but between two kinds of parameter dependence, one of them being
what is usually called 'parameter dependence'. Against the received view,
established by Jarrett and Shimony, that on a probabilistic level quantum
non-locality amounts to outcome dependence, this result confirms and makes
precise Maudlin's claim that some kind of parameter dependence is required.
Comment: forthcoming in Studies in History and Philosophy of Modern Physics
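As background to the violation the argument starts from (a standard textbook computation, independent of this paper's derivation; the settings below are the usual CHSH-optimal angles), the quantum singlet-state correlations E(a, b) = -cos(a - b) already exceed the local bound of 2:

    import math

    def E(a, b):
        # Quantum singlet-state correlation for settings (angles) a and b.
        return -math.cos(a - b)

    a1, a2 = 0.0, math.pi / 2              # Alice's two settings
    b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local (CHSH) bound of 2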
The Problem of Measure Sensitivity Redux
Fitelson (1999) demonstrates that the validity of various arguments within Bayesian confirmation theory depends on which confirmation measure is adopted. The present paper adds to the results set out in Fitelson (1999), expanding on them in two principal respects. First, it considers more confirmation measures. Second, it shows that there are important arguments within Bayesian confirmation theory and that no single confirmation measure renders them all valid. Finally, the paper reviews the ramifications that this "strengthened problem of measure sensitivity" has for Bayesian confirmation theory and discusses whether it points toward pluralism about notions of confirmation.
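For concreteness, three confirmation measures standardly discussed in this literature, e.g. by Fitelson (1999), are the difference, log-ratio, and log-likelihood measures (the selection here is illustrative, not the paper's full list):

    d(H, E) = P(H \mid E) - P(H)                          % difference measure
    r(H, E) = \log \frac{P(H \mid E)}{P(H)}               % log-ratio measure
    l(H, E) = \log \frac{P(E \mid H)}{P(E \mid \lnot H)}  % log-likelihood measure

These agree on whether E confirms H, but can disagree on comparative claims, which is what drives the measure-sensitivity results.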
Solving a Paradox of Evidential Equivalence
David Builes presents a paradox concerning how confident you should be that any given member of an infinite collection of fair coins landed heads, conditional on the information that they were all flipped and only finitely many of them landed heads. We argue that if you should have any conditional credence at all, it should be 1/2.
Rare Event Probability Learning by Normalizing Flows
A rare event is defined by a low probability of occurrence. Accurate
estimation of such small probabilities is of utmost importance across diverse
domains. Conventional Monte Carlo methods are inefficient, demanding an
exorbitant number of samples to achieve reliable estimates. Inspired by the
exact sampling capabilities of normalizing flows, we revisit this challenge and
propose normalizing flow assisted importance sampling, termed NOFIS. NOFIS
first learns a sequence of proposal distributions associated with predefined
nested subset events by minimizing KL divergence losses. Next, it estimates the
rare event probability by utilizing importance sampling in conjunction with the
last proposal. The efficacy of our NOFIS method is substantiated through
comprehensive qualitative visualizations, affirming the optimality of the
learned proposal distribution, as well as a series of quantitative experiments
encompassing distinct test cases, which highlight NOFIS's superiority over
baseline approaches.
Comment: 16 pages, 5 figures, 2 tables
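The final estimation stage is standard importance sampling; as a minimal sketch (with a fixed, shifted Gaussian proposal standing in for the learned normalizing-flow proposal, and a toy rare event {x > 4} under a standard normal nominal model, all of which are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def log_p(x):                      # nominal (target) log-density, N(0, 1)
        return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

    mu_q = 4.0                         # proposal centred on the rare event
    def log_q(x):                      # stand-in for the learned flow, N(4, 1)
        return -0.5 * (x - mu_q)**2 - 0.5 * np.log(2 * np.pi)

    x = rng.normal(mu_q, 1.0, size=100_000)   # sample from the proposal
    w = np.exp(log_p(x) - log_q(x))           # importance weights p/q
    p_hat = np.mean((x > 4.0) * w)            # rare-event probability estimate
    print(p_hat)  # ~3.2e-5, close to the exact 1 - Phi(4) = 3.167e-5

A plain Monte Carlo estimate would need on the order of millions of samples to see even a handful of hits here, which is the inefficiency the proposal-learning stage is designed to remove.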