The Goodman-Nguyen Relation within Imprecise Probability Theory
The Goodman-Nguyen relation is a partial order generalising the implication
(inclusion) relation to conditional events. As such, with precise probabilities
it both induces an agreeing probability ordering and is a key tool in a certain
common extension problem. Most previous work involving this relation is
concerned with either conditional event algebras or precise probabilities. We
investigate here its role within imprecise probability theory, first in the
framework of conditional events and then proposing a generalisation of the
Goodman-Nguyen relation to conditional gambles. It turns out that this relation
induces an agreeing ordering on coherent or C-convex conditional imprecise
previsions. In a standard inferential problem with conditional events, it lets
us determine the natural extension, as well as an upper extension. With
conditional gambles, it is useful in deriving a number of inferential
inequalities.

Comment: Published version:
http://www.sciencedirect.com/science/article/pii/S0888613X1400101
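The Goodman-Nguyen relation on conditional events can be made concrete with a small sketch. This is our own illustration, not code from the paper: events are modelled as finite sets of possible worlds, and the standard definition of the relation is assumed, namely (A|B) precedes (C|D) iff A∩B ⊆ C∩D and D∖C ⊆ B∖A.

```python
def gn_leq(A, B, C, D):
    """Goodman-Nguyen relation on conditional events:
    (A|B) <=_GN (C|D)  iff  A∩B ⊆ C∩D  and  D∖C ⊆ B∖A."""
    return (A & B) <= (C & D) and (D - C) <= (B - A)

# With B = D = universe, the relation reduces to plain inclusion A ⊆ C,
# i.e. it generalises implication between unconditional events.
U = {1, 2, 3, 4}
assert gn_leq({1}, U, {1, 2}, U)          # {1} ⊆ {1, 2}
assert not gn_leq({1, 3}, U, {1, 2}, U)   # {1, 3} ⊄ {1, 2}
assert gn_leq({1}, {1, 2}, {1}, {1, 2})   # the relation is reflexive
```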
Default Logic in a Coherent Setting
In this talk - based on the results of a forthcoming paper (Coletti,
Scozzafava and Vantaggi 2002), presented also by one of us at the Conference on
"Non Classical Logic, Approximate Reasoning and Soft-Computing" (Anacapri,
Italy, 2001) - we discuss the problem of representing default rules by means of
a suitable coherent conditional probability, defined on a family of conditional
events. An event is singled out (in our approach) by a proposition, that
is, a statement that can be either true or false; a conditional event is
consequently defined by means of two propositions and is a 3-valued
entity, the third value being (in this context) a conditional
probability.
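The 3-valued conditional event just described admits a minimal sketch (our own illustration; `None` stands in for the third value, which in the paper's setting is identified with a conditional probability rather than a mere "undefined"):

```python
def conditional_event(a, b):
    """Evaluate the conditional event a|b: it takes the truth value of a
    when the conditioning proposition b is true, and a third,
    undetermined value (None here) when b is false."""
    if not b:
        return None  # third value; in the coherent setting, P(a|b)
    return bool(a)

# a|b is true, false, or undetermined depending on the two propositions
assert conditional_event(True, True) is True
assert conditional_event(False, True) is False
assert conditional_event(True, False) is None
```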
Another Approach to Consensus and Maximally Informed Opinions with Increasing Evidence
Merging-of-opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
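Jeffrey conditioning, the update rule at the centre of the abstract above, can be sketched on a finite space (our own toy rendering, not the paper's construction): probability is reallocated so that the cells of a partition receive prescribed new weights while probabilities conditional on each cell are preserved.

```python
def jeffrey_update(p, partition, q):
    """Jeffrey conditioning on a finite space.
    p: dict mapping world -> probability;
    partition: list of disjoint sets of worlds covering the space;
    q: new probabilities for the cells, in the same order."""
    p_new = {}
    for cell, qi in zip(partition, q):
        mass = sum(p[w] for w in cell)
        for w in cell:
            # preserve P(w | E_i), rescale the cell's mass to q_i
            p_new[w] = p[w] / mass * qi
    return p_new

p = {'a': 0.2, 'b': 0.2, 'c': 0.6}
p2 = jeffrey_update(p, [{'a', 'b'}, {'c'}], [0.5, 0.5])
# the cell {a, b} had mass 0.4 and now has 0.5, split in proportion
```

Ordinary Bayesian conditioning is the special case where the new cell weights are 0 or 1, which is why results proved for Jeffrey conditioning generalise the classical merging theorems.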
Modelling and feedback control design for quantum state preparation
The goal of this article is to provide a largely self-contained introduction to the modelling of controlled quantum systems under continuous observation, and to the design of feedback controls that prepare particular quantum states. We describe a bottom-up approach, where a field-theoretic model is subjected to statistical inference and is ultimately controlled. As an example, the formalism is applied to a highly idealized interaction of an atomic ensemble with an optical field. Our aim is to provide a unified outline for the modelling, from first principles, of realistic experiments in quantum control.
Updating beliefs with incomplete observations
Currently, there is renewed interest in the problem, raised by Shafer in
1985, of updating probabilities when observations are incomplete. This is a
fundamental problem in general, and of particular interest for Bayesian
networks. Recently, Grunwald and Halpern have shown that commonly used updating
strategies fail in this case, except under very special assumptions. In this
paper we propose a new method for updating probabilities with incomplete
observations. Our approach is deliberately conservative: we make no assumptions
about the so-called incompleteness mechanism that associates complete with
incomplete observations. We model our ignorance about this mechanism by a
vacuous lower prevision, a tool from the theory of imprecise probabilities, and
we use only coherence arguments to turn prior into posterior probabilities. In
general, this new approach to updating produces lower and upper posterior
probabilities and expectations, as well as partially determinate decisions.
This is a logical consequence of the existing ignorance about the
incompleteness mechanism. We apply the new approach to the problem of
classification of new evidence in probabilistic expert systems, where it leads
to a new, so-called conservative updating rule. In the special case of Bayesian
networks constructed using expert knowledge, we provide an exact algorithm for
classification based on our updating rule, which has linear-time complexity for
a class of networks wider than polytrees. This result is then extended to the
more general framework of credal networks, where computations are often much
harder than with Bayesian nets. Using an example, we show that our rule appears
to provide a solid basis for reliable updating with incomplete observations,
when no strong assumptions about the incompleteness mechanism are
justified.

Comment: Replaced with extended version
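The conservative idea behind the abstract above can be sketched in miniature (our own toy rendering under simplifying assumptions, not the paper's algorithm): with no assumptions on the incompleteness mechanism, an incomplete observation only tells us that the complete observation lies in some set of candidates, so one reports the interval of posteriors over those candidates rather than a single number.

```python
def conservative_posterior(p_joint, target, candidates):
    """Lower and upper posterior probability of `target` when the
    complete observation is only known to lie in `candidates`.
    p_joint: dict mapping (target_value, observation) -> probability."""
    posteriors = []
    for obs in candidates:
        total = sum(pr for (t, o), pr in p_joint.items() if o == obs)
        good = sum(pr for (t, o), pr in p_joint.items()
                   if o == obs and t == target)
        posteriors.append(good / total)  # P(target | complete obs)
    return min(posteriors), max(posteriors)

# Hypothetical example: we learn only that one of two complete
# observations was made, and report the resulting posterior interval.
p = {('sick', 'fever'): 0.1, ('well', 'fever'): 0.1,
     ('sick', 'cough'): 0.3, ('well', 'cough'): 0.5}
lo, hi = conservative_posterior(p, 'sick', ['fever', 'cough'])
# lo = 0.375 (cough), hi = 0.5 (fever)
```

The interval collapses to a point exactly when the posterior does not depend on which compatible complete observation occurred, mirroring how the vacuous lower prevision produces determinate answers only where ignorance about the mechanism is harmless.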
An introduction to quantum filtering
This paper provides an introduction to quantum filtering theory. An
introduction to quantum probability theory is given, focusing on the spectral
theorem and the conditional expectation as a least squares estimate, and
culminating in the construction of Wiener and Poisson processes on the Fock
space. We describe the quantum Itô calculus and its use in the modelling of
physical systems. We use both reference probability and innovations methods to
obtain quantum filtering equations for system-probe models from quantum
optics.

Comment: 41 pages, 1 figure
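The abstract's characterisation of the conditional expectation as a least-squares estimate has a direct classical analogue that can be checked on a finite probability space (our own illustration, assuming a finite joint distribution over pairs (x, y)): E[X|Y] minimises the mean squared error among all functions of Y.

```python
def cond_exp(joint):
    """E[X | Y = y] for each y, from a finite joint distribution
    given as a dict mapping (x, y) -> probability."""
    est = {}
    for y0 in {y for (_, y) in joint}:
        mass = sum(p for (x, y), p in joint.items() if y == y0)
        est[y0] = sum(x * p for (x, y), p in joint.items()
                      if y == y0) / mass
    return est

def mse(f, joint):
    """Mean squared error of the estimator x ≈ f[y]."""
    return sum(p * (x - f[y]) ** 2 for (x, y), p in joint.items())

joint = {(0, 0): 0.25, (1, 0): 0.25, (1, 1): 0.5}
best = cond_exp(joint)   # {0: 0.5, 1: 1.0}
# any other function of Y does at least as badly, e.g. the MAP guess:
assert mse(best, joint) <= mse({0: 0.0, 1: 1.0}, joint)
```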