Relative Entropy and Inductive Inference
We discuss how the method of maximum entropy, MaxEnt, can be extended beyond
its original scope, as a rule to assign a probability distribution, to a
full-fledged method for inductive inference. The main concept is the (relative)
entropy S[p|q] which is designed as a tool to update from a prior probability
distribution q to a posterior probability distribution p when new information
in the form of a constraint becomes available. The extended method goes beyond
the mere selection of a single posterior p, but also addresses the question of
how much less probable other distributions might be. Our approach clarifies how
the entropy S[p|q] is used while avoiding the question of its meaning.
Ultimately, entropy is a tool for induction which needs no interpretation.
Finally, being a tool for generalization from special examples, we ask whether
the functional form of the entropy depends on the choice of the examples and we
find that it does. The conclusion is that there is no single general theory of
inductive inference and that alternative expressions for the entropy are
possible.

Comment: Presented at MaxEnt23, the 23rd International Workshop on Bayesian
Inference and Maximum Entropy Methods (August 3-8, 2003, Jackson Hole, WY,
USA).
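The updating scheme this abstract describes, selecting the posterior p that maximizes the relative entropy S[p|q] subject to a constraint, can be illustrated numerically. Below is a minimal sketch in Python, assuming a finite distribution and a single expectation constraint; the function names and example numbers are ours, not the paper's. The posterior takes the standard exponential-tilting form p_i ∝ q_i exp(λ f_i), with λ fixed by the constraint:

```python
import math

def maxent_update(q, f, F, lo=-50.0, hi=50.0, iters=200):
    """Update prior q to the posterior p_i ∝ q_i * exp(lam * f_i)
    satisfying the constraint sum_i p_i f_i = F (bisection on lam,
    using the fact that the constrained moment is increasing in lam)."""
    def posterior(lam):
        w = [qi * math.exp(lam * fi) for qi, fi in zip(q, f)]
        z = sum(w)
        return [wi / z for wi in w]
    def moment(lam):
        return sum(pi * fi for pi, fi in zip(posterior(lam), f))
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if moment(mid) < F:
            lo = mid
        else:
            hi = mid
    return posterior((lo + hi) / 2.0)

def rel_entropy(p, q):
    """S[p|q] = -sum_i p_i log(p_i / q_i); it is at most 0,
    with equality exactly when p == q."""
    return -sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

q = [1/3, 1/3, 1/3]   # uniform prior over a three-state toy system
f = [0.0, 1.0, 2.0]   # values of the constraint function
p = maxent_update(q, f, F=1.5)
```

Here the constraint E_p[f] = 1.5 tilts the uniform prior toward larger f, and S[p|q] < 0 for the updated p, reflecting that updating was needed.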
An introduction to DSmT
The management and combination of uncertain, imprecise, fuzzy, and even
paradoxical or highly conflicting sources of information has always been, and
still remains today, of primary importance for the development of reliable
modern information systems involving artificial reasoning. In this
introduction, we present a survey of our recent theory of plausible and
paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT), developed for
dealing with imprecise, uncertain and conflicting sources of information. We
focus our presentation on the foundations of DSmT and on its most important
rules of combination, rather than on browsing specific applications of DSmT
available in the literature. Several simple examples are given throughout this
presentation to show the efficiency and generality of this new approach.
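To give a rough flavor of the kind of combination rule discussed above, here is a sketch of the classic DSm rule on a two-element frame, where the product mass on conflicting pairs is kept on the paradoxical intersection rather than renormalized away as in Dempster's rule. The string encoding of the hyper-power set and the example masses are our assumptions, for illustration only:

```python
def dsm_combine(m1, m2, intersect):
    """Classic DSm rule of combination:
    m(C) = sum over pairs (X, Y) with X ∩ Y = C of m1(X) * m2(Y).
    Conflicting mass is transferred to the paradoxical
    intersection (here 'AnB') instead of being renormalized away."""
    out = {}
    for x, mx in m1.items():
        for y, my in m2.items():
            c = intersect[(x, y)]
            out[c] = out.get(c, 0.0) + mx * my
    return out

# Intersection table for the hyper-power set of a two-element frame
# Θ = {A, B}: its non-empty elements are A, B, A ∪ B ('AuB'),
# and A ∩ B ('AnB').
ELEMS = ['A', 'B', 'AuB', 'AnB']

def _intersect(x, y):
    if x == y:
        return x
    if 'AnB' in (x, y) or {x, y} == {'A', 'B'}:
        return 'AnB'
    # remaining case: X ∩ (A ∪ B) = X for X in {A, B}
    return x if y == 'AuB' else y

INTER = {(x, y): _intersect(x, y) for x in ELEMS for y in ELEMS}

# Two hypothetical sources that disagree sharply:
m1 = {'A': 0.6, 'B': 0.4}
m2 = {'A': 0.5, 'B': 0.5}
m = dsm_combine(m1, m2, INTER)  # conflict mass lands on 'AnB'
```

In this toy case half of the total mass ends up on the paradoxical element A ∩ B, making the amount of conflict between the two sources explicit rather than hiding it in a normalization constant.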
Hidden-Markov Program Algebra with iteration
We use Hidden Markov Models to motivate a quantitative compositional
semantics for noninterference-based security with iteration, including a
refinement- or "implements" relation that compares two programs with respect to
their information leakage; and we propose a program algebra for source-level
reasoning about such programs, in particular as a means of establishing that an
"implementation" program leaks no more than its "specification" program.
This joins two themes: we extend our earlier work, which treated iteration
but only qualitatively, by making it quantitative; and we extend our earlier
quantitative work by including iteration. We advocate stepwise refinement and
source-level program algebra, both as conceptual reasoning tools and as targets
for automated assistance. A selection of algebraic laws is given to support
this view in the case of quantitative noninterference; and it is demonstrated
on a simple iterated password-guessing attack.
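As a flavor of the kind of quantitative leakage comparison the abstract advocates, the following sketch computes the min-entropy leakage of an on-line guessing attack. This is a standard vulnerability calculation, not the paper's HMM semantics; the function name and the accept/reject observation model are our assumptions:

```python
import math

def guess_leakage(n_passwords, n_guesses):
    """Min-entropy leakage, in bits, of an attack that tries
    n_guesses distinct passwords against a secret drawn uniformly
    from n_passwords candidates, observing accept/reject each time.

    Leakage = log2(posterior vulnerability / prior vulnerability),
    where vulnerability is the probability of guessing the secret
    in one try given the observations so far."""
    prior_v = 1.0 / n_passwords
    p_hit = n_guesses / n_passwords    # some guess was accepted
    p_miss = 1.0 - p_hit               # every guess was rejected
    # A hit reveals the password outright; after misses the secret
    # is uniform over the remaining candidates.
    post_v = p_hit * 1.0 + p_miss * (1.0 / (n_passwords - n_guesses))
    return math.log2(post_v / prior_v)
```

Under this model a single guess leaks exactly one bit regardless of the size of the password space, and k distinct guesses leak log2(k + 1) bits; a refinement relation of the kind described would then require an "implementation" to leak no more than its "specification" on such attacks.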