Context-Specific Approximation in Probabilistic Inference
There is evidence that the numbers in probabilistic inference don't really
matter. This paper considers the idea that we can make a probabilistic model
simpler by making fewer distinctions. Unfortunately, the level of a Bayesian
network seems too coarse; it is unlikely that a parent will make little
difference for all values of the other parents. In this paper we consider an
approximation scheme where distinctions can be ignored in some contexts, but
not in other contexts. We elaborate on a notion of a parent context that allows
a structured context-specific decomposition of a probability distribution and
the associated probabilistic inference scheme called probabilistic partial
evaluation (Poole 1997). This paper shows a way to simplify a probabilistic
model by ignoring distinctions which have similar probabilities, a method to
exploit the simpler model, a bound on the resulting errors, and some
preliminary empirical results on simple networks.
Comment: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI 1998).
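The core idea of ignoring distinctions with similar probabilities can be sketched in a few lines. This is an illustrative simplification, not the paper's probabilistic partial evaluation scheme: the `merge_similar_rows` helper, the tolerance parameter, and the toy CPT are assumptions made for the example. Parent contexts whose conditional distributions differ by less than a tolerance share one representative row, and the largest absorbed difference gives a crude per-entry error bound.

```python
def merge_similar_rows(cpt, tol=0.05):
    """cpt: dict mapping parent-context tuples to distributions (lists).
    Merge contexts whose distributions are within tol (max absolute
    difference); return the merged table, the context-to-representative
    assignment, and the largest difference absorbed (an error bound)."""
    merged = {}          # representative context -> distribution
    assignment = {}      # original context -> representative context
    error = 0.0
    for ctx, dist in cpt.items():
        for rep, rdist in merged.items():
            diff = max(abs(p - q) for p, q in zip(dist, rdist))
            if diff <= tol:
                assignment[ctx] = rep
                error = max(error, diff)
                break
        else:
            merged[ctx] = dist
            assignment[ctx] = ctx
    return merged, assignment, error

# Toy CPT over two binary parents; values are illustrative.
cpt = {
    ("a0", "b0"): [0.90, 0.10],
    ("a0", "b1"): [0.89, 0.11],   # close to the row above: merged
    ("a1", "b0"): [0.20, 0.80],
    ("a1", "b1"): [0.21, 0.79],   # close to the row above: merged
}
merged, assignment, err = merge_similar_rows(cpt, tol=0.05)
```

Here the distinction on parent `b` is ignored in every context of `a`, collapsing four rows to two, which is the context-specific version of the simplification: the same `b` distinction could be kept in one context of `a` and dropped in another.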
Exploiting the Rule Structure for Decision Making within the Independent Choice Logic
This paper introduces the independent choice logic, and in particular the
"single agent with nature" instance of the independent choice logic, namely
ICLdt. This is a logical framework for decision making under uncertainty that extends
both logic programming and stochastic models such as influence diagrams. This
paper shows how the representation of a decision problem within the independent
choice logic can be exploited to cut down the combinatorics of dynamic
programming. One of the main problems with influence diagram evaluation
techniques is the need to optimise a decision for all values of the 'parents'
of a decision variable. In this paper we show how the rule based nature of the
ICLdt can be exploited so that we only make distinctions in the values of the
information available for a decision that will make a difference to utility.
Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI 1995).
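The combinatorial point can be illustrated with a toy decision problem; this sketch is not the ICLdt machinery, and the utility function, parent variables, and values are all made up for the example. A tabular optimizer considers every joint value of the information parents, while a rule-aware one only distinguishes the contexts in which utility actually differs.

```python
from itertools import product

# Toy utility: only info[0] (alarm vs. quiet) affects the outcome.
def utility(decision, info):
    return (5 if decision == "act" else 1) if info[0] == "alarm" else 1

# Three binary information parents of the decision.
parents = [["alarm", "quiet"], ["day", "night"], ["home", "away"]]

# Tabular approach: one optimization per full parent assignment (2**3 = 8).
tabular = {ctx: max(["act", "wait"], key=lambda d: utility(d, ctx))
           for ctx in product(*parents)}

# Rule-aware approach: only the relevant parent is distinguished (2 contexts).
rules = {v: max(["act", "wait"], key=lambda d: utility(d, (v,)))
         for v in parents[0]}
```

The two tables prescribe the same decisions, but the rule-based one grows with the number of utility-relevant distinctions rather than with the full joint space of the decision's parents.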
The use of conflicts in searching Bayesian networks
This paper discusses how conflicts (as used by the consistency-based
diagnosis community) can be adapted to be used in a search-based algorithm for
computing prior and posterior probabilities in discrete Bayesian Networks. This
is an "anytime" algorithm, that at any stage can estimate the probabilities and
give an error bound. Whereas the most popular Bayesian net algorithms exploit
the structure of the network for efficiency, we exploit probability
distributions for efficiency; this algorithm is most suited to the case with
extreme probabilities. This paper presents a solution to the inefficiencies
found in naive algorithms, and shows how the tools of the consistency-based
diagnosis community (namely conflicts) can be used effectively to improve the
efficiency. Empirical results with networks having tens of thousands of nodes
are presented.
Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).
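The anytime idea can be sketched without the conflict machinery: enumerate complete assignments of a (tiny) network in best-first order, and at any point the probability of a query lies between the mass found for it so far and that mass plus the unexplored remainder. The network, numbers, and `anytime_bounds` helper below are illustrative assumptions, not the paper's algorithm; with extreme probabilities, a few assignments cover almost all the mass, which is exactly the regime the abstract targets.

```python
import heapq
from itertools import product

# A toy two-variable network with extreme probabilities: P(A), P(B|A).
p_a = {True: 0.99, False: 0.01}
p_b_given_a = {True: {True: 0.98, False: 0.02},
               False: {True: 0.30, False: 0.70}}

def anytime_bounds(query, max_expansions):
    """Enumerate joint assignments best-first; return (lower, upper)
    bounds on P(query) after examining max_expansions assignments."""
    heap = []
    for a, b in product([True, False], repeat=2):
        mass = p_a[a] * p_b_given_a[a][b]
        heapq.heappush(heap, (-mass, (a, b)))
    found, explored = 0.0, 0.0
    for _ in range(min(max_expansions, len(heap))):
        neg, (a, b) = heapq.heappop(heap)
        mass = -neg
        explored += mass
        if query(a, b):
            found += mass
    return found, found + (1.0 - explored)

# Bound P(B = true) after examining only the two most probable assignments.
lo, hi = anytime_bounds(lambda a, b: b, max_expansions=2)
```

After two of the four assignments, the bounds already pin P(B = true) to within 0.01; conflicts, in the paper, direct this search away from regions already known to carry negligible mass.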
A Framework for Decision-Theoretic Planning I: Combining the Situation Calculus, Conditional Plans, Probability and Utility
This paper shows how we can combine logical representations of actions and
decision theory in such a manner that seems natural for both. In particular we
assume an axiomatization of the domain in terms of situation calculus, using
what is essentially Reiter's solution to the frame problem, in terms of the
completion of the axioms defining the state change. Uncertainty is handled in
terms of the independent choice logic, which allows for independent choices and
a logic program that gives the consequences of the choices. Part of these
consequences is a specification of the utility of (final) states. The robot
adopts robot plans, similar to programs in the GOLOG programming language. Within this
logic, we can define the expected utility of a conditional plan, based on the
axiomatization of the actions, the uncertainty and the utility. The 'planning'
problem is to find the plan with the highest expected utility. This is related
to recent structured representations for POMDPs; here we use stochastic
situation calculus rules to specify the state transition function and the
reward/value function. Finally we show that, with stochastic frame axioms, action representations in probabilistic STRIPS are exponentially larger than the representation proposed here.
Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996).
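The expected utility of a conditional plan can be computed recursively over its branch structure. The sketch below is a stand-in for the paper's situation-calculus formulation: a plan is either a terminal utility or a stochastic branch over outcomes, with probabilities multiplied down each path. The plan encoding and numbers are assumptions for the example.

```python
def expected_utility(plan, prob=1.0):
    """plan is ("utility", value) or ("branch", [(p, subplan), ...]).
    Returns the probability-weighted utility of the plan tree."""
    kind = plan[0]
    if kind == "utility":
        return prob * plan[1]
    # A stochastic branch: sum over outcomes, threading the path probability.
    return sum(expected_utility(sub, prob * p) for p, sub in plan[1])

# "Sense; with prob 0.8 reach utility 10, with prob 0.2 reach utility -2."
plan = ("branch", [(0.8, ("utility", 10)), (0.2, ("utility", -2))])
eu = expected_utility(plan)
```

Planning, in the abstract's sense, is then a search over such trees for the one maximizing this quantity.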
Constraint Processing in Lifted Probabilistic Inference
First-order probabilistic models combine the representational power of
first-order logic with that of graphical models. There is an ongoing effort to design
lifted inference algorithms for first-order probabilistic models. We analyze
lifted inference from the perspective of constraint processing and, through
this viewpoint, we analyze and compare existing approaches and expose their
advantages and limitations. Our theoretical results show that the wrong choice
of constraint processing method can lead to an exponential increase in
computational complexity. Our empirical tests confirm the importance of
constraint processing in lifted inference. This is the first theoretical and
empirical study of constraint processing in lifted inference.
Comment: Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI 2009).
Towards Solving the Multiple Extension Problem: Combining Defaults and Probabilities
The multiple extension problem arises frequently in diagnostic and default
inference. That is, we can often use any of a number of sets of defaults or
possible hypotheses to explain observations or make predictions. In default
inference, some extensions seem to be simply wrong and we use qualitative
techniques to weed out the unwanted ones. In the area of diagnosis, however,
the multiple explanations may all seem reasonable, however improbable. Choosing
among them is a matter of quantitative preference. Quantitative preference
works well in diagnosis when knowledge is modelled causally. Here we suggest a
framework that combines probabilities and defaults in a single unified
framework that retains the semantics of diagnosis as construction of
explanations from a fixed set of possible hypotheses. We can then compute
probabilities incrementally as we construct explanations. Here we describe a
branch and bound algorithm that maintains a set of all partial explanations
while exploring a most promising one first. A most probable explanation is
found first if explanations are partially ordered.
Comment: Appears in Proceedings of the Third Conference on Uncertainty in Artificial Intelligence (UAI 1987).
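The branch-and-bound idea described above can be sketched as a best-first search over partial explanations. Because extending a partial explanation can only lower its probability, the probability of a partial explanation upper-bounds all of its completions, so the first complete explanation popped from the queue is a most probable one. The hypothesis sets and probabilities below are illustrative, not from the paper.

```python
import heapq

# Independent hypothesis sets: a complete explanation picks one from each.
hypothesis_sets = [
    [("flu", 0.6), ("cold", 0.4)],
    [("cough", 0.7), ("no_cough", 0.3)],
]

def most_probable_explanation():
    """Best-first search: the queue holds (negated probability, partial
    explanation); the first complete explanation popped is most probable."""
    heap = [(-1.0, ())]
    while heap:
        neg_p, partial = heapq.heappop(heap)
        if len(partial) == len(hypothesis_sets):
            return partial, -neg_p
        for hyp, p in hypothesis_sets[len(partial)]:
            heapq.heappush(heap, (neg_p * p, partial + (hyp,)))

explanation, prob = most_probable_explanation()
```

Leaving the queue in place after the first answer yields the remaining explanations in decreasing probability, matching the abstract's incremental computation of probabilities as explanations are constructed.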
Efficient Inference in Large Discrete Domains
In this paper we examine the problem of inference in Bayesian Networks with
discrete random variables that have very large or even unbounded domains. For
example, in a domain where we are trying to identify a person, we may have
variables that have as domains the set of all names, the set of all postal
codes, or the set of all credit card numbers. We cannot just have big tables of
the conditional probabilities, but need compact representations. We provide an
inference algorithm, based on variable elimination, for belief networks
containing both large domain and normal discrete random variables. We use
intensional (i.e., in terms of procedures) and extensional (in terms of listing
the elements) representations of conditional probabilities and of the
intermediate factors.
Comment: Appears in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI 2003).
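The intensional/extensional distinction can be made concrete with a small sketch; the class names and the name-matching probability below are assumptions for illustration, not the paper's representation. An extensional factor lists its values in a table, which is impossible for the set of all names; an intensional factor is a procedure, so only the values actually queried are ever computed.

```python
class ExtensionalFactor:
    """Explicit table of values: fine for small, normal discrete domains."""
    def __init__(self, table):
        self.table = table
    def __call__(self, value):
        return self.table[value]

class IntensionalFactor:
    """A procedure standing in for a table: works for very large or
    unbounded domains, since values are computed on demand."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, value):
        return self.fn(value)

# P(observed_name matches true_name): a procedure over all name pairs;
# the probabilities are illustrative.
def name_match_prob(pair):
    observed, true = pair
    return 0.95 if observed == true else 1e-6

small = ExtensionalFactor({"yes": 0.8, "no": 0.2})
large = IntensionalFactor(name_match_prob)
```

Because both kinds of factor answer the same point queries, a variable-elimination loop can multiply and sum them without caring which representation backs each one.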
What is an Optimal Diagnosis?
Within diagnostic reasoning there have been a number of proposed definitions
of a diagnosis, and thus of the most likely diagnosis, including most probable
posterior hypothesis, most probable interpretation, most probable covering
hypothesis, etc. Most of these approaches assume that the most likely diagnosis
must be computed, and that a definition of what should be computed can be made
a priori, independent of what the diagnosis is used for. We argue that the
diagnostic problem, as currently posed, is incomplete: it does not consider how
the diagnosis is to be used, or the utility associated with the treatment of
the abnormalities. In this paper we analyze several well-known definitions of
diagnosis, showing that the different definitions of the most likely diagnosis
have different qualitative meanings, even given the same input data. We argue
that the most appropriate definition of (optimal) diagnosis needs to take into
account the utility of outcomes and what the diagnosis is used for.
Comment: Appears in Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI 1990).
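A toy example makes the argument concrete: the most probable diagnosis need not lead to the best treatment once utilities are considered. The posteriors and utilities below are fabricated for illustration only.

```python
# Posterior over diseases, and utility[treatment][disease]: the outcome
# utility of each treatment if that disease is the true state.
posteriors = {"disease_a": 0.7, "disease_b": 0.3}
utility = {
    "treat_a": {"disease_a": 100, "disease_b": -50},
    "treat_b": {"disease_a": 50,  "disease_b": 90},
}

def best_treatment(posteriors, utility):
    """Pick the treatment maximizing expected utility under the posterior."""
    eu = {t: sum(posteriors[d] * u for d, u in outcomes.items())
          for t, outcomes in utility.items()}
    return max(eu, key=eu.get), eu

choice, eu = best_treatment(posteriors, utility)
```

Here disease_a is the most probable diagnosis, yet the riskier treat_a has lower expected utility than treat_b, so a definition of "optimal diagnosis" that stops at posterior probability would prescribe the wrong action.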
Building a Stochastic Dynamic Model of Application Use
Many intelligent user interfaces employ application and user models to
determine the user's preferences, goals and likely future actions. Such models
require application analysis, adaptation and expansion. Building and
maintaining such models adds a substantial amount of time and labour to the
application development cycle. We present a system that observes the interface
of an unmodified application and records users' interactions with the
application. From a history of such observations we build a coarse state space
of observed interface states and actions between them. To refine the space, we
hypothesize sub-states based upon the histories that led users to a given
state. We evaluate the information gain of possible state splits, varying the
length of the histories considered in such splits. In this way, we
automatically produce a stochastic dynamic model of the application and of how
it is used. To evaluate our approach, we present models derived from real-world
application usage data.
Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI 2000).
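The split-evaluation step can be sketched as an information-gain computation: the entropy of the next-action distribution at a coarse state, minus the visit-weighted entropy after partitioning its visits by the history that led there. The data and the split function are illustrative assumptions, not from the paper.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of the empirical label distribution."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def info_gain(next_actions, histories, split):
    """split maps each history to a candidate sub-state; the gain is the
    drop in next-action entropy from applying the split."""
    base = entropy(next_actions)
    groups = {}
    for act, hist in zip(next_actions, histories):
        groups.setdefault(split(hist), []).append(act)
    remainder = sum(len(g) / len(next_actions) * entropy(g)
                    for g in groups.values())
    return base - remainder

# Users arriving via the menu always click "save"; via the toolbar, "print".
next_actions = ["save", "save", "print", "print"]
histories = [("menu",), ("menu",), ("toolbar",), ("toolbar",)]
gain = info_gain(next_actions, histories, split=lambda h: h[-1])
```

Varying the length of history the split function inspects, as the abstract describes, trades richer sub-states against the data available to estimate each one.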
An Anytime Algorithm for Decision Making under Uncertainty
We present an anytime algorithm which computes policies for decision problems
represented as multi-stage influence diagrams. Our algorithm constructs
policies incrementally, starting from a policy which makes no use of the
available information. The incremental process constructs policies that
include more of the information available to the decision maker at each step.
While the process converges to the optimal policy, our approach is designed for
situations in which computing the optimal policy is infeasible. We provide
examples of the process on several large decision problems, showing that, for
these examples, the process constructs valuable (but sub-optimal) policies
before the optimal policy would be available by traditional methods.
Comment: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI 1998).
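The incremental idea can be illustrated on a one-stage toy problem: first evaluate the best policy that ignores the available observation, then refine it to condition on the observation, improving expected utility. The decision problem below is an invented stand-in, not one of the paper's influence diagrams.

```python
# P(state), P(obs | state), and utility[action][state]; numbers illustrative.
p_state = {"good": 0.5, "bad": 0.5}
p_obs = {"good": {"pos": 0.9, "neg": 0.1},
         "bad":  {"pos": 0.2, "neg": 0.8}}
utility = {"go":   {"good": 10, "bad": -10},
           "stay": {"good": 0,  "bad": 0}}

def eu_constant(action):
    """Expected utility of a policy that ignores the observation."""
    return sum(p_state[s] * utility[action][s] for s in p_state)

def eu_policy(policy):
    """Expected utility of a policy mapping observation -> action."""
    total = 0.0
    for s in p_state:
        for o, po in p_obs[s].items():
            total += p_state[s] * po * utility[policy[o]][s]
    return total

step0 = max(eu_constant(a) for a in utility)      # no information used
step1 = eu_policy({"pos": "go", "neg": "stay"})   # observation used
```

Each refinement step is cheap and yields a usable policy, so the process can be stopped whenever the available computation runs out, which is the anytime property the abstract claims.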