Symbolic Decision Theory and Autonomous Systems
The ability to reason under uncertainty and with incomplete information is a
fundamental requirement of decision support technology. In this paper we argue
that the concentration on theoretical techniques for the evaluation and
selection of decision options has distracted attention from many of the wider
issues in decision making. Although numerical methods of reasoning under
uncertainty have strong theoretical foundations, they are representationally
weak and only deal with a small part of the decision process. Knowledge based
systems, on the other hand, offer greater flexibility but have not been
accompanied by a clear decision theory. We describe here work which is under
way towards providing a theoretical framework for symbolic decision procedures.
A central proposal is an extended form of inference which we call
argumentation: reasoning for and against decision options from generalised
domain theories. The approach has been successfully used in several decision
support applications, but it is argued that a comprehensive decision theory
must cover autonomous decision making, where the agent can formulate questions
as well as take decisions. A major theoretical challenge for this theory is to
capture the idea of reflection to permit decision agents to reason about their
goals, what they believe and why, and what they need to know or do in order to
achieve their goals.
Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
On the Relation between Kappa Calculus and Probabilistic Reasoning
We study the connection between kappa calculus and probabilistic reasoning in
diagnosis applications. Specifically, we abstract a probabilistic belief
network for diagnosing faults into a kappa network and compare the ordering of
faults computed using both methods. We show that, at least for the example
examined, the orderings of faults coincide as long as all the causal relations
in the original probabilistic network are taken into account. We also provide a
formal analysis of some network structures where the two methods will differ.
Both kappa rankings and infinitesimal probabilities have been used extensively
to study default reasoning and belief revision, but little has been done to
exploit the connection outlined above. This is partly because the
relation between kappa and probability calculi assumes that probabilities are
arbitrarily close to one (or zero). The experiments in this paper investigate
this relation when this assumption is not satisfied. The reported results have
important implications for the use of kappa rankings to enhance the knowledge
engineering of uncertainty models.
Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
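The abstraction the paper studies can be illustrated with a minimal sketch (the eps value, fault names, and probabilities below are invented for illustration, not taken from the paper): a probability is mapped to its kappa rank, the integer order of magnitude of the probability as a power of a small eps, and faults are then ordered by rank.

```python
import math

def kappa(p, eps=0.1):
    """Kappa rank of an event: the order of magnitude of its
    probability as a power of eps (rank 0 = unsurprising,
    larger rank = more surprising)."""
    if p <= 0.0:
        return math.inf  # impossible events get infinite rank
    return round(math.log(p, eps))

# Two hypothetical fault probabilities from a diagnostic network.
p_fault_a, p_fault_b = 0.01, 0.001
ranks = {"fault_a": kappa(p_fault_a), "fault_b": kappa(p_fault_b)}
# Here the kappa ordering mirrors the probabilistic ordering:
# fault_a (rank 2) is ranked more plausible than fault_b (rank 3).
```

The paper's point is precisely that this agreement can break down when the probabilities are not close to zero or one.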
Refining Reasoning in Qualitative Probabilistic Networks
In recent years there has been a spate of papers describing systems for
probabilistic reasoning which do not use numerical probabilities. In some
cases the simple set of values used by these systems makes it impossible to
predict how a probability will change or which hypothesis is most likely given
certain evidence. This paper concentrates on such situations and suggests a
number of ways in which they may be resolved by refining the representation.
Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI1995)
Dealing with the Fuzziness of Human Reasoning
Reasoning, the most important operation of the human brain, is characterized
by a degree of fuzziness. In the present paper we construct a fuzzy model of
the reasoning process which, by calculating the possibilities of all possible
individual profiles, gives a quantitative/qualitative view of behaviour during
that process, and we use the centroid defuzzification technique to measure
reasoning skills. We also present a number of classroom experiments
illustrating our results in practice.
Comment: 16 pages, 3 figures, 1 table. arXiv admin note: substantial text overlap with arXiv:1212.261
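The centroid defuzzification step mentioned in the abstract has a standard discrete form, sketched here with made-up membership values (the score grid and memberships are illustrative, not the paper's data):

```python
def centroid(xs, mus):
    """Centre-of-gravity defuzzification: the membership-weighted
    average of the crisp values xs."""
    return sum(x * m for x, m in zip(xs, mus)) / sum(mus)

# Hypothetical fuzzy assessment of a reasoning skill on a 0-4 scale.
scores      = [0, 1, 2, 3, 4]
memberships = [0.0, 0.5, 1.0, 0.5, 0.0]  # triangular, peaked at 2
crisp_skill = centroid(scores, memberships)  # 2.0 by symmetry
```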
Generating Decision Structures and Causal Explanations for Decision Making
This paper examines two related problems that are central to developing an
autonomous decision-making agent, such as a robot. Both problems require
generating structured representations from a database of unstructured
declarative knowledge that includes many facts and rules that are irrelevant in
the problem context. The first problem is how to generate a well structured
decision problem from such a database. The second problem is how to generate,
from the same database, a well-structured explanation of why some possible
world occurred. In this paper it is shown that the problem of generating the
appropriate decision structure or explanation is intractable without
introducing further constraints on the knowledge in the database. The paper
proposes that the problem search space can be constrained by adding knowledge
to the database about causal relations between events. In order to determine
the causal knowledge that would be most useful, causal theories for
deterministic and indeterministic universes are proposed. A program that uses
some of these causal constraints has been used to generate explanations about
faulty plans. The program shows the expected increase in efficiency as the
causal constraints are introduced.
Comment: Appears in Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence (UAI1988)
Qualitative MDPs and POMDPs: An Order-Of-Magnitude Approximation
We develop a qualitative theory of Markov Decision Processes (MDPs) and
Partially Observable MDPs that can be used to model sequential decision making
tasks when only qualitative information is available. Our approach is based
upon an order-of-magnitude approximation of both probabilities and utilities,
similar to epsilon-semantics. The result is a qualitative theory that has close
ties with the standard maximum-expected-utility theory and is amenable to
general planning techniques.
Comment: Appears in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI2002)
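The order-of-magnitude idea can be sketched as follows (a simplification of mine, not the paper's formal theory): if each outcome's probability behaves like eps**k and its cost like eps**c, then the expected cost, a sum of products, is dominated by the term with the smallest total exponent, so expectation collapses to a min-plus combination of ranks.

```python
def qualitative_expected_cost(outcomes):
    """Order-of-magnitude stand-in for expected cost: each outcome
    is a (surprise_rank, cost_rank) pair, and the dominant term of
    sum(eps**surprise * eps**cost) is the one minimising their sum."""
    return min(k + c for k, c in outcomes)

# Hypothetical action with two outcomes: a likely mild one and a
# surprising severe one (ranks invented for illustration).
action = [(0, 2), (3, 0)]  # (surprise_rank, cost_rank)
rank = qualitative_expected_cost(action)  # 2: the likely outcome dominates
```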
Practical Uses of Belief Functions
We present examples where the use of belief functions provided sound and
elegant solutions to real life problems. These are essentially characterized by
'missing' information. The examples deal with 1) discriminant analysis using a
learning set where classes are only partially known; 2) an information
retrieval system handling inter-document relationships; 3) the combination of
data from sensors competent on partially overlapping frames; 4) the
determination of the number of sources in a multi-sensor environment by
studying the inter-sensor contradiction. The purpose of the paper is to report
on such applications where the use of belief functions provides a convenient
tool to handle 'messy' data problems.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
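As a concrete illustration of the belief-function machinery behind the sensor-combination examples above, here is a minimal sketch of Dempster's rule of combination (the frame and the mass values are invented for illustration):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal
    elements and renormalise away the conflicting (empty) mass."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sensors over the frame {'ok', 'fault'}; each commits part of
# its mass to 'fault' and leaves the rest on the whole frame.
m1 = {frozenset({'fault'}): 0.6, frozenset({'ok', 'fault'}): 0.4}
m2 = {frozenset({'fault'}): 0.5, frozenset({'ok', 'fault'}): 0.5}
combined = dempster_combine(m1, m2)  # mass on 'fault' rises to 0.8
```

Leaving mass on the whole frame is how belief functions represent the partially known classes and overlapping frames the abstract mentions.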
Action Networks: A Framework for Reasoning about Actions and Change under Uncertainty
This work proposes action networks as a semantically well-founded framework
for reasoning about actions and change under uncertainty. Action networks add
two primitives to probabilistic causal networks: controllable variables and
persistent variables. Controllable variables allow the representation of
actions as directly setting the value of specific events in the domain, subject
to preconditions. Persistent variables provide a canonical model of persistence
according to which both the state of a variable and the causal mechanism
dictating its value persist over time unless intervened upon by an action (or
its consequences). Action networks also allow different methods for quantifying
the uncertainty in causal relationships, which go beyond traditional
probabilistic quantification. This paper describes both recent results and work
in progress.
Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
Conditional Plausibility Measures and Bayesian Networks
A general notion of algebraic conditional plausibility measures is defined.
Probability measures, ranking functions, possibility measures, and (under the
appropriate definitions) sets of probability measures can all be viewed as
defining algebraic conditional plausibility measures. It is shown that the
technology of Bayesian networks can be applied to algebraic conditional
plausibility measures.
Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
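The "same network technology, different algebra" point can be sketched for a two-node network A -> B: probabilities combine conditionals multiplicatively, while kappa-style ranking functions combine them additively, yet the chain-rule factorisation has the same shape in both (the variable names and tables below are invented):

```python
# Chain rule for a two-node network A -> B under two plausibility
# algebras: probabilities combine with *, ranking functions with +.
def joint_prob(pA, pB_given_A, a, b):
    return pA[a] * pB_given_A[a][b]

def joint_kappa(kA, kB_given_A, a, b):
    return kA[a] + kB_given_A[a][b]

# Invented illustrative tables for a switch A and a light level B.
pA = {"on": 0.9, "off": 0.1}
pB = {"on": {"hi": 0.8, "lo": 0.2}, "off": {"hi": 0.3, "lo": 0.7}}
kA = {"on": 0, "off": 1}
kB = {"on": {"hi": 0, "lo": 1}, "off": {"hi": 1, "lo": 0}}
```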
Probabilistic Temporal Reasoning with Endogenous Change
This paper presents a probabilistic model for reasoning about the state of a
system as it changes over time, both due to exogenous and endogenous
influences. Our target domain is a class of medical prediction problems that
are neither so urgent as to preclude careful diagnosis nor so slow-moving as
to allow arbitrary testing and treatment options. In these domains there is
typically enough time to gather information about the patient's state and
consider alternative diagnoses and treatments, but the temporal interaction
between the timing of tests, treatments, and the course of the disease must
also be considered. Our approach is to elicit a qualitative structural model of
the patient from a human expert---the model identifies important attributes,
the way in which exogenous changes affect attribute values, and the way in
which the patient's condition changes endogenously. We then elicit
probabilistic information to capture the expert's uncertainty about the effects
of tests and treatments and the nature and timing of endogenous state changes.
This paper describes the model in the context of a problem in treating vehicle
accident trauma, and suggests a method for solving the model based on the
technique of sequential imputation. A complementary goal of this work is to
understand and synthesize a disparate collection of research efforts all using
the name "probabilistic temporal reasoning." This paper analyzes related work
and points out essential differences between our proposed model and other
approaches in the literature.
Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI1995)