Reasoning About Beliefs and Actions Under Computational Resource Constraints
Although many investigators affirm a desire to build reasoning systems that
behave consistently with the axiomatic basis defined by probability theory and
utility theory, limited resources for engineering and computation can make a
complete normative analysis impossible. We attempt to move discussion beyond
the debate over the scope of problems that can be handled effectively to cases
where it is clear that there are insufficient computational resources to
perform an analysis deemed complete. Under these conditions, we stress the
importance of considering the expected costs and benefits of applying
alternative approximation procedures and heuristics for computation and
knowledge acquisition. We discuss how knowledge about the structure of user
utility can be used to control value tradeoffs for tailoring inference to
alternative contexts. We address the notion of real-time rationality, focusing
on the application of knowledge about the expected timewise-refinement
abilities of reasoning strategies to balance the benefits of additional
computation with the costs of acting with a partial result. We discuss the
benefits of applying decision theory to control the solution of difficult
problems given limitations and uncertainty in reasoning resources.
Comment: Appears in Proceedings of the Third Conference on Uncertainty in Artificial Intelligence (UAI 1987).
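The central tradeoff in this abstract, weighing the benefit of additional computation against the cost of acting with a partial result, can be sketched with a toy stopping rule. The quality profile and delay cost below are hypothetical placeholders, not the paper's models:

```python
# Sketch of a real-time rationality stopping rule, assuming a hypothetical
# concave timewise-refinement profile and a linear cost of delayed action.

def quality(t):
    """Assumed refinement profile: diminishing returns over time."""
    return 1.0 - 0.5 ** t  # result quality in [0, 1)

def time_cost(t):
    """Assumed linear cost of delaying action by t steps."""
    return 0.05 * t

def net_value(t):
    """Comprehensive value: quality of the result minus cost of delay."""
    return quality(t) - time_cost(t)

def best_stopping_time(horizon=50):
    """Compute until one more step no longer improves net value."""
    t = 0
    while t < horizon and net_value(t + 1) > net_value(t):
        t += 1
    return t

t_star = best_stopping_time()
```

Under these assumptions the reasoner refines its answer only while the marginal gain in quality exceeds the marginal cost of waiting, then acts.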
A Synthesis of Logical and Probabilistic Reasoning for Program Understanding and Debugging
We describe the integration of logical and uncertain reasoning methods to
identify the likely source and location of software problems. To date, software
engineers have had few tools for identifying the sources of error in complex
software packages. We describe a method for diagnosing software problems
through combining logical and uncertain reasoning analyses. Our preliminary
results suggest that such methods can be of value in directing the attention of
software engineers to paths of an algorithm that have the highest likelihood of
harboring a programming error.
Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).
The Myth of Modularity in Rule-Based Systems
In this paper, we examine the concept of modularity, an often cited advantage
of the rule-based representation methodology. We argue that the notion of
modularity consists of two distinct concepts which we call syntactic modularity
and semantic modularity. We argue that when reasoning under certainty, it is
reasonable to regard the rule-based approach as both syntactically and
semantically modular. However, we argue that in the case of plausible
reasoning, rules are syntactically modular but are rarely semantically modular.
To illustrate this point, we examine a particular approach for managing
uncertainty in rule-based systems called the MYCIN certainty factor model. We
formally define the concept of semantic modularity with respect to the
certainty factor model and discuss logical consequences of the definition. We
show that the assumption of semantic modularity imposes strong restrictions on
rules in a knowledge base. We argue that such restrictions are rarely valid in
practical applications. Finally, we suggest how the concept of semantic
modularity can be relaxed in a manner that makes it appropriate for plausible
reasoning.
Comment: Appears in Proceedings of the Second Conference on Uncertainty in Artificial Intelligence (UAI 1986).
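For readers unfamiliar with the model examined above, the MYCIN parallel-combination function merges certainty factors from rules that share a conclusion; the sketch below is the standard textbook form of that rule, not code from the paper:

```python
def combine_cf(x, y):
    """MYCIN parallel combination of two certainty factors in [-1, 1]."""
    if x >= 0 and y >= 0:
        return x + y * (1 - x)
    if x < 0 and y < 0:
        return x + y * (1 + x)
    # Mixed signs: conflicting evidence partially cancels.
    return (x + y) / (1 - min(abs(x), abs(y)))
```

The combination is commutative, so updates can be applied rule by rule in any order, which is why belief updating looks modular. The paper's point is that the probabilistic meaning of each rule's certainty factor is not similarly independent of the rest of the knowledge base.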
Utility-Based Abstraction and Categorization
We take a utility-based approach to categorization. We construct
generalizations about events and actions by considering losses associated with
failing to distinguish among detailed distinctions in a decision model. The
utility-based methods transform detailed states of the world into more abstract
categories composed of disjunctions of the states. We show how we can cluster
distinctions into groups of distinctions at progressively higher levels of
abstraction, and describe rules for decision making with the abstractions. The
techniques introduce a utility-based perspective on the nature of concepts, and
provide a means of simplifying decision models used in automated reasoning
systems. We demonstrate the techniques by describing the capabilities and
output of TUBA, a program for utility-based abstraction.
Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).
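The clustering criterion described above can be sketched as follows: the loss of abstracting a set of states into one category is the expected utility forfeited by committing to a single best action for the whole category. All names and numbers below are illustrative, not TUBA's implementation:

```python
def group_eu(group, probs, utils, action):
    """Expected utility of taking one action across a group of states."""
    return sum(probs[s] * utils[(s, action)] for s in group)

def merge_loss(group, probs, utils, actions):
    """Expected utility lost by abstracting the group into one category."""
    per_state = sum(probs[s] * max(utils[(s, a)] for a in actions)
                    for s in group)
    best = max(actions, key=lambda a: group_eu(group, probs, utils, a))
    return per_state - group_eu(group, probs, utils, best)

probs = {"s1": 1 / 3, "s2": 1 / 3, "s3": 1 / 3}
utils = {("s1", "a"): 1.0, ("s1", "b"): 0.0,
         ("s2", "a"): 0.9, ("s2", "b"): 0.1,
         ("s3", "a"): 0.0, ("s3", "b"): 1.0}
actions = ["a", "b"]

# States that share an optimal action abstract for free; dissimilar ones
# incur a loss, which the abstraction procedure would try to keep small.
free = merge_loss({"s1", "s2"}, probs, utils, actions)
costly = merge_loss({"s2", "s3"}, probs, utils, actions)
```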
Exploiting System Hierarchy to Compute Repair Plans in Probabilistic Model-based Diagnosis
The goal of model-based diagnosis is to isolate causes of anomalous system
behavior and recommend inexpensive repair actions in response. In general,
precomputing optimal repair policies is intractable. To date, investigators
addressing this problem have explored approximations that either impose
restrictions on the system model (such as a single fault assumption) or compute
an immediate best action with limited lookahead. In this paper, we develop a
formulation of repair in model-based diagnosis and a repair algorithm that
computes optimal sequences of actions. This optimal approach is costly but can
be applied to precompute an optimal repair strategy for compact systems. We
show how we can exploit a hierarchical system specification to make this
approach tractable for large systems. When introducing hierarchy, we also
consider the tradeoff between simply replacing a component and decomposing it
to repair its subcomponents. The hierarchical repair algorithm is suitable for
off-line precomputation of an optimal repair strategy. A modification of the
algorithm takes advantage of an iterative deepening scheme to trade off
inference time and the quality of the computed strategy.
Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI 1995).
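The replace-versus-decompose tradeoff mentioned above can be sketched as a recursion over a component hierarchy. The cost structure and fault probabilities below are hypothetical, and the sketch ignores the diagnostic updating the paper's algorithm performs:

```python
def repair_cost(comp):
    """Minimum expected cost: replace the component outright, or decompose
    it and repair whichever subcomponent is at fault (probabilities assumed
    known and summing to one over the listed subcomponents)."""
    replace = comp["replace_cost"]
    subs = comp.get("subs")
    if not subs:
        return replace
    decompose = comp["decompose_cost"] + sum(
        p * repair_cost(sub) for p, sub in subs)
    return min(replace, decompose)

# Hypothetical board: expensive to replace, cheap to open and fix inside.
board = {"replace_cost": 100.0, "decompose_cost": 10.0,
         "subs": [(0.8, {"replace_cost": 20.0}),
                  (0.2, {"replace_cost": 5.0})]}
```

Here decomposing costs 10 + 0.8 * 20 + 0.2 * 5 = 27 in expectation, so it beats the 100-unit replacement.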
Conversation as Action Under Uncertainty
Conversations abound with uncertainties of various kinds. Treating
conversation as inference and decision making under uncertainty, we propose a
task-independent, multimodal architecture for supporting robust continuous
spoken dialog called Quartet. We introduce four interdependent levels of
analysis, and describe representations, inference procedures, and decision
strategies for managing uncertainties within and between the levels. We
highlight the approach by reviewing interactions between a user and two spoken
dialog systems developed using the Quartet architecture: Presenter, a prototype
system for navigating Microsoft PowerPoint presentations, and the Bayesian
Receptionist, a prototype system for dealing with tasks typically handled by
front desk receptionists at the Microsoft corporate campus.
Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI 2000).
Time-Dependent Utility and Action Under Uncertainty
We discuss representing and reasoning with knowledge about the time-dependent
utility of an agent's actions. Time-dependent utility plays a crucial role in
the interaction between computation and action under bounded resources. We
present a semantics for time-dependent utility and describe the use of
time-dependent information in decision contexts. We illustrate our discussion
with examples of time-pressured reasoning in Protos, a system constructed to
explore the ideal control of inference by reasoners with limited abilities.
Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI 1991).
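A minimal sketch of the idea of time-dependent utility: the value of an outcome depends on when it is achieved. The exponential decay below is a hypothetical profile; the paper's semantics admits arbitrary time-dependence:

```python
def time_dependent_utility(base_value, t, half_life=10.0):
    """Hypothetical profile: an outcome's value halves every `half_life`
    units of delay before the agent acts on it."""
    return base_value * 0.5 ** (t / half_life)

# Acting now on a partial result can beat a better answer delivered late.
act_now = time_dependent_utility(60.0, 0.0)     # partial result, no delay
act_late = time_dependent_utility(100.0, 20.0)  # ideal result, long delay
```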
Time-Critical Reasoning: Representations and Application
We review the problem of time-critical action and discuss a reformulation
that shifts knowledge acquisition from the assessment of complex temporal
probabilistic dependencies to the direct assessment of time-dependent utilities
over key outcomes of interest. We dwell on a class of decision problems
characterized by the centrality of diagnosing and reacting in a timely manner
to pathological processes. We motivate key ideas in the context of trauma-care
triage and transportation decisions.
Comment: Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI 1997).
Perception, Attention, and Resources: A Decision-Theoretic Approach to Graphics Rendering
We describe work to control graphics rendering under limited computational
resources by taking a decision-theoretic perspective on perceptual costs and
computational savings of approximations. The work extends earlier work on the
control of rendering by introducing methods and models for computing the
expected cost associated with degradations of scene components. The expected
cost is computed by considering the perceptual cost of degradations and a
probability distribution over the attentional focus of viewers. We review the
critical literature describing findings on visual search and attention, discuss
the implications of the findings, and introduce models of expected perceptual
cost. Finally, we discuss policies that harness information about the expected
cost of scene components.
Comment: Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI 1997).
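The expected-cost computation described above can be sketched as follows: the expected perceptual cost of degrading a scene component is its perceptual cost weighted by the probability that the viewer is attending to it. The greedy policy and all numbers below are illustrative assumptions, not the paper's models:

```python
# Each candidate degradation: (p_attend, perceptual_cost, render_savings).

def expected_cost(plan):
    """Expected perceptual cost of a set of degradations."""
    return sum(p * cost for p, cost, _ in plan)

def choose_degradations(candidates, budget):
    """Greedily degrade components with the least expected perceptual cost
    per unit of rendering time saved, within a perceptual-cost budget."""
    chosen, spent = [], 0.0
    for p, cost, savings in sorted(candidates,
                                   key=lambda c: c[0] * c[1] / c[2]):
        if spent + p * cost <= budget:
            chosen.append((p, cost, savings))
            spent += p * cost
    return chosen

candidates = [(0.7, 10.0, 5.0),   # likely attended: degrade last
              (0.1, 10.0, 5.0),   # rarely attended: cheap to degrade
              (0.2, 10.0, 8.0)]   # modest attention, large savings
plan = choose_degradations(candidates, budget=3.5)
```

With this budget the policy degrades the two components viewers are unlikely to attend to and leaves the likely focus of attention at full quality.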
Reasoning about the Value of Decision-Model Refinement: Methods and Application
We investigate the value of extending the completeness of a decision model
along different dimensions of refinement. Specifically, we analyze the expected
value of quantitative, conceptual, and structural refinement of decision
models. We illustrate the key dimensions of refinement with examples. The
analyses of value of model refinement can be used to focus the attention of an
analyst or an automated reasoning system on extensions of a decision model
associated with the greatest expected value.
Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).
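One way to sketch the expected value of refining a decision model is as a value-of-information computation: compare the expected utility of deciding after the refinement resolves an uncertain distinction with that of the best decision available now. The two-state example below is hypothetical and ignores the cost of performing the refinement:

```python
def eu(action, probs, utils):
    """Expected utility of an action under the current model."""
    return sum(p * utils[(s, action)] for s, p in probs.items())

def value_of_refinement(probs, utils, actions):
    """EU of deciding after the refinement reveals the state, minus the EU
    of the best decision under the unrefined model."""
    eu_now = max(eu(a, probs, utils) for a in actions)
    eu_refined = sum(p * max(utils[(s, a)] for a in actions)
                     for s, p in probs.items())
    return eu_refined - eu_now

probs = {"ok": 0.7, "fault": 0.3}
utils = {("ok", "go"): 10.0, ("ok", "stop"): 2.0,
         ("fault", "go"): -20.0, ("fault", "stop"): 2.0}
evr = value_of_refinement(probs, utils, ["go", "stop"])
```

Refinements can then be ranked by this value, focusing attention on the extensions worth their modeling cost.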