Handling Defeasibilities in Action Domains
Representing defeasibility is an important issue in common sense reasoning.
In reasoning about action and change, this issue becomes more difficult because
domain- and action-related defeasible information may conflict with general
inertia rules. Furthermore, different types of defeasible information may also
interfere with each other during the reasoning. In this paper, we develop a
prioritized logic programming approach to handle defeasibilities in reasoning
about action. In particular, we propose three action languages, {\cal AT}^{0},
{\cal AT}^{1} and {\cal AT}^{2}, which handle three types of defeasibility in
action domains: defeasible constraints, defeasible observations, and
actions with defeasible and abnormal effects, respectively. Each language with a
higher superscript can be viewed as an extension of the language with a lower
superscript. These action languages inherit the simple syntax of the {\cal A}
language, but their semantics is developed in terms of transition systems whose
transition functions are defined by prioritized logic programs. By
illustrating various examples, we show that our approach provides a
powerful mechanism to handle various defeasibilities in temporal prediction and
postdiction. We also investigate semantic properties of these three action
languages and characterize classes of action domains that admit more
desirable solutions in reasoning about action within the underlying action
languages.

Comment: 49 pages, 1 figure, to appear in the journal Theory and Practice of
Logic Programming
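The abstract above describes resolving conflicts between defeasible effect rules and inertia by assigning priorities. The following is a minimal sketch of that idea, not the paper's formal semantics: the rule names, fluents, and numeric priority scheme are illustrative assumptions.

```python
# Sketch: priority-based conflict resolution between defeasible rules.
# Higher-priority rules override lower-priority ones on the same fluent.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conclusion: tuple  # (fluent, truth value)
    priority: int      # higher number = higher priority (our assumption)

def resolve(applicable):
    """For each fluent, keep the conclusion of the highest-priority rule."""
    best = {}
    for r in applicable:
        fluent = r.conclusion[0]
        if fluent not in best or r.priority > best[fluent].priority:
            best[fluent] = r
    return {f: r.conclusion[1] for f, r in best.items()}

# After a `shoot` action, the inertia rule says `alive` persists, while the
# higher-priority effect rule concludes it becomes false; the effect wins.
rules = [
    Rule("inertia_alive", ("alive", True), priority=0),
    Rule("effect_shoot", ("alive", False), priority=1),
]
print(resolve(rules))  # {'alive': False}
```

This mirrors, in a very simplified form, how a prioritized logic program lets action effects defeat the general inertia rules mentioned in the abstract.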
Argumentation-based methods for multi-perspective cooperative planning
Through cooperation, agents can transcend their individual capabilities and achieve
goals that would be unattainable otherwise. Existing multiagent planning work considers
each agent’s action capabilities, but does not account for distributed knowledge
and the incompatible views agents may have of the planning domain. These divergent
views can result from faulty sensors, local and incomplete knowledge, or outdated
information, or may arise simply because each agent has drawn different inferences
and its beliefs are not aligned.
This thesis is concerned with Multi-Perspective Cooperative Planning (MPCP), the
problem of synthesising a plan for multiple agents which share a goal but hold different
views about the state of the environment and the specification of the actions they can
perform to affect it. Reaching agreement on a mutually acceptable plan is important,
since cautious autonomous agents will not subscribe to plans that they individually
believe to be inappropriate or even potentially hazardous.
We specify the MPCP problem by adapting standard set-theoretic planning notation.
Based on argumentation theory, we define a new notion of plan acceptability and
introduce a novel formalism that combines defeasible logic programming with the
situation calculus, enabling the succinct axiomatisation of contradictory planning
theories and supporting deductive argumentation-based inference.
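The thesis defines its own notion of plan acceptability; as a generic illustration of argumentation-based acceptance, here is a sketch of computing the grounded extension of a Dung-style abstract argumentation framework. The arguments A, B, C and the attack relation below are invented for illustration only.

```python
# Sketch: grounded semantics for an abstract argumentation framework.
# An argument is accepted if it is defended (all its attackers are attacked)
# by the growing set, iterating from the empty set to a fixpoint.

def grounded_extension(args, attacks):
    """attacks is a set of (attacker, target) pairs."""
    def defended(a, s):
        # Every attacker b of a must itself be attacked by some member of s.
        return all(any((c, b) in attacks for c in s)
                   for (b, t) in attacks if t == a)
    s = set()
    while True:
        nxt = {a for a in args if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# C attacks B, B attacks A: C is unattacked, so C is in, and C defends A.
args = {"A", "B", "C"}
attacks = {("C", "B"), ("B", "A")}
print(sorted(grounded_extension(args, attacks)))  # ['A', 'C']
```

In an MPCP setting, one can think of the arguments as plan steps or beliefs put forward by different agents, with attacks capturing the contradictory views described above.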
Our work bridges research in argumentation, reasoning about action and classical
planning. We present practical methods for reasoning and planning with MPCP
problems that exploit the inherent structure of planning domains and efficient planning
heuristics. Finally, in order to allow distribution of tasks, we introduce a family of
argumentation-based dialogue protocols that enable the agents to reach agreement on
plans in a decentralised manner.
Building on the concrete foundation of deductive argumentation, we analytically
investigate important properties of our methods, demonstrating the correctness
of the proposed planning mechanisms. We also empirically evaluate the efficiency of our algorithms
in benchmark planning domains. Our results illustrate that our methods can
synthesise acceptable plans within reasonable time in large-scale domains, while
maintaining a level of expressiveness comparable to that of modern automated
planning systems.
Representing Defeasible Constraints and Observations in Action Theories
We propose a general formulation of reasoning about action based on prioritized logic programming, where defeasibility handling is explicitly taken into account. In particular, we consider two types of defeasibility in our problem domains: defeasible constraints and defeasible observations. By introducing the notion of priority into the action formulation, we show that our approach provides a unified framework to handle these defeasibilities in temporal prediction and postdiction reasoning with incomplete information.

Key words: temporal reasoning, commonsense reasoning, knowledge representation, reasoning about action

1 Introduction

Representing defeasibility is an important issue in commonsense reasoning. In reasoning about action, this issue becomes more difficult because domain- and action-related defeasible information may conflict with general inertia rules -- rules that are necessary to specify what persists across actions and are themselves usually defeasible. Furthermore, different t..