141 research outputs found
Anytime planning for agent behaviour
For an agent to act successfully in a complex and dynamic environment (such as a computer game), it must have a method of generating future behaviour that meets the demands of its environment. One such method is anytime planning. This paper discusses the problems and benefits associated with making a planning system work under the anytime paradigm, and introduces Anytime-UMCP (A-UMCP), an anytime version of the UMCP hierarchical task network (HTN) planner [Erol, 1995]. It also covers the abilities an agent must have in order to execute plans produced by an anytime hierarchical task network planner.
SHOP2: An HTN Planning System
The SHOP2 planning system received one of the awards for distinguished
performance in the 2002 International Planning Competition. This paper
describes the features of SHOP2 which enabled it to excel in the competition,
especially those aspects of SHOP2 that deal with temporal and metric planning
domains.
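The abstracts above do not include the decomposition algorithm itself; as a rough illustration of the HTN planning style that SHOP-family planners use (left-to-right expansion of tasks into subtasks until only primitive operators remain), here is a minimal sketch. The toy travel domain and all task, method, and operator names are illustrative assumptions, not SHOP2's actual domain language or implementation.

```python
# Minimal sketch of HTN-style forward decomposition, loosely modeled on how
# SHOP-family planners expand tasks left to right. Domain names (tasks,
# methods, operators) are illustrative, not taken from SHOP2 itself.

def htn_plan(state, tasks, operators, methods):
    """Return a list of primitive actions achieving `tasks`, or None."""
    if not tasks:
        return []
    head, rest = tasks[0], tasks[1:]
    name = head[0]
    if name in operators:
        # Primitive task: apply the operator if its precondition holds.
        new_state = operators[name](dict(state), *head[1:])
        if new_state is None:               # precondition failed
            return None
        tail = htn_plan(new_state, rest, operators, methods)
        return None if tail is None else [head] + tail
    # Compound task: try each method's decomposition in turn.
    for method in methods.get(name, []):
        subtasks = method(state, *head[1:])
        if subtasks is None:
            continue
        plan = htn_plan(state, subtasks + rest, operators, methods)
        if plan is not None:
            return plan
    return None

# Toy travel domain (hypothetical).
def walk(state, a, b):
    if state["loc"] == a:
        state["loc"] = b
        return state
    return None                              # not at the start location

def travel_method(state, a, b):
    return [("walk", a, b)]                  # one alternative: just walk

operators = {"walk": walk}
methods = {"travel": [travel_method]}
print(htn_plan({"loc": "home"}, [("travel", "home", "park")],
               operators, methods))
# [('walk', 'home', 'park')]
```

The key property this sketch shares with the planners described above is that domain knowledge lives in the methods: the planner never searches over raw action sequences, only over the decompositions the domain author provided.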
Knowledge-Based Task Structure Planning for an Information Gathering Agent
An effective solution to model and apply planning domain knowledge for deliberation and action in probabilistic, agent-oriented control is presented. Specifically, the addition of a task structure planning component and supporting components to an agent-oriented architecture and agent implementation is described. For agent control in risky or uncertain environments, an approach and method of goal reduction to task plan sets and schedules of action is presented. Additionally, some issues related to component-wise, situation-dependent control of a task planning agent that schedules its tasks separately from planning them are motivated and discussed.
Hierarchical Goal Networks: Formalisms and Algorithms for Planning and Acting
In real-world applications of AI and automation such as in
robotics, computer game playing and web-services, agents need to make
decisions in unstructured environments that are open-world, dynamic and
partially observable. In the AI and Robotics research communities in
particular, there is much interest in equipping robots to operate with
minimal human intervention in diverse scenarios such as in manufacturing
plants, homes, hospitals, etc. Enabling agents to operate in these
environments requires advanced planning and acting capabilities, some of
which are not well supported by the current state of the art automated
planning formalisms and algorithms. To address this problem, in my thesis I
propose a new planning formalism that addresses some of the inadequacies in
current planning frameworks, and a suite of planning and acting algorithms
that operate under this planning framework.
The main contributions of this thesis are:
- Hierarchical Goal Network (HGN) Planning Formalism. This planning
formalism combines aspects (and therefore harnesses advantages) of Classical
Planning and Hierarchical Task Network (HTN) Planning, two of the most
prominent planning formalisms currently in use. In particular, HGN planning
algorithms, while retaining the efficiency and scalability advantages of
HTNs, also allow the incorporation of heuristics and other reasoning techniques
from Classical Planning.
- Planning Algorithms. Goal Decomposition Planner (GDP) and the Goal
Decomposition with Landmarks (GoDeL) planner are two HGN planning algorithms
that combine hierarchical decomposition with classical planning heuristics
to outperform state-of-the-art HTN planners like SHOP and SHOP2.
- Integration with Robotics. The Combined HGN and Motion Planning
(CHaMP) algorithm integrates GoDeL with low-level motion and manipulation
planning algorithms in Robotics to generate plans directly executable by
robots.
Given the need for autonomous agents to operate in open, dynamic and
unstructured environments and the obvious need for high-level deliberation
capabilities to enable intelligent behavior, the planning-and-acting systems
that are developed as part of this thesis may provide unique insights into
ways to realize these systems in the real world.
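The abstract describes HGN planning only at a high level. As a hedged sketch of the core idea (goals rather than tasks are decomposed, and goals already satisfied in the current state are skipped, which is what lets classical-planning reasoning plug in), here is a minimal illustration. The door/room domain, the action table, and the method are hypothetical and much simpler than GDP or GoDeL; there is no loop checking or heuristic guidance here.

```python
# Minimal sketch of HGN-style goal decomposition (in the spirit of GDP/GoDeL,
# not their actual algorithms). Goals already true in the state are skipped;
# an unsatisfied goal is either achieved by a primitive action whose
# precondition holds, or refined into a subgoal sequence by a method.

def hgn_plan(state, goals, actions, methods):
    """Return a list of action names achieving `goals` in order, or None."""
    if not goals:
        return []
    g, rest = goals[0], goals[1:]
    if g in state:                           # goal already satisfied: skip
        return hgn_plan(state, rest, actions, methods)
    if g in actions:
        precond, name = actions[g]
        if precond <= state:                 # precondition holds
            tail = hgn_plan(state | {g}, rest, actions, methods)
            if tail is not None:
                return [name] + tail
    for method in methods.get(g, []):        # refine g into subgoals
        subgoals = method(state)
        if not subgoals:
            continue
        plan = hgn_plan(state, subgoals + rest, actions, methods)
        if plan is not None:
            return plan
    return None

# Hypothetical domain: entering a room requires the door to be open first.
actions = {
    "door_open": (set(), "open_door"),
    "inside":    ({"door_open"}, "enter"),
}
methods = {"inside": [lambda s: ["door_open", "inside"]]}
print(hgn_plan(set(), ["inside"], actions, methods))
# ['open_door', 'enter']
```

Because the leaves of the hierarchy are goals (state conditions) rather than opaque task names, a classical-planning heuristic or landmark reasoner can evaluate them directly, which is the advantage the thesis attributes to the HGN formalism.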
A Review of Symbolic, Subsymbolic and Hybrid Methods for Sequential Decision Making
The field of Sequential Decision Making (SDM) provides tools for solving
Sequential Decision Processes (SDPs), where an agent must make a series of
decisions in order to complete a task or achieve a goal. Historically, two
competing SDM paradigms have vied for supremacy. Automated Planning (AP)
proposes to solve SDPs by performing a reasoning process over a model of the
world, often represented symbolically. Conversely, Reinforcement Learning (RL)
proposes to learn the solution of the SDP from data, without a world model, and
represent the learned knowledge subsymbolically. In the spirit of
reconciliation, we provide a review of symbolic, subsymbolic and hybrid methods
for SDM. We cover both methods for solving SDPs (e.g., AP, RL and techniques
that learn to plan) and for learning aspects of their structure (e.g., world
models, state invariants and landmarks). To the best of our knowledge, no other
review in the field provides the same scope. As an additional contribution, we
discuss what properties an ideal method for SDM should exhibit and argue that
neurosymbolic AI is the current approach which most closely resembles this
ideal method. Finally, we outline several proposals to advance the field of SDM
via the integration of symbolic and subsymbolic AI.
Using explanation structures to speed up local-search-based planning
Master's thesis (Master of Engineering)
- …