Situation-Dependent Learning for Interleaved Planning and Robot Execution

By Karen Zita Haigh


This dissertation presents Rogue, a complete integrated planning, execution, and learning robotic agent. Physical domains are notoriously hard to model completely and correctly. Robotics researchers have developed learning algorithms to successfully tune operational parameters. Instead of improving low-level actuator control, our work focuses on the planning stages of the system. The thesis provides techniques to directly process execution experience, and to learn to improve planning and execution performance. Rogue accepts multiple, asynchronous task requests, and interleaves task planning with real-world robot execution. This dissertation describes how Rogue prioritizes tasks, suspends and interrupts tasks, and opportunistically achieves compatible tasks. We present how Rogue interleaves planning and execution to accomplish its tasks, monitoring and compensating for failure and changes in the environment. Rogue analyzes execution experience to detect patterns in the environment that affect plan quality. Rogue extracts learning opportunities from massive, continual, probabilistic execution traces. Rogue then correlates these learning opportunities with environmental features, thus detecting patterns in the form of situation-dependent rules. We present the development and use of these rules for two very different planners: the path planner and the task planner. We present empirical data to show the effectiveness of Rogue's novel learning approach. Our learning approach is applicable to any planner operating in any physical domain. Our empirical results show that situation-dependent rules effectively improve the planner's model of the environment, thus allowing the planner to predict and avoid failures, to respond to a changing environment, and to create plans that are tailored to the real world. Physical systems should adapt to changing situations and absorb any information that will improve their performance.
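The core idea of situation-dependent rules (correlating execution outcomes with environmental features so the planner's cost model reflects the real world) can be illustrated with a minimal sketch. This is not Rogue's actual implementation: the trace format, feature names, and the mean-cost/threshold rule criterion are assumptions made for this example.

```python
# Illustrative sketch of situation-dependent rule learning (assumed
# data format, not the dissertation's actual algorithm): correlate
# observed execution cost with environmental features, and keep a rule
# for any feature value whose mean cost deviates from the planner's
# default estimate by more than `margin`.
from collections import defaultdict

def learn_rules(traces, default_cost, margin=0.5):
    """Each trace is (features, observed_cost), where features is a dict
    such as {"corridor": "A", "time_of_day": "noon"} (hypothetical names).
    Returns {(feature, value): learned_cost} for significant deviations."""
    buckets = defaultdict(list)
    for features, cost in traces:
        for key, value in features.items():
            buckets[(key, value)].append(cost)
    rules = {}
    for condition, costs in buckets.items():
        mean_cost = sum(costs) / len(costs)
        if abs(mean_cost - default_cost) > margin:
            rules[condition] = mean_cost
    return rules

def situation_cost(rules, features, default_cost):
    """Planner-side lookup: use the learned cost when a rule matches the
    current situation, otherwise fall back to the default model."""
    matches = [rules[(k, v)] for k, v in features.items() if (k, v) in rules]
    return max(matches) if matches else default_cost
```

For example, if traversing corridor A repeatedly proved expensive while corridor B matched the default model, only corridor A would acquire a rule, and the path planner would thereafter route around it when alternatives are cheaper.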

Year: 1998
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
