
    Expanding Task Diversity in Explanation-Based Interactive Task Learning

    The possibility of having artificial agents that can interact with humans and learn completely new tasks through instruction and demonstration is an exciting prospect. This is the goal of the emerging research area of Interactive Task Learning. Solving this problem requires integrating many capabilities across AI to create general robot learners that can operate in a variety of environments. One particular challenge is that the space of possible tasks is extremely large and varied. Developing approaches that cover this space is difficult, made more so by having to learn from a limited number of high-quality examples given through interaction with a teacher. In this dissertation, we identify three major dimensions of task complexity (diverse types of actions, task formulations, and task modifiers) and describe extensions that demonstrate greater learning capabilities along each dimension than previous work. First, we extend the representations and learning mechanism for innate tasks so the agent can learn tasks that use many different types of actions beyond physical object manipulation, such as communication and mental operations. Second, we implement a novel goal-graph representation that supports both goal-based and procedural tasks. Thus the instructor can formulate a task as achieving a goal and let the agent use planning to execute it, or can formulate the task as executing a procedure, or sequence of steps, when a goal is not easy to define. This also allows interesting cases of a task that blends elements of a procedure and a goal. Third, we add support for learning subtasks with various modifying clauses, such as temporal constraints, conditions, or looping structures. Crucially, we show that the agent can learn and generalize a canonical version of a task and then combine it with these various modifiers within a task hierarchy without requiring additional instruction. This is done in the context of Rosie, an agent implemented within the Soar cognitive architecture that can learn completely new tasks in one shot through situated interactive instruction. By leveraging explanation-based generalization and domain knowledge, the agent quickly learns new hierarchical tasks, including their structure, arguments, goals, execution policies, and task decompositions, through natural language instruction. It has been used with various robotic platforms, though most of the learning demonstrations and evaluations in this work use a simulated mobile robot in a multi-room, partially observable environment. In the end, we show that the agent can combine all of these extensions while learning complex hierarchical tasks that cover extended periods of time and demonstrate significant flexibility.

    PhD
    Computer Science & Engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/168026/1/mininger_1.pd
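
    To make the abstract's distinction between goal-based and procedural task formulations, and the composition of a canonical task with modifiers, more concrete, the sketch below gives a minimal illustrative rendering in Python. All names and structures here (Task, Modifier, with_modifier, the example states) are hypothetical illustrations of the ideas only; they are not the Rosie/Soar representation, which is built from a goal graph and Soar productions rather than Python objects.

    from __future__ import annotations
    from dataclasses import dataclass, field
    from typing import Callable, Optional

    State = dict  # hypothetical stand-in for the agent's observed world state

    @dataclass
    class Task:
        """A learned task, formulated either as a goal or as a procedure."""
        name: str
        # Goal-based formulation: achieve this condition, letting the agent plan how.
        goal: Optional[Callable[[State], bool]] = None
        # Procedural formulation: execute this sequence of subtask steps.
        procedure: list[Task] = field(default_factory=list)
        # Optional modifying clauses (conditions, loops, temporal constraints).
        modifiers: list[Modifier] = field(default_factory=list)

    @dataclass
    class Modifier:
        """A modifying clause attached to a subtask, e.g. "if", "until", "while"."""
        kind: str
        condition: Callable[[State], bool]

    def with_modifier(canonical: Task, modifier: Modifier) -> Task:
        """Reuse an already-learned canonical task under a new modifier,
        without requiring any additional instruction for the task itself."""
        return Task(name=f"{modifier.kind} {canonical.name}",
                    procedure=[canonical],
                    modifiers=[modifier])

    # Hypothetical usage: "deliver the package" learned once as a goal-based task,
    # then reused inside "deliver the package until the shelf is empty".
    deliver = Task(name="deliver the package",
                   goal=lambda s: s.get("package_location") == "office")
    repeat_delivery = with_modifier(
        deliver,
        Modifier(kind="until", condition=lambda s: s.get("shelf_empty", False)))

    The point of the sketch is only the structural one made in the abstract: a single canonical task can carry either a goal or a procedure, and modifiers wrap around it in a task hierarchy rather than requiring the task to be re-taught.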