    Towards an Indexical Model of Situated Language Comprehension for Cognitive Agents in Physical Worlds

    We propose a computational model of situated language comprehension based on the Indexical Hypothesis that generates meaning representations by translating amodal linguistic symbols to modal representations of beliefs, knowledge, and experience external to the linguistic system. This Indexical Model incorporates multiple information sources, including perceptions, domain knowledge, and short-term and long-term experiences during comprehension. We show that exploiting diverse information sources can alleviate ambiguities that arise from contextual use of underspecified referring expressions and unexpressed argument alternations of verbs. The model is being used to support linguistic interactions in Rosie, an agent implemented in Soar that learns from instruction.
    Comment: Advances in Cognitive Systems 3 (2014)
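
    To make the idea concrete, here is a minimal Python sketch of indexical reference resolution: an ambiguous expression like "the red one" is grounded by intersecting candidates from perception, domain knowledge, and recent interaction history. All names here (Percept, resolve_reference, the toy data) are illustrative assumptions, not Rosie's actual API.

        from dataclasses import dataclass

        @dataclass
        class Percept:
            obj_id: str
            properties: set

        def resolve_reference(words, perception, domain_knowledge, recent_referents):
            """Ground a referring expression against non-linguistic information."""
            # 1. Perception: keep objects whose perceived properties match the words.
            candidates = {p.obj_id for p in perception if words & p.properties}
            # 2. Domain knowledge: a word may name a known category; admit its members.
            for word in words:
                candidates |= {p.obj_id for p in perception
                               if p.obj_id in domain_knowledge.get(word, set())}
            # 3. Short-term experience: if still ambiguous, prefer the referent
            #    mentioned most recently in the interaction (a recency heuristic).
            if len(candidates) > 1:
                for ref in reversed(recent_referents):
                    if ref in candidates:
                        return ref
            return next(iter(candidates), None)

        scene = [Percept("block1", {"red", "large"}), Percept("mug1", {"red"})]
        knowledge = {"container": {"mug1"}}
        history = ["mug1"]
        print(resolve_reference({"red"}, scene, knowledge, history))        # mug1 (recency)
        print(resolve_reference({"container"}, scene, knowledge, history))  # mug1 (knowledge)

    The recency fallback in step 3 is one simple stand-in for the short-term experience source the abstract mentions; the actual model draws on richer context.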

    Expanding Task Diversity in Explanation-Based Interactive Task Learning

    The possibility of having artificial agents that can interact with humans and learn completely new tasks through instruction and demonstration is an exciting prospect. This is the goal of the emerging research area of Interactive Task Learning. Solving this problem requires integrating many capabilities across AI to create general robot learners that can operate in a variety of environments. One particular challenge is that the space of possible tasks is extremely large and varied. Developing approaches that cover this space is a difficult challenge, made more so by having to learn from a limited number of high-quality examples given through interaction with a teacher. In this dissertation, we identify three major dimensions of task complexity (diverse types of actions, task formulations, and task modifiers) and describe extensions that demonstrate greater learning capabilities for each dimension than previous work.

    First, we extend the representations and learning mechanism for innate tasks so the agent can learn tasks that utilize many different types of actions beyond physical object manipulation, such as communication and mental operations. Second, we implement a novel goal-graph representation that supports both goal-based and procedural tasks. Thus the instructor can formulate a task as achieving a goal and let the agent use planning to execute it, or can formulate the task as executing a procedure, or sequence of steps, when it is not easy to define a goal. This also allows interesting cases of tasks that blend elements of a procedure and a goal. Third, we add support for learning subtasks with various modifying clauses, such as temporal constraints, conditions, or looping structures. Crucially, we show that the agent can learn and generalize a canonical version of a task and then combine it with these various modifiers within a task hierarchy without requiring additional instruction.

    This is done in the context of Rosie -- an agent implemented within the Soar cognitive architecture that can learn completely new tasks in one shot through situated interactive instruction. By leveraging explanation-based generalization and domain knowledge, the agent quickly learns new hierarchical tasks, including their structure, arguments, goals, execution policies, and task decompositions, through natural language instruction. It has been used with various robotic platforms, though most of the learning demonstrations and evaluations in this work use a simulated mobile robot in a multi-room, partially observable environment. In the end, we show that the agent can combine all of these extensions while learning complex hierarchical tasks that cover extended periods of time and demonstrate significant flexibility.

    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/168026/1/mininger_1.pd
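
    The goal-graph idea invites a small data-structure sketch. The Python below is a hypothetical rendering, assuming our own TaskNode name and fields rather than the dissertation's code: a node may carry a goal test, an explicit step sequence, or both, plus optional modifier clauses.

        from dataclasses import dataclass, field
        from typing import Callable, Optional

        @dataclass
        class TaskNode:
            name: str
            # Goal formulation: a test over world state that planning should achieve.
            goal: Optional[Callable[[dict], bool]] = None
            # Procedural formulation: an explicit sequence of substeps to execute.
            steps: list = field(default_factory=list)
            # Modifier clauses, e.g. "if the door is open" / "until the room is clear".
            condition: Optional[Callable[[dict], bool]] = None
            repeat_while: Optional[Callable[[dict], bool]] = None

        # A blended task: a fixed procedural first step, then a goal-based substep
        # whose execution is left to the planner.
        patrol_room = TaskNode(
            name="patrol-room",
            steps=[
                TaskNode(name="announce-arrival"),
                TaskNode(name="clear-room",
                         goal=lambda state: not state.get("intruder-present", False)),
            ],
        )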

    Interactively Learning a Blend of Goal-Based and Procedural Tasks

    Agents that can learn new tasks through interactive instruction can utilize goal information to search for and learn flexible policies. This approach can be resilient to variations in initial conditions or issues that arise during execution. However, if a task is not easily formulated as achieving a goal, or if the agent lacks sufficient domain knowledge for planning, other methods are required. We present a hybrid approach to interactive task learning that can learn both goal-oriented and procedural tasks, and mixtures of the two, from human natural language instruction. We describe this approach, walk through two examples of learned tasks, and outline the space of tasks that the system can learn. We show that our approach can learn a variety of goal-oriented and procedural tasks from a single example and is robust to different amounts of domain knowledge.
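
    One way to picture the hybrid execution described here, reusing the hypothetical TaskNode sketch above, is an executor that plans when a goal and sufficient domain knowledge are available and otherwise follows the instructed steps. Every function name below is an illustrative assumption, not the system's actual interface.

        def execute_primitive(name, state):
            # Placeholder for an innate action; a real agent would act in the world.
            print(f"primitive action: {name}")
            return state

        def plan_to(goal, state):
            # Placeholder for a planner; a real agent would search its operator
            # space for an action sequence that makes goal(state) true.
            print("planning toward goal...")
            return state

        def execute(task, state, can_plan):
            """Run a TaskNode, planning when possible, following steps otherwise."""
            if task.condition is not None and not task.condition(state):
                return state                      # conditional modifier not satisfied
            if task.goal is not None and can_plan:
                return plan_to(task.goal, state)  # goal-oriented: let planning decide
            if task.steps:
                for step in task.steps:           # procedural: follow instructed steps
                    state = execute(step, state, can_plan)
                return state
            return execute_primitive(task.name, state)

        execute(patrol_room, {"intruder-present": True}, can_plan=True)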

    A Demonstration of Compositional, Hierarchical Interactive Task Learning

    We present a demonstration of the interactive task learning agent Rosie, in which it learns the task of patrolling a simulated barracks environment through situated natural language instruction. In doing so, it builds a sizable task hierarchy composed of both innate and learned tasks, tasks formulated as achieving a goal or following a procedure, tasks with conditional branches and loops, and tasks involving communicative and mental actions. Rosie is implemented in the Soar cognitive architecture and represents tasks using a declarative task network, which it compiles into procedural rules through chunking. This is key to allowing it to learn from a single training episode and generalize quickly.
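
    The compile-on-first-use flavor of chunking can be loosely mimicked in Python: the first run interprets the declarative task network deliberately, then caches a flattened procedure so later runs bypass interpretation. This is an analogy under our own toy names, not Soar's actual chunking mechanism.

        compiled = {}  # task name -> flattened action sequence (the "chunked" policy)

        def run_task(name, network):
            if name in compiled:                 # procedural rule already learned
                return compiled[name]
            actions = []                         # deliberate one-time interpretation
            for step in network.get(name, [name]):
                if step in network:
                    actions += run_task(step, network)
                else:
                    actions.append(step)
            compiled[name] = actions             # cache the compiled procedure
            return actions

        network = {"patrol": ["goto-room", "scan", "report"],
                   "scan": ["look-left", "look-right"]}
        print(run_task("patrol", network))
        # -> ['goto-room', 'look-left', 'look-right', 'report']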