
    DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self

    This paper introduces a cognitive architecture for a humanoid robot to engage in proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, several state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot's behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used by naive users.

    Optimising robot personalities for symbiotic interaction

    The Expressive Agents for Symbiotic Education and Learning (EASEL) project will explore human-robot symbiotic interaction (HRSI) with the aim of developing an understanding of symbiosis over long-term tutoring interactions. The EASEL system will be built upon an established and neurobiologically grounded architecture, Distributed Adaptive Control (DAC). Here we present the design of an initial experiment in which our facially expressive humanoid robot will interact with children at a public exhibition. We discuss the range of measurements we will employ to explore the effects our robot's expressive ability has on interaction with children during HRSI, with the aim of contributing optimal robot personality parameters to the final EASEL model. © 2014 Springer International Publishing

    Language-based sensing descriptors for robot object grounding

    In this work, we consider an autonomous robot that is required to understand commands given by a human through natural language. Specifically, we assume that this robot is provided with an internal representation of the environment; however, such a representation is unknown to the user. In this context, we address the problem of allowing a human to understand the robot's internal representation through dialog. To this end, we introduce the concept of sensing descriptors. These descriptors are used by the robot to recognize unknown object properties in the given commands and warn the user about them. Additionally, we show how these properties can be learned over time by leveraging past interactions, in order to enhance the grounding capabilities of the robot.
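    The mechanism the abstract describes can be illustrated with a minimal sketch (not the authors' implementation; all class, method, and category names below are hypothetical): a vocabulary of sensing descriptors maps property words to internal perceptual categories, unknown properties in a command are flagged so the user can be warned, and confirmed mappings from past interactions are stored to improve grounding over time.

    ```python
    # Hypothetical sketch of sensing-descriptor grounding, assuming a simple
    # word -> perceptual-category dictionary as the robot's internal vocabulary.

    class DescriptorGrounder:
        def __init__(self):
            # Known property words mapped to internal perceptual categories
            # (the category strings here are invented for illustration).
            self.descriptors = {"red": "color:red", "big": "size:large"}

        def ground(self, command_words):
            """Split a command's property words into grounded and unknown sets."""
            grounded, unknown = {}, []
            for word in command_words:
                if word in self.descriptors:
                    grounded[word] = self.descriptors[word]
                else:
                    unknown.append(word)  # the robot would warn the user here
            return grounded, unknown

        def learn(self, word, category):
            """Store a mapping confirmed during a past interaction."""
            self.descriptors[word] = category


    g = DescriptorGrounder()
    grounded, unknown = g.ground(["red", "shiny"])
    # "shiny" is not in the vocabulary, so it is reported as unknown;
    # after the user clarifies, the mapping can be learned:
    g.learn("shiny", "reflectance:high")
    ```

    The key design point, as described in the abstract, is that unknown properties are surfaced to the user rather than silently ignored, and each clarification permanently extends the robot's grounding vocabulary.
    
    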