11 research outputs found

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics. Peer reviewed.

    Contingency scaffolds language learning

    In human-robot interaction, the question of how to communicate is an important one and can be approached from several perspectives. One approach to studying how a robot should best behave in an interaction with a human is to provide consistent robotic behavior. From this we can gain insights into which parameters trigger which responsive behaviors in a user. This method allows us as roboticists to investigate how we can elicit specific behaviors in users in order to facilitate the robot's learning. In previous studies, we have shown how responsive eye gaze and feedback based on looming detection modify human tutoring behavior [1]. In this paper, we present a study carried out within the ITALK project. The study targets how we can tune the feedback strategies of the iCub robot to evoke tutoring behavior in a human tutor that supports a language acquisition system. We used a longitudinal approach to also verify the verbal feedback given by the robot.

    The impact of the contingency of robot feedback on HRI

    In this paper, we investigate the impact that the contingency of robot feedback may have on the quality of verbal human-robot interaction. In order to assess not only what the effects are but also what causes them, we carried out experiments in which naïve participants instructed the humanoid robot iCub on a set of shapes and on a stacking task in two conditions: once with socially contingent, nonverbal feedback implemented in response to the human tutor's different gaze and demonstration behaviors, and once with non-contingent, saliency-based feedback. The analysis of participants' linguistic behaviors in the two conditions shows that contingency has an impact on the complexity and the pre-structuring of the task for the robot, i.e. on the participants' tutoring behaviors. Contingency thus plays a considerable role in learning by demonstration.

    Better be reactive at the beginning. Implications of the first seconds of an encounter for the tutoring style in human-robot-interaction

    The paper investigates the effects of a robot's on-line feedback during a tutoring situation with a human tutor. The analysis is based on a study conducted with an iCub robot that autonomously generates its feedback (gaze, pointing gestures) based on the system's perception of the tutor's actions, using the idea of reciprocity of actions. Sequential micro-analysis of two opposite cases reveals how the robot's behavior (responsive vs. non-responsive) pro-actively shapes the tutor's conduct and thus co-produces the way in which it is being tutored. A dialogic and a monologic tutoring style are distinguished. The first 20 seconds of an encounter are found to shape the user's perception and expectations of the system's competences and to lead to a relatively stable tutoring style, even if the robot's reactivity and the appropriateness of its feedback change.

    Contingency allows the robot to spot the tutor and to learn from interaction

    Aiming at an artificial system that learns from a human tutor, we developed a contingency module to elicit tutoring behavior and implemented it on the robotic platform iCub. For the evaluation of the system with users, we considered not only the participants' behavior but also the system's log files as dependent variables (as suggested in [15] for the improvement of HRI design). We further applied Sequential Analysis as a qualitative method that provides micro-analytical insights into the sequential structure of the interaction. This way, we are able to investigate the interrelationship between the robot's and the tutor's actions and how they respond to each other. We focus on two cases: in the first, the contingency module reacted appropriately to the interaction partner; in the second, it failed to spot the tutor. We found that the contingency module enables the robot to engage in an interaction with the human tutor, who orients to the robot's conduct as appropriate and responsive. In contrast, when the robot did not engage in an appropriately responsive interaction, the tutor oriented more towards the object while gazing less at the robot.