
    Interactive Robot Learning of Gestures, Language and Affordances

    A growing field in robotics and Artificial Intelligence (AI) research is human-robot collaboration, whose aim is to enable effective teamwork between humans and robots. However, in many situations human teams are still superior to human-robot teams, primarily because human teams can easily agree on a common goal through language, and the individual members observe each other effectively, leveraging their shared motor repertoire and sensorimotor resources. This paper shows that for cognitive robots it is possible, and indeed fruitful, to combine knowledge acquired from interacting with elements of the environment (affordance exploration) with the probabilistic observation of another agent's actions. We propose a model that unites (i) learning robot affordances and word descriptions with (ii) statistical recognition of human gestures with vision sensors. We discuss theoretical motivations and possible implementations, and we show initial results highlighting that, after having acquired knowledge of its surrounding environment, a humanoid robot can generalize this knowledge to the case when it observes another agent (human partner) performing the same motor actions previously executed during training.

    Comment: code available at https://github.com/gsaponaro/glu-gesture
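    A minimal sketch of the fusion idea described in this abstract: affordance knowledge acquired through self-exploration (how likely an action on an object is to produce an effect) is combined with the uncertain output of a gesture recognizer to predict the effect of an observed human action. The probability tables and the action and object names below are illustrative assumptions, not taken from the paper or the linked code.

```python
# Sketch of fusing learned affordance knowledge with gesture observations.
# All probability values here are made up for illustration.

# Affordance knowledge learned during the robot's own motor exploration:
# P(effect = "object moves" | action, object).
p_effect = {
    ("tap", "ball"):   0.9,   # tapping a ball usually makes it roll
    ("tap", "box"):    0.3,
    ("grasp", "ball"): 0.1,
    ("grasp", "box"):  0.1,
}

# Output of a statistical gesture recognizer watching the human partner:
# P(gesture | video), e.g. from a classifier over tracked hand trajectories.
p_gesture = {"tap": 0.7, "grasp": 0.3}

def predict_effect(obj):
    """Marginalize over the uncertain recognized gesture to predict the
    probability that the human's action makes the object move."""
    return sum(p_gesture[a] * p_effect[(a, obj)] for a in p_gesture)

print(predict_effect("ball"))  # 0.7*0.9 + 0.3*0.1 = 0.66
```

    The point of the sketch is that the same affordance table learned from the robot's own actions is reused, unchanged, to interpret an observed partner; only the action variable becomes uncertain.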

    The Effects of Visual Affordances and Feedback on a Gesture-based Interaction with Novice Users

    This dissertation studies the roles and effects of visual affordances and feedback in a general-purpose gesture interface for novice users. Gesture interfaces are popularly viewed as intuitive and user-friendly modes of interacting with computers and robots, but they in fact introduce many challenges for users not already familiar with the system. Affordances and feedback – two fundamental building blocks of interface design – are perfectly suited to address the most important challenges and questions for novices using a gesture interface: what can they do? how do they do it? are they being understood? has anything gone wrong? Yet gesture interfaces rarely incorporate these features in a deliberate manner, and there are presently no well-adopted guidelines for designing affordances and feedback for gesture interaction, nor any clear understanding of their effects on such an interaction. A general-purpose gesture interaction system was developed based on a virtual touchscreen paradigm, and guided by a novel gesture interaction framework. This framework clarifies the relationship between gesture interfaces and the application interfaces they support, and it provides guidance for selecting and designing appropriate affordances and feedback. Using this gesture system, a 40-person (all novices) user study was conducted to evaluate the effects on interaction performance and user satisfaction of four categories of affordances and feedback. The experimental results demonstrated that affordances indicating how to do something in a gesture interaction are more important to interaction performance than affordances indicating what can be done, and also that system status is more important than feedback acknowledging user actions. However, the experiments also showed unexpectedly high interaction performance when affordances and feedback were omitted. 
The explanation for this result remains an open question, though several potential causes are analyzed, and a tentative interpretation is provided. The main contributions of this dissertation to the HRI and HCI research communities are 1) the design of a virtual touchscreen-based interface for general-purpose gesture interaction, to serve as a case study for identifying and designing affordances and feedback for gesture interfaces; 2) the method and surprising results of an evaluation of distinct affordance and feedback categories, in particular their effects on a gesture interaction with novice users; and 3) a set of guidelines and insights about the relationship between a user, a gesture interface, and a generic application interface, centered on a novel interaction framework that may be used to design and study other gesture systems. Beyond these intellectual contributions, this work is useful to the general public because it may influence how future assistive robots are designed to interact with people in various settings, including search and rescue, healthcare, and elderly care.

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.

    The ITALK Project: A Developmental Robotics Approach to the Study of Individual, Social, and Linguistic Learning

    This is the peer-reviewed version of: Frank Broz et al., "The ITALK Project: A Developmental Robotics Approach to the Study of Individual, Social, and Linguistic Learning", Topics in Cognitive Science, Vol. 6(3): 534-544, June 2014, published in final form at doi: http://dx.doi.org/10.1111/tops.12099

    This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.

    Dynamic mapping strategies for interactive art installations: an embodied combined HCI HRI HHI approach

    This paper proposes a theoretical framework for dealing with the paradigm of interactivity in new media art, and examines how the broad use of the term across research fields can lead to misunderstandings. The paper presents a conceptual view of how interaction in new media art can be implemented from an embodied approach that unites views from HCI, HRI, and HHI. The focus is on an intuitive mapping of a multitude of sensor data, extended through the paradigms of (1) finite state machines (FSM) to address dynamic mapping strategies, (2) mediality to address aisthesis, and (3) embodiment to address valid mapping strategies originating from natural body movements. The theory put forward is illustrated by a case study.
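    The FSM idea in point (1) can be sketched briefly: an interaction state machine selects which mapping from sensor data to output parameters is active at any moment. The state names, thresholds, sensor inputs, and sound-parameter outputs below are hypothetical, chosen only to illustrate the technique, not taken from the paper's case study.

```python
# Sketch of an FSM driving dynamic mapping strategies in an installation.
# States, thresholds, and output parameters are illustrative assumptions.

class MappingFSM:
    """Maps sensor readings to output parameters; which mapping is
    applied depends on the current interaction state."""

    def __init__(self):
        self.state = "idle"

    def step(self, proximity, motion_energy):
        # Transitions: idle -> engaged when a visitor approaches;
        # engaged -> immersed when sustained movement is detected;
        # any state -> idle when the visitor leaves.
        if proximity <= 0.1:
            self.state = "idle"
        elif self.state == "idle" and proximity > 0.5:
            self.state = "engaged"
        elif self.state == "engaged" and motion_energy > 0.8:
            self.state = "immersed"
        return self.map_output(proximity, motion_energy)

    def map_output(self, proximity, motion_energy):
        # Each state applies a different mapping from body movement to sound.
        if self.state == "idle":
            return {"volume": 0.1, "tempo": 60}  # quiet ambient layer
        if self.state == "engaged":
            return {"volume": 0.3 + 0.4 * proximity, "tempo": 90}
        return {"volume": 0.8, "tempo": 90 + 60 * motion_energy}

fsm = MappingFSM()
print(fsm.step(0.7, 0.2))  # visitor approaches -> "engaged" mapping
```

    The design point is that the mapping itself, not just its parameters, changes with the interaction state, which is what distinguishes a dynamic mapping strategy from a single fixed sensor-to-output function.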

    Beyond Gazing, Pointing, and Reaching: A Survey of Developmental Robotics

    Developmental robotics is an emerging field at the intersection of developmental psychology and robotics that has recently attracted considerable attention. This paper surveys a variety of research projects dealing with or inspired by developmental issues, and outlines possible future directions.