
    Human-centred design methods : developing scenarios for robot assisted play informed by user panels and field trials

    Original article can be found at: http://www.sciencedirect.com/ Copyright Elsevier.
    This article describes the user-centred development of play scenarios for robot-assisted play, as part of the multidisciplinary IROMEC project, which develops a novel robotic toy for children with special needs. The project investigates how robotic toys can become social mediators, encouraging children with special needs to discover a range of play styles, from solitary to collaborative play (with peers, carers/teachers, parents, etc.). This article explains the developmental process of constructing relevant play scenarios for children with different special needs. Results are presented from consultation with a panel of experts (therapists, teachers, parents) who advised on the play needs of the various target user groups and who helped investigate how robotic toys could be used as a play tool to assist in the children's development. Examples from experimental investigations are provided which have informed the development of scenarios throughout the design process. We conclude by pointing out the potential benefit of this work to a variety of research projects and applications involving human–robot interaction. Peer reviewed.

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users in the long term both require an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, which is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe state-of-the-art research topics in SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensorimotor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER. Comment: submitted to Advanced Robotics.

    Dance Teaching by a Robot: Combining Cognitive and Physical Human-Robot Interaction for Supporting the Skill Learning Process

    This letter presents a physical human-robot interaction scenario in which a robot guides the user and performs the role of a teacher within a defined dance training framework. Combined cognitive and physical feedback of performance is proposed to assist the skill learning process. Direct-contact cooperation has been designed through an adaptive impedance-based controller that adjusts according to the partner's performance in the task. To measure performance, a scoring system has been designed using the concept of progressive teaching (PT): the system adjusts the difficulty based on the user's number of practices and performance history. Comparative experiments between the proposed method and a baseline constant controller have shown that PT yields better performance in the initial stage of skill learning. An analysis of the subjects' perception of comfort, peace of mind, and robot performance has shown a significant difference at the p < .01 level, favoring the PT algorithm. Comment: Presented at IEEE International Conference on Robotics and Automation ICRA-201
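The progressive-teaching idea described above (difficulty adjusted from the learner's practice count and score history, and guidance stiffness adapted to performance) can be sketched as follows. This is a minimal illustrative sketch: the update rules, thresholds, and parameter names are assumptions, not the controller actually used in the letter.

```python
# Hypothetical sketch of a progressive-teaching (PT) style difficulty update
# and an adaptive guidance-stiffness mapping. All rules and constants here
# are illustrative assumptions, not the authors' actual controller.

def update_difficulty(difficulty, scores, max_difficulty=10, window=3, threshold=0.8):
    """Raise difficulty after consistently high recent scores; lower it after low ones."""
    if len(scores) < window:
        return difficulty  # not enough practice trials yet
    avg = sum(scores[-window:]) / window
    if avg >= threshold and difficulty < max_difficulty:
        return difficulty + 1
    if avg < threshold / 2 and difficulty > 1:
        return difficulty - 1
    return difficulty

def impedance_gain(performance, k_min=50.0, k_max=300.0):
    """Stiffer physical guidance for low performance, lighter guidance for
    skilled partners (assumed linear mapping between k_max and k_min)."""
    p = min(max(performance, 0.0), 1.0)
    return k_max - (k_max - k_min) * p
```

For example, a learner scoring 0.9 on three consecutive trials at difficulty level 3 would be promoted to level 4, while a consistently low scorer would be eased back a level and receive stiffer physical guidance.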

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Towards an Architecture for Semiautonomous Robot Telecontrol Systems

    The design and development of a computational system to support robot–operator collaboration is a challenging task, not only because of the overall system complexity, but also because of the involvement of different technical and scientific disciplines, namely Software Engineering, Psychology, and Artificial Intelligence, among others. In our opinion, the approach generally used to tackle this type of project is based on system architectures inherited from the development of autonomous robots, and therefore fails to explicitly incorporate the role of the operator; these architectures lack a view that helps the operator see him/herself as an integral part of the system. The goal of this paper is to provide a human-centered paradigm that makes it possible to create this kind of view of the system architecture. This architectural description includes the definition of the role of the operator and of the autonomous behaviour of the robot, identifies the shared knowledge, and helps the operator to see the robot as an intentional being like himself/herself.

    Humanoid Theory Grounding

    In this paper we consider the importance of using a humanoid physical form for a certain proposed kind of robotics, that of theory grounding. Theory grounding involves grounding the theory skills and knowledge of an embodied artificially intelligent (AI) system by developing them from the bottom up. Theory grounding can potentially occur in a variety of domains, and the particular domain considered here is language. Language is taken to be another "problem space" in which a system can explore and discover solutions. We argue that because theory grounding necessitates robots experiencing domain information, certain behavioral-form aspects, such as the abilities to socially smile, point, follow gaze, and generate manual gestures, are necessary for robots grounding a humanoid theory of language.

    The ITALK project : A developmental robotics approach to the study of individual, social, and linguistic learning

    This is the peer reviewed version of the following article: Frank Broz et al., "The ITALK Project: A Developmental Robotics Approach to the Study of Individual, Social, and Linguistic Learning", Topics in Cognitive Science, Vol 6(3): 534-544, June 2014, which has been published in final form at doi: http://dx.doi.org/10.1111/tops.12099. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving. Copyright © 2014 Cognitive Science Society, Inc.
    This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development, and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots. Peer reviewed.

    "Sticky Hands": learning and generalization for cooperative physical interactions with a humanoid robot

    "Sticky Hands" is a physical game for two people involving gentle contact with the hands. The aim is to develop relaxed and elegant motion together, improve physical sensitivity and reactions, and experience an interaction at an intimate yet comfortable level for spiritual development and physical relaxation. We developed a control system for a humanoid robot allowing it to play Sticky Hands with a human partner. We present a real implementation comprising a physical system, robot control, and a motion learning algorithm that generalizes the translation, orientation, scale, and velocity of observed trajectories to new data, operates within scalable bounds on speed and storage efficiency, and copes with contact trajectories that evolve over time. Our robot control is capable of physical cooperation in the force domain, using minimal sensor input. We analyze robot-human interaction and relate characteristics of our motion learning algorithm to recorded motion profiles. We discuss our results in the context of realistic motion generation, and present a theoretical discussion of stylistic and affective motion generation that builds on, and motivates, cross-disciplinary research in computer graphics, human motion production, and motion perception.
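One way to generalize an observed trajectory's translation and scale to new data, as the abstract describes, is to store trajectories in a normalized frame and re-instantiate them on demand. The sketch below is a minimal 2-D illustration under that assumption; the function names and normalization scheme are illustrative, not the paper's algorithm (which also handles orientation and velocity).

```python
# Hedged sketch: making an observed 2-D trajectory invariant to translation
# and scale so it can be replayed at a new position and size, in the spirit
# of the trajectory generalization described in the abstract.

def normalize(traj):
    """Translate the centroid to the origin and scale the maximum radius to 1."""
    n = len(traj)
    cx = sum(x for x, _ in traj) / n
    cy = sum(y for _, y in traj) / n
    centered = [(x - cx, y - cy) for x, y in traj]
    scale = max((x * x + y * y) ** 0.5 for x, y in centered) or 1.0
    return [(x / scale, y / scale) for x, y in centered], (cx, cy), scale

def replay(normalized, origin, scale):
    """Re-instantiate a normalized trajectory at a new origin and scale."""
    ox, oy = origin
    return [(ox + x * scale, oy + y * scale) for x, y in normalized]
```

Replaying the normalized trajectory with the stored origin and scale reconstructs the original motion; supplying a different origin or scale retargets the same shape to a new workspace location or size.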