
    Acceptability of the transitional wearable companion “+me” in typical children: a pilot study

    This work presents the results of the first experimentation of +me, the first prototype of a Transitional Wearable Companion, run with 15 typically developing (TD) children aged between 8 and 34 months. +me is an interactive device that looks like a teddy bear, can be worn around the neck, has touch sensors, emits appealing lights and sounds, and has input-output contingencies that can be regulated from a tablet via Bluetooth. The participants were engaged in social play activities involving both the device and an adult experimenter. +me was designed to exploit its intrinsic allure as an attractive toy to stimulate social interactions (e.g., eye contact, turn taking, imitation, social smiles), an aspect potentially helpful in the therapy of Autism Spectrum Disorders (ASD) and other Pervasive Developmental Disorders (PDD). The main purpose of this preliminary study is to evaluate the general acceptability of the toy by TD children, observing the elicited behaviors in preparation for future experiments involving children with ASD and other PDD. First observations, based on video recording and scoring, show that +me stimulates good social engagement in TD children, especially in those older than 24 months.
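
    Purely as an illustration of what "adjustable input-output contingencies" could look like, the sketch below models the toy's rules as a small table that a tablet could rewrite at run time; the sensor names, lights, and sounds are hypothetical assumptions, not details from the paper.

        # Hypothetical sketch of adjustable input-output contingencies for a
        # toy like +me: each touch sensor maps to a light colour and a sound,
        # and a caregiver's tablet can rewrite the mapping during play.
        # All sensor names, colours, and sounds here are illustrative only.
        from typing import Optional

        contingencies = {
            "left_paw":  {"light": "blue",       "sound": "chime"},
            "right_paw": {"light": "green",      "sound": "melody"},
            "belly":     {"light": "warm_white", "sound": "purr"},
        }

        def on_touch(sensor: str) -> Optional[dict]:
            """Return the response the toy should produce for a touch event."""
            return contingencies.get(sensor)

        def update_contingency(sensor: str, light: str, sound: str) -> None:
            """Apply a new rule received from the tablet (e.g. over Bluetooth)."""
            contingencies[sensor] = {"light": light, "sound": sound}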

    Tactile Interactions with a Humanoid Robot: Novel Play Scenario Implementations with Children with Autism

    Acknowledgments: This work has been partially supported by the European Commission under contract number FP7-231500-ROBOSKIN. Open Access: This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. The work presented in this paper was part of our investigation in the ROBOSKIN project. The project has developed new robot capabilities based on the tactile feedback provided by novel robotic skin, with the aim of providing cognitive mechanisms that improve human-robot interaction capabilities. This article presents two novel tactile play scenarios developed for robot-assisted play for children with autism. The play scenarios were developed against specific educational and therapeutic objectives that were discussed with teachers and therapists. These objectives were classified with reference to the ICF-CY, the International Classification of Functioning, version for Children and Youth. The article presents a detailed description of the play scenarios, and case study examples of their implementation in HRI studies with children with autism and the humanoid robot KASPAR.

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation. Comment: 35 pages, 13 figures.
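
    The PPS graph described in this abstract can be pictured as an ordinary graph over sampled arm configurations. The minimal Python sketch below is an assumption about how such a structure could be represented and queried (the class name, fields, and search routine are illustrative, not the authors' implementation): nodes store a joint configuration and the observed hand position, edges record movements that proved safe, and a reach is a path to a node whose hand position lies near the target.

        # Minimal sketch of a peripersonal-space (PPS) graph: nodes are
        # explored arm configurations, edges are movements observed to be
        # safe, and a reach is a path from the current node to a node whose
        # hand position lies near the target object. Names are illustrative.
        import math
        import heapq

        class PPSGraph:
            def __init__(self):
                self.nodes = {}   # node id -> (joint_angles, hand_position)
                self.edges = {}   # node id -> set of neighbouring node ids

            def add_node(self, node_id, joint_angles, hand_position):
                self.nodes[node_id] = (joint_angles, hand_position)
                self.edges.setdefault(node_id, set())

            def add_safe_edge(self, a, b):
                # Record that moving directly between configurations a and b
                # was observed to be safe during exploration.
                self.edges[a].add(b)
                self.edges[b].add(a)

            def plan_reach(self, start, target_xyz, tol=0.05):
                # Uniform-cost search for the nearest node whose recorded hand
                # position is within `tol` of the target; returns the node path.
                frontier = [(0, start, [start])]
                visited = {start}
                while frontier:
                    cost, node, path = heapq.heappop(frontier)
                    _, hand = self.nodes[node]
                    if math.dist(hand, target_xyz) < tol:
                        return path
                    for nxt in self.edges[node]:
                        if nxt not in visited:
                            visited.add(nxt)
                            heapq.heappush(frontier, (cost + 1, nxt, path + [nxt]))
                return None  # no safe trajectory known yet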

    Explorations in engagement for humans and robots

    This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports on findings of experiments with human participants who interacted with a robot when it either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not. Comment: 31 pages, 5 figures, 3 tables.
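
    As an illustration only (not the architecture described in the paper), an engagement-gesture controller of the kind this abstract alludes to can be thought of as a perception-action loop that tracks the participant's face and emits connection-maintaining gestures; the percept fields and gesture names below are hypothetical assumptions.

        # Illustrative sketch: one perception-action cycle of a controller
        # that maintains engagement by orienting to a tracked face and
        # nodding at apparent turn boundaries. Names are assumptions.
        from dataclasses import dataclass

        @dataclass
        class Percepts:
            face_visible: bool       # is a face currently detected?
            partner_speaking: bool   # simple voice-activity flag

        def engagement_step(p: Percepts) -> list:
            """Return the gestures to perform for this cycle."""
            gestures = []
            if p.face_visible:
                gestures.append("orient_head_to_face")   # sustain mutual gaze
                if not p.partner_speaking:
                    gestures.append("nod")               # acknowledge / yield turn
            else:
                gestures.append("scan_for_face")         # try to re-establish contact
            return gestures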

    Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems

    As robotic systems are moved out of factory work cells into human-facing environments, questions of choreography become central to their design, placement, and application. With a human viewer or counterpart present, a system will automatically be interpreted by human beings, through its context, style of movement, and form factor, as an animate element of their environment. The interpretation by this human counterpart is critical to the success of the system's integration: knobs on the system need to make sense to a human counterpart; an artificial agent should have a way of notifying a human counterpart of a change in system state, possibly through motion profiles; and the motion of a human counterpart may carry important contextual clues for task completion. Thus, professional choreographers, dance practitioners, and movement analysts are critical to research in robotics. They have design methods for movement that align with human audience perception, can identify simplified features of movement for human-robot interaction goals, and have detailed knowledge of the capacity of human movement. This article provides approaches employed by one research lab, specific impacts on technical and artistic projects within it, and principles that may guide future such work. The background section reports on choreography, somatic perspectives, improvisation, the Laban/Bartenieff Movement System, and robotics. From this context, methods including embodied exercises, writing prompts, and community building activities have been developed to facilitate interdisciplinary research. The results of this work are presented as an overview of projects in areas like high-level motion planning, software development for rapid prototyping of movement, artistic output, and user studies that help understand how people interpret movement. Finally, guiding principles for other groups to adopt are posited. Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for the 21st Century)" http://www.mdpi.com/journal/arts/special_issues/Machine_Artis