
    Therapeutic and educational objectives in robot assisted play for children with autism

    DOI: 10.1109/ROMAN.2009.5326251. This is a methodological paper that describes the therapeutic and educational objectives identified during the design process of a robot aimed at robot-assisted play. The work described in this paper is part of the IROMEC project (Interactive Robotic Social Mediators as Companions), which recognizes the important role of play in child development and targets children who are prevented from or inhibited in playing. The project investigates the role of an interactive, autonomous robotic toy in therapy and education for children with special needs. This paper specifically addresses the therapeutic and educational objectives related to children with autism. In recent years, robots have already been used to teach basic social interaction skills to children with autism. The added value of the IROMEC robot is that its play scenarios have been developed taking children's specific strengths and needs into consideration and covering a wide range of objectives across children's developmental areas (sensory, communication and interaction, motor, cognitive, and social and emotional). The paper describes these developmental areas and illustrates how different experiences and interactions with the IROMEC robot are designed to target objectives in each of them.

    Adaptive multivariable intermittent control: theory, development, and applications to real-time systems

    Intermittent control (IC), a control scheme that switches between open- and closed-loop configurations, has been suggested as an alternative model to describe human control and to explain the intermittency observed during sustained control tasks. Additionally, IC might be beneficial in two scenarios: (1) in the field of robotics, where the open-loop evolution could be used for computationally intensive tasks such as constrained optimisation routines; and (2) in an adaptation context, helping to detect system and environmental variations. Based on these ideas, this thesis explored the application of real-time multivariable intermittent controllers in humanoid robotics as well as adaptive versions of IC implemented on inverted pendulum structures.
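
    The switching idea can be illustrated with a minimal sketch (Python; the function names, the model-based hold, and the double-integrator example are illustrative assumptions, not the controllers developed in the thesis): the loop is closed only at discrete instants, and between closures the control evolves open-loop from an internal prediction of the last sampled state.

    import numpy as np

    def intermittent_control(x0, plant, gain, interval, dt, t_end):
        """Sketch of intermittent control with a model-based hold."""
        x = np.asarray(x0, dtype=float)   # true plant state
        x_hat = x.copy()                  # internal (model) prediction of the state
        elapsed = interval                # force a loop closure at t = 0
        log = []
        for _ in range(int(t_end / dt)):
            if elapsed >= interval:
                x_hat = x.copy()          # closed-loop event: sample the plant
                elapsed = 0.0
            u = -gain @ x_hat             # control computed from the prediction only
            x = plant(x, u, dt)           # true plant update
            x_hat = plant(x_hat, u, dt)   # open-loop evolution of the internal model
            elapsed += dt
            log.append(x.copy())
        return np.array(log)

    # Example: a double integrator stabilised with a 0.25 s intermittent interval.
    double_integrator = lambda x, u, dt: x + dt * np.array([x[1], float(u[0])])
    trajectory = intermittent_control([1.0, 0.0], double_integrator,
                                      np.array([[4.0, 2.0]]), 0.25, 0.01, 5.0)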

    Whole-body multi-contact motion in humans and humanoids: Advances of the CoDyCo European project

    Traditional industrial applications involve robots with limited mobility. Consequently, interaction (e.g. manipulation) was treated separately from whole-body posture (e.g. balancing), assuming the robot to be firmly connected to the ground. Foreseen applications involve robots with augmented autonomy and physical mobility. Within this novel context, physical interaction influences stability and balance. To allow robots to surpass the barriers between interaction and posture control, forthcoming robotic research needs to investigate the principles governing whole-body motion and coordination with contact dynamics. There is a need to investigate the principles of motion and coordination during physical interaction, including the aspects related to unpredictability. Recent developments in compliant actuation and touch sensing allow safe and robust physical interaction under unexpected contact, including contact with humans. The next advancement for cognitive robots, however, is the ability not only to cope with unpredictable contact, but also to exploit predictable contact in ways that will assist in goal achievement. Last but not least, theoretical results need to be validated in real-world scenarios with humanoid robots engaged in whole-body goal-directed tasks. Robots should be capable of exploiting rigid supportive contacts, learning to compensate for compliant contacts, and utilising assistive physical interaction from humans. This paper presents the state of the art in these domains as well as recent advances made within the framework of the CoDyCo European project.

    A comprehensive gaze stabilization controller based on cerebellar internal models

    Gaze stabilization is essential for clear vision; it is the combined effect of two reflexes relying on vestibular inputs: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. The VOR works in conjunction with the opto-kinetic reflex (OKR), a visual feedback mechanism that allows the eye to move at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work, we implement on a humanoid robot a model of gaze stabilization based on the coordination of the VCR, VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot and on the iCub simulator, validating the robustness of the proposed control method. The first set of experiments focused on the controller's response to a set of disturbance frequencies along the vertical plane. The second shows the performance of the system under three-dimensional disturbances. The last set of experiments was carried out to test the capability of the proposed model to stabilize the gaze in locomotion tasks. The results confirm that the proposed model is beneficial in all cases, reducing the retinal slip (the velocity of the image on the retina) and keeping the orientation of the head stable.
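
    As a rough illustration of how the three reflexes combine (a hand-written sketch; the gains, function name, and scalar one-axis simplification are assumptions, and it omits the cerebellar internal models and learning described in the paper), the eye command can be formed from a vestibular term (VOR) plus a retinal-slip feedback term (OKR), while the neck counter-rotates against sensed head motion (VCR):

    def stabilization_step(head_velocity, retinal_slip,
                           k_vor=1.0, k_okr=0.5, k_vcr=0.8):
        """One step of a simplified VCR/VOR/OKR combination about a single axis.

        head_velocity : vestibular estimate of head rotation in space (rad/s)
        retinal_slip  : measured image velocity on the retina (rad/s)
        Returns (eye_velocity_command, neck_velocity_command).
        """
        vor = -k_vor * head_velocity       # VOR: counter-rotate the eye against head motion
        okr = -k_okr * retinal_slip        # OKR: visual feedback driving residual slip to zero
        neck_cmd = -k_vcr * head_velocity  # VCR: counter-rotate the neck to stabilize the head
        return vor + okr, neck_cmd

    # Example: head disturbance of 0.2 rad/s with a residual retinal slip of 0.05 rad/s.
    eye_cmd, neck_cmd = stabilization_step(0.2, 0.05)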

    Development of Cognitive Capabilities in Humanoid Robots

    Building intelligent systems with a human level of competence is the ultimate grand challenge for science and technology in general, and especially for the computational intelligence community. Recent theories of autonomous cognitive systems have focused on the close integration (grounding) of communication with perception, categorisation and action. Cognitive systems are essential for integrated multi-platform systems that are capable of sensing and communicating. This thesis presents a cognitive system for a humanoid robot that integrates abilities such as object detection and recognition, which are merged with natural language understanding and refined motor controls. The work includes three studies: (1) the use of generic manipulation of objects using the NMFT algorithm, successfully testing the extension of the NMFT to control robot behaviour; (2) a study of the development of a robotic simulator; (3) robotic simulation experiments showing that a humanoid robot is able to acquire complex behavioural, cognitive, and linguistic skills through individual and social learning. The robot is able to learn to handle and manipulate objects autonomously, to cooperate with human users, and to adapt its abilities to changes in internal and environmental conditions. The model and the experimental results reported in this thesis emphasise the importance of embodied cognition, i.e. the physical interaction between the humanoid robot's body and the environment.

    Adaptive sensorimotor peripersonal space representation and motor learning for a humanoid robot

    This thesis presents possible computational mechanisms by which a humanoid robot can develop a coherent representation of the space within its reach (its peripersonal space) and use it to control its movements. These mechanisms are inspired by current theories of peripersonal space representation and motor control in humans, targeting a cross-fertilization between robotics on one side and cognitive science on the other. This research addresses the issue of adaptivity at the sensorimotor level, at the control level, and at the level of simple task learning. First, this work considers the concept of body schema and suggests a computational translation of this concept appropriate for controlling a humanoid robot. This model of the body schema is adaptive and evolves as a result of the robot's sensory experience. It suggests new avenues for understanding various psychophysical and neuropsychological phenomena of human peripersonal space representation, such as adaptation to distorted vision and tool use, fake-limb experiments, body-part-centered receptive fields, and multimodal neurons. Second, it is shown how the motor modality can be added to the body schema. The suggested controller is inspired by the dynamical systems theory of motor control and allows the robot to simultaneously and robustly control its limbs in joint angle space and in end-effector location space. This amounts to controlling the robot in both the proprioceptive and visual modalities. This multimodal control can benefit from the advantages offered by each modality and is better than traditional robotic controllers in several respects: it offers a simple and elegant solution to the singularity and joint-limit avoidance problems and can be seen as a generalization of the Damped Least Squares approach to robot control. The controller exhibits several properties of human reaching movements, such as quasi-straight hand paths, bell-shaped velocity profiles, and non-equifinality. In a third step, the motor modality is endowed with a statistical learning mechanism, based on Gaussian Mixture Models, that enables the humanoid to learn motor primitives from demonstrations. The robot is thus able to learn simple manipulation tasks and generalize them to various contexts, in a way that is robust to perturbations occurring during task execution. In addition to simulation results, the whole model has been implemented and validated on two humanoid robots, the Hoap3 and the iCub, enabling them to learn their arm and head geometries, perform reaching movements, adapt to unknown tools and visual distortions, and learn simple manipulation tasks in a smooth, robust and adaptive way. Finally, this work hints at possible computational interpretations of the concepts of body schema, motor perception and motor primitives.
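
    For context, the standard Damped Least Squares resolution that the proposed controller generalizes can be sketched as follows (illustrative Python with hypothetical names; this is the textbook scheme, not the thesis's multimodal dynamical-systems controller):

    import numpy as np

    def dls_joint_velocities(jacobian, task_error, damping=0.1):
        """Damped Least Squares: dq = J^T (J J^T + lambda^2 I)^{-1} e.

        The damping term keeps the joint velocities bounded near
        kinematic singularities, at the price of a small tracking error.
        """
        J = np.atleast_2d(jacobian)
        e = np.asarray(task_error, dtype=float)
        JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
        return J.T @ np.linalg.solve(JJt, e)

    # Example: 2-link planar arm (0.3 m links), 5 cm end-effector error along x.
    q = np.array([0.3, 0.6])
    l1 = l2 = 0.3
    J = np.array([[-l1 * np.sin(q[0]) - l2 * np.sin(q[0] + q[1]), -l2 * np.sin(q[0] + q[1])],
                  [ l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),  l2 * np.cos(q[0] + q[1])]])
    dq = dls_joint_velocities(J, np.array([0.05, 0.0]))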

    Attention and Social Cognition in Virtual Reality: The effect of engagement mode and character eye-gaze

    Technical developments in virtual humans are manifest in modern character design. Specifically, eye gaze is a significant aspect of such design. There is also a need to consider the contribution of participants' control over their engagement. In the current study, we manipulated participants' engagement with an interactive virtual reality narrative called Coffee without Words. Participants sat over coffee opposite a character in a virtual café, where they waited for their bus to be repaired. We manipulated character eye-contact with the participant. For half the participants in each condition, the character made no eye-contact for the duration of the story. For the other half, the character responded to participant eye-gaze by making and holding eye contact in return. To explore how participant engagement interacted with this manipulation, half the participants in each condition were instructed to appraise their experience as an artefact (i.e., drawing attention to technical features), while the other half were introduced to the fictional character, the narrative, and the setting as though they were real. This study allowed us to explore the contributions of character features (interactivity through eye-gaze) and cognition (attention/engagement) to participants' perception of realism, feelings of presence, time duration, and the extent to which they engaged with the character and represented their mental states (Theory of Mind). Importantly, it does so using a highly controlled yet ecologically valid virtual experience.