
    Peripersonal Space in the Humanoid Robot iCub

    Developing behaviours for interaction with objects close to the body is a primary goal for any organism's survival in the world. The ability to develop such behaviours will be an essential feature of autonomous humanoid robots if they are to integrate into human environments. Adaptable spatial abilities will make robots safer and improve their social skills and their human-robot and robot-robot collaboration abilities. This work investigated how a humanoid robot can explore and create action-based representations of its peripersonal space, the region immediately surrounding the body where reaching is possible without moving the body. It presents three empirical studies based on peripersonal space findings from psychology, neuroscience and robotics. The experiments used a visual perception system based on active vision and biologically inspired neural networks. The first study investigated the contribution of binocular vision to a reaching task. Results indicated that the vergence signal is a useful embodied depth-estimation cue within the peripersonal space of humanoid robots. The second study explored the influence of morphology and postural experience on confidence levels in reaching assessment. Results showed a decrease in confidence when assessing targets located farther from the body, possibly in accordance with errors in depth estimation from vergence at longer distances. Additionally, it was found that a proprioceptive arm-length signal extends the robot's peripersonal space. The last experiment modelled the development of the reaching skill by implementing motor synergies that progressively unlock degrees of freedom in the arm. This model proved advantageous when compared with one that included no developmental stages. The contribution to knowledge of this work is to extend research on biologically inspired methods for building robots, presenting new ways to further investigate the robotic properties involved in dynamical adaptation to body and sensing characteristics, vision-based action, morphology and confidence levels in reaching assessment. CONACyT, Mexico (National Council of Science and Technology).
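
    The abstract does not give the thesis's actual depth-estimation method, but the standard geometry behind vergence-based depth can be sketched in a few lines. It is a minimal sketch assuming symmetric fixation; the 0.068 m inter-camera baseline below is an illustrative guess, not a measured iCub parameter.

```python
import math

def depth_from_vergence(vergence_rad: float, baseline_m: float) -> float:
    """Estimate the distance to a fixated target from the vergence angle.

    Assumes symmetric fixation: the two cameras rotate by equal and
    opposite amounts, so the target sits on the midline at distance
    d = (b / 2) / tan(v / 2), where b is the inter-camera baseline
    and v is the total vergence angle between the optical axes.
    """
    return (baseline_m / 2.0) / math.tan(vergence_rad / 2.0)

# Example: an assumed baseline of 0.068 m and a vergence of 10 degrees
# put the fixated target at roughly 0.39 m, i.e. within reach.
print(depth_from_vergence(math.radians(10.0), 0.068))
```

    Because tan(v/2) flattens as v shrinks, a fixed vergence-measurement error maps to a growing depth error at distance, which is one plausible reading of the confidence result reported above.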

    Haptic guidance keyboard system for facilitating sensorimotor training and rehabilitation

    Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, 2009. Includes bibliographical references (p. 111-118). The Magnetic Guidance Keyboard System (MaGKeyS) embodies a new haptic guidance technology designed to facilitate sensorimotor training and rehabilitation. MaGKeyS works by employing active magnetic force to guide finger pressing movements during sensorimotor learning that involves sequential key presses, such as playing the piano. By combining this haptic guidance with an audiovisual learning paradigm, we have created a core technology with possible applications to such diverse fields as musical training, physical rehabilitation, and scientific investigation of sensorimotor learning. Two embodiments of this new technology were realized in this thesis. The first embodiment, the MaGKeyS Prototype, is a 5-key acrylic USB keyboard designed for a stationary right hand. A set of three behavioral experiments was executed to investigate the manner in which haptic guidance, via the MaGKeyS Prototype, facilitates rhythmic motor learning. In particular, the experiments examined the independent effects of haptic guidance on ordinal learning, the order of notes in a sequence, and temporal learning, the timing variations within a rhythmic sequence. A transfer test and a 24-hour retention test were also administered. Our results provide conclusive evidence that haptic guidance can facilitate learning the ordinal pattern of a key-press sequence. Furthermore, our results suggest that the advantage gained with haptic guidance can both transfer to learning a new rhythmic sequence and extend to a demonstrable advantage a day later. The second embodiment, the MaGKeyS Trainer Piano, is an upright piano whose keyboard has been modified and outfitted with electromagnets in a manner similar to the MaGKeyS Prototype. The Trainer Piano helps to teach by "feel" by providing an experience in which the user feels his or her fingers being pulled down into the correct piano keystrokes as the piano plays itself. by Craig Edwin Lewiston. Ph.D.
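
    As a rough illustration of the guidance principle, not the thesis's actual driver code, the sketch below steps through a timed key-press sequence and energizes a hypothetical per-key electromagnet at each onset. The set_magnet function is a stand-in for whatever hardware interface MaGKeyS actually uses; the sequence data are invented. The ordinal pattern lives in the key indices, the temporal pattern in the onset times.

```python
import time

# Hypothetical driver hook: energize or release the electromagnet under
# one of the five keys (0-4). The real MaGKeyS hardware interface is not
# described in the abstract, so this function is a stand-in.
def set_magnet(key: int, on: bool) -> None:
    print(f"key {key}: magnet {'ON' if on else 'off'}")

# A key-press sequence as (key index, onset time s, hold duration s).
# The ordinal pattern is the order of the key indices; the temporal
# pattern is the spacing of the onset times.
SEQUENCE = [(0, 0.0, 0.2), (2, 0.5, 0.2), (1, 1.0, 0.2), (4, 1.25, 0.2)]

def guide(sequence) -> None:
    """Pull the user's fingers through the sequence with magnetic force."""
    start = time.monotonic()
    for key, onset, hold in sequence:
        # Wait for this press's onset relative to the sequence start.
        delay = start + onset - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        set_magnet(key, True)   # pull the finger down into the key
        time.sleep(hold)
        set_magnet(key, False)  # release so the finger can lift

guide(SEQUENCE)
```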

    Machine Performers: Agents in a Multiple Ontological State

    In this thesis, the author explores and develops new attributes for machine performers and merges the trans-disciplinary fields of the performing arts and artificial intelligence. The main aim is to redefine the term “embodiment” for robots on the stage and to demonstrate that this term requires broadening in various fields of research. This redefinition has required a multifaceted theoretical analysis of embodiment in the field of artificial intelligence (e.g. the uncanny valley), as well as the construction of new robots for the stage by the author. It is hoped that these practical experimental examples will generate more research by others in similar fields. Even though the historical lineage of robotics is engraved with theatrical strategies and dramaturgy, further application of constructive principles from the performing arts, together with evidence from psychology and neurology, can shift the perception of robotic agents both on stage and in other cultural environments. In this light, the relation between the representation, movement and behaviour of bodies has been further explored to establish links between constructed bodies (as in artificial intelligence) and perceived bodies (as performers on the theatrical stage). In the course of this research, several practical works have been designed and built, and subsequently presented to live audiences and research communities. Audience reactions have been analysed with surveys and discussions. Interviews have also been conducted with choreographers, curators and scientists about the value of machine performers. The main conclusions from this study are that fakery and mystification can be used as persuasive elements to enhance agency. Morphologies can also be applied that tightly couple brain and sensorimotor actions and lead to a stronger stage presence; indeed, when this presence is absent from human replicants, the result is an “uncanny” lack of agency. Furthermore, the addition of stage presence leads to stronger identification from audiences, even for bodies dissimilar to their own. The author demonstrates that audience reactions are enhanced by building these effects into machine body structures: rather than inviting identification through mimicry, this gives the machines more unambiguously biological associations. Alongside these traits, the atmospheres created by a cast of machine performers tend to cause even more intensely visceral responses. In this thesis, “embodiment” has emerged as a paradigm shift, and within this shift morphological computing has been explored as a method to deepen this visceral immersion. This dissertation therefore considers and builds machine performers as “true” performers for the stage, rather than mere objects with an aura. Their singular and customized embodiment can enable the development of non-anthropocentric performances that encompass abstract and conceptual patterns in motion and generate, just as human performers do, empathy, identification and experiential reactions in live audiences

    Sensorimotor experience in virtual environments

    The goal of rehabilitation is to reduce impairment and provide functional improvements resulting in quality participation in the activities of life. Plasticity and motor learning principles provide inspiration for therapeutic interventions, including movement repetition in a virtual reality environment. The objective of this research was to investigate function-specific measurements (kinematic, behavioral) and neural correlates of the motor experience of hand gesture activities in virtual environments (VEs) that stimulate sensory experience, using a hand agent model. The fMRI-compatible Virtual Environment Sign Language Instruction (VESLI) system was designed and developed to provide a number of rehabilitation and measurement features, to identify optimal learning conditions for individuals, and to track changes in performance over time. The therapies and measurements incorporated into VESLI target and track specific impairments underlying dysfunction. The goal of improved measurement is to develop targeted interventions embedded in higher-level tasks and to accurately track specific gains, in order to understand responses to treatment and the impact a response may have upon higher-level function such as participation in life. To further clarify the biological model of motor experience, and to understand the added value and role of virtual sensory stimulation and feedback, which includes seeing one's own hand movement, functional brain mapping was conducted with simultaneous kinematic analysis in healthy controls and in stroke subjects. It is believed that, through an understanding of these neural activations, rehabilitation strategies exploiting the principles of plasticity and motor learning will become possible. The present research assessed practice conditions that promote successful gesture learning in the individual. For the first time, functional imaging experiments mapped the neural correlates of human interaction with complex virtual-reality hand avatars moving synchronously with the subject's own hands. Findings indicate that healthy control subjects learned intransitive gestures in virtual environments using first- and third-person avatars, picture and text definitions, and while viewing visual feedback of their own hands, virtual hand avatars, or, in the control condition, hidden hands. Moreover, exercise in a virtual environment with a first-person hand avatar recruited insular cortex activation over time, which may indicate that this activation is associated with a sense of agency. Sensory augmentation in virtual environments modulated the activation of important brain regions associated with action observation and action execution. The quality of the visual feedback was manipulated, and brain areas were identified in which the amount of activation was positively or negatively correlated with the visual feedback. When subjects moved the right hand but saw an unexpected response (the left virtual avatar hand moved), neural activation increased in the motor cortex ipsilateral to the moving hand. This visual modulation might provide a helpful rehabilitation therapy for people with paralysis of a limb through visual augmentation of skills. A model was developed to study the effects of sensorimotor experience in virtual environments upon brain activity and related behavioral measures. The research model represents a significant contribution to neuroscience research and translational engineering practice. A model of neural activations correlated with kinematics and behavior can profoundly influence the delivery of rehabilitative services in the coming years by giving clinicians a framework for engaging patients in a sensorimotor environment that can optimally facilitate neural reorganization
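
    The feedback-modulation analysis described above amounts, in its simplest form, to correlating a graded feedback-quality factor with regional activation estimates. The sketch below shows the shape of such an analysis using Pearson correlation over per-trial beta values for two regions of interest; all numbers are illustrative and none come from the study.

```python
import numpy as np

# Hypothetical per-trial data: a graded visual-feedback quality level and
# the mean activation (e.g. an fMRI beta estimate) in two regions of
# interest. Values are invented for illustration.
feedback_quality = np.array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5], dtype=float)
activation = {
    "insula":       np.array([0.2, 0.3, 0.5, 0.6, 0.8, 0.1, 0.4, 0.4, 0.7, 0.9]),
    "motor_cortex": np.array([0.9, 0.7, 0.6, 0.4, 0.3, 0.8, 0.8, 0.5, 0.4, 0.2]),
}

for region, betas in activation.items():
    # Pearson correlation between feedback quality and activation level.
    r = np.corrcoef(feedback_quality, betas)[0, 1]
    direction = "positively" if r > 0 else "negatively"
    print(f"{region}: r = {r:+.2f} ({direction} correlated with feedback)")
```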

    Human Machine Interaction

    In this book, the reader will find a set of papers divided into two sections. The first section presents different proposals focused on the human-machine interaction development process. The second section is devoted to different aspects of interaction, with special emphasis on physical interaction

    Muscle activation mapping of skeletal hand motion: an evolutionary approach.

    Creating controlled dynamic character animation consists of mathematical modelling of muscles and solving the activation dynamics that form the key to coordination. But biomechanical simulation and control are computationally expensive, involving complex differential equations, and are not suitable for real-time platforms like games. Performing such computations at every time-step reduces the frame rate. Modern games use generic software packages called physics engines to perform a wide variety of in-game physical effects, and these engines are optimized for gaming platforms. Therefore, a physics-engine-compatible model of anatomical muscles and an alternative control architecture are essential for creating biomechanical characters in games. This thesis presents a system that generates muscle activations from captured motion by borrowing principles from biomechanics and neural control. A generic physics-engine-compliant muscle model primitive is also developed. The muscle model primitive forms the motion actuator and is an integral part of the physical model used in the simulation. The thesis investigates a stochastic solution for creating a controller that mimics the neural control system employed in the human body. The control system uses evolutionary neural networks that evolve their weights using genetic algorithms. Examples and guidance often act as templates in muscle training during all stages of human life. Similarly, the neural controller attempts to learn muscle coordination from input motion samples. The thesis also explores the objective functions developed to aid the genetic evolution of the neural network. Character interaction with the game world is still a pre-animated behaviour in most current games. Physically-based procedural hand animation is a step towards autonomous interaction of game characters with the game world. The neural controller and the muscle primitive developed are used to animate a dynamic model of a human hand within a real-time physics engine environment
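
    The control scheme described above, a neural network whose weights are evolved by a genetic algorithm to reproduce captured motion, can be illustrated on a toy problem. The sketch below is a deliberate simplification of that idea, not the thesis's system: a tiny 2-4-2 network drives an antagonist muscle pair on a one-degree-of-freedom joint, and a GA variant using truncation selection and Gaussian mutation (crossover omitted) evolves the weights to track a target trajectory. All dynamics constants and network sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WEIGHTS = 2 * 4 + 4 + 4 * 2 + 2  # 2-4-2 feedforward net, 22 parameters

def activations(weights, angle, target):
    """Tiny neural controller: (angle, target) -> antagonist activations."""
    w1 = weights[:8].reshape(2, 4)
    b1 = weights[8:12]
    w2 = weights[12:20].reshape(4, 2)
    b2 = weights[20:22]
    hidden = np.tanh(np.array([angle, target]) @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))  # squash to [0, 1]

def fitness(weights, targets, dt=0.02):
    """Negative tracking error of a toy 1-DOF antagonist-driven joint."""
    angle, velocity, error = 0.0, 0.0, 0.0
    for target in targets:
        flexor, extensor = activations(weights, angle, target)
        torque = 5.0 * (flexor - extensor) - 0.5 * velocity  # damped plant
        velocity += torque * dt
        angle += velocity * dt
        error += (angle - target) ** 2
    return -error

# Target joint trajectory, standing in for a captured-motion sample.
targets = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 100))

# Genetic evolution: keep the 10 best, refill with mutated copies.
population = rng.normal(0, 1, (40, N_WEIGHTS))
for generation in range(30):
    scores = np.array([fitness(ind, targets) for ind in population])
    elite = population[np.argsort(scores)[-10:]]
    children = elite[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, N_WEIGHTS))
    population = np.vstack([elite, children])
    print(f"gen {generation:2d}: best error = {-scores.max():.3f}")
```

    A full GA would add crossover between elite parents, and the thesis evaluates fitness inside a physics engine rather than against a closed-form toy plant, but the evaluate-select-mutate loop has the same shape.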

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty

    Almost Like Being There: Embodiment, Social Presence, and Engagement Using Telepresence Robots in Blended Courses

    As students’ online learning opportunities continue to increase in higher education, students are choosing not to come back to campus in person for a variety of personal, health, safety, and financial reasons. The growing use of video conferencing technology during the COVID-19 pandemic allowed classes to continue, but students reported a sense of disconnectedness and a lack of engagement with their classes. Telepresence robots may be an alternative to video conferencing that provides learning experiences closer to the in-person experience, along with a stronger sense of embodiment, social presence, and engagement in the classroom. This study explored the use of telepresence robots in four undergraduate, blended learning humanities courses. Sixty-nine students (43 in-person and 26 remote) were surveyed using the Telepresence and Engagement Measurement Scale (TEMS) and provided written feedback about their experience. The TEMS measured embodiment, social presence, psychological involvement, and three indicators of engagement: behavioral, affective, and cognitive. Embodiment and social presence were positively correlated, as were embodiment and behavioral engagement. There was no significant difference between the two groups’ perceptions of social presence, but there was a significant difference between the groups’ perceptions of engagement. Qualitative data and effect sizes greater than 0.80 supported the reliability and validity of the TEMS as an instrument for future study of blended learning environments that use remote tools such as telepresence robots. Provided that technological issues such as connectivity and audio and video quality are addressed, telepresence robots can be a useful tool to help students feel more embodied and socially present in today’s blended learning classrooms.
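
    The statistics reported above rest on two standard computations: a Pearson correlation between subscale scores, and a standardized group difference (effect sizes above 0.80 conventionally read as large in Cohen's terms). The sketch below reproduces the shape of those computations on synthetic scores; none of the numbers come from the study.

```python
import numpy as np

# Illustrative scores only; the study's actual TEMS data are not reproduced.
rng = np.random.default_rng(1)
in_person = rng.normal(4.2, 0.6, 43)   # e.g. engagement subscale, 43 students
remote = rng.normal(3.6, 0.6, 26)      # 26 telepresence-robot students

# Pearson correlation, as used to relate embodiment and social presence.
embodiment = rng.normal(4.0, 0.5, 69)
social_presence = embodiment * 0.7 + rng.normal(0, 0.3, 69)
r = np.corrcoef(embodiment, social_presence)[0, 1]

# Cohen's d with a pooled standard deviation, the usual effect-size
# measure behind statements like "effect sizes greater than 0.80".
n1, n2 = len(in_person), len(remote)
pooled_sd = np.sqrt(((n1 - 1) * in_person.var(ddof=1) +
                     (n2 - 1) * remote.var(ddof=1)) / (n1 + n2 - 2))
d = (in_person.mean() - remote.mean()) / pooled_sd

print(f"r(embodiment, social presence) = {r:.2f}")
print(f"Cohen's d (in-person vs. remote engagement) = {d:.2f}")
```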