
    Toward More Versatile and Intuitive Cortical Brain–Machine Interfaces

    Brain–machine interfaces have great potential for the development of neuroprosthetic applications to assist patients suffering from brain injury or neurodegenerative disease. One type of brain–machine interface is a cortical motor prosthetic, which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. This review focuses on several new topics in the arena of cortical prosthetics. These include: recordings from cortical areas outside motor cortex; local field potentials as a source of recorded signals; somatosensory feedback for more dexterous control of robotics; and new decoding methods that work in concert to form an ecology of decode algorithms. These new advances promise to greatly accelerate the applicability and ease of operation of motor prosthetics.

    Cortical Models for Movement Control

    Defense Advanced Research Projects Agency and Office of Naval Research (N0014-95-l-0409)

    Reaching Performance in Healthy Individuals and Stroke Survivors Improves after Practice with Vibrotactile State Feedback

    Stroke causes deficits of cognitive, motor, and/or somatosensory function. These deficits degrade the capability to perform activities of daily living (ADLs). Many research investigations have focused on mitigating the motor deficits of stroke through motor rehabilitation. However, somatosensory deficits are common and may contribute importantly to impairments in the control of functional arm movement. This dissertation advances the goal of promoting functional motor recovery after stroke by investigating the use of a vibrotactile feedback (VTF) body-machine interface (BMI). The VTF BMI is intended to improve control of the contralesional arm of stroke survivors by delivering supplemental limb-state feedback to the ipsilesional arm, where somatosensory feedback remains intact. To develop and utilize a VTF BMI, we first investigated how vibrotactile stimuli delivered on the arm are perceived and discriminated. We determined that stimuli delivered sequentially are perceived better than stimuli delivered simultaneously. Such stimuli can propagate up to 8 cm from the delivery site, so future applications should allow adequate spacing between stimulation sites. We applied these findings to create a multi-channel VTF interface to guide the arm in the absence of vision. In healthy people, we found that short-term practice (less than 2.5 hrs) allows for small improvements in the accuracy of horizontal planar reaching. Long-term practice (about 10 hrs) engages motor learning such that the accuracy and efficiency of reaching are improved and the cognitive load of VTF-guided reaching is reduced. During practice, participants adopted a movement strategy whereby BMI feedback changed in just one channel at a time. From this observation, we sought to develop a practice paradigm that might improve stroke survivors' learning of VTF-guided reaching without vision.
We investigated the effects of practice method (whole practice vs. part practice) on stroke survivors' capability to make VTF-guided arm movements. Stroke survivors were able to improve the accuracy of VTF-guided reaching with practice; however, there were no differences between the practice methods. In conclusion, practice on VTF-guided 2D reaching can be used by healthy people and stroke survivors. Future studies should investigate long-term practice in stroke survivors and their capability to use VTF BMIs to improve performance of unconstrained actions, including ADLs.
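The one-channel-at-a-time strategy described above suggests a simple encoding in which each vibrotactile channel is driven by one signed component of the limb-state error. The sketch below is a hypothetical illustration of such a mapping; the four-site layout, gain, and intensity range are assumptions for illustration, not details taken from the dissertation:

```python
import numpy as np

def vtf_encode(error_xy, gain=1.0, max_intensity=1.0):
    """Map a 2D limb-state error onto four vibrotactile channels.

    Channels 0-3 correspond to hypothetical +x, -x, +y, -y stimulation
    sites. Only the channel matching the sign of each error component
    is driven, so feedback changes in one channel at a time when the
    user corrects one axis at a time.
    """
    ex, ey = error_xy
    intensities = np.array([
        max(ex, 0.0),    # +x site
        max(-ex, 0.0),   # -x site
        max(ey, 0.0),    # +y site
        max(-ey, 0.0),   # -y site
    ]) * gain
    # Saturate at the hardware's maximum drive level.
    return np.clip(intensities, 0.0, max_intensity)
```

For example, an error of (0.5, -0.2) drives only the +x and -y sites, so a user correcting one axis at a time sees feedback change in a single channel.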

    The dynamics of motor learning through the formation of internal models

    A medical student learning to perform a laparoscopic procedure or a recently paralyzed user of a powered wheelchair must learn to operate machinery via interfaces that translate their actions into commands for an external device. Since the user's actions are selected from a number of alternatives that would result in the same effect in the control space of the external device, learning to use such interfaces involves dealing with redundancy. Subjects need to learn an externally chosen many-to-one map that transforms their actions into device commands. Mathematically, we describe this type of learning as a deterministic dynamical process, whose state is the evolving forward and inverse internal models of the interface. The forward model predicts the outcomes of actions, while the inverse model generates actions designed to attain desired outcomes. Both the mathematical analysis of the proposed model of learning dynamics and the learning performance observed in a group of subjects demonstrate a first-order exponential convergence of the learning process toward a particular state that depends only on the initial state of the inverse and forward models and on the sequence of targets supplied to the users. Noise is not only present but necessary for the convergence of learning through the minimization of the difference between actual and predicted outcomes.
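The first-order exponential convergence described above can be illustrated with a toy simulation: a fixed many-to-one linear interface maps a 3-D action to a 2-D outcome, and a linear inverse model is adapted by gradient descent on the outcome error. The dimensions, learning rate, and Gaussian targets below are illustrative assumptions, not the study's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))   # fixed many-to-one interface map: action u -> outcome y
M = np.zeros((3, 2))          # learner's inverse model: desired outcome -> action
eta = 0.02                    # learning rate

errors = []
for step in range(500):
    target = rng.normal(size=2)      # target supplied to the "user"
    u = M @ target                   # action generated by the inverse model
    y = A @ u                        # outcome produced by the interface
    err = target - y
    # Gradient step on the squared outcome error ||target - A M target||^2.
    M += eta * A.T @ np.outer(err, target)
    errors.append(np.linalg.norm(err))
```

With a small learning rate the outcome error decays roughly exponentially, and the state the learner converges to depends on the initial inverse model and the target sequence, mirroring the first-order dynamics described in the abstract.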

    A brain-machine interface for assistive robotic control

    Brain-machine interfaces (BMIs) are the only currently viable means of communication for many individuals suffering from locked-in syndrome (LIS) – profound paralysis that results in severely limited or total loss of voluntary motor control. By inferring user intent from task-modulated neurological signals and then translating those intentions into actions, BMIs can afford LIS patients increased autonomy. Significant effort has been devoted to developing BMIs over the last three decades, but only recently have the combined advances in hardware, software, and methodology provided a setting to realize the translation of this research from the lab into practical, real-world applications. Non-invasive methods, such as those based on the electroencephalogram (EEG), offer the only feasible solution for practical use at the moment, but suffer from limited communication rates and susceptibility to environmental noise. Maximization of the efficacy of each decoded intention, therefore, is critical. This thesis addresses the challenge of implementing a BMI intended for practical use with a focus on an autonomous assistive robot application. First, an adaptive EEG-based BMI strategy is developed that relies upon code-modulated visual evoked potentials (c-VEPs) to infer user intent. As voluntary gaze control is typically not available to LIS patients, c-VEP decoding methods under both gaze-dependent and gaze-independent scenarios are explored. Adaptive decoding strategies in both offline and online task conditions are evaluated, and a novel approach to assess ongoing online BMI performance is introduced. Next, an adaptive neural network-based system for assistive robot control is presented that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects.
Exploratory learning, or “learning by doing,” is an unsupervised method in which the robot is able to build an internal model for motor planning and coordination based on real-time sensory inputs received during exploration. Finally, a software platform intended for practical BMI application use is developed and evaluated. Using online c-VEP methods, users control a simple 2D cursor control game, a basic augmentative and alternative communication tool, and an assistive robot, both manually and via high-level goal-oriented commands.
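As a rough sketch of the “learning by doing” idea, the toy example below babbles random joint configurations of a simulated two-link planar arm, records the observed outcomes, and reuses the closest recorded outcome as an inverse model for reaching. The arm geometry, sample count, and nearest-neighbour lookup are assumptions chosen for illustration; the thesis itself uses an adaptive neural-network model driven by real-time sensory inputs:

```python
import numpy as np

def fkine(q, l1=1.0, l2=0.8):
    """Toy two-link planar arm forward kinematics (stands in for the robot)."""
    return np.array([
        l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
        l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1]),
    ])

rng = np.random.default_rng(1)

# Exploration phase: babble random joint configurations and record outcomes.
Q = rng.uniform(-np.pi, np.pi, size=(5000, 2))
X = np.array([fkine(q) for q in Q])

def reach(target):
    """Inverse lookup: reuse the babbled sample whose outcome is closest."""
    i = np.argmin(np.linalg.norm(X - target, axis=1))
    return Q[i]
```

After enough exploration the lookup returns joint angles whose forward-kinematic outcome lies close to the requested target, which is the essence of building an internal model purely from self-generated experience.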

    Peripersonal Space in the Humanoid Robot iCub

    Developing behaviours for interaction with objects close to the body is a primary goal for any organism to survive in the world. Being able to develop such behaviours will be an essential feature in autonomous humanoid robots in order to improve their integration into human environments. Adaptable spatial abilities will make robots safer and improve their social skills and their human-robot and robot-robot collaboration abilities. This work investigated how a humanoid robot can explore and create action-based representations of its peripersonal space, the region immediately surrounding the body where reaching is possible without location displacement. It presents three empirical studies based on peripersonal space findings from psychology, neuroscience and robotics. The experiments used a visual perception system based on active vision and biologically inspired neural networks. The first study investigated the contribution of binocular vision in a reaching task. Results indicated that the vergence signal is a useful embodied depth-estimation cue in the peripersonal space of humanoid robots. The second study explored the influence of morphology and postural experience on confidence levels in reaching assessment. Results showed a decrease in confidence when assessing targets located farther from the body, possibly reflecting errors in depth estimation from vergence at longer distances. Additionally, it was found that a proprioceptive arm-length signal extends the robot's peripersonal space. The last experiment modelled development of the reaching skill by implementing motor synergies that progressively unlock degrees of freedom in the arm. The model was advantageous when compared to one that included no developmental stages.
This work contributes to knowledge by extending research on biologically inspired methods for building robots, presenting new ways to further investigate the robotic properties involved in dynamical adaptation to body and sensing characteristics, vision-based action, morphology, and confidence levels in reaching assessment. CONACyT, Mexico (National Council of Science and Technology).
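The binocular-vergence result in the first study rests on simple geometry: under symmetric fixation, fixation depth follows from the vergence angle and the inter-camera baseline. The sketch below illustrates that relation; the baseline value is an assumed placeholder, not a measured iCub parameter:

```python
import math

def depth_from_vergence(vergence_rad, baseline_m=0.068):
    """Estimate fixation distance from the vergence angle of two cameras.

    Assumes symmetric fixation: each camera rotates inward by half the
    vergence angle toward a point on the midline, so the fixation point,
    the two cameras, and the midpoint of the baseline form two congruent
    right triangles. The ~68 mm baseline is an illustrative assumption.
    """
    return (baseline_m / 2.0) / math.tan(vergence_rad / 2.0)
```

Because the vergence angle shrinks with distance, a fixed angular measurement error produces depth errors that grow rapidly for farther targets, which is consistent with the reduced reaching confidence the second study observed away from the body.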

    Brain-machine interfaces for rehabilitation in stroke: A review

    BACKGROUND: Motor paralysis after stroke has devastating consequences for patients, families and caregivers. Although therapies have improved in recent years, traditional rehabilitation still fails in patients with severe paralysis. Brain-machine interfaces (BMI) have emerged as a promising tool to guide motor rehabilitation interventions as they can be applied to patients with no residual movement. OBJECTIVE: This paper reviews the efficiency of BMI technologies to facilitate neuroplasticity and motor recovery after stroke. METHODS: We provide an overview of the existing rehabilitation therapies for stroke, the rationale behind the use of BMIs for motor rehabilitation, the current state of the art and the results achieved so far with BMI-based interventions, as well as the future perspectives of neural-machine interfaces. RESULTS: Since the first pilot study by Buch and colleagues in 2008, several controlled clinical studies have been conducted, demonstrating the efficacy of BMIs to facilitate functional recovery in completely paralyzed stroke patients with noninvasive technologies such as the electroencephalogram (EEG). CONCLUSIONS: Despite encouraging results, motor rehabilitation based on BMIs is still in a preliminary stage, and further improvements are required to boost its efficacy. Invasive and hybrid approaches are promising and might set the stage for the next generation of stroke rehabilitation therapies. This study was funded by the Bundesministerium für Bildung und Forschung (BMBF) MOTORBIC (FKZ13GW0053) and AMORSA (FKZ16SV7754), the Deutsche Forschungsgemeinschaft (DFG), the fortüne-Program of the University of Tübingen (2422-0-0 and 2452-0-0), and the Basque Government Science Program (EXOTEK: KK2016/00083). NIL was supported by the Basque Government's scholarship for predoctoral students.

    Introduction: The Fourth International Workshop on Epigenetic Robotics

    As in previous editions, this workshop aims to be a forum for multi-disciplinary research ranging from developmental psychology to the neural sciences (in the widest sense) and robotics, including computational studies. The aim is two-fold: on the one hand, understanding the brain by engineering embodied systems; on the other, building artificial epigenetic systems. “Epigenetic” carries the idea that we are interested in studying development through interaction with the environment. This idea entails the embodiment of the system, its situatedness in the environment, and of course a prolonged period of postnatal development during which this interaction can actually take place. This is still a relatively new endeavor, although the seeds of the developmental robotics community had been in the air since the nineties (Berthouze and Kuniyoshi, 1998; Metta et al., 1999; Brooks et al., 1999; Breazeal, 2000; Kozima and Zlatev, 2000). A few had the intuition (see Lungarella et al., 2003, for a comprehensive review) that intelligence could not possibly be engineered simply by copying systems that are “ready made”, but rather that the development of the system plays a major role. This integration of disciplines raises the important issue of learning on the multiple scales of developmental time, that is, how to build systems that can eventually learn in any environment rather than being programmed for a specific environment. On the other hand, the hope is that robotics might become a new tool for brain science, similarly to what simulation and modeling have become for the study of the motor system. Our community is still very much evolving and “under construction”, and for this reason we tried to encourage submissions from the psychology community. Additionally, we invited four neuroscientists and no roboticists for the keynote lectures.
We received a record number of submissions (more than 50), and given the overall size and duration of the workshop together with our desire to maintain a single-track format, we had to be more selective than ever in the review process (a 20% acceptance rate on full papers). This is, if not an index of quality, at least an index of the interest that gravitates around this still-new discipline.