47 research outputs found

    The dynamics of motor learning through the formation of internal models

    Get PDF
    A medical student learning to perform a laparoscopic procedure or a recently paralyzed user of a powered wheelchair must learn to operate machinery via interfaces that translate their actions into commands for an external device. Because the user's actions are selected from a number of alternatives that would produce the same effect in the control space of the external device, learning to use such interfaces involves dealing with redundancy: subjects must learn an externally chosen many-to-one map that transforms their actions into device commands. Mathematically, we describe this type of learning as a deterministic dynamical process whose state is the evolving pair of forward and inverse internal models of the interface. The forward model predicts the outcomes of actions, while the inverse model generates actions intended to attain desired outcomes. Both the mathematical analysis of the proposed model of learning dynamics and the learning performance observed in a group of subjects demonstrate a first-order exponential convergence of the learning process toward a state that depends only on the initial state of the inverse and forward models and on the sequence of targets supplied to the users. Noise is not only present but necessary for the convergence of learning, which proceeds through the minimization of the difference between actual and predicted outcomes.
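    The first-order exponential convergence described above can be illustrated with a minimal scalar sketch (not the paper's model: the interface gain, learning rate, and update rule here are assumptions for illustration), in which a learner refines a forward-model estimate of an unknown interface map by minimizing the difference between actual and predicted outcomes:

```python
import random

random.seed(0)

# Hypothetical scalar interface: the true (externally chosen) map has
# gain w_true; the learner keeps a forward-model estimate w_hat and
# inverts it to choose an action for each supplied target.
w_true = 2.0   # unknown interface gain (assumed for illustration)
w_hat = 0.5    # initial forward-model estimate
eta = 0.1      # learning rate

errors = []
for trial in range(100):
    target = random.uniform(-1.0, 1.0)       # target supplied to the user
    action = target / w_hat                  # inverse model picks an action
    outcome = w_true * action                # actual device outcome
    predicted = w_hat * action               # forward-model prediction
    w_hat += eta * (outcome - predicted) * action  # reduce prediction error
    errors.append(abs(w_true - w_hat))

print(errors[0], errors[-1])
```

    Each step multiplies the model error by (1 - eta * action**2), so the error trace decays geometrically toward zero, in the spirit of the first-order convergence the abstract reports.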

    Decoding Information From Neural Signals Recorded Using Intraneural Electrodes: Toward the Development of a Neurocontrolled Hand Prosthesis

    Get PDF
    The possibility of controlling dexterous hand prostheses through a direct connection with the nervous system is particularly interesting because of the significant improvement in patients' quality of life that could derive from this achievement. Among the various approaches, intrafascicular electrodes implanted in peripheral nerves are excellent neural-interface candidates, representing a good compromise between high selectivity and relatively low invasiveness. Moreover, this approach has undergone preliminary testing in human volunteers and has shown promise. In this paper, we investigate whether intrafascicular electrodes can be used to decode multiple sensory and motor information channels, with the aim of developing a finite-state algorithm that may be employed to control neuroprostheses and neurocontrolled hand prostheses. The results achieved in both animal and human experiments show that the combination of multiple-site recordings and advanced signal-processing techniques (such as wavelet denoising and spike-sorting algorithms) can identify both sensory stimuli (in animal models) and motor commands (in a human volunteer). These findings have interesting implications that should be investigated in future experiments. © 2006 IEEE.
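    A common first stage of the spike-sorting pipelines the abstract mentions is amplitude thresholding against a robust noise estimate. The sketch below is a generic illustration, not the paper's pipeline; the threshold factor and the synthetic trace are assumptions:

```python
import math

def detect_spikes(signal, k=4.0):
    # robust noise estimate via the median absolute deviation (a common
    # choice in extracellular recordings); k * sigma is the threshold
    med = sorted(signal)[len(signal) // 2]
    mad = sorted(abs(x - med) for x in signal)[len(signal) // 2]
    sigma = mad / 0.6745
    threshold = k * sigma
    # keep only local amplitude peaks that cross the threshold
    return [i for i in range(1, len(signal) - 1)
            if abs(signal[i]) > threshold
            and abs(signal[i]) >= abs(signal[i - 1])
            and abs(signal[i]) > abs(signal[i + 1])]

# synthetic trace: low-amplitude background with two embedded "spikes"
trace = [0.1 * math.sin(0.7 * i) for i in range(200)]
trace[50] += 3.0
trace[120] -= 3.0
print(detect_spikes(trace))  # → [50, 120]
```

    Real pipelines would follow this detection stage with waveform extraction, wavelet denoising, and clustering of the detected spikes into putative units.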

    A Symbiotic Brain-Machine Interface through Value-Based Decision Making

    Get PDF
    BACKGROUND: In the development of Brain-Machine Interfaces (BMIs), there is a great need to enable users to interact with changing environments during the activities of daily life. The number and scope of the learning tasks encountered during interaction with the environment, as well as the pattern of brain activity, are expected to vary over time. These conditions, in addition to neural reorganization, pose a challenge to decoding neural commands for BMIs. We have developed a new BMI framework in which a computational agent symbiotically decodes users' intended actions by utilizing both motor commands and goal information taken directly from the brain through a continuous Perception-Action-Reward Cycle (PARC). METHODOLOGY: The control architecture was based on Actor-Critic learning, a PARC-based reinforcement learning method. Our neurophysiology studies in rat models suggested that the nucleus accumbens (NAcc) contains a rich representation of goal information, in terms of predicting the probability of earning reward, and that this information can be translated into an evaluative feedback for adapting the decoder with high precision. Simulated neural control experiments showed that the system was able to maintain high performance in decoding neural motor commands during novel tasks or in the presence of reorganization in the neural input. We then implanted a dual micro-wire array in the primary motor cortex (M1) and the NAcc of the rat brain and implemented a full closed-loop system in which robot actions were decoded from single-unit activity in M1 based on an evaluative feedback estimated from the NAcc. CONCLUSIONS: Our results suggest that adapting the BMI decoder with an evaluative feedback extracted directly from the brain is a possible solution to the problem of operating BMIs in changing environments with dynamic neural signals. During closed-loop control, the agent was able to solve a reaching task by capturing the action-reward interdependency in the brain.
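    The Actor-Critic adaptation the abstract describes can be sketched in miniature, with a scalar evaluative signal standing in for the reward information decoded from NAcc. The two-action task, learning rates, and reward rule below are illustrative assumptions, not the study's setup:

```python
import math
import random

random.seed(1)

prefs = [0.0, 0.0]      # actor's preferences over two candidate actions
value = 0.0             # critic's running estimate of expected reward
alpha, beta = 0.2, 0.1  # actor / critic learning rates

def softmax_pick(p):
    # sample an action with probability proportional to exp(preference)
    z = [math.exp(x) for x in p]
    r = random.random() * sum(z)
    return 0 if r < z[0] else 1

for step in range(500):
    a = softmax_pick(prefs)
    reward = 1.0 if a == 1 else 0.0  # stand-in for NAcc-decoded feedback
    td = reward - value              # evaluative error signal
    value += beta * td               # critic update
    prefs[a] += alpha * td           # actor update for the taken action

print(prefs)  # preference for action 1 ends up dominating
```

    The key point mirrored here is that the decoder (actor) adapts using only an evaluative signal, without an explicit external error measure.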

    A Review of Control Strategies in Closed-Loop Neuroprosthetic Systems

    Get PDF
    It has been widely recognized that closed-loop neuroprosthetic systems achieve more favourable outcomes for users than equivalent open-loop devices. Improved task performance, better usability and greater embodiment have all been reported in systems utilizing some form of feedback. However, the interdisciplinary work on neuroprosthetic systems can lead to miscommunication, because similar, well-established nomenclature carries different meanings in different fields. Here we present a review of control strategies in existing experimental, investigational and clinical neuroprosthetic systems in order to establish a baseline and promote a common understanding of different feedback modes and closed-loop controllers. The first section provides a brief discussion of feedback control and control theory. The second section reviews the control strategies of recent brain-machine interfaces, neuromodulatory implants, neuroprosthetic systems and assistive neurorobotic devices. The final section examines the different approaches to feedback in current neuroprosthetic and neurorobotic systems.

    Biocybernetic Adaptation Strategies: Machine Awareness of Human Engagement for Improved Operational Performance

    Get PDF
    Human operators interacting with machines or computers continually adapt to the needs of the system, ideally resulting in optimal performance. In some cases, however, deteriorated performance is an outcome. Adaptation to the situation is a strength expected of the human operator, often accomplished through self-regulation of mental state. Adaptation is at the core of the human operator's activity, and research has demonstrated that the implementation of a feedback loop can enhance this natural skill to improve training and human/machine interaction. Biocybernetic adaptation involves a "loop upon a loop," which may be visualized as a superimposed loop that senses a physiological signal and influences the operator's task at some point. Biocybernetic adaptation in, for example, physiologically adaptive automation employs the "steering" sense of "cybernetic," and serves a transitory adaptive purpose: to better serve the human operator by more fully representing their responses to the system. The adaptation process usually makes use of an assessment of transient cognitive state to steer a functional aspect of a system that is external to the operator's physiology from which the state assessment is derived. Therefore, the objective of this paper is to detail the structure of biocybernetic systems regarding the level of engagement of interest for adaptive systems, their processing pipeline, and the adaptation strategies employed for training purposes, in an effort to pave the way towards machine awareness of human state for self-regulation and improved operational performance.

    Use of a Bayesian maximum-likelihood classifier to generate training data for brain–machine interfaces

    Full text link
    Brain–machine interface decoding algorithms need to be predicated on assumptions that are easily met outside of an experimental setting to enable a practical clinical device. Given present technological limitations, there is a need for decoding algorithms which (a) are not dependent upon a large number of neurons for control, (b) are adaptable to alternative sources of neuronal input such as local field potentials (LFPs), and (c) require only marginal training data for daily calibrations. Moreover, practical algorithms must recognize when the user is not intending to generate a control output and must eliminate poor training data. In this paper, we introduce and evaluate a Bayesian maximum-likelihood estimation strategy to address the issues of isolating quality training data and self-paced control. Six animal subjects demonstrate that a multiple-state classification task, loosely based on the standard center-out task, can be accomplished with fewer than five engaged neurons while requiring fewer than ten trials for algorithm training. In addition, untrained animals quickly obtained accurate device control, utilizing LFPs as well as neurons in cingulate cortex, two non-traditional neural inputs.
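    As a rough illustration of the classification setting (not the paper's exact algorithm), a maximum-likelihood classifier can fit a per-feature Gaussian to a handful of training trials per state and assign a test vector of firing rates to the most likely state. The states, feature counts, and rates below are invented for the example:

```python
import math

def fit(trials):
    # per-feature Gaussian maximum-likelihood fit (mean and variance)
    n = len(trials)
    means = [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]
    varis = [max(sum((t[i] - means[i]) ** 2 for t in trials) / n, 1e-6)
             for i in range(len(trials[0]))]
    return means, varis

def log_likelihood(x, model):
    means, varis = model
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, means, varis))

# fewer than ten trials per state, as in the abstract; each trial is a
# vector of firing rates (here two hypothetical units, in Hz)
training = {
    "rest":  [[2.0, 1.5], [1.8, 1.7], [2.2, 1.4]],
    "left":  [[8.1, 2.0], [7.7, 2.3], [8.4, 1.9]],
    "right": [[2.1, 9.0], [1.9, 8.6], [2.3, 9.2]],
}
models = {state: fit(trials) for state, trials in training.items()}

def classify(x):
    return max(models, key=lambda s: log_likelihood(x, models[s]))

print(classify([7.9, 2.1]))  # → left
```

    An explicit "rest" class plays the role the abstract assigns to recognizing when the user is not intending to generate a control output.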

    On the Impact of Gravity Compensation on Reinforcement Learning in Goal-Reaching Tasks for Robotic Manipulators

    Get PDF
    Advances in machine learning technologies in recent years have facilitated developments in autonomous robotic systems. Designing these autonomous systems typically requires manually specified models of the robotic system and world when using classical control-based strategies, or time-consuming and computationally expensive data-driven training when using learning-based strategies. Combining classical control with learning-based strategies may mitigate both requirements. However, the performance of the combined control system is not obvious, given that there are two separate controllers. This paper focuses on one such combination, which uses gravity compensation together with reinforcement learning (RL). We present a study of the effects of gravity compensation on the performance of two reinforcement learning algorithms when solving reaching tasks using a simulated seven-degree-of-freedom robotic arm. The results of our study demonstrate that gravity compensation coupled with RL can reduce the training required in reaching tasks involving elevated target locations, but not all target locations.
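    The combination the paper studies, an analytic gravity-compensation term summed with a learned command, can be sketched on a one-link arm. The dynamics, gains, and the proportional-derivative stand-in for the RL policy are illustrative assumptions, not the paper's setup:

```python
import math

m, L, g = 1.0, 0.5, 9.81  # link mass, length, gravity (assumed values)

def gravity_compensation(theta):
    # torque that exactly cancels gravity for a uniform link pivoting
    # at one end (center of mass at L/2)
    return m * g * (L / 2) * math.cos(theta)

def policy_action(theta, omega, target):
    # stand-in for the learned policy: a proportional-derivative term,
    # used here so the sketch runs without an RL training loop
    return 4.0 * (target - theta) - 1.0 * omega

def step(theta, omega, tau, dt=0.01):
    # one-link pendulum dynamics under the combined command tau
    inertia = m * L * L / 3.0
    alpha = (tau - m * g * (L / 2) * math.cos(theta)) / inertia
    omega += alpha * dt
    theta += omega * dt
    return theta, omega

theta, omega, target = 0.0, 0.0, math.pi / 3  # reach an elevated target
for _ in range(2000):
    tau = gravity_compensation(theta) + policy_action(theta, omega, target)
    theta, omega = step(theta, omega, tau)

print(abs(theta - target))  # small residual reaching error
```

    With gravity cancelled analytically, the learned component only has to shape the residual dynamics, which is the intuition behind the reduced training the paper reports for elevated targets.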

    The feedback dynamics of brain-computer interfaces in a distributed processing environment

    Get PDF
    This paper describes a distributed paradigm for human brain-computer interfaces that can incorporate machine-learning-directed stimulus feedback to the subject. Specifically, we use OpenBCI hardware and software to capture real-time EEG (electroencephalography) waveforms from a subject on a host "client" computer and stream them to another "server" computer, which can perform complex analyses on the waveforms before sending commands back to the OpenBCI interface directing alterations to the stimulus. In addition to describing the conceptual system framework, we present test results quantifying the closed-loop system latencies under various conditions. Quantifying latency in any feedback control loop (in this case, one that actually contains the human subject's brain) is vital, since excess latency can destabilize a system.
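    A round-trip latency measurement of the kind the paper reports can be sketched with a local echo server standing in for the remote analysis machine. The packet size, packet count, and socket details are assumptions, not the paper's setup:

```python
import socket
import statistics
import threading
import time

HOST, N_PACKETS, PACKET = "127.0.0.1", 200, b"x" * 64

srv = socket.socket()
srv.bind((HOST, 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_server():
    # stands in for the "server" computer: echo each packet back
    conn, _ = srv.accept()
    with conn:
        for _ in range(N_PACKETS):
            conn.sendall(conn.recv(64))

threading.Thread(target=echo_server, daemon=True).start()

latencies = []
with socket.socket() as cli:
    cli.connect((HOST, port))
    for _ in range(N_PACKETS):
        t0 = time.perf_counter()
        cli.sendall(PACKET)        # "EEG-sample-sized" packet out
        cli.recv(64)               # command back (assumes small packets
        latencies.append(time.perf_counter() - t0)  # arrive in one read)
srv.close()

print(f"median round-trip: {statistics.median(latencies) * 1e3:.3f} ms")
```

    Taking the median over many packets, as here, guards the latency estimate against occasional scheduling spikes on either machine.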