
    Decoding Lower Limb Muscle Activity and Kinematics from Cortical Neural Spike Trains during Monkey Performing Stand and Squat Movements

    abstract: Extensive literature has described approaches for decoding upper limb kinematics or muscle activity from multichannel cortical spike recordings for brain-machine interface (BMI) applications. However, similar work on the lower limb remains relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity, characterized by electromyography (EMG) signals recorded while monkeys perform stand/squat movements, can be accurately decoded from neural spike trains in primary motor cortex (M1). Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted into 8 right leg/thigh muscles. With ample data collected from neurons across a large brain area, we performed a spike-triggered average (SpTA) analysis and obtained a series of density contours revealing the spatial distributions of the muscle-innervating neurons corresponding to each given muscle. Guided by these results, we identified the locations optimal for chronic electrode implantation and subsequently carried out chronic neural recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter; in the latter, an artificial neural network was incorporated to handle the nonlinearity in neural tuning. High correlation coefficients and signal-to-noise ratios between the predicted and actual data were achieved for both EMG signals and kinematics in both monkeys. Higher decoding accuracy and a faster convergence rate were achieved with the unscented Kalman filter. These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat can be accurately decoded from a group of M1 neurons with the proposed algorithms. Our findings provide new insights for extending current BMI design concepts and techniques from upper limbs to lower limb settings. Brain-controlled exoskeletons, prostheses, or neuromuscular electrical stimulators for lower limbs may thus be developed, enabling subjects to operate complex biomechatronic devices by thought in a more harmonized manner. View the article as published at http://journal.frontiersin.org/article/10.3389/fnins.2017.00044/full
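    To make the recursive Bayesian framework above concrete, the following is a minimal sketch of its standard-Kalman-filter variant: a linear state-space model (state = EMG/kinematic variables, observations = binned M1 spike counts) whose parameters are fit by least squares and then run as a predict/update recursion. The dimensions, synthetic data, and variable names are illustrative assumptions, not details from the paper.

```python
# Minimal Kalman-filter decoder sketch (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: x = EMG/kinematic state, y = binned M1 spike counts.
T, n_state, n_neurons = 500, 4, 32
X = np.cumsum(rng.normal(size=(T, n_state)), axis=0)      # smooth latent states
C_true = rng.normal(size=(n_neurons, n_state))
Y = X @ C_true.T + rng.normal(scale=2.0, size=(T, n_neurons))

# Fit model parameters by least squares (the usual Kalman decoder training step).
A = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T       # state transition
C = np.linalg.lstsq(X, Y, rcond=None)[0].T                # neural tuning (observation)
W = np.cov((X[1:] - X[:-1] @ A.T).T)                      # process noise
Q = np.cov((Y - X @ C.T).T)                               # observation noise

def kalman_decode(Y):
    """Recursive Bayesian estimate of the state from spike counts."""
    x, P, out = np.zeros(n_state), np.eye(n_state), []
    for y in Y:
        x, P = A @ x, A @ P @ A.T + W                     # predict
        S = C @ P @ C.T + Q
        K = P @ C.T @ np.linalg.solve(S, np.eye(n_neurons))  # Kalman gain
        x = x + K @ (y - C @ x)                           # update with spike counts
        P = (np.eye(n_state) - K @ C) @ P
        out.append(x)
    return np.array(out)

X_hat = kalman_decode(Y)
print("decoding r:", np.corrcoef(X_hat[:, 0], X[:, 0])[0, 1])
```

    The paper's unscented-Kalman-filter variant instead propagates sigma points through a nonlinear tuning model (there, an artificial neural network) in place of the linear observation matrix C.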

    Efficient Universal Computing Architectures for Decoding Neural Activity

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient. We validate the performance of our overall system by decoding electrophysiologic data from a behaving rodent. United States. National Institutes of Health (Grant NS056140)
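    As a rough illustration of the counting-only philosophy described above (not the authors' actual architecture), the sketch below decodes with integrate-and-fire-style units whose synapses are integer fan-in counts, so the implantable portion needs only increments and threshold comparisons. Connectivity, thresholds, and the spike raster are all invented for illustration.

```python
# Counting-and-compare decoder sketch: integrate-and-fire units with integer fan-in.
import numpy as np

rng = np.random.default_rng(1)

n_channels, n_outputs, T = 32, 2, 200
# Integer fan-in: how many times channel j is wired into output unit i.
fanin = rng.integers(0, 3, size=(n_outputs, n_channels))
threshold = 20

spikes = rng.random((T, n_channels)) < 0.1     # binary spike raster, one bin per row

counters = np.zeros(n_outputs, dtype=int)
output_spikes = np.zeros((T, n_outputs), dtype=bool)
for t in range(T):
    for i in range(n_outputs):
        # Counting only: one increment per wired-in input spike
        # (the dot product here just stands in for repeated increments).
        counters[i] += int(fanin[i] @ spikes[t])
        if counters[i] >= threshold:           # logic comparison, then reset
            output_spikes[t, i] = True
            counters[i] = 0

print("output spike counts:", output_spikes.sum(axis=0))
```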

    A model-based approach to robot kinematics and control using discrete factor graphs with belief propagation

    Much recent research in robotics has shifted its focus from traditional, specialized industrial tasks to investigations of new types of robots and alternative ways of controlling them. In this paper, we describe the development of a generic method based on factor graphs to model robot kinematics. We focused on the kinematics aspect of robot control because it provides a fast and systematic solution for the robot agent to move in a dynamic environment. We developed neurally inspired factor graph models that can be applied to two different robotic systems: a mobile platform and a robotic arm. We also demonstrated that we can extend the static model of the robotic arm into a dynamic model useful for imitating natural movements of a human hand. We tested our methods in a simulation environment as well as in scenarios involving real robots. The experimental results demonstrated the flexibility of our proposed methods in terms of remodeling and learning, which enabled the modeled robot to perform reliably during the execution of given tasks.
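    As a small, hedged example of sum-product belief propagation on a discrete factor graph for kinematics (a toy analogue of the paper's models, not its actual formulation), the sketch below discretizes the two joint angles of a planar 2-link arm, scores them with a pairwise factor measuring how well forward kinematics reaches a target, and reads off per-joint marginals. Link lengths, grid resolution, and the Gaussian factor width are assumed values.

```python
# Sum-product on a tiny discrete factor graph: two joint-angle variables,
# one pairwise factor tying them to a desired end-effector position.
import numpy as np

L1, L2 = 1.0, 0.8                        # link lengths (assumed)
target = np.array([1.2, 0.6])            # desired end-effector position
thetas = np.linspace(-np.pi, np.pi, 90)  # discretized joint-angle states

# Pairwise factor g(th1, th2) ~ exp(-||FK(th1, th2) - target||^2 / (2 s^2))
TH1, TH2 = np.meshgrid(thetas, thetas, indexing="ij")
ex = L1 * np.cos(TH1) + L2 * np.cos(TH1 + TH2)
ey = L1 * np.sin(TH1) + L2 * np.sin(TH1 + TH2)
g = np.exp(-((ex - target[0]) ** 2 + (ey - target[1]) ** 2) / (2 * 0.05 ** 2))

prior1 = np.ones_like(thetas)            # flat unary priors over each joint
prior2 = np.ones_like(thetas)

# Sum-product messages on this two-variable tree (exact inference here):
msg_to_1 = g @ prior2                    # factor -> theta1: sum over theta2
msg_to_2 = g.T @ prior1                  # factor -> theta2: sum over theta1
belief1 = prior1 * msg_to_1; belief1 /= belief1.sum()
belief2 = prior2 * msg_to_2; belief2 /= belief2.sum()

# Marginal maxima (not the joint MAP, but a fast per-joint readout).
th1, th2 = thetas[belief1.argmax()], thetas[belief2.argmax()]
print(f"per-joint marginal maxima: {th1:.2f}, {th2:.2f} rad")
```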

    Towards Subject Independent Sign Language Recognition: A Segment-Based Probabilistic Approach

    Ph.D. (Doctor of Philosophy)

    A quantitative investigation of natural head movement and its contribution to spatial orientation perception.

    Movement is ubiquitous in everyday life. As we exist in a physical world, we constantly account for our position in it relative to other physical features, both at a conscious, volitional level and an unconscious one. Our experience estimating our own position accumulates over the lifespan, and it is thought that this experience (often referred to as a prior) informs the current perception of spatial orientation. Broadly, this perception of spatial orientation is rapidly performed by the nervous system by monitoring, interpreting, and integrating sensory information from multiple sense organs. To do this efficiently, the nervous system likely represents this sensory information in a statistically optimal manner. Some of the most important information for spatial orientation perception comes from visual and vestibular sensation, which rely on sensory organs located in the head. While statistical information about natural visual and vestibular stimuli has been characterized, natural head movement and position, which likely drive correlated dynamics across head-located senses, have not. Furthermore, sensory cues essential to spatial orientation perception are directly affected by head movement specifically. It is likely that measurements of these sensory cues taken during natural behaviors sample a significant portion of the total behaviors that comprise one's prior. In this dissertation, I present work quantifying characteristics of head orientation and heading, two dimensions of spatial orientation, over long-duration recordings of natural behavior in humans. Then, I use these to generate priors for Bayesian modeling frameworks, which successfully predict observed patterns of orientation and heading perception bias. Given the ability to predict some patterns of bias (head roll and heading azimuth) particularly well, it is likely our data are representative of the real behaviors that make up the nervous system's previous experience. Natural head orientation and heading distributions reveal several interesting trends that open future lines of research. First, head pitch demonstrates a large amount of inter-subject variability; this is likely due to biomechanical differences, and since these remain relatively stable over the lifespan, they should bias head movements. Second, heading azimuth appears to vary significantly as a function of task. Heading azimuth distributions during low velocities (which predominantly consist of stationary activities like standing or sitting) are strongly multimodal across all subjects, while azimuth distributions during high velocities (predominantly consisting of locomotion) are unimodal with relatively low variance. Future work investigating these trends, as well as the implications these trends and data have for sensory processing and other applications, is discussed.
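    A one-dimensional Gaussian toy model shows how a natural-statistics prior of the kind measured here can predict perceptual bias (illustrative numbers only, not the dissertation's fitted model): a prior for head roll concentrated at upright pulls the posterior mean of a noisy sensory measurement toward zero.

```python
# Conjugate Gaussian prior x Gaussian likelihood: posterior mean is a weighted
# average, so perception is biased toward the prior mode (upright, 0 degrees).
mu_prior, sd_prior = 0.0, 10.0   # natural head roll clusters near upright (deg, assumed)
sd_sense = 15.0                  # sensory (likelihood) noise (deg, assumed)

def perceived_roll(true_roll):
    """Posterior mean for a Gaussian prior and Gaussian likelihood."""
    w = sd_prior**2 / (sd_prior**2 + sd_sense**2)   # weight on the sensory cue
    return w * true_roll + (1 - w) * mu_prior

for roll in (0.0, 20.0, 45.0):
    est = perceived_roll(roll)
    print(f"true roll {roll:5.1f} deg -> perceived {est:5.1f} deg (bias {est - roll:+.1f})")
```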

    Glucose-powered neuroelectronics

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 157-164). A holy grail of bioelectronics is to engineer biologically implantable systems that can be embedded without disturbing their local environments, while harvesting from their surroundings all of the power they require. As implantable electronic devices become increasingly prevalent in scientific research and in the diagnosis, management, and treatment of human disease, there is correspondingly increasing demand for devices with unlimited functional lifetimes that integrate seamlessly with their hosts in these two ways. This thesis presents significant progress toward establishing the feasibility of one such system: a brain-machine interface powered by a bioimplantable fuel cell that harvests energy from extracellular glucose in the cerebrospinal fluid surrounding the brain. The first part of this thesis describes a set of biomimetic algorithms and low-power circuit architectures for decoding electrical signals from ensembles of neurons in the brain. The decoders are intended for use in the context of neural rehabilitation, to provide paralyzed or otherwise disabled patients with instantaneous, natural, thought-based control of robotic prosthetic limbs and other external devices. This thesis presents a detailed discussion of the decoding algorithms, descriptions of the low-power analog and digital circuit architectures used to implement the decoders, and results validating their performance when applied to decode real neural data. A major constraint on brain-implanted electronic devices is the requirement that they consume and dissipate very little power, so as not to damage surrounding brain tissue. The systems described here address that constraint, computing in the style of biological neural networks and using arithmetic-free, purely logical primitives to establish universal computing architectures for neural decoding. The second part of this thesis describes the development of an implantable fuel cell powered by extracellular glucose at concentrations such as those found in the cerebrospinal fluid surrounding the brain. The theoretical foundations, details of design and fabrication, mechanical and electrochemical characterization, as well as in vitro performance data for the fuel cell are presented. by Benjamin Isaac Rapoport. Ph.D.

    Parametric Human Movements: Learning, Synthesis, Recognition, and Tracking


    Cortical Decoding of Individual Finger Group Motions Using ReFIT Kalman Filter

    Objective: To date, many brain-machine interface (BMI) studies have developed decoding algorithms for neuroprostheses that provide users with precise control of upper arm reaches with some limited grasping capabilities. However, comparatively few have focused on quantifying the performance of precise finger control. Here we expand upon this work by investigating online control of individual finger groups. Approach: We have developed a novel training manipulandum for non-human primate (NHP) studies to isolate the movements of two specific finger groups: the index finger and the middle-ring-pinkie (MRP) fingers. We use this device in combination with the ReFIT (Recalibrated Feedback Intention-Trained) Kalman filter to decode the position of each finger group during a single-degree-of-freedom task in two rhesus macaques with Utah arrays in motor cortex. The ReFIT Kalman filter uses a two-stage training approach that improves online control of upper arm tasks with substantial reductions in orbiting time, making it a logical first choice for precise finger control. Results: Both animals were able to reliably acquire fingertip targets with both index and MRP fingers, which they did in blocks of finger-group-specific trials. Decoding from motor signals online, the ReFIT Kalman filter reliably outperformed the standard Kalman filter, as measured by bit rate, across all tested finger groups and movements, by 31.0 and 35.2%. These decoders were robust when the manipulandum was removed during online control. While index finger movements and middle-ring-pinkie finger movements could be differentiated from each other with 81.7% accuracy across both subjects, the linear Kalman filter was not sufficient for decoding both finger groups together due to significant unwanted movement in the stationary finger, potentially due to co-contraction. Significance: To our knowledge, this is the first systematic and biomimetic separation of digits for continuous online decoding in an NHP, as well as the first demonstration of the ReFIT Kalman filter improving the performance of precise finger decoding. These results suggest that novel nonlinear approaches, apparently not necessary for center-out reaches or gross hand motions, may be necessary to achieve independent and precise control of individual fingers.
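    For readers unfamiliar with ReFIT, its second training stage relabels the decoded velocities: each velocity is rotated to point at the target while its speed is preserved, and the filter is refit on these "intended" velocities. Below is a minimal sketch of that rotation step in 2-D; in the paper's one-dimensional finger task the rotation reduces to a sign flip toward the target. The array shapes and toy data are assumptions.

```python
# ReFIT-style intention recalibration: rotate decoded velocities toward the
# target, preserving magnitude, to build regression targets for refitting.
import numpy as np

def refit_rotate(velocities, positions, targets):
    """Rotate each decoded velocity toward its target, keeping its speed."""
    to_target = targets - positions
    unit = to_target / np.linalg.norm(to_target, axis=1, keepdims=True)
    speed = np.linalg.norm(velocities, axis=1, keepdims=True)
    return speed * unit

# Toy usage: three time steps of 2-D decoded velocity during closed-loop training.
vel = np.array([[1.0, 0.5], [-0.2, 0.8], [0.6, -0.6]])
pos = np.zeros((3, 2))
tgt = np.tile([1.0, 1.0], (3, 1))
print(refit_rotate(vel, pos, tgt))   # every row now points along (1,1)/sqrt(2)
```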

    Generative Models for Learning Robot Manipulation Skills from Humans

    A long-standing goal in artificial intelligence is to make robots interact seamlessly with humans in performing everyday manipulation skills. Learning from demonstrations, or imitation learning, provides a promising route to bridge this gap. In contrast to direct trajectory learning from demonstrations, many problems arise in interactive robotic applications that require a higher, contextual-level understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as the size, position, and orientation of objects, the viewpoint of the observer, etc. In this thesis, we address this challenge by encapsulating invariant patterns in the demonstrations using probabilistic learning models for acquiring dexterous manipulation skills. We learn the joint probability density function of the demonstrations with a hidden semi-Markov model, and smoothly follow the generated sequence of states with a linear quadratic tracking controller. The model exploits the invariant segments (also termed sub-goals, options, or actions) in the demonstrations and adapts the movement to external environmental situations, such as the size, position, and orientation of objects in the environment, using a task-parameterized formulation. We incorporate high-dimensional sensory data for skill acquisition by parsimoniously representing the demonstrations using statistical subspace clustering methods, and we exploit the coordination patterns in the latent space. To adapt the models on the fly and/or teach new manipulation skills online from streaming data, we formulate a non-parametric, scalable online sequence clustering algorithm with Bayesian non-parametric mixture models, avoiding the model selection problem while ensuring tractability under small-variance asymptotics. We exploit the developed generative models to perform manipulation skills with remotely operated vehicles over satellite communication in the presence of communication delays and limited bandwidth. A set of task-parameterized generative models is learned from the demonstrations of different manipulation skills provided by the teleoperator. The model captures the intention of the teleoperator on one hand, and provides assistance in performing remote manipulation tasks under varying environmental situations on the other. The assistance is formulated under time-independent shared control, where the model continuously corrects the remote arm movement based on the current state of the teleoperator, and/or time-dependent autonomous control, where the model synthesizes the movement of the remote arm for autonomous skill execution. Using the proposed methodology with the two-armed Baxter robot as a mock-up for semi-autonomous teleoperation, we are able to learn manipulation skills such as opening a valve, pick-and-place of an object with obstacle avoidance, hot-stabbing (a specialized underwater task akin to peg-in-hole insertion), screwdriver target snapping, and tracking a carabiner, in as few as 4-8 demonstrations. Our study shows that the proposed manipulation assistance formulations improve the performance of the teleoperator by reducing task errors and execution time, while catering for environmental differences in performing remote manipulation tasks with limited bandwidth and communication delays.
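    The linear quadratic tracking controller mentioned above can be sketched compactly: given a reference sequence of states (standing in here for the centers of the hidden semi-Markov model's generated state sequence), a backward Riccati recursion with a feedforward term yields time-varying gains that smoothly track the via-points. The double-integrator dynamics, cost weights, and reference below are illustrative assumptions, not the thesis's models.

```python
# Finite-horizon linear quadratic tracking of a stepwise reference sequence.
import numpy as np

dt = 0.1
A = np.array([[1, dt], [0, 1]])        # double integrator: position, velocity
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])              # penalize position error strongly
R = np.array([[0.1]])

# Reference: a stepwise sequence of via-points (stand-in for HSMM state centers).
T = 120
ref = np.zeros((T, 2))
ref[:40, 0], ref[40:80, 0], ref[80:, 0] = 0.0, 1.0, 0.5

# Backward Riccati recursion with a feedforward term for tracking.
P, q = Q.copy(), -Q @ ref[-1]
Ks, ks = [None] * T, [None] * T
for t in range(T - 2, -1, -1):
    M = R + B.T @ P @ B
    K = np.linalg.solve(M, B.T @ P @ A)    # feedback gain
    k = np.linalg.solve(M, B.T @ q)        # feedforward term
    Ks[t], ks[t] = K, k
    P = Q + A.T @ P @ (A - B @ K)
    q = -Q @ ref[t] + (A - B @ K).T @ q

# Forward rollout from rest: u_t = -K_t x_t - k_t tracks the via-points.
x = np.zeros(2)
for t in range(T - 1):
    u = -Ks[t] @ x - ks[t]
    x = A @ x + B @ u
print("final position vs reference:", x[0], ref[-1, 0])
```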

    Toward a Full Prehension Decoding from Dorsomedial Area V6A

    Neural prosthetics represent a promising approach to restore movements in patients affected by spinal cord lesions. To drive a fully capable, brain-controlled prosthetic arm, the reaching and grasping components of prehension have to be accurately reconstructed from neural activity. Neurons in the dorsomedial area V6A of the macaque show sensitivity to reaching direction, accounting also for the depth dimension, thus encoding positions in the entire 3D space. Moreover, many neurons are sensitive to grip types and wrist orientations. To assess whether these signals are adequate to drive a fully capable neural prosthetic arm, we recorded spiking activity of neurons in area V6A; spike counts were used to train machine learning algorithms to reconstruct reaching and grasping. In a first study, two Macaca fascicularis monkeys were trained to perform an instructed-delay reach-to-grasp task in the dark and in the light toward objects of different shapes. The activity of 89 neurons was used to train and validate a Bayes classifier for decoding objects and grip types. Recognition rates were well above chance level for all the epochs analyzed in this study. In a second study, monkeys were trained to perform reaches to targets located at various depths and directions, and the classifier was tested on whether it could correctly predict the reach goal position from V6A signals. The reach goal location was reliably decoded with accuracy close to optimal (>90%) throughout the task. Together, these results show reliable decoding of hand grips and of the spatial location of reaching goals in the same area, suggesting that V6A is a suitable site for decoding the entire prehension action, with obvious advantages in terms of implant invasiveness. This new posterior parietal cortex (PPC) site, useful for decoding both reaching and grasping, opens new perspectives in the development of human brain-computer interfaces.
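    As a hedged sketch of the kind of Bayes classifier described above (the exact likelihood model is not specified in the abstract), the example below treats each neuron's spike count as Poisson with grip-type-specific rates estimated from training trials, then picks the grip type with the highest log-likelihood under a flat prior. The synthetic rates and trial counts are invented; only the 89-neuron dimension echoes the abstract.

```python
# Poisson naive-Bayes decoding of grip type from per-neuron spike counts.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
n_neurons, n_grips, trials = 89, 5, 40

# Synthetic training set: each grip type drives each neuron at its own mean rate.
rates = rng.uniform(1, 20, size=(n_grips, n_neurons))
X = np.vstack([rng.poisson(rates[g], size=(trials, n_neurons)) for g in range(n_grips)])
y = np.repeat(np.arange(n_grips), trials)

# Training: class-conditional mean counts (small floor avoids log of zero).
lam = np.vstack([X[y == g].mean(axis=0) for g in range(n_grips)]).clip(min=1e-3)

def classify(counts):
    """Return the grip type maximizing the Poisson log-likelihood (flat prior)."""
    loglik = poisson.logpmf(counts, lam).sum(axis=1)   # one score per grip type
    return loglik.argmax()

test = rng.poisson(rates[3])          # a held-out trial of grip type 3
print("decoded grip type:", classify(test))
```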