
    Watch, Imagine, Attempt: Motor Cortex Single-Unit Activity Reveals Context-Dependent Movement Encoding in Humans With Tetraplegia

    Planning and performing volitional movement engages widespread networks in the human brain, with motor cortex considered critical to the performance of skilled limb actions. Motor cortex is also engaged when actions are observed or imagined, but the manner in which ensembles of neurons represent these volitional states (VoSs) is unknown. Here we provide a direct demonstration that observing, imagining, or attempting action activates shared neural ensembles in human motor cortex. Two individuals with tetraplegia (due to brainstem stroke or amyotrophic lateral sclerosis, ALS) were verbally instructed to watch, imagine, or attempt reaching actions displayed on a computer screen. Neural activity in the precentral gyrus incorporated information about both cognitive state and movement kinematics; the three conditions evoked overlapping yet statistically distinct activity patterns. These findings demonstrate that individual neurons in human motor cortex reflect information related to sensory inputs and VoS in addition to movement features, and are a key part of a broader network linking perception and cognition to action.
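The "overlapping but statistically distinct" ensemble patterns described above can be illustrated with a toy decoding analysis. Everything below is a hypothetical sketch, not data or methods from the study: simulated firing rates share a common movement-related component plus a small condition-specific offset, and a simple nearest-centroid classifier separates watch/imagine/attempt trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing-rate data: 40 units, 20 trials per condition.
# Each condition shares a common "movement" component (base) plus a small
# condition-specific offset, mimicking overlapping yet distinct patterns.
n_units, n_per = 40, 20
base = rng.normal(5.0, 1.0, n_units)
offsets = {c: rng.normal(0.0, 0.8, n_units)
           for c in ("watch", "imagine", "attempt")}

X, y = [], []
for cond, off in offsets.items():
    X.append(base + off + rng.normal(0.0, 0.5, (n_per, n_units)))
    y += [cond] * n_per
X = np.vstack(X)

# Nearest-centroid classifier: assign a trial to the condition whose mean
# ensemble pattern is closest in Euclidean distance.
centroids = {c: X[[i for i, lab in enumerate(y) if lab == c]].mean(axis=0)
             for c in offsets}

def classify(rates):
    return min(centroids, key=lambda c: np.linalg.norm(rates - centroids[c]))

preds = [classify(x) for x in X]
accuracy = np.mean([p == t for p, t in zip(preds, y)])
```

With even modest condition-specific offsets spread over 40 units, the three states separate cleanly, which is the intuition behind "overlapping but statistically distinct" ensemble codes.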

    Reach and grasp by people with tetraplegia using a neurally controlled robotic arm

    Paralysis following spinal cord injury (SCI), brainstem stroke, amyotrophic lateral sclerosis (ALS) and other disorders can disconnect the brain from the body, eliminating the ability to carry out volitional movements. A neural interface system (NIS) [1–5] could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use an NIS to move and click a computer cursor and to control physical devices [6–8]. Able-bodied monkeys have used an NIS to control a robotic arm [9], but it was unknown whether people with profound upper-extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here, we demonstrate the ability of two people with long-standing tetraplegia to use NIS-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor five years earlier, also used the robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after CNS injury, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.

    Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. © 2008 IOP Publishing Ltd.
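The Kalman-filter velocity decoder mentioned above can be sketched in a few lines. This is a minimal illustration under assumed parameters: the tuning matrix `H`, state transition `A`, and noise covariances `Q`/`R` are invented here for demonstration; in the study they would be fit from training data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linear tuning model: each of 40 units fires as a linear function
# of 2-D cursor velocity, z = H v + noise. All parameters are illustrative.
n_units = 40
H = rng.normal(0.0, 1.0, (n_units, 2))   # observation (tuning) matrix
A = np.eye(2) * 0.95                     # velocity evolves smoothly
Q = np.eye(2) * 0.02                     # process noise covariance
R = np.eye(n_units) * 0.5                # observation noise covariance

def kalman_decode(Z):
    """Decode a velocity estimate from each firing-rate observation in Z."""
    v = np.zeros(2)
    P = np.eye(2)
    out = []
    for z in Z:
        # Predict: propagate state and uncertainty through the motion model.
        v = A @ v
        P = A @ P @ A.T + Q
        # Update: correct the prediction with the observed firing rates.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        v = v + K @ (z - H @ v)
        P = (np.eye(2) - K @ H) @ P
        out.append(v.copy())
    return np.array(out)

# Simulate a constant intended velocity and decode it from noisy rates.
v_true = np.array([0.5, -0.3])
Z = v_true @ H.T + rng.normal(0.0, 0.1, (50, n_units))
est = kalman_decode(Z)
err = np.linalg.norm(est[-1] - v_true)
```

The recursive predict/update structure is what distinguishes the Kalman filter from the conventional linear filters it outperformed here: the motion model smooths the output while the observation step keeps it anchored to the neural data.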

    Point-and-Click Cursor Control With an Intracortical Neural Interface System by Humans With Tetraplegia

    We present a point-and-click intracortical neural interface system (NIS) that enables humans with tetraplegia to volitionally move a 2-D computer cursor in any desired direction on a computer screen, hold it still, and click on the area of interest. This direct brain-computer interface extracts both discrete (click) and continuous (cursor velocity) signals from a single small population of neurons in human motor cortex. A key component of this system is a multi-state probabilistic decoding algorithm that simultaneously decodes neural spiking activity of a small population of neurons and outputs either a click signal or the velocity of the cursor. The algorithm combines a linear classifier, which determines whether the user is intending to click or move the cursor, with a Kalman filter that translates the neural population activity into cursor velocity. We present a paradigm for training the multi-state decoding algorithm using neural activity observed during imagined actions. Two human participants with tetraplegia (paralysis of the four limbs) performed a closed-loop radial target acquisition task using the point-and-click NIS over multiple sessions. We quantified point-and-click performance using various human-computer interaction measurements for pointing devices. We found that participants could control the cursor motion and click on specified targets with a small error rate (<3% in one participant). This study suggests that signals from a small ensemble of motor cortical neurons (~40) can be used for natural point-and-click 2-D cursor control of a personal computer.
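The classifier-plus-decoder structure described above can be sketched as a two-stage function: a linear discriminant gates between a discrete "click" output and a continuous velocity output. This is a hypothetical toy, not the paper's trained decoder: the tuning matrix, click pattern, and training data are simulated, and a least-squares linear decode stands in for the Kalman filter to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed ensemble of 40 units: during "move" the rates are linearly tuned
# to 2-D velocity; during "click" the ensemble shifts to a distinct pattern.
n_units = 40
H = rng.normal(0.0, 1.0, (n_units, 2))         # velocity tuning (assumed)
click_pattern = rng.normal(3.0, 1.0, n_units)  # mean rates while clicking

# Train a linear classifier (difference-of-means discriminant) on
# simulated move vs click trials.
move_rates = rng.normal(0.0, 1.0, (200, n_units))
click_rates = click_pattern + rng.normal(0.0, 1.0, (200, n_units))
w = click_rates.mean(0) - move_rates.mean(0)
b = -0.5 * (w @ (click_rates.mean(0) + move_rates.mean(0)))

D = np.linalg.pinv(H)  # least-squares stand-in for the Kalman velocity stage

def decode(z):
    """Return ('click', None) or ('move', velocity) for one rate vector z."""
    if w @ z + b > 0:
        return "click", None
    return "move", D @ z

state_move, vel = decode(H @ np.array([0.4, 0.1]))  # movement-like input
state_click, _ = decode(click_pattern)              # click-like input
```

Gating the continuous decoder with a discrete classifier is what lets one small neural population drive both cursor motion and clicking, as the abstract describes.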

    An assistive decision-and-control architecture for force-sensitive hand–arm systems driven by human–machine interfaces

    Fully autonomous applications of modern robotic systems are still constrained by limitations in sensory data processing, scene interpretation, and automated reasoning. However, their use as assistive devices for people with upper-limb disabilities has become possible with recent advances in “soft robotics”, that is, interaction control, physical human–robot interaction, and reflex planning. In this context, impedance and reflex-based control has generally been understood to be a promising approach to safe interaction robotics. To create semi-autonomous assistive devices, we propose a decision-and-control architecture for hand–arm systems with “soft robotics” capabilities, which can then be used via human–machine interfaces (HMIs). We validated the functionality of our approach within the BrainGate2 clinical trial, in which an individual with tetraplegia used our architecture to control a robotic hand–arm system under neural control via a multi-electrode array implanted in the motor cortex. The neuroscience results of this research have previously been published by Hochberg et al. In this paper we present our assistive decision-and-control architecture and demonstrate how the semi-autonomous assistive behavior can help the user. In our framework the robot is controlled through a multi-priority Cartesian impedance controller and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this we employ a decision-and-control architecture that uses sensory information available from the robotic system to evaluate the current state of task execution. Based on a set of available assistive skills, our architecture provides support in object interaction and manipulation and thereby enhances the usability of the robotic system for use with HMIs. 
The goal of our development is to provide an easy-to-use robotic system for people with physical disabilities and thereby enable them to perform simple tasks of daily living. In an exemplary real-world task, the participant was able to serve herself a beverage autonomously for the first time since her brainstem stroke, which she had suffered approximately 14 years prior to this research.
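The impedance control and collision-reflex ideas in the architecture above can be illustrated in one dimension: the commanded force follows a spring-damper law F = K(x_d − x) − Dv, and motion freezes when the estimated contact force exceeds a threshold. This is a simplified sketch with invented gains, mass, and threshold, not values or code from the DLR hand-arm system described in the abstract.

```python
# One-dimensional impedance control with a simple collision reflex.
# All parameters are illustrative assumptions.
K, D, m, dt = 50.0, 14.0, 1.0, 0.01   # stiffness, damping, mass, time step
F_CONTACT_LIMIT = 20.0                # reflex threshold on contact force

def step_to_target(x0, x_d, wall=None, n_steps=1000):
    """Drive a point mass toward x_d; freeze on excessive contact force."""
    x, v = x0, 0.0
    for _ in range(n_steps):
        F_cmd = K * (x_d - x) - D * v          # spring-damper impedance law
        F_ext = 0.0
        if wall is not None and x >= wall:      # stiff obstacle in the path
            F_ext = -1000.0 * (x - wall)
        if abs(F_ext) > F_CONTACT_LIMIT:        # collision reflex: stop
            return x, "collision"
        a = (F_cmd + F_ext) / m                 # explicit Euler integration
        v += a * dt
        x += v * dt
    return x, "reached"

x_free, status_free = step_to_target(0.0, 1.0)           # unobstructed reach
x_hit, status_hit = step_to_target(0.0, 1.0, wall=0.5)   # obstacle at 0.5
```

Because the controller commands forces rather than positions, contact with the obstacle produces a bounded interaction force that the reflex can detect and react to, which is the safety property that makes impedance control attractive for assistive use.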