3,133 research outputs found

    Cortical motor prosthetics: the development and use for paralysis

    Full text link
    The emerging research field of Brain-Computer Interfaces (BCIs) has produced an invasive type of BCI, the Cortical Motor Prosthetic (CMP) or invasive BCI (iBCI). Its goal is to restore lost motor function, via prosthetic control signals, to individuals with long-term paralysis. The CMP comprises two major components: the implantable, chronic microelectrode array (MEA) and the data acquisition (DAQ) hardware, specifically the decoder. The iBCI records primary motor cortex (M1) neural signals via the chronic MEA and translates them, through decoder extraction algorithms, into a motor command that can drive a prosthetic to perform the intended movement. The ultimate goal is to use the iBCI as a clinical tool with which individuals with long-term paralysis can regain lost motor function; the iBCI is thus a beacon of hope that could enable individuals to independently perform daily activities and interact once again with their environment. This review seeks to accomplish two major goals: first, to elaborate upon the development of the iBCI, focusing on the advancements and efforts to create a viable system; second, to illustrate the exciting improvements in the iBCI's use for reaching and grasping actions and in human clinical trials. Despite the promise of the iBCI, many challenges, which are described in this review, persist and must be overcome before the iBCI can be a viable tool for individuals with long-term paralysis. Future iBCI endeavors aim to overcome these challenges and develop an efficient system that enhances the lives of the many people living with paralysis. Standard terms: Intracortical Brain Computer Interface (iBCI), Intracortical Brain Machine Interface (iBMI), Cortical Motor Prosthetic (CMP), Neuromotor Prosthesis (NMP), Intracortical Neural Prosthetic, Invasive Neural Prosthetic; all terms are used interchangeably.
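    The decode step described above (binned M1 firing rates mapped to a motor command) can be illustrated with a minimal sketch. This is not the decoder of any specific iBCI system; it is a generic linear (ridge-regression) velocity decoder on synthetic data, of the kind that underlies many early intracortical decoding pipelines. All array sizes and parameters are invented for illustration.

```python
import numpy as np

# Illustrative sketch only (synthetic data, hypothetical parameters):
# a linear decoder mapping binned firing rates from an MEA to a 2-D
# effector velocity command. Real iBCI decoders are calibrated on
# recorded neural data and typically refined with Kalman or adaptive
# filtering; this shows only the core regression idea.
rng = np.random.default_rng(0)

n_neurons, n_bins = 96, 500                    # e.g. a 96-channel array
true_W = rng.normal(size=(2, n_neurons))       # hypothetical tuning weights

# Simulated calibration data: spike counts and the intended velocities
rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)
velocity = rates @ true_W.T + rng.normal(scale=0.5, size=(n_bins, 2))

# Calibration: ridge regression from firing rates to intended velocity
lam = 1.0
W_hat = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons),
                        rates.T @ velocity).T  # shape (2, n_neurons)

# Online use: decode one new bin of activity into a velocity command
new_rates = rng.poisson(5.0, size=n_neurons).astype(float)
v_cmd = W_hat @ new_rates                      # 2-D command for the prosthetic
print(v_cmd.shape)
```

    In a real system this regression runs in a closed loop, with the command driving the prosthetic at each time bin.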

    Complementary Actions

    Get PDF
    Human beings come into the world wired for social interaction. At the fourteenth week of gestation, twin fetuses already display interactive movements specifically directed towards their co-twin. Readiness for social interaction is also clearly expressed by newborns, who imitate facial gestures, suggesting that there is a common representation mediating action observation and execution. While actions that are observed and those that are planned seem to be functionally equivalent, it is unclear whether the visual representation of an observed action inevitably leads to its motor representation. This is particularly true with regard to complementary actions (from the Latin complementum, i.e. that which fills up), a specific class of movements which, during an interaction, differ from the observed ones. In geometry, angles are defined as complementary if they form a right angle. In art and design, complementary colors are color pairs that, when combined in the right proportions, produce white or black. As a working definition, complementary actions refer here to any form of social interaction wherein two (or more) individuals complete each other's actions in a balanced way. Successful complementary interactions are founded on the abilities: (1) to simulate another person's movements; (2) to predict another person's future action(s); (3) to produce an appropriate congruent/incongruent response that completes the other person's action(s); and (4) to integrate the predicted effects of one's own and another person's actions. The neurophysiological mechanism that underlies this process forms the main theme of this chapter.

    Influence of Gaze Position on Grasp Parameters For Reaches to Visible and Remembered Stimuli

    Get PDF
    In order to pick up or manipulate a seen object, one must use visual signals to aim and transport the hand to the object’s location (reach), and configure the digits to the shape of the object (grasp). It has been shown that reach and grasp are controlled by separate neural pathways. In real world conditions, however, all of these signals (gaze, reach, grasp) must interact to provide accurate eye-hand coordination. The interactions between gaze, reach, and grasp parameters have not been comprehensively studied in humans. The purpose of the study was to investigate 1) the effect of gaze and target positions on grasp location, amplitude, and orientation, and 2) the influence of visual feedback of the hand and target on the final grasp components and on the spatial deviations associated with gaze direction and target position. Seven subjects reached to grasp a rectangular “virtual” target presented at three orientations, three locations, and with three gaze fixation positions during open- and closed-loop conditions. Participants showed gaze- and target-dependent deviations in grasp parameters that could not be predicted from previous studies. Our results showed that both reach- and grasp-related deviations were affected by stimulus position. The interaction effects of gaze and reach position revealed complex mechanisms, and their impacts were different for each grasp parameter. The impacts of gaze direction on grasp deviation were dependent on target position in space, especially for grasp location and amplitude. Gaze direction had little impact on grasp orientation. Visual feedback about the hand and target modulated the reach- and gaze-related impacts. The results suggest that the brain uses both control signal interactions and sensorimotor strategies to control and plan reach-and-grasp movements.

    Causative role of left aIPS in coding shared goals during human-avatar complementary joint actions

    Get PDF
    Successful motor interactions require agents to anticipate what a partner is doing in order to predictively adjust their own movements. Although the neural underpinnings of the ability to predict others' action goals have been well explored during passive action observation, no study has yet clarified any critical neural substrate supporting interpersonal coordination during active, non-imitative (complementary) interactions. Here, we combine non-invasive inhibitory brain stimulation (continuous Theta Burst Stimulation) with a novel human-avatar interaction task to investigate a causal role for higher-order motor cortical regions in supporting the ability to predict and adapt to others' actions. We demonstrate that inhibition of left anterior intraparietal sulcus (aIPS), but not ventral premotor cortex, selectively impaired individuals' performance during complementary interactions. Thus, in addition to coding observed and executed action goals, aIPS is crucial in coding 'shared goals', that is, integrating predictions about one's and others' complementary actions

    Enhancing Nervous System Recovery through Neurobiologics, Neural Interface Training, and Neurorehabilitation.

    Get PDF
    After an initial period of recovery, human neurological injury has long been thought to be static. In order to improve quality of life for those suffering from stroke, spinal cord injury, or traumatic brain injury, researchers have been working to restore the nervous system and reduce neurological deficits through a number of mechanisms. For example, neurobiologists have been identifying and manipulating components of the intra- and extracellular milieu to alter the regenerative potential of neurons, neuro-engineers have been producing brain-machine and neural interfaces that circumvent lesions to restore functionality, and neurorehabilitation experts have been developing new ways to revitalize the nervous system even in chronic disease. While each of these areas holds promise, their individual paths to clinical relevance remain difficult. Nonetheless, these methods are now able to synergistically enhance recovery of native motor function to levels which were previously believed to be impossible. Furthermore, such recovery can even persist after training, and for the first time there is evidence of functional axonal regrowth and rewiring in the central nervous system of animal models. To attain this type of regeneration, rehabilitation paradigms that pair cortically-based intent with activation of affected circuits and positive neurofeedback appear to be required, a phenomenon which raises new and far-reaching questions about the underlying relationship between conscious action and neural repair. For this reason, we argue that multi-modal therapy will be necessary to facilitate a truly robust recovery, and that the success of investigational microscopic techniques may depend on their integration into macroscopic frameworks that include task-based neurorehabilitation.
We further identify critical components of future neural repair strategies and explore the most updated knowledge, progress, and challenges in the fields of cellular neuronal repair, neural interfacing, and neurorehabilitation, all with the goal of better understanding neurological injury and how to improve recovery

    Neural basis of motor planning for object-oriented actions: the role of kinematics and cognitive aspects

    Get PDF
    The project I carried out during these three years as a PhD student aimed to describe the motor-preparation activity related to object-oriented actions that are actually performed. The importance of these studies stems from the lack of literature combining EEG with complex movements that are actually executed rather than merely mimed or pantomimed. By 'complex' we refer here to actions oriented toward an object with the intent to interact with it. To give a broader idea of the project's aim, I have illustrated the complexity of the movements and of the cortical networks involved in their processing and execution. Several cortical areas contribute to the planning and execution of a movement, and the contribution of these different areas changes according to the kinematic complexity of the action. Object-oriented action seems to rely on a circuit of its own: besides motor structures, it also involves a temporo-parietal network that takes part in both planning and performing actions such as reaching and grasping. Such findings emerged from studies on the mirror neuron system, discovered in monkeys at the beginning of the '90s and subsequently extended to humans. Beyond the speculations this discovery has opened up, many researchers have begun investigating different aspects of reaching and grasping movements, describing the different areas involved, all belonging to the posterior parietal cortex (PPC), and their connections with anterior motor cortices through different paradigms and techniques. Most studies investigating movement execution and preparation are monkey studies or fMRI studies on humans. The limits of fMRI come from its low temporal resolution and from the impossibility of using self-paced movements, that is, movements performed in more ecological conditions in which the subject freely decides when to move.
    In the few studies investigating motor preparation using EEG, only pantomimed actions have been used, rather than real interactions with objects. Because of all these factors, we decided to describe the motor-preparation activity for goal-oriented actions with two aims: first, to describe this activity for grasping and reaching actions actually performed toward a cup (a very ecological object); second, to verify which parameters of these movements are taken into account during their planning and preparation. Given all the variables involved in grasping and reaching movements, such as the position of the object, its features, and the goal of the action and its meaning, we tried to verify how these variables could affect motor preparation by designing two different experiments. In the first, subjects were asked to perform a grasping and a reaching action toward a cup; in a third condition we tied their hands into a fist, in order to verify what happens when people must turn an ordinary, easy action into a new one to accomplish the requested task. In the second experiment, we better accounted for the cognitive aspects behind the motor preparation of an action. Here we tested a very simple action, a key press, in two different conditions. In the first, the button press had no consequence, whereas in the second the same action triggered a video on a screen showing a hand moving toward a cup and grasping it (creating a video-game-like effect). Both experiments yielded results strengthening the role that cognitive processes play in motor planning. In particular, it appeared that the goal of the action, along with the object we are going to interact with, could elicit a particular response and activity starting very early in the posterior parietal cortex.
    Finally, given the actions used in these experiments, it was important to test the hypothesis that our findings could generalize to the observation of those same actions. As mentioned before, object-oriented actions have received great attention since the discovery of the mirror neuron system, which showed a correspondence between the cortical activity of the person performing an action and that produced in the observer. This finding allowed our brain to be described as a social brain, able to create a mental representation of what another person is doing, which lets us understand others' gestures and intentions. What we wanted to test in this project was the possibility that such a correspondence between observer and actor extends to the motor-preparation period of an upcoming action, lending credit to the hypothesis that the human brain can predict others' actions and intentions as well as understand them. In the last experiment of my project, I therefore used the same actions as in the first experiment, but this time asked participants to observe them passively instead of performing them. The results of this study confirmed the cognitive, rather than motor, role that the PPC plays in action planning: even when no movements are involved, the same structures are active, mirroring the activity found in the execution experiment. The main result reported in this dissertation is the suggestion of a new model of the role the PPC plays in object-oriented movements. Unlike previous hypotheses and models, which proposed that the PPC extracts affordances from objects or monitors and transforms coordinates between us and the object into intentions for acting, we suggest that the role of the parietal areas is rather to judge the appropriateness of the match between the action goal and the affordances provided by the object.
    When the action we are going to perform fits well with the object's features, the PPC starts its activity, elaborating the coordinate representations and monitoring the programming and execution phases of the movement. This model is well supported by results from both our experiments and combines the two previous models, while putting more emphasis on the 'goal-object matching' function of the PPC, and of the superior parietal lobule (SPL) in particular.

    PMv Neuronal Firing May Be Driven by a Movement Command Trajectory within Multidimensional Gaussian Fields

    Get PDF
    The premotor cortex (PM) is known to be a site of visuo-somatosensory integration for the production of movement. We sought to better understand the ventral PM (PMv) by modeling its signal encoding in greater detail. Neuronal firing data were obtained from 110 PMv neurons in two male rhesus macaques executing four reach-grasp-manipulate tasks. We found that in the large majority of neurons (∌90%) the firing patterns across the four tasks could be explained by assuming that a high-dimensional position/configuration trajectory-like signal, evolving ∌250 ms before movement, was encoded within a multidimensional Gaussian field (MGF). Our findings are consistent with the possibility that PMv neurons process a visually specified reference command for the intended arm/hand position trajectory with respect to a proprioceptively or visually sensed initial configuration. The estimated MGFs were (hyper-)disc-like, such that each neuron's firing modulated strongly only with commands that evolved along a single direction within position/configuration space. Thus, many neurons appeared to be tuned to slices of this input signal space that, as a collection, appeared to cover the space well. The MGF encoding models appear to be consistent with the arm-referent, bell-shaped, visual target tuning curves and target selectivity patterns observed in PMv visual-motor neurons. These findings suggest that PMv may implement a lookup-table-like mechanism that helps translate an intended movement trajectory into time-varying patterns of activation in motor cortex and spinal cord. MGFs provide an improved nonlinear framework for potentially decoding visually specified, intended multijoint arm/hand trajectories well in advance of movement.
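    The MGF idea in this abstract can be made concrete with a small sketch: a neuron's expected firing rate is a Gaussian function of a command vector in position/configuration space, with the field's covariance elongated so the rate modulates strongly along only one direction (the "disc-like" shape described above). All parameters below (dimension, widths, rates) are invented for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of a multidimensional Gaussian field (MGF):
# expected firing rate as a Gaussian function of a command vector x.
# All parameters are hypothetical.
def mgf_rate(x, mu, Sigma_inv, peak_rate, baseline=2.0):
    """Expected firing rate for command x under one neuron's Gaussian field."""
    d = x - mu
    return baseline + peak_rate * np.exp(-0.5 * d @ Sigma_inv @ d)

dim = 4                                    # toy configuration-space dimension
mu = np.zeros(dim)                         # field center

# "Disc-like" field: narrow along one axis, broad along the others, so the
# neuron's firing changes sharply only as the command moves along axis 0.
widths = np.array([0.3, 5.0, 5.0, 5.0])    # standard deviation per axis
Sigma_inv = np.diag(1.0 / widths**2)

on_axis  = mgf_rate(np.array([0.3, 0.0, 0.0, 0.0]), mu, Sigma_inv, peak_rate=40.0)
off_axis = mgf_rate(np.array([0.0, 0.3, 0.0, 0.0]), mu, Sigma_inv, peak_rate=40.0)
print(on_axis, off_axis)   # rate drops much more for the same step along axis 0
```

    A population of such fields, each narrow along a different direction, gives the "slices that collectively cover the space" picture the abstract describes.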

    Development of a Unique Whole-Brain Model for Upper Extremity Neuroprosthetic Control

    Get PDF
    Neuroprostheses are at the forefront of upper extremity function restoration. However, contemporary controllers of these neuroprostheses do not adequately address the natural brain strategies related to planning, executing and mediating upper extremity movements, which restricts their ability to provide complete and lasting restoration of function. This dissertation develops a novel whole-brain model of neuronal activation with the goal of providing a robust platform for an improved upper extremity neuroprosthetic controller. Experiments (N=36 total) used goal-oriented upper extremity movements with real-world objects in an MRI scanner while measuring brain activation with functional magnetic resonance imaging (fMRI). The resulting data were used to understand neuromotor strategies through brain anatomical and temporal activation patterns. The study's fMRI paradigm is unique, and its use of goal-oriented movements and real-world objects is crucial to providing accurate information about motor task strategy and the cortical representation of reaching and grasping. The results are used to develop a novel whole-brain model using a machine learning algorithm. When tested on human subject data, the model accurately distinguished functional motor tasks with no prior knowledge. The proof-of-concept model created in this work should lead to improved prostheses for the treatment of chronic upper extremity physical dysfunction.
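    The abstract does not specify which machine learning algorithm the whole-brain model uses, so as a generic sketch of the task-classification step, the following toy example distinguishes two hypothetical motor tasks (reach vs. grasp) from synthetic "voxel" activation patterns with a simple nearest-centroid rule. The task names, data sizes, and classifier are all assumptions for illustration only.

```python
import numpy as np

# Generic sketch (NOT the dissertation's actual model): classify motor
# tasks from whole-brain activation patterns via nearest-centroid.
# All data here are synthetic.
rng = np.random.default_rng(1)
n_voxels, n_trials = 200, 40

# Two hypothetical task-specific activation templates
templates = {"reach": rng.normal(size=n_voxels),
             "grasp": rng.normal(size=n_voxels)}

def simulate_trials(template, n):
    """Simulate n noisy activation patterns around a task template."""
    return template + rng.normal(scale=1.0, size=(n, n_voxels))

# "Training": average each task's trials into a centroid pattern
train = {task: simulate_trials(t, n_trials) for task, t in templates.items()}
centroids = {task: X.mean(axis=0) for task, X in train.items()}

def classify(pattern):
    """Assign a new activation pattern to the nearest task centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(pattern - centroids[k]))

new_pattern = simulate_trials(templates["grasp"], 1)[0]
print(classify(new_pattern))
```

    Real whole-brain decoding would add anatomical masking, temporal features, and cross-validation; the sketch shows only the pattern-to-task mapping at the core of such a classifier.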

    The Two Visual Processing Streams Through The Lens Of Deep Neural Networks

    Get PDF
    Recent advances in computer vision have enabled machines to have high performance in labeling objects in natural scenes. However, object labeling constitutes only a small fraction of daily human activities. To move towards building machines that can function in natural environments, the usefulness of these models should be evaluated on a broad range of tasks beyond perception. Moving towards this goal, this thesis evaluates the internal representations of state-of-the-art deep convolutional neural networks in predicting a perception-based and an action-based behavior: object similarity judgment and visually guided grasping. To do so, a dataset of everyday objects was collected and used to obtain these two behaviors on the same set of stimuli. For the grasping task, participants’ finger positions were recorded at the end of the object grasping movement. Additionally, for the similarity judgment task, an odd-one-out experiment was conducted to build a dissimilarity matrix based on participants’ similarity judgments. A comparison of the two behaviors suggests that distinct features of objects are used for performing each task. I next explored if the features extracted in different layers of the state-of-the-art deep convolutional neural networks (DNNs) could be useful in deriving both outputs. The prediction accuracy of the similarity judgment behavior increased from low to higher layers of the networks, while that of the grasping behavior increased from low to mid-layers and drastically decreased further along the hierarchy. These results suggest that for building a system that could perform these two tasks, the processing hierarchy may need to be split starting at the middle layers. Overall, the results of this thesis could inform future models that can perform a broader set of tasks on natural images
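    The layer-by-layer comparison described above can be sketched with representational dissimilarity matrices (RDMs): compute an object-by-object dissimilarity matrix from each layer's features, and correlate it with the behavioral dissimilarity matrix to see which layer predicts the behavior best. The "layer" features below are synthetic stand-ins (layer index 1 is constructed to share structure with the behavior); a real analysis would use actual DNN activations for the object images.

```python
import numpy as np

# Sketch of an RDM-based layer comparison with made-up features.
rng = np.random.default_rng(2)
n_objects = 30

def rdm(features):
    """Pairwise Euclidean dissimilarity matrix over object feature vectors."""
    d = features[:, None, :] - features[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

def rdm_correlation(a, b):
    """Pearson correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(len(a), k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

# Behavioral dissimilarities (e.g. from odd-one-out judgments), simulated
behavior_feats = rng.normal(size=(n_objects, 10))
behavior_rdm = rdm(behavior_feats)

# Three hypothetical "layers"; layer 1 is built to resemble the behavior
layers = [rng.normal(size=(n_objects, 10)),
          behavior_feats + 0.3 * rng.normal(size=(n_objects, 10)),
          rng.normal(size=(n_objects, 10))]

scores = [rdm_correlation(behavior_rdm, rdm(f)) for f in layers]
best = int(np.argmax(scores))
print(best)   # index of the layer whose features best predict the behavior
```

    Running this comparison once with similarity-judgment data and once with grasp data would reproduce the thesis's finding that the two behaviors peak at different depths of the network.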
    • 

    corecore