    Myoelectric forearm prostheses: State of the art from a user-centered perspective

    User acceptance of myoelectric forearm prostheses is currently low. Awkward control, lack of feedback, and difficult training are cited as primary reasons. Recently, researchers have focused on exploiting the new possibilities offered by advancements in prosthetic technology. Alternatively, researchers could focus on prosthesis acceptance by developing functional requirements based on activities users are likely to perform. In this article, we describe the process of determining such requirements and then the application of these requirements to evaluating the state of the art in myoelectric forearm prosthesis research. As part of a needs assessment, a workshop was organized involving clinicians (representing end users), academics, and engineers. The resulting needs included an increased number of functions, lower reaction and execution times, and intuitiveness of both control and feedback systems. Reviewing the state of the art of research in the main prosthetic subsystems (electromyographic [EMG] sensing, control, and feedback) showed that modern research prototypes only partly fulfill the requirements. We found that focus should be on validating EMG-sensing results with patients, improving simultaneous control of wrist movements and grasps, deriving optimal parameters for force and position feedback, and taking into account the psychophysical aspects of feedback, such as intensity perception and spatial acuity.

    Decoding sensorimotor information from superior parietal lobule of macaque via Convolutional Neural Networks

    Despite the well-recognized role of the posterior parietal cortex (PPC) in processing sensory information to guide action, the differential encoding properties of this dynamic processing, as carried out by different PPC brain areas, remain poorly understood. Within the monkey's PPC, the superior parietal lobule hosts areas V6A, PEc, and PE, which belong to the dorso-medial visual stream specialized in planning and guiding reaching movements. Here, a Convolutional Neural Network (CNN) approach is used to investigate how information is processed in these areas. We trained two macaque monkeys to perform a delayed reaching task towards 9 positions (distributed over 3 different depth and direction levels) in the 3D peripersonal space. The activity of single cells was recorded from V6A, PEc, and PE and fed to convolutional neural networks designed and trained to exploit the temporal structure of neuronal activation patterns in order to decode the target positions reached by the monkey. Bayesian Optimization was used to select the main CNN hyper-parameters. In addition to discrete positions in space, we used the same network architecture to decode plausible reaching trajectories. We found that data from the most caudal V6A and PEc areas outperformed data from area PE in spatial position decoding. In all areas, decoding accuracies started to increase at the time the reach target was cued to the monkey and reached a plateau at movement onset. The results support a dynamic encoding of the different phases and properties of the reaching movement, differentially distributed over a network of interconnected areas. This study highlights the usefulness of decoding neuronal firing rates with CNNs to improve our understanding of how sensorimotor information is encoded in PPC to perform reaching movements. The obtained results may have implications for novel neuroprosthetic devices that decode these rich signals to faithfully carry out the patient's intentions.
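
    As a rough illustration of this kind of decoder (not the architecture, hyper-parameters, or data reported in the paper), a small 1D convolutional network over time-binned firing rates could look like the sketch below; the neuron count, bin count, layer sizes, and 9-class output are placeholder assumptions.

```python
# Minimal sketch: decoding reach targets from time-binned firing rates with a
# 1D CNN in PyTorch. Shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SpikeCNN(nn.Module):
    def __init__(self, n_neurons=100, n_bins=40, n_targets=9):
        super().__init__()
        # Convolve over time; each recorded neuron is one input channel.
        self.features = nn.Sequential(
            nn.Conv1d(n_neurons, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_targets)

    def forward(self, x):          # x: (batch, n_neurons, n_bins)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # logits over the 9 reach targets

model = SpikeCNN()
rates = torch.randn(8, 100, 40)    # fake batch of binned firing rates
logits = model(rates)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 9, (8,)))
loss.backward()
```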

    A Novel Framework of Online, Task-Independent Cognitive State Transition Detection and Its Applications

    Complex reach, grasp, and object manipulation tasks require sequential, temporal coordination of a movement plan by neurons in the brain. Detecting cognitive state transitions associated with motor tasks from sequential neural data is pivotal in rehabilitation engineering. The cognitive state detectors proposed thus far rely on task-dependent (TD) models, i.e., the detection strategy exploits a priori knowledge of the movement goals to determine the actual states, regardless of whether these cognitive states actually depend on the movement tasks or not. This approach, however, is not viable when the tasks are not known a priori (e.g., the subject performs many different tasks) or when there is a paucity of neural data for each task. Moreover, some cognitive states (e.g., holding) are invariant to the tasks performed. I first develop an offline, task-dependent cognitive state transition detector and a kinematics decoder to show the feasibility of distinguishing between cognitive states based on their inherent features extracted via a hidden Markov model (HMM) based detection framework. The proposed framework is designed to decode both cognitive states and kinematics from ensemble neural activity. The proposed decoding framework is able to a) automatically differentiate between baseline, plan, and movement, and b) determine novel holding epochs of neural activity and also estimate the epoch-dependent kinematics. Specifically, the framework is mainly composed of a hidden Markov model (HMM) state decoder and a switching linear dynamical system (S-LDS) kinematics decoder. I take a supervised approach and use a generative framework of neural activity and kinematics. I demonstrate the decoding framework using neural recordings from ventral premotor (PMv) and dorsal premotor (PMd) neurons of a non-human primate executing four complex reach-to-grasp tasks along with the corresponding kinematic recordings. Using the HMM state decoder, I demonstrate that the transitions between neighboring epochs of neural activity, regardless of the existence of any external kinematics changes, can be detected with high accuracy (>85%) and short latencies (<150 ms). I further show that the joint angle kinematics can be estimated reliably with high accuracy (mean = 88%) using an S-LDS kinematics decoder. In addition, I demonstrate that the use of multiple latent state variables to model the within-epoch neural activity variability can improve the decoder performance. This unified decoding framework combining an HMM state decoder and an S-LDS may be useful in neural decoding of cognitive states and complex movements of prosthetic limbs in practical brain-computer interface implementations. I then develop a real-time (online) task-independent (TI) framework to detect cognitive state transitions from spike trains and kinematic measurements. I applied this framework to 226 single-unit recordings collected via multi-electrode arrays in the premotor dorsal and ventral (PMd and PMv) regions of the cortex of two non-human primates performing 3D multi-object reach-to-grasp tasks, and I used the detection latency and accuracy of state transitions to measure the performance. I found that, in both online and offline detection modes, (i) TI models have significantly better performance than TD models when using neuronal data alone; however, (ii) during movements, the addition of the kinematics history to the TI models further improves detection performance.
These findings suggest that TI models may be able to more accurately detect cognitive state transitions than TD models under certain circumstances. The proposed framework could pave the way for TI control of a prosthesis from cortical neurons, a beneficial outcome when the choice of tasks is vast but the basic movement-related cognitive states still need to be decoded. Based on the online cognitive state transition detector, I further construct an online task-independent kinematics decoder. I constructed this framework using single-unit recordings from 452 neurons and synchronized kinematics recordings from two non-human primates performing 3D multi-object reach-to-grasp tasks. I find that (i) the proposed TI framework performs significantly better than current frameworks that rely on TD models (p = 0.03); and (ii) modeling cognitive state information further improves decoding performance. These findings suggest that TI models with cognitive-state-dependent parameters may more accurately decode kinematics and could pave the way for more clinically viable neural prosthetics.
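
    As a rough sketch of the state-decoding step only (assuming binned spike counts with Gaussian emissions and using hmmlearn's unsupervised GaussianHMM rather than the supervised generative framework described above), candidate cognitive state transitions could be read off the decoded hidden-state sequence as follows; the trial counts, bin sizes, and number of states are illustrative.

```python
# Minimal sketch: fit an HMM to binned spike counts and treat changes in the
# decoded hidden-state sequence as candidate cognitive state transitions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Fake data: 20 trials, 120 time bins each, 226 units (square-root transformed counts).
trials = [rng.poisson(3.0, size=(120, 226)) ** 0.5 for _ in range(20)]
X = np.concatenate(trials)                 # (n_samples, n_units)
lengths = [t.shape[0] for t in trials]

hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
hmm.fit(X, lengths)

# Decode the hidden-state sequence of one trial; bins where the state changes
# are the detected transition times.
states = hmm.predict(trials[0])
transition_bins = np.flatnonzero(np.diff(states)) + 1
print(transition_bins)
```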

    Vector Autoregressive Hierarchical Hidden Markov Models for Extracting Finger Movements Using Multichannel Surface EMG Signals

    We present a novel computational technique intended for the robust and adaptable control of a multifunctional prosthetic hand using multichannel surface electromyography. The initial processing of the input data was oriented towards extracting relevant time domain features of the EMG signal. Following the feature calculation, a piecewise modeling of the multidimensional EMG feature dynamics using vector autoregressive models was performed. The next step included the implementation of hierarchical hidden semi-Markov models to capture transitions between piecewise segments of movements and between different movements. Lastly, inversion of the model using an approximate Bayesian inference scheme served as the classifier. The effectiveness of the novel algorithms was assessed against methods commonly used for real-time classification of EMG in prosthesis control applications. The obtained results show that using hidden semi-Markov models as the top layer, instead of hidden Markov models, ranks first on all the relevant metrics among the tested combinations. The choice of the presented methodology for the control of a prosthetic hand is also supported by its equal or lower computational complexity compared to other algorithms, which enables implementation on low-power microcontrollers, and by its ability to adapt to user preferences for executing individual movements during activities of daily living.
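
    As a minimal sketch of the first stage only (windowed time-domain EMG features), the snippet below computes mean absolute value, waveform length, and zero crossings per channel; the window length, threshold, and channel count are assumptions, and the vector autoregressive and hierarchical hidden semi-Markov layers described above are not reproduced.

```python
# Minimal sketch: windowed time-domain features from multichannel surface EMG.
import numpy as np

def td_features(window, zc_threshold=0.01):
    """window: (n_samples, n_channels) of one EMG analysis window."""
    mav = np.mean(np.abs(window), axis=0)                      # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)       # waveform length
    signs = np.signbit(window[:-1]) != np.signbit(window[1:])  # sign-change mask
    big = np.abs(np.diff(window, axis=0)) > zc_threshold       # ignore tiny crossings
    zc = np.sum(signs & big, axis=0)                           # zero crossings
    return np.concatenate([mav, wl, zc])                       # (3 * n_channels,)

def sliding_features(emg, win=200, step=50):
    """emg: (n_samples, n_channels); returns one feature vector per window."""
    return np.array([td_features(emg[s:s + win])
                     for s in range(0, emg.shape[0] - win + 1, step)])

emg = np.random.randn(2000, 8) * 0.1       # fake 8-channel recording
feats = sliding_features(emg)              # feed these to the sequence model
print(feats.shape)
```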

    Constructing neural network models from brain data reveals representational transformations linked to adaptive behavior

    The human ability to adaptively implement a wide variety of tasks is thought to emerge from the dynamic transformation of cognitive information. We hypothesized that these transformations are implemented via conjunctive activations in “conjunction hubs”—brain regions that selectively integrate sensory, cognitive, and motor activations. We used recent advances in functional connectivity mapping of the flow of activity between brain regions to construct a task-performing neural network model from fMRI data collected during a cognitive control task. We verified the importance of conjunction hubs in cognitive computations by simulating neural activity flow over this empirically-estimated functional connectivity model. These empirically-specified simulations produced above-chance task performance (motor responses) by integrating sensory and task rule activations in conjunction hubs. These findings reveal the role of conjunction hubs in supporting flexible cognitive computations, while demonstrating the feasibility of using empirically-estimated neural network models to gain insight into cognitive computations in the human brain.
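
    A minimal sketch of the activity-flow idea, assuming a region-by-region functional connectivity matrix and task activations have already been estimated: predicted activity in a region is the connectivity-weighted sum of activity in all other regions. This is a generic illustration, not the paper's full task-performing network model.

```python
# Minimal sketch: activity-flow prediction over an estimated FC matrix.
import numpy as np

def activity_flow(activations, fc):
    """activations: (n_regions,) task betas; fc: (n_regions, n_regions) FC weights."""
    fc = fc.copy()
    np.fill_diagonal(fc, 0.0)          # a region never predicts itself
    return fc @ activations            # predicted activity per region

rng = np.random.default_rng(1)
n_regions = 360
fc = rng.normal(scale=0.05, size=(n_regions, n_regions))
betas = rng.normal(size=n_regions)
predicted = activity_flow(betas, fc)
# Compare predicted vs. observed activations (e.g., by correlation) to test
# whether candidate hub regions carry the sensory-to-motor transformation.
print(np.corrcoef(predicted, betas)[0, 1])
```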

    Biomechatronics: Harmonizing Mechatronic Systems with Human Beings

    This eBook provides a comprehensive treatise on modern biomechatronic systems centred on human applications. Particular emphasis is given to exoskeleton designs for assistance and training with advanced interfaces for human-machine interaction. Some of these designs are validated with experimental results, which the reader will find informative as building blocks for designing such systems. This eBook will be ideally suited to those researching in the biomechatronics area with bio-feedback applications or those involved in high-end research on man-machine interfaces. It may also serve as a textbook for biomechatronic design at the post-graduate level.

    Decoding the content of cross-modal influences in the brain

    This thesis examined how context and prior experience can shape the neural computations occurring in the human brain, specifically by using pattern classification analysis to decode the content of cross-modal influences in and around the primary somatosensory cortex (S1). In Chapter 2, fMRI was used to investigate whether simply hearing familiar sounds depicting different hand-object interactions produced discriminable activity in S1, even though stimulus presentation occurred in the auditory domain and no external tactile stimulation occurred. Results revealed discriminable patterns of activity for the sounds of different hand-object interactions in hand-sensitive areas of S1, but not for the two control categories of familiar animal vocalizations and unfamiliar pure tones. Chapter 3 aimed to corroborate the cross-modal effects found in the previous fMRI literature using a neuroimaging technique with high temporal resolution: EEG. Specifically, EEG was used to examine whether simply viewing images of different familiar visual object categories that imply rich haptic information could be identified in sensorimotor-related oscillatory responses, even though input was from a visual source and no tactile stimulation occurred. Results showed that the content of different familiar, but not unfamiliar, visual object categories could be discriminated in the mu rhythm oscillatory response, thus establishing a potential oscillatory marker for the cross-modal effects previously observed. Chapter 4 involved an interactive fMRI paradigm using real 3D objects to test whether the primary function of the cross-modal influences previously detected is a likely result of predictive coding mechanisms. Whilst no reliable evidence for an account of predictive coding was found in this experiment, the study provided critical insight into the development of experiments that can directly test the assumptions of predictive coding with real action. The research conducted in this thesis has therefore provided significant contributions to the literature regarding our understanding of cross-modal influences and cortical feedback in the human brain. Keywords: cross-modal, cortical feedback, multi-voxel pattern analysis, mu rhythm, predictive coding, primary somatosensory cortex.
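
    A minimal sketch of the pattern-classification step assumed throughout the thesis: cross-validated decoding of stimulus category from voxel (or sensor) patterns. The data shapes, category labels, and classifier choice below are illustrative, not the exact pipelines used in Chapters 2-4.

```python
# Minimal sketch: cross-validated multi-voxel pattern classification.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 500))            # 120 trials x 500 S1 voxels (fake)
y = rng.integers(0, 3, size=120)           # e.g., 3 hand-object sound categories

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y, cv=5)  # chance level here is ~1/3
print(scores.mean())
```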

    Cooperative Particle Swarm Optimization for Combinatorial Problems

    A particularly successful line of research for numerical optimization is the well-known computational paradigm of particle swarm optimization (PSO). In the PSO framework, candidate solutions are represented as particles that have a position and a velocity in a multidimensional search space. The direct representation of a candidate solution as a point that flies through hyperspace (i.e., R^n) seems to strongly predispose the PSO toward continuous optimization. However, while some attempts have been made towards developing PSO algorithms for combinatorial problems, these techniques usually encode candidate solutions as permutations instead of points in search space and rely on additional local search algorithms. In this dissertation, I present extensions to PSO that, by incorporating a cooperative strategy, allow the PSO to solve combinatorial problems. The central hypothesis is that by allowing a set of particles, rather than a single particle, to represent a candidate solution, combinatorial problems can be solved by collectively constructing solutions. The cooperative strategy partitions the problem into components, where each component is optimized by an individual particle. Particles move in continuous space and communicate through a feedback mechanism that guides them in assessing their individual contribution to the overall solution. Three new PSO-based algorithms are proposed. Shared-space CCPSO and multi-space CCPSO provide two new cooperative strategies to split the combinatorial problem, and both models are tested on proven NP-hard problems. Multimodal CCPSO extends these combinatorial PSO algorithms to efficiently sample the search space in problems with multiple global optima. Shared-space CCPSO was evaluated on an abductive problem-solving task: the construction of a parsimonious set of independent hypotheses in diagnostic problems with direct causal links between disorders and manifestations. Multi-space CCPSO was used to solve a protein structure prediction subproblem, side-chain packing. Both models are evaluated against provably optimal solutions, and the results show that both proposed PSO algorithms are able to find optimal or near-optimal solutions. The exploratory ability of multimodal CCPSO is assessed by evaluating both the quality and diversity of the solutions obtained in a protein sequence design problem, a highly multimodal problem. These results provide evidence that the extended PSO algorithms are capable of dealing with combinatorial problems without having to hybridize the PSO with other local search techniques or sacrifice the concept of particles moving through a continuous search space.
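
    A minimal sketch of the cooperative decomposition behind CCPSO-style algorithms: the decision vector is split into components, each optimized by its own sub-swarm against a shared context solution. This generic skeleton (shown on a continuous sphere function) omits the shared-space, multi-space, and multimodal strategies and the combinatorial encodings developed in the dissertation.

```python
# Minimal sketch: cooperative PSO with one sub-swarm per component of the
# decision vector, evaluated through a shared context solution.
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return float(np.sum(x ** 2))

dim, n_groups, swarm_size, iters = 12, 4, 10, 200
groups = np.array_split(np.arange(dim), n_groups)
context = rng.uniform(-5, 5, dim)          # shared best-so-far solution

# One sub-swarm per component: positions, velocities, personal bests.
pos = [rng.uniform(-5, 5, (swarm_size, len(g))) for g in groups]
vel = [np.zeros((swarm_size, len(g))) for g in groups]
pbest = [p.copy() for p in pos]
pbest_f = [np.full(swarm_size, np.inf) for _ in groups]

for _ in range(iters):
    for k, g in enumerate(groups):
        for i in range(swarm_size):
            # Evaluate particle i's component inside the shared context.
            trial = context.copy()
            trial[g] = pos[k][i]
            f = sphere(trial)
            if f < pbest_f[k][i]:
                pbest_f[k][i] = f
                pbest[k][i] = pos[k][i].copy()
            if f < sphere(context):
                context = trial            # this component improved the whole
        # Standard PSO velocity/position update within the component.
        gbest = pbest[k][np.argmin(pbest_f[k])]
        r1, r2 = rng.random(pos[k].shape), rng.random(pos[k].shape)
        vel[k] = 0.7 * vel[k] + 1.4 * r1 * (pbest[k] - pos[k]) + 1.4 * r2 * (gbest - pos[k])
        pos[k] = np.clip(pos[k] + vel[k], -5, 5)

print(sphere(context))                     # should approach 0
```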