
    Algorithms for Neural Prosthetic Applications

    abstract: In the last 15 years, there has been a significant increase in the number of motor neural prostheses used for restoring limb function lost due to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown the possibility of controlling a prosthesis for accomplishing varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system comprises three main components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper limb kinematics, and (iii) a prosthetic arm actuated by control signals generated by the algorithm. Machine learning algorithms that map input neural activity to the output kinematics (like finger trajectory) form the core of the neural decoding system. The choice of algorithm is thus determined mainly by the neural signal of interest and the output parameter being decoded. The main stages of a neural decoding system are acquisition of neural data, feature extraction, feature selection, and the machine learning algorithm. There have been significant advances in the field of neural prosthetic applications, but challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment. To achieve a fully functional prosthetic device with maximum user compliance and acceptance, these challenges need to be addressed. Three challenges in developing robust neural decoding systems were addressed by exploring neural variability in the peripheral nervous system for dexterous finger movements, feature selection methods based on clinically relevant metrics, and a novel method for decoding dexterous finger movements based on ensemble methods.
    Dissertation/Thesis: Doctoral Dissertation, Bioengineering, 201
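The decoding stage described above, mapping recorded neural activity to limb kinematics, is often a regularized linear regression at its core. Below is a minimal sketch of that idea using synthetic data; the channel counts, regularization strength, and the use of ridge regression are illustrative assumptions, not the dissertation's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for binned neural firing rates: 500 time bins x 32 channels.
n_bins, n_channels = 500, 32
rates = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)

# Feature extraction: z-score each channel so no single unit dominates the fit.
features = (rates - rates.mean(axis=0)) / (rates.std(axis=0) + 1e-8)

# Hypothetical ground-truth 1-D finger trajectory driven by the neural features.
true_weights = rng.normal(size=n_channels)
trajectory = features @ true_weights + 0.1 * rng.normal(size=n_bins)

# Decoder: ridge (L2-regularized) linear regression, a common baseline for
# mapping neural features to kinematics.
lam = 1.0
X, y = features, trajectory
w = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

predicted = X @ w
r = np.corrcoef(predicted, y)[0, 1]
print(f"decoding correlation: {r:.3f}")
```

In a real system the feature-extraction and feature-selection stages would sit between the raw recordings and this fit, and nonlinear or ensemble decoders can replace the linear map.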

    Machine learning-guided directed evolution for protein engineering

    Machine learning (ML)-guided directed evolution is a new paradigm for biological design that enables optimization of complex functions. ML methods use data to predict how sequence maps to function without requiring a detailed model of the underlying physics or biological pathways. To demonstrate ML-guided directed evolution, we introduce the steps required to build ML sequence-function models and use them to guide engineering, making recommendations at each stage. This review covers basic concepts relevant to using ML for protein engineering as well as the current literature and applications of this new engineering paradigm. ML methods accelerate directed evolution by learning from information contained in all measured variants and using that information to select sequences that are likely to be improved. We then provide two case studies that demonstrate the ML-guided directed evolution process. We also look to future opportunities where ML will enable discovery of new protein functions and uncover the relationship between protein sequence and function.
    Comment: Made significant revisions to focus on aspects most relevant to applying machine learning to speed up directed evolution
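The measure-train-propose loop the abstract describes can be sketched in a few lines. Everything here is a toy stand-in: the fitness function, the per-position scoring "model", and all parameters are illustrative assumptions, not the review's actual recommendations.

```python
import random

random.seed(0)
AA = "ACDEFGHIKLMNPQRSTVWY"
OPTIMUM = "MKVLAEGHST"  # hypothetical unknown optimum (stand-in for a real assay)

def measure_fitness(seq):
    """Toy 'wet-lab measurement': fraction of positions matching the optimum."""
    return sum(a == b for a, b in zip(seq, OPTIMUM)) / len(OPTIMUM)

def fit_model(data):
    """Toy sequence-function model: mean fitness of variants carrying each
    amino acid at each position (stand-in for a learned ML model)."""
    scores = [{} for _ in OPTIMUM]
    for seq, fit in data:
        for i, aa in enumerate(seq):
            scores[i].setdefault(aa, []).append(fit)
    return [{aa: sum(v) / len(v) for aa, v in pos.items()} for pos in scores]

def predict(model, seq):
    return sum(model[i].get(aa, 0.0) for i, aa in enumerate(seq))

def mutate(seq):
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AA) + seq[i + 1:]

# ML-guided loop: measure -> train -> propose variants -> measure best predicted.
population = ["".join(random.choice(AA) for _ in OPTIMUM) for _ in range(20)]
data = [(s, measure_fitness(s)) for s in population]
for _ in range(15):
    model = fit_model(data)
    candidates = [mutate(s) for s, _ in data for _ in range(5)]
    best = max(candidates, key=lambda s: predict(model, s))
    data.append((best, measure_fitness(best)))

print("best measured fitness:", max(f for _, f in data))
```

The key idea the loop illustrates is that the model learns from *all* measured variants, so each round of measurement informs which sequences are worth synthesizing next.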

    Brain-mediated Transfer Learning of Convolutional Neural Networks

    The human brain can effectively learn a new task from a small number of samples, which indicates that the brain can transfer its prior knowledge to solve tasks in different domains. This function is analogous to transfer learning (TL) in the field of machine learning. TL uses a well-trained feature space in a specific task domain to improve performance in new tasks with insufficient training data. TL with rich feature representations, such as features of convolutional neural networks (CNNs), shows high generalization ability across different task domains. However, such TL is still insufficient in making machine learning attain generalization ability comparable to that of the human brain. To examine whether the internal representation of the brain could be used to achieve more efficient TL, we introduce a method for TL mediated by human brains. Our method transforms feature representations of audiovisual inputs in CNNs into those in activation patterns of individual brains via an association learned in advance from measured brain responses. Then, to estimate labels reflecting human cognition and behavior induced by the audiovisual inputs, the transformed representations are used for TL. We demonstrate that our brain-mediated TL (BTL) shows higher performance in the label estimation than the standard TL. In addition, we illustrate that the estimations mediated by different brains vary from brain to brain, and the variability reflects the individual variability in perception. Thus, our BTL provides a framework to improve the generalization ability of machine-learning feature representations and enable machine learning to estimate human-like cognition and behavior, including individual variability.
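The core transformation step, learning an association from CNN features to brain activation patterns and then projecting new inputs through it, can be sketched with a ridge-regression encoding model on synthetic data. The dimensions, noise level, and the choice of ridge regression are assumptions for illustration; the paper's actual mapping may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 200 stimuli with 64-dim "CNN features" and 100-dim
# "voxel responses" produced by an unknown linear mapping plus noise.
n, d_feat, d_voxel = 200, 64, 100
feats = rng.normal(size=(n, d_feat))
B_true = rng.normal(size=(d_feat, d_voxel))
brain = feats @ B_true + 0.5 * rng.normal(size=(n, d_voxel))

# Step 1: learn the feature -> brain association with ridge regression,
# standing in for the mapping fit to measured brain responses.
lam = 10.0
B_hat = np.linalg.solve(feats.T @ feats + lam * np.eye(d_feat), feats.T @ brain)

# Step 2: transform CNN features of new inputs into predicted brain-response
# space; these transformed representations feed the downstream TL classifier.
new_feats = rng.normal(size=(10, d_feat))
transformed = new_feats @ B_hat

print(transformed.shape)  # (10, 100)
```

Because the mapping is fit per individual, the transformed representations inherit that brain's idiosyncrasies, which is what makes the downstream estimates vary from brain to brain.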

    System Level Assessment of Motor Control through Patterned Microstimulation in the Superior Colliculus

    We are immersed in an environment full of sensory information, and without much thought or effort we can produce orienting responses to appropriately react to different stimuli. This seemingly simple and reflexive behavior is accomplished by a very complicated set of neural operations, in which motor systems in the brain must control behavior based on populations of sensory information. The oculomotor or saccadic system is particularly well studied in this regard. Within a visual environment consisting of many potential stimuli, we control our gaze with rapid eye movements, or saccades, in order to foveate visual targets of interest. A key sub-cortical structure involved in this process is the superior colliculus (SC). The SC is a structure in the midbrain which receives visual input and in turn projects to lower-level areas in the brainstem that produce saccades. Interestingly, microstimulation of the SC produces eye movements that match the metrics and kinematics of naturally-evoked saccades. Accordingly, we explore the role of the SC in saccadic motor control by manually introducing distributions of activity through neural stimulation. Systematic manipulation of microstimulation patterns was used to characterize how ensemble activity in the SC is decoded to generate eye movements. Specifically, we focused on three different facets of saccadic motor control. In the first study, we examine the effective influence of microstimulation parameters on behavior to reveal characteristics of the neural mechanisms underlying saccade generation. In the second study, we experimentally verify the predictions of computational algorithms that are used to describe neural mechanisms for saccade generation. In the third study, we assess where neural mechanisms for decoding occur within the oculomotor network in order to establish the order of operations necessary for saccade generation.
The experiments assess different aspects of saccadic motor control, which collectively reveal properties and mechanisms that contribute to the comprehensive understanding of signal processing in the oculomotor system.
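One classic computational account of how ensemble SC activity is read out is weighted (vector) averaging of each site's preferred saccade, the kind of algorithm the second study's predictions concern. The sketch below illustrates the idea on a toy 1-D map of amplitude preferences; the map layout, mound width, and averaging rule are illustrative assumptions, not the dissertation's specific model.

```python
import numpy as np

# Hypothetical 1-D map of preferred saccade amplitudes across SC sites (deg).
preferred = np.linspace(2.0, 30.0, 200)

def mound(center, width=3.0):
    """Gaussian mound of activity centered on a (micro)stimulated site."""
    return np.exp(-0.5 * ((preferred - center) / width) ** 2)

def decode_average(activity):
    """Vector averaging: activity-weighted mean of preferred amplitudes."""
    return (activity * preferred).sum() / activity.sum()

# Two simultaneous mounds, as with dual-site microstimulation: averaging
# predicts an intermediate saccade rather than the sum of the two.
act = mound(10.0) + mound(20.0)
print(round(decode_average(act), 1))
```

Competing readout schemes (e.g. vector summation) predict different behavior for multi-site stimulation, which is why systematically patterned microstimulation can discriminate between them.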