13 research outputs found

    Brain-computer interface of focus and motor imagery using wavelet and recurrent neural networks

    A brain-computer interface (BCI) is a technology that allows a device to be operated without involving muscles or speech, directly from the brain through processed electrical signals. The technology works by capturing electrical or magnetic signals from the brain, which are then processed to extract the information they contain. Typically, a BCI uses information from electroencephalogram (EEG) signals, depending on the variables of interest. This study proposed a BCI to drive an external device, a drone simulator, from EEG signal information. Motor imagery (MI) and focus features were extracted from the EEG signal using wavelets and then classified with recurrent neural networks (RNNs). To overcome the vanishing-gradient problem of RNNs, long short-term memory (LSTM) units were used. The results showed that the wavelet-and-RNN BCI could drive the external device on non-training data with an accuracy of 79.6%. In the experiments, the AdaDelta optimizer outperformed Adam in accuracy and loss, whereas Adam was faster in training time.
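The abstract does not specify the wavelet family or the exact features extracted; as a hedged illustration only, a Haar-wavelet band-energy extractor of the kind commonly used for EEG motor-imagery features (the function names, the 4-level choice, and the relative-energy feature are assumptions, not the authors' implementation) might look like:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                      # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_band_energies(signal, levels=4):
    """Relative energy in each detail band after `levels` decompositions,
    plus the final approximation band: a compact feature vector that a
    classifier such as an LSTM could consume per channel."""
    energies = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(approx ** 2))
    total = sum(energies)
    return [e / total for e in energies]
```

Each decomposition level halves the sampling rate, so the detail bands roughly correspond to successively lower EEG frequency bands; the resulting per-epoch feature vectors would then be fed to the recurrent classifier.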

    Decoding neural activity in sulcal and white matter areas of the brain to accurately predict individual finger movement and tactile stimuli of the human hand

    Millions of people worldwide suffer motor or sensory impairment due to stroke, spinal cord injury, multiple sclerosis, traumatic brain injury, diabetes, and motor neuron diseases such as ALS (amyotrophic lateral sclerosis). A brain-computer interface (BCI), which links the brain directly to a computer, offers a new way to study the brain and potentially restore impairments in patients living with these debilitating conditions. One of the challenges currently facing BCI technology, however, is to minimize surgical risk while maintaining efficacy. Minimally invasive techniques, such as stereoelectroencephalography (SEEG) have become more widely used in clinical applications in epilepsy patients since they can lead to fewer complications. SEEG depth electrodes also give access to sulcal and white matter areas of the brain but have not been widely studied in brain-computer interfaces. Here we show the first demonstration of decoding sulcal and subcortical activity related to both movement and tactile sensation in the human hand. Furthermore, we have compared decoding performance in SEEG-based depth recordings versus those obtained with electrocorticography electrodes (ECoG) placed on gyri. Initial poor decoding performance and the observation that most neural modulation patterns varied in amplitude trial-to-trial and were transient (significantly shorter than the sustained finger movements studied), led to the development of a feature selection method based on a repeatability metric using temporal correlation. An algorithm based on temporal correlation was developed to isolate features that consistently repeated (required for accurate decoding) and possessed information content related to movement or touch-related stimuli. We subsequently used these features, along with deep learning methods, to automatically classify various motor and sensory events for individual fingers with high accuracy. 
Repeating features were found in sulcal, gyral, and white matter areas and were predominantly phasic or phasic-tonic across a wide frequency range for both high-density (HD) ECoG and SEEG recordings. These findings motivated the use of long short-term memory (LSTM) recurrent neural networks (RNNs), which are well suited to handling transient input features. Combining temporal correlation-based feature selection with LSTMs yielded decoding accuracies of up to 92.04 ± 1.51% for hand movements, up to 91.69 ± 0.49% for individual finger movements, and up to 83.49 ± 0.72% for focal tactile stimuli to individual finger pads while using a relatively small number of SEEG electrodes. These findings may lead to a new class of minimally invasive brain-computer interface systems, increasing their applicability to a wide variety of conditions.
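The repeatability metric itself is not spelled out in the abstract; one plausible minimal reading, the mean pairwise temporal (Pearson) correlation of a feature's time course across trials with a cutoff for selection, can be sketched as follows (the threshold value, array shapes, and function names are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def repeatability_score(trials):
    """Mean pairwise Pearson correlation of a feature's time course
    across trials. `trials` has shape (n_trials, n_samples)."""
    t = np.asarray(trials, dtype=float)
    n = t.shape[0]
    # z-score each trial so correlation reduces to a dot product
    z = t - t.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    corr = (z @ z.T) / t.shape[1]
    # average only the off-diagonal entries (diagonal is 1 by construction)
    return (corr.sum() - n) / (n * (n - 1))

def select_repeatable(features, threshold=0.5):
    """Indices of features whose cross-trial repeatability exceeds the
    threshold. `features` has shape (n_features, n_trials, n_samples)."""
    return [i for i, f in enumerate(features)
            if repeatability_score(f) > threshold]
```

A feature that repeats consistently across trials scores near 1, while a feature dominated by trial-to-trial noise scores near 0, which matches the abstract's motivation of keeping only features with consistent, decodable structure.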

    Robust representation learning approaches for neural population activity

    Understanding communication patterns between different regions of the human brain is key to learning useful spatial representations. Once learned, these representations provide a foundation on which new tasks can be learned rapidly. Moreover, the activity patterns generated by the brain are ultimately relayed to the muscles to generate behaviour. By measuring these action potentials directly from the relevant source regions of the brain, we can capture intended behaviour despite interruptions in the neural pathways to downstream muscles. Spinal cord injury is one such interruption, in the motor control of arm or leg muscles from the motor cortex. Multiple electrodes recording action potentials from neurons in the motor cortex, in conjunction with a plethora of possible modelling techniques, can be used to decode this intended movement. Subsequently, soft or hard robotics can bypass the damaged spinal cord in relaying intended movement to specific limbs. This thesis comprises two main parts. The first part addresses the question of how representation learning in neural networks can benefit the learning of goal-directed behaviour. Using the learning of spatial representations through recurrent neural networks as a model, this work showed that such a representation can serve as a foundation for rapid learning of navigational tasks using reinforcement learning. The learned representation takes the form of spatially modulated units within the neural network, similar to place cells found in the brains of mammals. Furthermore, an analysis of the simulated neurons showed that these place units replicate multiple characteristics of biological place cells, such as precursory firing behaviour.
The second part tackles the issue of variability in neural representations, a phenomenon that causes significant deterioration in the decoding of behaviour from neural population activity over time. Using combined neural and behavioural recordings from monkeys performing motor tasks, this work aims to develop stable decoders that are robust to such fluctuations. Two approaches using unsupervised learning were investigated. The first is based on domain adaptation, where decoders were trained to "ignore" all aspects of the data subject to fluctuations and instead extract the salient, stable aspects of the neural representation of movements. This representation then allows the decoder to generalise well to a completely unseen recording session, accurately predicting behavioural intention despite the significant neuronal non-stationarities present between recording sessions. Such generalisation to an unseen recording session without retraining or recalibration of a decoder has not been previously shown. This first approach performed well for data obtained close enough in time to the training data, but required a significant number of recording sessions for successful training. To address these limitations, a contrastive learning approach was used next. In this model, synthetic variations of trials from a single recording session were generated. These variations were similar in type and magnitude to the neuronal non-stationarities that exist between recording sessions, and were used as training data together with the original data for a model that learns to remove these non-stationarities and recover stable dynamics related to behaviour. This method produced a very stable decoder capable of accurately inferring intended behaviour for up to a week into the future. This training paradigm is an example of self-supervised learning, whereby the model is trained on perturbed versions of the data.
Taken together, in this thesis I explore approaches that lead to robust representations being learned within neural networks. These representations are shown to be neurally realistic and robust, allowing for a high degree of generalisation.
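The abstract does not describe the exact synthetic variations used for the contrastive approach; as a minimal sketch of one plausible non-stationarity of the kind mentioned, per-neuron gain jitter plus random neuron dropout applied to a single trial (the parameter values and function name are illustrative assumptions), consider:

```python
import numpy as np

def perturb_trial(trial, rng, gain_sd=0.2, drop_p=0.1):
    """Apply a synthetic non-stationarity to one trial of binned firing
    rates, shape (n_neurons, n_bins): each neuron's amplitude is jittered
    by a random gain, and a random subset of neurons is silenced, mimicking
    electrode drift and unit loss between recording sessions."""
    gains = 1.0 + gain_sd * rng.standard_normal(trial.shape[0])
    keep = rng.random(trial.shape[0]) >= drop_p
    return trial * (gains * keep)[:, None]
```

Two different perturbations of the same trial would then form a positive pair for contrastive training, pushing the model to represent the behaviour-related dynamics that survive the perturbation.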

    Real-time neural signal processing and low-power hardware co-design for wireless implantable brain machine interfaces

    Intracortical brain-machine interfaces (iBMIs) have advanced significantly over the past two decades, demonstrating their utility in various applications, including neuroprosthetic control and communication. To increase the information transfer rate and improve the devices' robustness and longevity, iBMI technology aims to increase channel counts to access more neural data while reducing invasiveness through miniaturisation and avoiding percutaneous connectors (wired implants). However, as the number of channels increases, the raw data bandwidth required for wireless transmission becomes prohibitive, requiring efficient on-implant processing to reduce the amount of data through data compression or feature extraction. The fundamental aim of this research is to develop methods for high-performance neural spike processing co-designed with low-power hardware that is scalable for real-time wireless BMI applications. The specific original contributions are as follows. Firstly, a new method has been developed for hardware-efficient spike detection, which achieves state-of-the-art detection performance while significantly reducing hardware complexity. Secondly, a novel thresholding mechanism for spike detection has been introduced. By incorporating firing rate information as a key determinant in establishing the spike detection threshold, the adaptiveness of spike detection has been improved. This allows spike detection to overcome the signal degradation that arises from scar tissue growth around the recording site, ensuring enduringly stable detection results; long-term decoding performance is consequently also improved notably. Thirdly, the relationship between spike detection performance and neural decoding accuracy has been shown to be nonlinear, offering new opportunities for further reducing transmission bandwidth by at least 30% with only minor decoding performance degradation.
In summary, this thesis presents a journey toward designing ultra-hardware-efficient spike detection algorithms and applying them to reduce the data bandwidth and improve neural decoding performance. The software-hardware co-design approach is essential for the next generation of wireless brain-machine interfaces with increased channel counts and a highly constrained hardware budget.
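The thesis's actual thresholding algorithm is not reproduced in the abstract; as a toy sketch of the stated idea only, using the detected firing rate as feedback on a simple amplitude threshold (the detector, the multiplicative update, and the gain value are all assumptions, not the author's method), one could write:

```python
import numpy as np

def detect_spikes(x, thresh):
    """Sample indices where |x| first crosses the threshold
    (rising edges only, so each crossing is counted once)."""
    above = np.abs(x) > thresh
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def adapt_threshold(x, thresh, target_rate, fs, gain=0.05):
    """Nudge the threshold so the detected firing rate tracks
    `target_rate` (spikes/s): raise it when detections are too
    frequent, lower it when signal degradation (e.g. scar tissue)
    suppresses detections below the expected rate."""
    rate = len(detect_spikes(x, thresh)) * fs / len(x)
    return thresh * (1.0 + gain * np.sign(rate - target_rate))
```

Run per data block, this kind of rate feedback keeps the detector operating even as signal amplitude drifts, which is the stability property the abstract attributes to the firing-rate-informed threshold.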

    Noninvasive Dynamic Characterization of Swallowing Kinematics and Impairments in High Resolution Cervical Auscultation via Deep Learning

    Swallowing is a complex sensorimotor activity by which food and liquids are transferred from the oral cavity to the stomach. Swallowing requires coordination between multiple subsystems, which makes it subject to impairment secondary to a variety of medical or surgical conditions. Dysphagia refers to any swallowing disorder and is common in patients with head and neck cancer and neurological conditions such as stroke. Dysphagia affects nearly 9 million adults and causes more than 60,000 deaths yearly in the US. In this research, we utilize advanced signal processing techniques together with sensor technology and deep learning methods to develop a noninvasive and widely available tool for the evaluation and diagnosis of swallowing problems. We investigate the use of modern spectral estimation methods, in addition to convolutional recurrent neural networks, to demarcate and localize the important swallowing physiological events that contribute to airway protection, based solely on signals collected from non-invasive sensors attached to the anterior neck. These events include the full swallowing activity, upper esophageal sphincter opening duration and maximal opening diameter, and aspiration. We believe that combining sensor technology with state-of-the-art deep learning architectures specialized in time series analysis will yield great advances in dysphagia detection and management in terms of non-invasiveness, portability, and availability. Like never before, such advances will enable patients to receive continuous feedback about their swallowing outside standard clinical care settings, which will greatly facilitate their daily activities and enhance their quality of life.
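The abstract's demarcation is done by learned convolutional recurrent networks; as a far simpler stand-in that illustrates the segmentation task itself (frame sizes, the energy criterion, and the threshold factor are arbitrary assumptions, not the authors' pipeline), a short-time-energy event segmenter for a neck-sensor signal could look like:

```python
import numpy as np

def short_time_energy(x, frame, hop):
    """Frame-wise signal energy, the simplest spectral-estimation stand-in."""
    n = 1 + (len(x) - frame) // hop
    return np.array([np.sum(x[i * hop : i * hop + frame] ** 2)
                     for i in range(n)])

def demarcate_events(x, frame=256, hop=128, k=3.0):
    """Return (start, end) frame indices of segments whose energy exceeds
    k times the median frame energy: a crude proxy for marking swallowing
    activity against the quiet baseline."""
    e = short_time_energy(x, frame, hop)
    active = e > k * np.median(e)
    edges = np.diff(active.astype(int))
    starts = list(np.flatnonzero(edges == 1) + 1)
    ends = list(np.flatnonzero(edges == -1) + 1)
    if active[0]:
        starts.insert(0, 0)
    if active[-1]:
        ends.append(len(active))
    return list(zip(starts, ends))
```

A learned model replaces the fixed energy rule with features and temporal context, but the output format, onset and offset boundaries per physiological event, is the same.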

    Simulation and implementation of novel deep learning hardware architectures for resource constrained devices

    Corey Lammie designed mixed-signal memristive complementary metal-oxide-semiconductor (CMOS) and field-programmable gate array (FPGA) hardware architectures, which were used to reduce the power and resource requirements of deep learning (DL) systems during both inference and training. Disruptive design methodologies, such as those explored in this thesis, can be used to facilitate the design of next-generation DL systems.

    Smart Sensors for Healthcare and Medical Applications

    This book focuses on new sensing technologies, measurement techniques, and their applications in medicine and healthcare. Specifically, the book briefly describes the potential of smart sensors in these applications, collecting 24 articles selected and published in the Special Issue “Smart Sensors for Healthcare and Medical Applications”. We proposed this topic, being aware of the pivotal role that smart sensors can play in the improvement of healthcare services in both acute and chronic conditions, as well as in prevention for a healthy life and active aging. The articles selected for this book cover a variety of topics related to the design, validation, and application of smart sensors to healthcare.

    Libro de actas. XXXV Congreso Anual de la Sociedad Española de Ingeniería Biomédica

    596 p. CASEIB2017 is once again the national forum of reference for the scientific exchange of knowledge and experience and for the promotion of R&D&i in biomedical engineering. It is a meeting point for scientists, industry professionals, biomedical engineers, and clinicians interested in the latest developments in research, education, and the industrial and clinical application of biomedical engineering. In this edition, more than 160 high-level scientific papers are presented in relevant areas of biomedical engineering, such as signal and image processing, biomedical instrumentation, telemedicine, modelling of biomedical systems, intelligent systems and sensors, robotics, surgical planning and simulation, biophotonics, and biomaterials. Of particular note are the sessions devoted to the competition for the José María Ferrero Corral Prize and the undergraduate biomedical engineering student competition session, which aim to encourage the participation of young students and researchers.