218 research outputs found

    Feature Analysis for Discrimination of Motor Unit Action Potentials

    © 2018 IEEE. In electrophysiological signal processing of intramuscular electromyography (nEMG) data, single motor unit activity is of great interest. Changes in motor unit action potential (MUAP) morphology, motor unit (MU) activation, and recruitment provide the most informative evidence for studying the causal mechanisms of neuromuscular disorders. In practice, a single nEMG recording usually captures the activity of several motor units in the area surrounding the needle electrode, which makes MUAP discrimination, the separation of single-unit activities, a crucial task. Most neurology laboratories worldwide still rely on specialists who spend hours sorting MUAPs manually or semi-automatically. From a machine learning perspective, this task is analogous to a clustering-based classification problem in which the number of classes and other class information are, unfortunately, missing. In this paper, we present a feature analysis strategy to help better exploit unsupervised (i.e., fully automated) methods for MUAP discrimination. To that end, we extract a large pool of features from each MUAP and then select the top-ranked candidates using clusterability scores as the selection criterion. We found spectrograms of the wavelet decomposition to be a top-ranking feature: highly correlated with the motor unit reference and more separable than existing features. Using a correlation-based clustering technique, we demonstrate the sorting performance achievable with this feature set. Compared with the reference produced by human experts, our method obtained comparable results (e.g., an equivalent number of classes, identical MUAP morphology in each pair of corresponding MU classes, and similar MU histograms). Taking the manual labels as references, our method achieved much higher sensitivity and accuracy than the compared unsupervised sorting method, and obtained MUAP classification results similar to the reference.
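    The correlation-based clustering step described above can be sketched as a greedy template matcher: each MUAP feature vector joins the first cluster whose running template it correlates with above a threshold, otherwise it seeds a new cluster. This is an illustrative stand-in for the paper's method, not its exact algorithm; the threshold value and the synthetic spike shapes are assumptions.

    ```python
    import numpy as np

    def correlation_cluster(features, threshold=0.9):
        """Greedy correlation-based clustering: assign each MUAP to the first
        cluster whose template correlates above `threshold`, else start a new
        cluster. Templates are updated as running averages."""
        templates, labels = [], []
        for f in features:
            r = [np.corrcoef(f, t)[0, 1] for t in templates]
            if r and max(r) >= threshold:
                k = int(np.argmax(r))
                labels.append(k)
                templates[k] = (templates[k] + f) / 2  # refine the template
            else:
                labels.append(len(templates))          # new motor unit class
                templates.append(f.astype(float))
        return np.array(labels)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 64)
    unit_a = np.sin(2 * np.pi * 3 * t)           # two synthetic MUAP shapes
    unit_b = np.sign(np.sin(2 * np.pi * 5 * t))
    spikes = [unit_a + 0.05 * rng.standard_normal(64) for _ in range(5)] + \
             [unit_b + 0.05 * rng.standard_normal(64) for _ in range(5)]
    labels = correlation_cluster(spikes)
    print(labels)  # first five spikes share one cluster, last five another
    ```

    In practice the feature vectors would be the wavelet-spectrogram features the paper ranks highest, rather than raw waveforms.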

    Deep learning approach for epileptic seizure detection

    Abstract. Epilepsy is the most common brain disorder, affecting approximately fifty million people worldwide according to the World Health Organization. The diagnosis of epilepsy relies on manual inspection of EEG, which is error-prone and time-consuming. Automated epileptic seizure detection from the EEG signal can reduce diagnosis time and facilitate the targeting of treatment. Current detection approaches rely mainly on features designed manually by domain experts; such features are too inflexible to detect the variety of complex patterns in large amounts of EEG data. Moreover, EEG is a non-stationary signal, seizure patterns vary across patients and recording sessions, and EEG data always contain numerous noise types that degrade detection accuracy. To address these challenges, this paper examines deep learning approaches, applied to a large publicly available dataset, the Children's Hospital Boston-Massachusetts Institute of Technology dataset (CHB-MIT). The study comprises three experimental groups, distinguished by their pre-processing steps; each group contains 3–4 experiments that differ in their objectives. First, the time-series EEG data were pre-processed with filters and normalization techniques, and the pre-processed signal was segmented into a sequence of non-overlapping epochs. Second, the time-series data were transformed into different input representations: the raw time-series EEG signal, magnitude spectrograms, 1D-FFT, 2D-FFT, 2D-FFT magnitude spectrum, and 2D-FFT phase spectrum were investigated and compared. Third, these time-domain or frequency-domain signals were used separately as input to a VGG or DenseNet 1D model.
The best result was achieved with magnitude spectrograms as the input representation for the VGG model: accuracy of 0.98, sensitivity of 0.71, and specificity of 0.998 with subject-dependent data. VGG with magnitude spectrograms thus produced promising results for building a personalized epileptic seizure detector. However, there was not enough data for VGG and DenseNet 1D to build a subject-independent classifier.
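    The first two stages of the pipeline above (filtering and normalization, segmentation into non-overlapping epochs, then conversion to a magnitude spectrogram for a CNN such as VGG) can be sketched as follows. The band-pass cut-offs, epoch length, and spectrogram window sizes here are illustrative assumptions, not the thesis's exact settings; only the CHB-MIT sampling rate of 256 Hz is standard.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, spectrogram

    fs = 256          # CHB-MIT sampling rate (Hz)
    epoch_s = 4       # epoch length in seconds (assumed value)

    def preprocess(x, fs, lo=0.5, hi=40.0):
        """Band-pass filter and z-score normalize one EEG channel."""
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        y = filtfilt(b, a, x)
        return (y - y.mean()) / y.std()

    def to_epochs(x, fs, epoch_s):
        """Split a signal into non-overlapping epochs, dropping the remainder."""
        n = fs * epoch_s
        return x[: len(x) // n * n].reshape(-1, n)

    rng = np.random.default_rng(1)
    eeg = rng.standard_normal(fs * 60)               # 60 s of synthetic EEG
    epochs = to_epochs(preprocess(eeg, fs), fs, epoch_s)

    # magnitude spectrogram of one epoch -> 2-D "image" input for the CNN
    f, t, S = spectrogram(epochs[0], fs=fs, nperseg=64, noverlap=32)
    print(epochs.shape, S.shape)
    ```

    Each epoch then yields one spectrogram image per channel, which the thesis stacks into the input tensor of the classifier.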

    Comparative study of extracellular recording methods for analysis of afferent sensory information: Empirical modeling, data analysis and interpretation

    Background: Physiological studies of sensory systems often require acquiring and processing data from their multiple components to evaluate how neural information changes in relation to changes in the environment. In this work, a comparative study of the methodological aspects of two electrophysiological approaches is described. New method: Extracellular recordings from deep vibrissal nerves were obtained using a customized Utah microelectrode array during passive mechanical stimulation of the rat's whiskers. These recordings were compared with those obtained with bipolar electrodes. We also propose a simplified empirical model of the electrophysiological activity of a bundle of myelinated nerve fibers. Results: The peripheral activity of the vibrissal system was characterized through the temporal and spectral features obtained with both recording methods. The empirical model not only allows anatomical structures to be correlated with functional features, but also predicts changes in CAP morphology when the arrangement and geometry of the electrodes change. Comparison with existing method(s): This study compares two extracellular recording methods on the basis of analysis techniques, empirical modeling, and data processing of vibrissal sensory information. Conclusions: This comparative study reveals a close relationship between the electrophysiological techniques and the processing methods needed to extract sensory information, a relationship that results from maximizing the information extracted from recordings of sensory activity.
    Authors: Fernando Daniel Farfan, Alvaro Gabriel Pizá, Ana Lia Albarracin, Jorge Humberto Soletta, and Facundo Adrián Lucianna (CONICET; Instituto Superior de Investigaciones Biológicas and Laboratorio de Medios e Interfases, Departamento de Bioingeniería, Universidad Nacional de Tucumán, Argentina); Cristina Soto Sanchez and Esteve Fernandez (Universidad de Miguel Hernández and Consorcio Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina, Spain).

    Speech Processes for Brain-Computer Interfaces

    Speech interfaces have become widely used and are integrated in many applications and devices. However, speech interfaces require the user to produce intelligible speech, which might be hindered by loud environments, concern about bothering bystanders, or a general inability to produce speech due to disabilities. Decoding a user's imagined speech instead of actual speech would solve this problem. Such a Brain-Computer Interface (BCI) based on imagined speech would enable fast and natural communication without the need to actually speak out loud, and could provide a voice to otherwise mute people. This dissertation investigates BCIs based on speech processes using functional Near Infrared Spectroscopy (fNIRS) and Electrocorticography (ECoG), two brain activity imaging modalities at opposing ends of the invasiveness scale. Brain activity data have low signal-to-noise ratio and complex spatio-temporal and spectral coherence. To analyze these data, this dissertation combines techniques from machine learning, neuroscience, and Automatic Speech Recognition to facilitate robust classification of detailed speech processes while simultaneously illustrating the underlying neural processes. fNIRS is an imaging modality based on cerebral blood flow. It requires only affordable hardware and can be set up within minutes in a day-to-day environment, making it ideally suited for convenient user interfaces. However, the hemodynamic processes measured by fNIRS are slow in nature, so the technology offers poor temporal resolution. We investigate speech in fNIRS and demonstrate classification of speech processes for BCIs based on fNIRS. ECoG provides ideal signal properties by invasively measuring electrical potentials artifact-free directly on the brain surface.
High spatial resolution and temporal resolution down to millisecond sampling provide localized information with timing accurate enough to capture the fast processes underlying speech production. This dissertation presents the Brain-to-Text system, which harnesses automatic speech recognition technology to decode a textual representation of continuous speech from ECoG. This could allow users to compose messages or issue commands through a BCI. While decoding a textual representation is unparalleled for device control and typing, direct communication would be even more natural if the full expressive power of speech, including emphasis and prosody, could be provided. For this purpose, a second system is presented, which directly synthesizes neural signals into audible speech and could enable conversation with friends and family through a BCI. So far, both the Brain-to-Text and the synthesis system operate on audibly produced speech. To bridge the gap to the final frontier of neural prostheses based on imagined speech, we investigate the differences between audibly produced and imagined speech and present first results towards BCIs based on imagined speech processes. This dissertation demonstrates the use of speech processes as a BCI paradigm for the first time. Speech processes offer a fast and natural interaction paradigm which will help patients and healthy users alike to communicate efficiently with computers and with friends and family through BCIs.
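    The ASR-style front end implied above turns continuous ECoG into a sequence of per-frame feature vectors, typically windowed log power in the high-gamma band, which the decoder then maps to text. A minimal sketch follows; the band limits (70-170 Hz), window, and step sizes are assumptions chosen for illustration, not the dissertation's exact parameters.

    ```python
    import numpy as np

    def high_gamma_features(ecog, fs, win_s=0.05, step_s=0.01):
        """Windowed log-power features, one vector per frame (channels x 1),
        mimicking an ASR-style frame-based front end for ECoG."""
        win, step = int(win_s * fs), int(step_s * fs)
        n_frames = 1 + (ecog.shape[1] - win) // step
        feats = np.empty((n_frames, ecog.shape[0]))
        freqs = np.fft.rfftfreq(win, 1 / fs)
        band = (freqs >= 70) & (freqs <= 170)       # assumed high-gamma band
        for i in range(n_frames):
            seg = ecog[:, i * step : i * step + win]
            spec = np.abs(np.fft.rfft(seg, axis=1)) ** 2
            feats[i] = np.log(spec[:, band].sum(axis=1) + 1e-12)
        return feats

    fs = 1000
    ecog = np.random.default_rng(2).standard_normal((16, fs))  # 16 ch, 1 s
    feats = high_gamma_features(ecog, fs)
    print(feats.shape)  # frames x channels
    ```

    Each row of `feats` plays the role an acoustic feature frame plays in conventional speech recognition, which is what lets ASR machinery be reused on neural data.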

    Period Concatenation Underlies Interactions between Gamma and Beta Rhythms in Neocortex

    The neocortex generates rhythmic electrical activity over a frequency range covering many decades. Specific cognitive and motor states are associated with oscillations in discrete frequency bands within this range, but it is not known whether interactions and transitions between distinct frequencies are of functional importance. When coexpressed rhythms have frequencies that differ by a factor of two or more, interactions can be seen in terms of phase synchronization. Larger frequency differences can result in interactions in the form of nesting of faster frequencies within slower ones by a process of amplitude modulation. It is not known how coexpressed rhythms whose frequencies differ by less than a factor of two may interact. Here we show that two frequencies (gamma, ~40 Hz, and beta2, ~25 Hz), coexpressed in superficial and deep cortical laminae with low temporal interaction, can combine to generate a third frequency (beta1, ~15 Hz) showing strong temporal interaction. The process occurs via period concatenation, with the basic rhythm-generating microcircuits underlying the gamma and beta2 rhythms forming the building blocks of the beta1 rhythm by a process of addition. The mean ratio of adjacent frequency components was a constant, approximately the golden mean, which served both to minimize temporal interactions and to permit multiple transitions between frequencies. The resulting temporal landscape may provide a framework for multiplexing: parallel information processing on multiple temporal scales.
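    The arithmetic of period concatenation is worth making explicit: the gamma and beta2 periods add, and the resulting beta1 frequency sits in a near-golden-mean ratio with its neighbors, consistent with the abstract's claim.

    ```python
    # Period concatenation: the gamma (40 Hz) and beta2 (25 Hz) periods
    # add to give the beta1 period.
    gamma_hz, beta2_hz = 40.0, 25.0
    gamma_T, beta2_T = 1 / gamma_hz, 1 / beta2_hz   # 25 ms and 40 ms
    beta1_T = gamma_T + beta2_T                     # 65 ms
    beta1_hz = 1 / beta1_T
    print(round(beta1_hz, 2))                       # 15.38 -> the observed ~15 Hz beta1

    # Adjacent frequency ratios approximate the golden mean (~1.618)
    phi = (1 + 5 ** 0.5) / 2
    print(round(gamma_hz / beta2_hz, 3),            # 1.6
          round(beta2_hz / beta1_hz, 3),            # 1.625
          round(phi, 3))                            # 1.618
    ```

    The two adjacent ratios (1.600 and 1.625) bracket the golden mean, which is what the paper identifies as the constant mean ratio between neighboring frequency components.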

    Signal Processing and Machine Learning Techniques Towards Various Real-World Applications

    Machine learning (ML) has played an important role in several modern technological innovations and has become an important tool for researchers in various fields of interest. Beyond engineering, ML techniques have spread across many areas of study, such as healthcare, medicine, diagnostics, social science, finance, and economics. These techniques require data to train the algorithms, model a complex system, and make predictions based on that model. Thanks to the development of sophisticated sensors, it has become easier to collect the large volumes of data used to form and test hypotheses with ML. The promising results obtained using ML have opened up new research opportunities across various fields, and this dissertation is a manifestation of that. Several unique studies are presented here, from which valuable inferences have been drawn about real-world complex systems. Each study has its own motivation and relevance to the real world, and each explores an ensemble of signal processing (SP) and ML techniques. This dissertation provides the detailed systematic approach and discusses the results achieved in each study. The inferences drawn play a vital role in areas of science and technology and are worth further investigation. The dissertation also provides a set of useful SP and ML tools for researchers in various fields of interest. Doctoral Dissertation, Electrical Engineering.

    Electroencephalography brain computer interface using an asynchronous protocol

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science. October 31, 2016. Brain Computer Interface (BCI) technology is a promising new channel for communication between humans and computers, and consequently other humans. This technology has the potential to form the basis for a paradigm shift in communication for people with disabilities or neuro-degenerative ailments. The objective of this work is to create an asynchronous BCI based on a commercial-grade electroencephalography (EEG) sensor. The BCI is intended to allow a user of possibly low income means to issue control signals to a computer by using modulated cortical activation patterns as a control signal. The user achieves this modulation by performing a mental task, such as imagining waving the left arm, until the computer performs the intended action. In our work, we use the Emotiv EPOC headset to perform the EEG measurements. We validate our models by assessing their performance when the experimental data is collected using clinical-grade EEG technology, making use of a publicly available dataset in the validation phase. We apply signal processing concepts to extract the power spectrum of each electrode from the EEG time-series data, in particular the fast Fourier transform (FFT). Specific bands in the power spectra are used to construct a vector that represents the abstract state the brain is in at that particular moment; the selected bands are motivated by insights from neuroscience. The state vector is used in conjunction with a model that performs classification. The purpose of the model is to associate the input data with an abstract classification result, which can then be used to select the appropriate set of instructions to be executed by the computer. In our work, we use probabilistic graphical models to perform this association.
The performance of two probabilistic graphical models is evaluated in this work. As a preliminary step, we perform classification on pre-segmented data and assess the performance of the hidden conditional random fields (HCRF) model. The pre-segmented data has a trial structure such that each data file contains the power spectra measurements associated with only one mental task. The objective of the assessment is to determine how well the HCRF models the spatio-spectral and temporal relationships in the EEG data when mental tasks are performed in the aforementioned manner; in other words, the HCRF is to model the internal dynamics of the data corresponding to the mental task. The performance of the HCRF is assessed over three and four classes. We find that the HCRF can model the internal structure of the data corresponding to different mental tasks. As the final step, we perform classification on continuous, unsegmented data and assess the performance of the latent dynamic conditional random fields (LDCRF). The LDCRF performs sequence segmentation and labeling at each time-step, allowing the program to determine which action should be taken at that moment. This sequence segmentation and labeling is the primary capability required to facilitate an asynchronous BCI protocol. The continuous data has a trial structure such that each data file contains the power spectra measurements associated with three different mental tasks, randomly selected at 15-second intervals. The objective of the assessment is to determine how well the LDCRF models the spatio-spectral and temporal relationships in the EEG data, both within each mental task and in the transitions between mental tasks. The performance of the LDCRF is assessed over three classes for both the publicly available data and the data we obtained using the Emotiv EPOC headset.
We find that the LDCRF produces a true positive classification rate of 82.31%, averaged over three subjects, on the validation portion of the publicly available data. On the data collected using the Emotiv EPOC, the LDCRF produces a true positive classification rate of 42.55%, averaged over two subjects. In both assessments, a random classification strategy would produce a true positive classification rate of 33.34%, so our classification strategy provides above-random performance on both groups of datasets. We conclude that our results indicate that low-cost EEG-based BCI technology holds potential for future development. However, as discussed in the final chapter, further work on both the software and the low-cost hardware is required to improve the performance of the technology in the low-cost context.
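    The state-vector construction described above (FFT power spectrum per electrode, summed into neuroscience-motivated bands, concatenated into one feature vector per epoch) can be sketched as follows. The specific bands and the 14-channel EPOC layout are illustrative assumptions; the dissertation's exact band choices are not given here.

    ```python
    import numpy as np

    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands (Hz)

    def state_vector(epoch, fs):
        """FFT power spectrum per electrode, summed into frequency bands and
        concatenated into one brain-state feature vector for the classifier."""
        spec = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
        freqs = np.fft.rfftfreq(epoch.shape[1], 1 / fs)
        parts = [spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                 for lo, hi in BANDS.values()]
        return np.concatenate(parts)

    fs = 128
    epoch = np.random.default_rng(3).standard_normal((14, fs * 2))  # 14 EPOC channels, 2 s
    v = state_vector(epoch, fs)
    print(v.shape)  # (42,) = 14 channels x 3 bands
    ```

    A sequence of such vectors, one per time-step, is what the HCRF and LDCRF models consume.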

    Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning

    In recent years, deep learning algorithms have become increasingly prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyography-based gesture recognition, deep learning algorithms are seldom employed, as they require an unreasonable amount of effort from a single person to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets comprising 19 and 17 able-bodied participants, respectively (the first employed for pre-training), were recorded for this work using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and comprises 10 able-bodied participants. Three different deep learning networks employing three different input modalities (raw EMG, spectrograms, and the Continuous Wavelet Transform (CWT)) are tested on the second and third datasets. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance of all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy, which reduces the degradation in accuracy normally experienced over time. Comment: Source code and datasets available: https://github.com/Giguelingueling/MyoArmbandDatase
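    The transfer-learning idea above (pre-train on aggregated data from many users, then fine-tune briefly on the target user's few calibration examples) can be reduced to a toy warm-started classifier. This sketch uses a minimal softmax model in place of the paper's ConvNets; the feature dimensions and synthetic labels are assumptions for illustration only.

    ```python
    import numpy as np

    def train_softmax(X, y, n_classes, w=None, lr=0.1, epochs=200):
        """Minimal softmax classifier trained by gradient descent. Passing `w`
        warm-starts from weights pre-trained on other users' data, which is
        the essence of the transfer-learning scheme (here on a linear model)."""
        if w is None:
            w = np.zeros((X.shape[1], n_classes))
        onehot = np.eye(n_classes)[y]
        for _ in range(epochs):
            z = X @ w
            p = np.exp(z - z.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            w -= lr * X.T @ (p - onehot) / len(X)   # cross-entropy gradient step
        return w

    rng = np.random.default_rng(4)
    # "aggregated users": plentiful labeled EMG feature vectors
    Xa = rng.standard_normal((400, 8)); ya = (Xa[:, 0] > 0).astype(int)
    # target user: only a few calibration examples
    Xt = rng.standard_normal((20, 8));  yt = (Xt[:, 0] > 0).astype(int)

    w_pre = train_softmax(Xa, ya, 2)                            # pre-train
    w_ft = train_softmax(Xt, yt, 2, w=w_pre.copy(), epochs=20)  # short fine-tune
    acc = ((Xt @ w_ft).argmax(axis=1) == yt).mean()
    print(acc)
    ```

    The short fine-tune suffices because the warm-started weights already encode the shared structure, mirroring how the paper reduces each new user's recording burden.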