
    A quantitative taxonomy of human hand grasps

    Background: Proper modeling of human grasping and hand movements is fundamental for robotics, prosthetics, physiology and rehabilitation. The taxonomies of hand grasps proposed in the scientific literature so far are based on qualitative analyses of the movements and are thus usually not quantitatively justified. Methods: This paper presents, to the best of our knowledge, the first quantitative taxonomy of hand grasps based on biomedical data measurements. The taxonomy is based on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees is computed for several signal features. Afterwards, the trees are combined, first into modality-specific (i.e. muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movements, one muscular and the other kinematic. Results: The general taxonomy merges the kinematic and muscular descriptions into a comprehensive hierarchical structure. The obtained results clarify what has been proposed in the literature so far and partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five categories defined by overall grasp shape, finger positioning and muscular activation. Part of the results appears qualitatively in accordance with previous descriptions of kinematic hand grasping synergies. Conclusions: The taxonomy of hand grasps proposed in this paper clarifies with quantitative measurements what has previously been proposed in the field on a qualitative basis, and thus has a potential impact on several scientific fields.
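
    As an illustration of the per-subject step described above, the sketch below builds one hierarchical tree of grasps by average-linkage clustering of a per-grasp feature matrix. It is a minimal sketch, not the authors' pipeline: the 20x12 feature matrix, the Euclidean metric and the average-linkage criterion are assumptions, and the feature values are synthetic placeholders.

```python
# Minimal sketch (not the authors' pipeline): one subject's hierarchical tree of
# grasps obtained by agglomerative clustering of a per-grasp feature matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_grasps, n_features = 20, 12                 # e.g. 20 grasps, 12 placeholder EMG/kinematic features
grasp_features = rng.normal(size=(n_grasps, n_features))  # synthetic feature matrix

# Pairwise distances between grasps, then average-linkage hierarchical clustering.
distances = pdist(grasp_features, metric="euclidean")
tree = linkage(distances, method="average")

# The linkage matrix encodes a hierarchical taxonomy of the 20 grasps; repeating this
# per subject and per modality, then merging the trees, would follow in the paper.
print(dendrogram(tree, no_plot=True)["ivl"])  # leaf order of the grasp tree
```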

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep Learning (DL) has recently been employed to build smart systems that perform remarkably well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, the considerable improvement in computing hardware and the increasing need for big data analytics have boosted DL research. In recent years, physiological signal processing has strongly benefited from deep learning, and in particular the number of studies concerning the processing of electromyographic (EMG) signals with DL methods has grown exponentially. This phenomenon is mostly explained by the current limitations of myoelectric-controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro. This growing trend inspired us to seek out and review recent papers focusing on processing EMG signals with DL methods. Using the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after full-text analysis. The bibliometric analysis revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification and Other Applications. The review process also confirmed the increasing publication trend: the number of papers published in 2018 is four times that of the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, supporting our hypothesis. Finally, it is worth noting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles employ a CNN.
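
    As a rough illustration of the dominant approach identified by the survey, the sketch below defines a small 1D CNN that classifies windows of multi-channel surface EMG into gesture classes. The channel count, window length, number of classes and layer sizes are illustrative assumptions, not values taken from any reviewed paper.

```python
# Minimal 1D-CNN sketch for window-based EMG hand-gesture classification.
# Shapes (10 channels, 200-sample windows, 12 classes) are assumptions for illustration.
import torch
import torch.nn as nn

class EmgCnn(nn.Module):
    def __init__(self, n_channels=10, n_classes=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = EmgCnn()
window = torch.randn(8, 10, 200)          # batch of 8 synthetic sEMG windows
print(model(window).shape)                # torch.Size([8, 12]) class scores
```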

    Sleep Stage Classification: A Deep Learning Approach

    Sleep occupies a significant part of human life, and the diagnosis of sleep-related disorders is of great importance. To record specific physical and electrical activities of the brain and body, a multi-parameter test called polysomnography (PSG) is normally used. The visual process of sleep stage classification is time-consuming, subjective and costly. To improve the accuracy and efficiency of sleep stage classification, automatic classification algorithms have been developed. In this research work, we focused on the pre-processing (filtering boundaries and de-noising algorithms) and classification steps of automatic sleep stage classification. The main motivation for this work was to develop a pre-processing and classification framework that cleans the input EEG signal without manipulating the original data, thus enhancing the learning stage of deep learning classifiers. For pre-processing EEG signals, a lossless adaptive artefact removal method was proposed. Unlike other works that used artificial noise, we evaluated the proposed method on real EEG data contaminated with EOG and EMG. The proposed adaptive algorithm led to a significant enhancement in the overall classification accuracy. In the classification area, we evaluated the performance of the most common sleep stage classifiers using a comprehensive set of features extracted from PSG signals. Considering the challenges and limitations of conventional methods, we proposed two deep learning-based methods for classification of sleep stages, based on a Stacked Sparse AutoEncoder (SSAE) and a Convolutional Neural Network (CNN). The proposed methods perform more efficiently by eliminating the need for conventional feature selection and feature extraction steps, respectively. Moreover, although our systems were trained with fewer samples than similar studies, they were able to achieve state-of-the-art accuracy and higher overall sensitivity.
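
    The sketch below illustrates a generic version of the pre-processing step described above: zero-phase band-pass filtering of a single-channel EEG trace followed by segmentation into non-overlapping epochs. The sampling rate, band edges and 30-second epoch length are common choices assumed for illustration, not the thesis' exact settings, and the adaptive artefact removal itself is not reproduced here.

```python
# Minimal sketch of generic EEG pre-processing: band-pass filtering plus epoching.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # assumed sampling rate (Hz)
eeg = np.random.randn(int(fs * 300))         # placeholder 5-minute single-channel EEG trace

# Zero-phase Butterworth band-pass (0.5-40 Hz) so the waveform is not shifted in time.
b, a = butter(4, [0.5 / (fs / 2), 40.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, eeg)

# Segment into non-overlapping 30 s epochs, the usual unit for sleep stage scoring.
epoch_len = int(30 * fs)
epochs = filtered[: len(filtered) // epoch_len * epoch_len].reshape(-1, epoch_len)
print(epochs.shape)                          # (10, 3000): ten 30 s epochs
```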

    Deep learning for automated sleep monitoring

    Wearable electroencephalography (EEG) is a technology that is revolutionising the longitudinal monitoring of neurological and mental disorders, improving the quality of life of patients and accelerating the relevant research. As sleep disorders and other conditions related to sleep quality affect a large part of the population, monitoring sleep at home over extended periods of time could have a significant impact on the quality of life of people who suffer from these conditions. Annotating the sleep architecture of patients, known as sleep stage scoring, is an expensive and time-consuming process that cannot scale to a large number of people. Using wearable EEG and automating sleep stage scoring is a potential solution to this problem. In this thesis, we propose and evaluate two deep learning algorithms for automated sleep stage scoring using a single channel of EEG. In our first method, we use time-frequency analysis to extract features that closely follow the guidelines human experts use, combined with an ensemble of stacked sparse autoencoders as our classification algorithm. In our second method, we propose a convolutional neural network (CNN) architecture for automatically learning filters that are specific to the problem of sleep stage scoring. We achieved state-of-the-art results (mean F1-score 84%; range 82-86%) with our first method and comparably good results with the second (mean F1-score 81%; range 79-83%). Both methods effectively account for the skewed performance usually found in the literature due to sleep stage duration imbalance. We also propose a filter analysis and visualisation methodology for understanding the filters that CNNs learn. Our results indicate that our CNN was able to robustly learn filters that closely follow the sleep scoring guidelines.
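
    The sketch below illustrates the kind of time-frequency feature extraction the first method relies on: an STFT spectrogram of one 30-second epoch reduced to average power in the classical scoring bands. The sampling rate, window length and band definitions are assumptions for illustration, not the thesis' exact feature set.

```python
# Minimal sketch of time-frequency (band-power) features for one single-channel EEG epoch.
import numpy as np
from scipy.signal import spectrogram

fs = 100.0
epoch = np.random.randn(int(30 * fs))        # placeholder 30 s EEG epoch

f, t, sxx = spectrogram(epoch, fs=fs, nperseg=int(2 * fs))   # 2 s STFT windows

# Average power in the classical scoring bands (delta, theta, alpha, sigma, beta).
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 15), "beta": (15, 30)}
features = {name: sxx[(f >= lo) & (f < hi)].mean() for name, (lo, hi) in bands.items()}
print(features)                              # one small feature vector per epoch
```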

    EMG Signal Decomposition Using Motor Unit Potential Train Validity

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its component motor unit potential trains (MUPTs). The extracted MUPTs can aid in the diagnosis of neuromuscular disorders and the study of the neural control of movement, but only if they are valid trains. Before using decomposition results and the motor unit potential (MUP) shape and motor unit (MU) firing pattern information related to each active MU for either clinical or research purposes, the validity of the extracted MUPTs needs to be confirmed. The existing MUPT validation methods are either time-consuming or dependent on operator experience and skill. More importantly, they cannot be executed during automatic decomposition of EMG signals to help improve decomposition results. To overcome these issues, this thesis explores the possibility of developing automatic MUPT validation algorithms. Several methods based on a combination of feature extraction techniques, cluster validation methods, supervised classification algorithms, and multiple classifier fusion techniques were developed. The developed methods, in general, use either the MU firing pattern or the MUP-shape consistency of a MUPT, or both, to estimate its overall validity. The performance of the developed systems was evaluated using a variety of MUPTs obtained from the decomposition of several simulated and real intramuscular EMG signals. Based on the results achieved, the methods that use only shape or only firing pattern information had higher generalization error than the systems that use both types of information. For the classifiers that use the MU firing pattern information of a MUPT to determine its validity, the accuracy for invalid trains decreases as the number of missed-classification errors in the trains increases. Likewise, for the methods that use the MUP-shape information of a MUPT to determine its validity, the classification accuracy for invalid trains decreases as the within-train similarity of the invalid trains increases. Of the systems that use both shape and firing pattern information, those that separately estimate MU firing pattern validity and MUP-shape validity and then estimate the overall validity of a train by fusing these two indices with trainable fusion methods performed better than the single-classifier scheme that estimates MUPT validity with a single classifier, especially for the real data used. Overall, the multi-classifier constructed using trainable logistic regression to aggregate the base classifier outputs had the best performance, with overall accuracies of 99.4% and 98.8% for simulated and real data, respectively. The possibility of formulating an algorithm for automated editing of MUPTs contaminated with a high number of false-classification errors (FCEs) during decomposition was also investigated, and a robust method was developed for this purpose. Using a supervised classifier and the MU firing pattern information provided by each MUPT, the developed algorithm first determines whether a given train is contaminated by a high number of FCEs and needs to be edited. For contaminated MUPTs, the method uses both MU firing pattern and MUP shape information to detect MUPs that were erroneously assigned to the train. Evaluation based on simulated and real MU firing patterns shows that contaminated MUPTs can be detected with 84% and 81% accuracy for simulated and real data, respectively. For a given contaminated MUPT, the algorithm on average correctly classified around 92.1% of its MUPs. The effectiveness of using the developed MUPT validation systems and MUPT editing methods during EMG signal decomposition was investigated by integrating these algorithms into a certainty-based EMG signal decomposition algorithm. Overall, the decomposition accuracy for 32 simulated and 30 real EMG signals was improved by 7.5% (from 86.7% to 94.2%) and 3.4% (from 95.7% to 99.1%), respectively. A significant improvement was also achieved in correctly estimating the number of MUPTs represented in a set of detected MUPs. The simulated and real EMG signals used comprised 3–11 and 3–15 MUPTs, respectively.
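
    The sketch below illustrates the fusion idea that performed best: two base validity indices (one from MU firing-pattern features, one from MUP-shape consistency) aggregated by a trainable logistic-regression combiner. The validity scores and labels are synthetic placeholders; the thesis' actual base classifiers and features are not reproduced.

```python
# Minimal sketch of trainable fusion of two MUPT validity indices via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trains = 200
# Placeholder base-classifier outputs in [0, 1] for each candidate MUPT.
firing_validity = rng.uniform(size=n_trains)   # from MU firing-pattern features (synthetic)
shape_validity = rng.uniform(size=n_trains)    # from MUP-shape consistency (synthetic)
is_valid = ((firing_validity + shape_validity) / 2
            + rng.normal(0, 0.1, n_trains)) > 0.5   # synthetic ground-truth labels

# Trainable fusion: logistic regression over the two base validity indices.
X = np.column_stack([firing_validity, shape_validity])
fusion = LogisticRegression().fit(X, is_valid)
print(fusion.predict_proba(X[:5])[:, 1])       # fused validity estimates for 5 trains
```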

    Deep learning approach for epileptic seizure detection

    Abstract. Epilepsy is the most common brain disorder, affecting approximately fifty million people worldwide according to the World Health Organization. The diagnosis of epilepsy relies on manual inspection of EEG, which is error-prone and time-consuming. Automated epileptic seizure detection from EEG signals can reduce the diagnosis time and facilitate targeting of treatment for patients. Current detection approaches mainly rely on features designed manually by domain experts, which are too inflexible to detect the variety of complex patterns in large amounts of EEG data. Moreover, EEG is a non-stationary signal, seizure patterns vary across patients and recording sessions, and EEG data always contain numerous noise types that negatively affect the detection accuracy of epileptic seizures. To address these challenges, deep learning approaches are examined in this work. They were applied to a large publicly available dataset, the Children's Hospital of Boston-Massachusetts Institute of Technology dataset (CHB-MIT). The present study includes three experimental groups, distinguished by their pre-processing steps; each group contains 3–4 experiments that differ in their objectives. The time-series EEG data were first pre-processed with filters and normalization techniques, and the pre-processed signal was then segmented into a sequence of non-overlapping epochs. Second, the time-series data were transformed into different representations of the input signal: the raw time-series EEG signal, magnitude spectrograms, 1D-FFT, 2D-FFT, the 2D-FFT magnitude spectrum and the 2D-FFT phase spectrum were investigated and compared with each other. Third, the resulting time-domain or frequency-domain signals were used separately as input to a VGG or DenseNet 1D model. The best result was achieved with magnitude spectrograms used as input to the VGG model: accuracy of 0.98, sensitivity of 0.71 and specificity of 0.998 with subject-dependent data. VGG together with magnitude spectrograms thus produced promising results for building a personalized epileptic seizure detector. There was not enough data for VGG and DenseNet 1D to build a subject-independent classifier.
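
    The sketch below illustrates the best-performing configuration in spirit: a magnitude spectrogram of an EEG epoch fed to a small VGG-style 2D CNN for seizure vs. non-seizure classification. The input shape and layer sizes are illustrative assumptions, not the study's actual VGG architecture or settings.

```python
# Minimal sketch of a VGG-style 2D CNN over magnitude-spectrogram inputs (illustrative only).
import torch
import torch.nn as nn

class TinyVgg(nn.Module):
    def __init__(self, n_classes=2):           # seizure vs. non-seizure
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, 1, freq_bins, time_frames)
        return self.classifier(self.features(x).flatten(1))

spec = torch.randn(4, 1, 129, 60)               # batch of 4 synthetic magnitude spectrograms
print(TinyVgg()(spec).shape)                    # torch.Size([4, 2]) class scores
```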