
    Development and comparison of dataglove and sEMG signal-based algorithms for the improvement of a hand gestures recognition system.

    Hand gesture recognition is a topic widely discussed in the literature, where several techniques are analyzed in terms of both input signal types and algorithms. The main bottleneck of the field is the generalization ability of the classifier, which becomes harder to achieve as the number of gestures to classify increases. This project has two purposes: first, it aims to develop a reliable, highly generalizable classifier, evaluating the difference in performance between dataglove and sEMG signals; second, it discusses the difficulties and advantages of developing an sEMG signal-based hand gesture recognition system, with the objective of providing indications for its improvement. To design the algorithms, data from a publicly available dataset were considered; the data refer to 40 healthy (non-amputee) subjects, each performing 6 repetitions of the 17 gestures considered. Both conventional machine learning and deep learning approaches were used and their efficiency compared. The results showed better performance for the dataglove-based classifier, highlighting that signal's informative power, while the sEMG signal could not provide high generalization. Interestingly, the latter performs better when analyzed with classical machine learning approaches, whose feature selection step highlights both the most significant features and the most informative channels. This study confirmed the intrinsic difficulties of using the sEMG signal, but it provides hints for improving sEMG signal-based hand gesture recognition systems through reduced computational cost and optimized electrode placement.
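The classical-ML pipeline this abstract describes — time-domain feature extraction per sEMG channel, then feature selection feeding a conventional classifier — can be sketched as below. The signal windows, labels, feature set, and selector settings are illustrative assumptions, not the paper's actual dataset or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def emg_features(window):
    """Common time-domain sEMG features for one channel window."""
    mav = np.mean(np.abs(window))                # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))          # root mean square
    wl = np.sum(np.abs(np.diff(window)))         # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)   # zero crossings
    return [mav, rms, wl, zc]

# Synthetic stand-in: 120 windows, 8 electrode channels, 200 samples each.
n_windows, n_channels, n_samples = 120, 8, 200
windows = rng.standard_normal((n_windows, n_channels, n_samples))
labels = rng.integers(0, 17, size=n_windows)     # 17 gesture classes, as in the paper

# One feature vector per window: 8 channels x 4 features = 32 features.
X = np.array([[f for ch in w for f in emg_features(ch)] for w in windows])

# Feature selection is what lets the abstract single out the most
# significant features and most informative channels (electrode positions).
clf = make_pipeline(SelectKBest(f_classif, k=16), RandomForestClassifier())
clf.fit(X, labels)
```

Inspecting which of the 32 columns `SelectKBest` retains maps directly back to channels, which is the electrode-placement insight the abstract points to.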

    A preliminary study of micro-gestures: dataset collection and analysis with multi-modal dynamic networks

    Abstract. Micro-gestures (MG) are gestures that people perform spontaneously during communication. This thesis makes a preliminary exploration of micro-gestures. By recording sequences of spontaneous body gestures during games with a Kinect V2, an MG dataset is built, and the novel term 'micro-gesture' is proposed by analyzing the dataset's properties. Two neural network architectures are implemented for the micro-gesture segmentation and recognition task: a DBN-HMM model for skeleton data and a 3DCNN-HMM model for RGB-D data. We also explore a method for extracting the neutral states used in the HMM structure by detecting the activity level of the gesture sequences; the method is simple to derive and implement, and proved effective. The DBN-HMM and 3DCNN-HMM architectures are evaluated on the MG dataset and optimized for the properties of micro-gestures. Experimental results show that these two models achieve micro-gesture segmentation and recognition with satisfactory accuracy. This work also opens a new research path for gesture recognition, and we believe it can serve as a baseline for future research on micro-gestures.
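The neutral-state idea in this abstract — frames whose motion "activity level" falls below a threshold are treated as the neutral state separating micro-gestures — can be sketched on skeleton data as follows. The joint count, threshold value, and toy sequence are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def neutral_frames(skeleton, threshold=0.05):
    """skeleton: (frames, joints, 3) array of Kinect-style joint positions.
    Returns a boolean mask marking low-activity (neutral) frames."""
    velocity = np.diff(skeleton, axis=0)                      # frame-to-frame motion
    activity = np.linalg.norm(velocity, axis=2).mean(axis=1)  # mean joint speed
    # The first frame has no predecessor; treat it as neutral by convention.
    return np.concatenate([[True], activity < threshold])

# Toy sequence: 10 still frames (neutral), then 10 frames of steady motion.
still = np.zeros((10, 25, 3))                       # 25 joints, as on Kinect V2
moving = np.cumsum(np.full((10, 25, 3), 0.1), axis=0)
seq = np.concatenate([still, moving])
mask = neutral_frames(seq)
```

Runs of `False` in the mask delimit candidate gesture segments, which is the segmentation signal fed to the HMM stage.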

    Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals

    An electroencephalography (EEG) based Brain-Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals, while essential for the effective operation of BCI systems, is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that learns robust high-level feature representations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an autoencoder layer to eliminate artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our approach is further demonstrated with a practical BCI system for typing.
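The joint convolutional-recurrent design the abstract describes can be sketched in PyTorch: a 1-D convolution learns spatial filters across EEG channels to form dense embeddings, and an LSTM models their temporal dynamics. The layer sizes, channel count, and 5-class output here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ConvRecurrentEEG(nn.Module):
    def __init__(self, n_channels=64, n_classes=5, hidden=32):
        super().__init__()
        # Convolution across EEG channels yields a dense embedding per time step.
        self.conv = nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        h = torch.relu(self.conv(x))          # (batch, hidden, time)
        out, _ = self.rnn(h.transpose(1, 2))  # (batch, time, hidden)
        return self.head(out[:, -1])          # classify from the final step

model = ConvRecurrentEEG()
logits = model(torch.randn(8, 64, 128))       # 8 trials, 64 channels, 128 samples
```

A denoising autoencoder stage, as mentioned in the abstract, would sit before this network to suppress background activity; it is omitted here for brevity.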