3 research outputs found

    A multichannel Deep Belief Network for the classification of EEG data

    © Springer International Publishing Switzerland 2015. Deep learning, and in particular the Deep Belief Network (DBN), has recently attracted increased attention from researchers as a new classification platform. It has been successfully applied to a number of classification problems, such as image classification, speech recognition, and natural language processing. However, deep learning has not been fully explored for electroencephalogram (EEG) classification. In this paper we propose three implementations of DBNs to classify multichannel EEG data, each based on a different channel fusion level. To evaluate the proposed method, we used EEG data recorded to study the modulatory effect of transcranial direct current stimulation. One of the proposed DBNs produced very promising results when compared to three well-established classifiers: Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and Extreme Learning Machine (ELM).
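    The input-level channel fusion the abstract describes can be sketched as follows. This is a minimal approximation, not the authors' implementation: it stacks scikit-learn's BernoulliRBM layers (greedy pre-training only, no supervised fine-tuning of the RBM weights) with a logistic classifier on top, and the EEG data here is random synthetic data standing in for real recordings. The layer sizes and learning rates are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    # Hypothetical synthetic EEG: 200 trials, 4 channels, 64 samples each,
    # scaled to [0, 1] as BernoulliRBM expects
    rng = np.random.default_rng(0)
    X = rng.random((200, 4, 64))
    y = rng.integers(0, 2, 200)

    # Input-level channel fusion: concatenate all channels into one vector per trial
    X_fused = X.reshape(len(X), -1)

    # DBN-style stack: two greedily trained RBMs, then a logistic classifier
    dbn = Pipeline([
        ("rbm1", BernoulliRBM(n_components=100, learning_rate=0.05, n_iter=10, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=50, learning_rate=0.05, n_iter=10, random_state=0)),
        ("clf", LogisticRegression(max_iter=500)),
    ])
    dbn.fit(X_fused, y)
    preds = dbn.predict(X_fused)
    ```

    The paper's other two variants would instead fuse channels later, e.g. training one RBM per channel and concatenating their hidden activations before the top layers.
    
    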

    Deep Learning Methods for EEG Signals Classification of Motor Imagery in BCI

    EEG signals are obtained from an EEG device that records the user's brain activity. EEG signals can be generated by the user performing motor movements or imagery tasks. Motor Imagery (MI) is the task of imagining motor movements that resemble the original motor movements. A Brain Computer Interface (BCI) bridges interactions between users and applications in performing tasks. The BCI Competition IV 2a dataset was used in this study. A fully automated correction method for EOG artifacts in EEG recordings was applied to remove artifacts, and Common Spatial Pattern (CSP) filtering was used to extract features that distinguish motor imagery tasks. In this study, a comparative study of two deep learning methods was conducted, namely Deep Belief Network (DBN) and Long Short-Term Memory (LSTM). Both methods were evaluated on the BCI Competition IV-2a dataset. The experimental results show an average accuracy of 50.35% for DBN and 49.65% for LSTM.
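    The CSP feature-extraction step mentioned above can be sketched for the two-class case. This is a generic textbook formulation, not the paper's exact code: spatial filters come from a generalized eigendecomposition of the two class covariance matrices, and the classifier input is the log-variance of the filtered signals. The function names, the number of filter pairs, and the synthetic trials are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_pairs=2):
        """Common Spatial Pattern filters for two-class MI data.
        trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
        def avg_cov(trials):
            return np.mean([np.cov(t) for t in trials], axis=0)
        Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
        # Generalized eigendecomposition: maximize variance for class A
        # relative to the total variance
        vals, vecs = eigh(Ca, Ca + Cb)
        # Keep the filters with the smallest and largest eigenvalues
        order = np.argsort(vals)
        picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
        return vecs[:, picks].T  # shape (2 * n_pairs, n_channels)

    def csp_features(trials, W):
        """Normalized log-variance features after spatial filtering."""
        feats = []
        for t in trials:
            z = W @ t                       # project onto CSP filters
            var = z.var(axis=1)
            feats.append(np.log(var / var.sum()))
        return np.array(feats)

    # Hypothetical synthetic trials: 30 per class, 8 channels, 128 samples
    rng = np.random.default_rng(1)
    trials_a = rng.standard_normal((30, 8, 128))
    trials_b = rng.standard_normal((30, 8, 128))
    W = csp_filters(trials_a, trials_b, n_pairs=2)
    feats = csp_features(trials_a, W)
    ```

    The resulting feature vectors (here 4 per trial) would then be fed to the DBN or LSTM classifier being compared.
    
    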

    Methods and Apparatus for Autonomous Robotic Control

    Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.