
    A physiological signal database of children with different special needs for stress recognition

    This study presents AKTIVES, a new dataset for evaluating methods for stress detection and game-reaction recognition from physiological signals. We collected data from 25 children (children with obstetric brachial plexus injury, dyslexia, or intellectual disabilities, and typically developing children) during game therapy. A wristband recorded physiological data (blood volume pulse (BVP), electrodermal activity (EDA), and skin temperature (ST)), and the children's facial expressions were recorded on video. Three experts watched the videos and labeled the physiological data as "Stress/No Stress" and "Reaction/No Reaction". The technical validation confirmed high-quality signals and showed consistency between the experts. Funding: Scientific and Technological Research Council of Turkey, Technology and Innovation Funding Programmes Directorate.
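
    The abstract describes window-level expert labels over wristband BVP, EDA, and ST signals. The sketch below shows one plausible way to turn such signals into window features and train a stress classifier; the sampling rates, window length, feature set, and synthetic data are illustrative assumptions, not the AKTIVES release format.

        # Minimal sketch of window-level stress classification from wristband signals.
        # Sampling rates, window length, and data are assumptions for illustration only.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def window_features(signal, fs, win_s=10):
            """Split a 1-D signal into non-overlapping windows and compute simple statistics."""
            win = int(fs * win_s)
            n = len(signal) // win
            chunks = signal[: n * win].reshape(n, win)
            return np.column_stack([chunks.mean(1), chunks.std(1), chunks.min(1), chunks.max(1)])

        # Synthetic stand-ins for BVP (64 Hz), EDA (4 Hz), and skin temperature (4 Hz).
        rng = np.random.default_rng(0)
        duration_s = 600
        bvp = rng.normal(size=duration_s * 64)
        eda = rng.normal(size=duration_s * 4)
        st = rng.normal(size=duration_s * 4)

        X = np.hstack([window_features(bvp, 64), window_features(eda, 4), window_features(st, 4)])
        y = rng.integers(0, 2, size=len(X))   # placeholder for expert "Stress/No Stress" labels

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())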

    Improving Electroencephalography-Based Imagined Speech Recognition with a Simultaneous Video Data Stream

    Electroencephalography (EEG) devices offer a non-invasive mechanism for imagined speech recognition, the process of estimating words or commands that a person expresses only in thought. However, existing methods achieve only limited predictive accuracy with very small vocabularies and are therefore not yet sufficient to enable fluid communication between humans and machines. This project proposes a new method for improving a classifier's ability to recognize imagined speech by collecting and analyzing a large dataset of simultaneous EEG and video data streams. The results suggest that complementing high-dimensional EEG data with similarly high-dimensional video data enhances a classifier's ability to extract features from an EEG stream and facilitates imagined speech recognition.
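
    The abstract does not specify how the EEG and video streams are combined. The sketch below shows a generic early-fusion baseline that simply concatenates per-trial EEG and video feature vectors; the feature dimensions, vocabulary size, and fusion strategy are assumptions for illustration, not the paper's method.

        # Minimal early-fusion sketch: concatenate aligned per-trial EEG and video features.
        # Feature sizes, vocabulary size, and data are illustrative assumptions.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_trials, n_words = 400, 5                     # small imagined-speech vocabulary
        eeg_feats = rng.normal(size=(n_trials, 128))   # e.g. band-power features per channel
        video_feats = rng.normal(size=(n_trials, 64))  # e.g. facial/lip embedding per trial
        labels = rng.integers(0, n_words, size=n_trials)

        # Early fusion: stack both modalities into one feature vector per trial.
        X = np.hstack([eeg_feats, video_feats])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))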

    Classification of EEG signals for facial expression and motor execution with deep learning

    Machine learning algorithms are now widely used in the field of electroencephalography (EEG) brain-computer interfaces (BCI). In this work, the preprocessing stage for the EEG signals applies principal component analysis (PCA) to extract the important features and reduce data redundancy. A model for classifying EEG time-series signals for facial expressions and several motor-execution processes was designed, using a deep learning classifier with three hidden layers. Data from four subjects were collected with a 14-channel Emotiv EPOC+ device, yielding an EEG dataset covering ten action classes of facial expressions and motor-execution movements. Classification accuracies in the range of 91.25-95.75% were obtained, depending on the number of samples per class, the total number of EEG dataset samples, and the type of activation function used in the hidden- and output-layer neurons. The EEG time series was analysed as raw signal values, rather than as images or histograms, and classified with deep learning to obtain satisfactory accuracy.
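
    The described pipeline is PCA-based feature reduction followed by a network with three hidden layers. The sketch below mirrors that structure on synthetic 14-channel data; the number of PCA components, layer sizes, activation function, and data are illustrative assumptions, not the paper's exact settings.

        # Minimal sketch: PCA feature reduction followed by a three-hidden-layer network.
        # Component count, layer sizes, and synthetic data are assumptions for illustration.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_samples, n_channels, win_len, n_classes = 1000, 14, 128, 10   # 14-channel EEG, ten action classes
        X = rng.normal(size=(n_samples, n_channels * win_len))          # flattened EEG windows
        y = rng.integers(0, n_classes, size=n_samples)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        model = make_pipeline(
            PCA(n_components=50),                                       # reduce redundancy before the network
            MLPClassifier(hidden_layer_sizes=(128, 64, 32),             # three hidden layers
                          activation="relu", max_iter=300, random_state=0),
        )
        model.fit(X_tr, y_tr)
        print("test accuracy:", model.score(X_te, y_te))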