
    A multimodal dataset for authoring and editing multimedia content: the MAMEM project

    We present a dataset that combines multimodal biosignals and eye-tracking information gathered under a human-computer interaction framework. The dataset was developed within the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that foster the reintegration of people with severe motor impairments into society.
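    A minimal sketch of how one subject's multimodal recording from such a dataset might be organized in code is shown below; the field names, array shapes, and sampling details are hypothetical assumptions for illustration and are not taken from the MAMEM release itself.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SubjectRecording:
        subject_id: str
        motor_impaired: bool
        eeg: np.ndarray          # shape (n_eeg_channels, n_samples)
        gaze: np.ndarray         # shape (n_samples, 2), x/y screen coordinates
        gsr: np.ndarray          # shape (n_samples,), galvanic skin response
        heart_rate: np.ndarray   # shape (n_samples,), beats per minute

    # Synthetic stand-in for one of the 34 participants.
    rng = np.random.default_rng(0)
    rec = SubjectRecording(
        subject_id="S01",
        motor_impaired=False,
        eeg=rng.standard_normal((14, 1000)),
        gaze=rng.uniform(0.0, 1.0, size=(1000, 2)),
        gsr=rng.uniform(0.1, 1.0, size=1000),
        heart_rate=rng.uniform(60.0, 90.0, size=1000),
    )
    print(rec.subject_id, rec.eeg.shape)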

    Real-time subject-dependent EEG-based emotion recognition algorithm

    In this paper, we proposed a real-time subject-dependent EEG-based emotion recognition algorithm and tested it on databases from our own experiments and on the benchmark database DEAP. The algorithm consists of two parts: feature extraction and data classification with a Support Vector Machine (SVM). Using a Fractal Dimension feature in combination with statistical and Higher Order Crossings (HOC) features gave the best accuracy with adequate computational time. The features were calculated from EEG using a sliding window. The proposed algorithm can recognize up to 8 emotions (happy, surprised, satisfied, protected, angry, frightened, unconcerned, and sad) using 4 electrodes in real time. Two experiments with audio and visual stimuli were implemented, and the Emotiv EPOC device was used to collect EEG data.
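    The sketch below illustrates the general shape of such a pipeline: sliding-window feature extraction (here a Higuchi fractal-dimension estimate plus simple statistical features) followed by an SVM classifier. The window length, step size, feature set, and channel count are illustrative assumptions, not the authors' exact configuration, and the HOC features are omitted for brevity.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def higuchi_fd(x, k_max=8):
        """Estimate the Higuchi fractal dimension of a 1-D signal."""
        n = len(x)
        lengths = []
        for k in range(1, k_max + 1):
            lk = []
            for m in range(k):
                idx = np.arange(m, n, k)
                if len(idx) < 2:
                    continue
                diff = np.abs(np.diff(x[idx])).sum()
                lk.append(diff * (n - 1) / ((len(idx) - 1) * k * k))
            lengths.append(np.mean(lk))
        # Slope of log L(k) vs. log(1/k) estimates the fractal dimension.
        ks = np.arange(1, k_max + 1)
        slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
        return slope

    def window_features(window):
        """Per-channel features for one window: fractal dimension, mean, std."""
        feats = []
        for ch in window:                       # window: (n_channels, n_samples)
            feats.extend([higuchi_fd(ch), ch.mean(), ch.std()])
        return feats

    def extract_features(eeg, win=128, step=64):
        """Slide a window over the recording and stack the feature vectors."""
        return np.array([window_features(eeg[:, s:s + win])
                         for s in range(0, eeg.shape[1] - win + 1, step)])

    # Toy example: 4 channels of synthetic EEG and random labels for 8 emotions.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((4, 1280))
    X = extract_features(eeg)
    y = rng.integers(0, 8, size=len(X))
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
    print(clf.predict(X[:5]))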