Innovative Measures of Verhulst Diagram for Emotion Recognition using Eye-Blinking Variability
Background: The human body continuously reveals the status of its organs through biomedical signals. Over time, the acquisition, monitoring, and analysis of biomedical signals have attracted many scientists for prediction, diagnosis, decision-making, and recognition. Recently, building an intelligent emotion recognition system based on signal processing has become a challenging problem. Human emotion classification has frequently been proposed using internal body states in response to affective provocations. However, external states, such as eye movements, have also been claimed to convey practical information about a participant's emotions. In this study, we propose an automatic emotion recognition scheme based on the analysis of single-modal eye-blinking variability.

Methods: Initially, the signal was transformed into a 2D space using the Verhulst diagram, a simple analysis based on the signal's dynamics. Next, innovative features were introduced to characterize the resulting maps. The extracted measures were then fed to a support vector machine (SVM) and a k-nearest neighbor (kNN) classifier. The former was evaluated with three kernel functions (RBF, linear, and polynomial); the latter was examined with different values of k. Moreover, the classification results were assessed in two feature-set partitioning modes: 5-fold and 10-fold cross-validation.

Results: The results showed a statistically significant difference between neutral/fear and neutral/sadness for all Verhulst indices. The average values of these characteristics were also higher for fear and sadness than for other emotions. Our results indicated a maximum rate of 100% for the fear/neutral classification.
Therefore, the suggested Verhulst-based approach proved highly capable of emotion classification and analysis using eye-blinking signals.

Conclusion: The novel biomarkers set the scene for designing a simple, accurate emotion recognition system. Additionally, this experiment could fortify the territory of ocular affective computing and open a new horizon for diagnosing or treating various emotion-deficiency disorders.
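The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' exact method: here the Verhulst diagram is approximated as a lag-one return map (x[n] vs. x[n+1]) of the eye-blink series, and two illustrative dispersion features (akin to Poincaré SD1/SD2, which are assumptions, not the paper's "innovative measures") are fed to an SVM with an RBF kernel under 5-fold cross-validation, matching the classifier setup the abstract names.

```python
# Hedged sketch of a Verhulst-diagram-style pipeline: lag-one return map,
# dispersion features, and an RBF-kernel SVM with 5-fold cross-validation.
# The features and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def verhulst_features(signal):
    """Map a 1-D series onto the (x[n], x[n+1]) plane and summarize it."""
    x, y = signal[:-1], signal[1:]
    sd1 = np.std((y - x) / np.sqrt(2))   # spread across the identity line
    sd2 = np.std((y + x) / np.sqrt(2))   # spread along the identity line
    return np.array([sd1, sd2, sd1 / sd2])

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-trial eye-blink variability series (two classes).
X = np.array([verhulst_features(rng.normal(scale=s, size=200))
              for s in [1.0] * 30 + [2.5] * 30])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf")                    # RBF, one of the three kernels tested
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold CV, as in the study
print(scores.mean())
```

On this easily separable toy data the cross-validated accuracy is high; on real eye-blink recordings the feature design would of course matter far more.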
Data-driven multivariate and multiscale methods for brain computer interface
This thesis focuses on the development of data-driven multivariate and multiscale methods
for brain computer interface (BCI) systems. The electroencephalogram (EEG), the
most convenient means to measure neurophysiological activity due to its noninvasive nature,
is mainly considered. The nonlinearity and nonstationarity inherent in EEG and its
multichannel recording nature require a new set of data-driven multivariate techniques to
estimate features more accurately for enhanced BCI operation. A long-term goal
is also to enable an alternative EEG recording strategy for long-term, portable
monitoring.
Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully
data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary
EEG signal into a set of components which are highly localised in time and frequency. It
is shown that the complex and multivariate extensions of EMD, which can exploit common
oscillatory modes within multivariate (multichannel) data, can be used to accurately
estimate and compare the amplitude and phase information among multiple sources, a
key for the feature extraction of BCI system. A complex extension of local mean decomposition
is also introduced and its operation is illustrated on two-channel neuronal
spike streams. Common spatial pattern (CSP), a standard feature extraction technique
for BCI application, is also extended to the complex domain using augmented complex
statistics. Depending on the circularity/noncircularity of a complex signal, one of the
complex CSP algorithms can be chosen to produce the best classification performance
between two different EEG classes.
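The real-valued CSP baseline that the thesis extends to the complex domain can be sketched as below. This is a hedged illustration under standard assumptions (not the thesis's complex-valued variant): spatial filters are obtained from a generalized eigendecomposition of the two class-covariance matrices, so that a filtered signal has maximal variance for one class and minimal for the other.

```python
# Hedged sketch of standard (real-valued) CSP: spatial filters from a
# generalized eigendecomposition of the per-class covariance matrices.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; the extreme eigenvalues give filters
    # that maximize variance for one class while minimizing it for the other.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters // 2], order[-(n_filters - n_filters // 2):]]
    return vecs[:, picks].T

rng = np.random.default_rng(1)
# Toy two-class "EEG": class B has extra variance on channel 0.
A = rng.normal(size=(20, 4, 256))
B = rng.normal(size=(20, 4, 256))
B[:, 0, :] *= 3.0
W = csp_filters(A, B)
print(W.shape)  # (2, 4): two spatial filters over four channels
```

Projecting trials through these filters and taking log-variances is the usual next step for BCI feature extraction; the complex CSP variants mentioned above replace the covariances with augmented complex statistics.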
Using these complex and multivariate algorithms, two cognitive brain studies are
investigated for a more natural and intuitive design of advanced BCI systems. Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's
attention to a sound source among a mixture of sound stimuli, aimed at improving
the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments
elicited by taste and taste recall are examined to determine the pleasure and displeasure
of a food for the implementation of affective computing. The separation between two
emotional responses is examined using real and complex-valued common spatial pattern
methods.
Finally, we introduce a novel approach to brain monitoring based on EEG recordings
from within the ear canal, embedded in a custom-made hearing-aid earplug. The new
platform promises both short- and long-term continuous use for standard
brain monitoring and interfacing applications.
The Affective Perceptual Model: Enhancing Communication Quality for Persons with PIMD
Methods for prolonged compassionate care for persons with Profound Intellectual and Multiple Disabilities (PIMD) require a rotating cast of important people in the subject's life in order to facilitate interaction with the external environment. As subjects continue to age, dependency on these people increases along with the complexity of their communications, while the quality of communication decreases. It is theorized that a machine learning (ML) system could replicate the attuning process and replace these people to promote independence. This thesis extends this idea to develop a conceptual and formal model and a system prototype.
The main contributions of this thesis are: (1) proposal of a conceptual and formal model for using machine learning to attune to unique communications from subjects with PIMD, (2) implementation of the system with both hardware and software components, and (3) modeling affect recognition in individuals based on the sensors from the hardware implementation.
Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data
There are three challenges in emotion recognition. First, it is difficult
to recognize human emotional states from a single modality alone.
Second, it is expensive to manually annotate the emotional data. Third,
emotional data often suffers from missing modalities due to unforeseeable
sensor malfunction or configuration issues. In this paper, we address all these
problems under a novel multi-view deep generative framework. Specifically, we
propose to model the statistical relationships of multi-modality emotional data
using multiple modality-specific generative networks with a shared latent
space. By imposing a Gaussian mixture assumption on the posterior approximation
of the shared latent variables, our framework can learn the joint deep
representation from multiple modalities and evaluate the importance of each
modality simultaneously. To solve the labeled-data-scarcity problem, we extend
our multi-view model to semi-supervised learning scenario by casting the
semi-supervised classification problem as a specialized missing data imputation
task. To address the missing-modality problem, we further extend our
semi-supervised multi-view model to deal with incomplete data, where a missing
view is treated as a latent variable and integrated out during inference. This
way, the proposed overall framework can utilize all available (both labeled and
unlabeled, as well as both complete and incomplete) data to improve its
generalization ability. The experiments conducted on two real multi-modal
emotion datasets demonstrated the superiority of our framework.Comment: arXiv admin note: text overlap with arXiv:1704.07548, 2018 ACM
Multimedia Conference (MM'18
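The shared-latent-space idea with missing views can be illustrated with a small numerical sketch. This is a hedged stand-in for the paper's deep generative model, not its actual inference: each modality contributes a Gaussian "expert" posterior over a shared latent, fused by precision weighting (a product-of-experts style combination, which is an assumption here); a missing view simply contributes no expert, crudely mimicking marginalizing it out. All numbers and names (`eeg_post`, `eye_post`) are invented for illustration.

```python
# Hedged sketch: modality-specific Gaussian posteriors over a shared 1-D
# latent, fused by precision weighting; a missing view is just dropped from
# the fusion. Toy stand-in for the paper's multi-view generative inference.
import numpy as np

def fuse_posteriors(experts):
    """experts: list of (mu, var) Gaussian posteriors; None marks a missing view."""
    experts = [e for e in experts if e is not None]   # missing view: no expert
    precisions = [1.0 / var for _, var in experts]
    var = 1.0 / sum(precisions)
    mu = var * sum(p * m for (m, _), p in zip(experts, precisions))
    return mu, var

# Hypothetical per-modality posteriors over the shared latent.
eeg_post = (0.8, 0.5)
eye_post = (1.2, 1.0)

mu_full, var_full = fuse_posteriors([eeg_post, eye_post])
mu_miss, var_miss = fuse_posteriors([eeg_post, None])   # eye modality missing

print(mu_full, var_full)   # fused estimate, sharper than either expert alone
print(mu_miss, var_miss)   # falls back to the available modality
```

The fused variance is always smaller than any single expert's, which captures why complete multi-modal observations yield more confident latent estimates than incomplete ones.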