214 research outputs found

    Modeling of Performance Creative Evaluation Driven by Multimodal Affective Data

    Get PDF
    Performance creative evaluation can be achieved through affective data, and the use of affective features to evaluate creative performances is a new research trend. This paper proposes a “Performance Creative—Multimodal Affective (PC-MulAff)” model based on multimodal affective features for performance creative evaluation. Multimedia data acquisition equipment is used to collect physiological data from the audience, including multimodal affective data such as facial expressions, heart rate and eye movements. Affective features are calculated from the multimodal data, combined with director annotations, and a “Performance Creative—Affective Acceptance (PC-Acc)” measure is defined on these features to evaluate the quality of the performance creative. The PC-MulAff model is verified on several performance data sets, and the experimental results show that it delivers high evaluation quality across different performance forms. In the creative evaluation of dance performances, the accuracy of the model is 7.44% and 13.95% higher than that of single-textual and single-video evaluation, respectively.
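
    As a rough illustration of how such an acceptance score might be assembled, the sketch below fuses per-modality affective features into a single weighted score emphasised on director-annotated scenes; the feature names, weights and the `pc_acc` helper are illustrative assumptions, not the paper's actual PC-Acc formulation.

```python
import numpy as np

# Hypothetical per-modality affective features, averaged over the audience.
# The names, values and weighting scheme are illustrative assumptions only.
features = {
    "facial_expression": np.array([0.62, 0.55, 0.71]),  # per-scene valence
    "heart_rate":        np.array([0.48, 0.66, 0.59]),  # normalised arousal
    "eye_movement":      np.array([0.70, 0.52, 0.64]),  # normalised attention
}

# Director annotations marking which scenes carry the creative intent (assumed binary).
director_annotation = np.array([1.0, 0.0, 1.0])

def pc_acc(features, annotation, weights=(0.4, 0.3, 0.3)):
    """Toy 'affective acceptance' score: weighted fusion of modality features,
    with director-annotated scenes weighted more heavily (factor is an assumption)."""
    fused = sum(w * f for w, f in zip(weights, features.values()))
    emphasis = 1.0 + annotation
    return float(np.average(fused, weights=emphasis))

print(f"PC-Acc = {pc_acc(features, director_annotation):.3f}")
```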

    Video Based Deep CNN Model for Depression Detection

    Get PDF
    Our face reflects our feelings towards anything and everything we see, smell, taste or feel through any of our senses, and hence multiple attempts have been made over the last few decades to understand facial expressions. Emotion detection has numerous applications, such as safe driving, health monitoring systems, marketing and advertising. We propose an Automatic Depression Detection (ADD) system based on Facial Expression Recognition (FER). We propose a model that optimises the FER system to recognise seven basic emotions (joy, sadness, fear, anger, surprise, disgust and neutral) and uses it to detect the depression level of the subject. The proposed model detects whether a person is in depression and, if so, to what extent. Our model is based on a Deep Convolutional Neural Network (DCNN).
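
    A minimal sketch of the kind of deep CNN such a FER front end could use, assuming 48x48 grayscale face crops (as in FER-2013) and seven emotion classes; the layer sizes are illustrative, not the authors' architecture. Mapping the per-frame emotion scores to a depression level would then be a separate aggregation step over the video.

```python
import torch
import torch.nn as nn

class FERCNN(nn.Module):
    """Small CNN for 7-class facial expression recognition (illustrative sizes)."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FERCNN()
logits = model(torch.randn(8, 1, 48, 48))  # batch of 8 grayscale face crops
print(logits.shape)                         # torch.Size([8, 7]): one score per emotion
```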

    Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection

    Get PDF
    The detection and monitoring of emotions are important in various applications, e.g. to enable naturalistic and personalised human-robot interaction. Emotion detection often requires modelling of various data inputs from multiple modalities, including physiological signals (e.g. EEG and GSR), environmental data (e.g. audio and weather), videos (e.g. for capturing facial expressions and gestures) and, more recently, motion and location data. Many traditional machine learning algorithms have been utilised to capture the diversity of multimodal data at the sensor and feature levels for human emotion classification. While the feature engineering processes often embedded in these algorithms are beneficial for emotion modelling, they inherit some critical limitations which may hinder the development of reliable and accurate models. In this work, we adopt a deep learning approach for emotion classification through an iterative process of adding and removing large numbers of sensor signals from different modalities. Our dataset was collected in a real-world study from smartphones and wearable devices. It merges the local interactions of three sensor modalities, on-body, environmental and location, into a global model that represents signal dynamics along with the temporal relationships within each modality. Our approach employs a series of learning algorithms, including a hybrid Convolutional Neural Network and Long Short-Term Memory Recurrent Neural Network (CNN-LSTM) applied to the raw sensor data, eliminating the need for manual feature extraction and engineering. The results show that deep learning approaches are effective for human emotion classification when a large number of sensor inputs is utilised (average accuracy 95% and F-measure 95%), and that the hybrid models outperform traditional fully connected deep neural networks (average accuracy 73% and F-measure 73%). Furthermore, the hybrid models outperform previously developed ensemble algorithms that utilise feature engineering to train the model (average accuracy 83% and F-measure 82%).
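
    A minimal sketch of the hybrid CNN-LSTM idea on raw multimodal sensor windows, assuming fixed-length windows with the channels of the on-body, environmental and location modalities stacked; the shapes, layer sizes and class count are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1-D CNN extracts local patterns per window; LSTM models temporal dynamics."""
    def __init__(self, n_channels: int = 12, n_classes: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        z = self.cnn(x)              # (batch, 128, time/4)
        z = z.transpose(1, 2)        # (batch, time/4, 128) for the LSTM
        _, (h, _) = self.lstm(z)     # h: (1, batch, 64), last hidden state
        return self.head(h[-1])      # (batch, n_classes) emotion logits

model = CNNLSTM()
x = torch.randn(16, 12, 256)  # 16 windows, 12 raw sensor channels, 256 samples each
print(model(x).shape)          # torch.Size([16, 4])
```

    The appeal of this layout is that the convolutions replace hand-crafted features with learned local filters over the raw channels, while the recurrent layer captures the temporal relationships the abstract highlights.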

    An Exploratory Study to Bring Meaning of Haptic In Association with Human Emotion

    Get PDF
    The popularity of haptic technologies has permeated daily life, allowing intimate and emotional contact to be conveyed from sender to receiver. However, there are weaknesses when haptics are applied in an application, which can result in misinterpretation, high complexity and confusion for the user. Research shows that emotion has a close relationship with haptic feedback, so this research project will investigate the effectiveness of emotion in conveying haptic meaning. The project has predicted weaknesses of emotion in exploring the absolute meaning of haptics; however, with the presence of multi-modal technology, these weaknesses could be reduced in order to identify a suitable definition of haptics in association with emotion.

    SSL Framework for Causal Inconsistency between Structures and Representations

    Full text link
    The cross-pollination of deep learning and causal discovery has catalyzed a burgeoning field of research seeking to elucidate causal relationships within non-statistical data forms like images, videos, and text. Such data, often termed `indefinite data', exhibit a unique challenge, inconsistency between causal structure and representation, which is not common in conventional data forms. To tackle this issue, we theoretically develop intervention strategies suitable for indefinite data and derive a causal consistency condition (CCC). Moreover, we design a self-supervised learning (SSL) framework that treats interventions as `views' and the CCC as a `philosophy', with two implementation examples on Supervised Specialized Models (SSMs) and Large Language Models (LLMs), respectively. To evaluate pure inconsistency manifestations, we have prepared the first high-quality causal dialogue dataset, Causalogue. Evaluations are also performed on three other downstream tasks. Extensive experimentation has substantiated the efficacy of our methodology, illuminating how the CCC could play an influential role in various fields.
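
    The abstract gives no implementation details, but a minimal sketch of treating two interventions on the same sample as two `views' under a standard contrastive (NT-Xent) objective might look as follows; the noise-based "interventions" here are hypothetical stand-ins, and the actual CCC-based framework is not reproduced.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau: float = 0.5):
    """Standard NT-Xent contrastive loss: each sample's two intervention
    'views' are positives, all other samples in the batch are negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d) unit vectors
    sim = z @ z.t() / tau                         # scaled cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Hypothetical: two different interventions applied to the same batch of
# representations (here modelled as independent perturbations).
x = torch.randn(32, 128)
z1 = x + 0.1 * torch.randn_like(x)   # "view" 1: intervention A (assumed)
z2 = x + 0.1 * torch.randn_like(x)   # "view" 2: intervention B (assumed)
print(nt_xent(z1, z2).item())
```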

    Subjective Fear in Virtual Reality: A Linear Mixed-Effects Analysis of Skin Conductance

    Get PDF
    The investigation of the physiological and pathological processes involved in fear perception is complicated by the difficulties in reliably eliciting and measuring the complex construct of fear. This study proposes a novel approach to inducing and measuring subjective fear and its physiological correlates, combining virtual reality (VR) with a mixed-effects model based on skin conductance (SC). Specifically, we developed a new VR scenario applying specific guidelines derived from horror movies and video games. This VR environment was used to induce fear in eighteen volunteers in an experimental protocol that also included two relaxation scenarios and a neutral virtual environment. The SC signal was acquired throughout the experiment, and after each virtual scenario the emotional state and the level of perceived fear were assessed using psychometric scales. We statistically verified that the fearful scenario induced greater sympathetic activation than the others, with significant results for most SC-derived features. Finally, we developed a rigorous mixed-effects model to explain the perceived fear as a function of the SC features. Model-fitting results showed a significant relationship between the fear perception scores and a combination of features extracted from both the fast- and slow-varying SC components, offering a novel solution for a more objective fear assessment.
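
    A minimal sketch of that kind of linear mixed-effects analysis using statsmodels, with perceived fear regressed on SC features and a random intercept per participant; the feature names and the synthetic data are placeholders, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_scen = 18, 4                 # 18 volunteers, 4 virtual scenarios
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_scen),
    "scr_rate": rng.normal(4, 1, n_subj * n_scen),    # phasic (fast-varying) SC feature
    "tonic_mean": rng.normal(6, 2, n_subj * n_scen),  # tonic (slow-varying) SC feature
})
# Placeholder outcome loosely tied to the features, just so the model has a signal.
df["fear_score"] = (2 + 0.8 * df.scr_rate + 0.3 * df.tonic_mean
                    + rng.normal(0, 1, len(df)))

# Fixed effects: SC features; random intercept: subject.
model = smf.mixedlm("fear_score ~ scr_rate + tonic_mean", df, groups=df["subject"])
print(model.fit().summary())
```

    The random intercept absorbs between-subject baseline differences in skin conductance, so the fixed-effect coefficients reflect the within-subject relationship between SC features and reported fear.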

    ForDigitStress: A multi-modal stress dataset employing a digital job interview scenario

    Full text link
    We present a multi-modal stress dataset that uses digital job interviews to induce stress. The dataset provides multi-modal data for 40 participants, including audio, video (motion capture, facial recognition, eye tracking) as well as physiological information (photoplethysmography, electrodermal activity). In addition, the dataset contains time-continuous annotations for stress and for the emotions that occurred (e.g. shame, anger, anxiety, surprise). In order to establish a baseline, five different machine learning classifiers (Support Vector Machine, K-Nearest Neighbors, Random Forest, Long Short-Term Memory Network) were trained and evaluated on the proposed dataset for a binary stress classification task. The best-performing classifier achieved an accuracy of 88.3% and an F1-score of 87.5%.
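
    A minimal sketch of the non-neural part of such a baseline, assuming pre-extracted feature vectors and binary stress labels; the data here is a random placeholder, and the LSTM baseline from the paper is omitted.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

# Placeholder data: 400 windows x 32 features, binary stress labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 32)), rng.integers(0, 2, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "RF":  RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f}")
```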

    A Detail Based Method for Linear Full Reference Image Quality Prediction

    Full text link
    In this paper, a novel Full Reference method is proposed for image quality assessment, using the combination of two separate metrics to measure the perceptually distinct impact of detail losses and of spurious details. To this purpose, the gradient of the impaired image is locally decomposed as a predicted version of the original gradient plus a gradient residual. It is assumed that the detail attenuation identifies the detail loss, whereas the gradient residuals describe the spurious details. It turns out that the perceptual impact of detail losses is roughly linear in the loss of positional Fisher information, while the perceptual impact of the spurious details is roughly proportional to a logarithmic measure of the signal-to-residual ratio. The affine combination of these two metrics forms a new index strongly correlated with the empirical Differential Mean Opinion Score (DMOS) for a significant class of image impairments, as verified on three independent popular databases. The method allowed DMOS data coming from these different databases to be aligned and merged onto a common DMOS scale by affine transformations. Unexpectedly, the DMOS scale can be set by analysing a single image affected by additive noise. Comment: 15 pages, 9 figures. Copyright notice: the paper was accepted for publication in the IEEE Transactions on Image Processing on 19/09/2017 and the copyright has been transferred to the IEEE.
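
    A rough numerical sketch of the two-metric idea: decompose the impaired image's gradient into a predicted (attenuated) part plus a residual, score detail loss and spurious detail separately, and combine them affinely. The single global gain and all coefficients here are illustrative assumptions, not the paper's locally adaptive estimator.

```python
import numpy as np

def gradients(img):
    gy, gx = np.gradient(img.astype(float))
    return gx, gy

def quality_index(ref, imp, alpha=1.0, beta=1.0, gamma=0.0, eps=1e-8):
    """Toy full-reference index: affine combination of a detail-loss term and
    a spurious-detail term (alpha/beta/gamma are assumed coefficients)."""
    rx, ry = gradients(ref)
    ix, iy = gradients(imp)
    # Least-squares gain predicting the impaired gradient from the reference
    # gradient (the paper works locally; one global gain is a toy stand-in).
    k = (rx * ix + ry * iy).sum() / ((rx**2 + ry**2).sum() + eps)
    res_x, res_y = ix - k * rx, iy - k * ry        # gradient residual
    detail_loss = 1.0 - k                           # attenuation of predicted detail
    signal = (k**2) * (rx**2 + ry**2).sum()
    residual = (res_x**2 + res_y**2).sum() + eps
    spurious = np.log10(1.0 + residual / (signal + eps))  # log residual-to-signal term
    return alpha * detail_loss + beta * spurious + gamma

ref = np.random.default_rng(0).normal(size=(64, 64))
imp = 0.8 * ref + 0.05 * np.random.default_rng(1).normal(size=(64, 64))
print(f"index = {quality_index(ref, imp):.4f}")
```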