Recent Trends in Deep Learning Based Personality Detection
Recently, the automatic prediction of personality traits has received a lot
of attention. Specifically, personality trait prediction from multimodal data
has emerged as a hot topic within the field of affective computing. In this
paper, we review significant machine learning models that have been employed
for personality detection, with an emphasis on deep learning-based methods.
This review provides an overview of the most popular approaches to automated
personality detection, the available computational datasets, industrial
applications, and state-of-the-art machine learning models for personality
detection, with a specific focus on multimodal approaches. Personality
detection is a very broad and diverse topic: this survey focuses only on
computational approaches and leaves out psychological studies of personality
detection.
Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability
© 2010-2012 IEEE. In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism, openness, attractiveness and likeability continuously in time and across varying situational contexts. Differently from existing works, we obtain visual-only and audio-only annotations continuously in time for the same set of subjects, for the first time in the literature, and compare them to their audio-visual annotations. We propose a time-continuous prediction approach that learns the temporal relationships rather than treating each time instant separately. Our experiments show that the best prediction results are obtained when regression models are learned from audio-visual annotations and visual cues, and from audio-visual annotations and visual cues combined with audio cues at the decision level. Continuously generated annotations have the potential to provide insight into which impressions can be formed and predicted more dynamically, varying with situational context, and which ones appear to be more static and stable over time. This research work was supported by the EPSRC MAPTRAITS Project (Grant Ref: EP/K017500/1) and the EPSRC HARPS Project under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).
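The decision-level (late) fusion the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the function names, fusion weights, and prediction values are invented for the example; the only assumption taken from the abstract is that separate regressors produce continuous trait predictions per modality and their outputs are combined per time instant.

```python
# Hypothetical late-fusion sketch: separate visual-cue and audio-cue
# regressors each emit a continuous trait score per time instant, and
# their outputs are combined by a weighted average at the decision level.
# The weights (0.6 / 0.4) and the values below are illustrative only.

def late_fusion(visual_preds, audio_preds, w_visual=0.6, w_audio=0.4):
    """Combine per-time-instant trait predictions from two modalities."""
    assert len(visual_preds) == len(audio_preds)
    return [w_visual * v + w_audio * a
            for v, a in zip(visual_preds, audio_preds)]

# Example: a continuous extroversion impression over five time instants.
visual = [0.2, 0.4, 0.5, 0.3, 0.6]   # visual-cue regressor output
audio  = [0.1, 0.5, 0.4, 0.2, 0.8]   # audio-cue regressor output
fused  = late_fusion(visual, audio)  # one fused score per time instant
```

In practice the fusion weights would themselves be learned or tuned on validation data rather than fixed by hand.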
Multimodal Emotion Recognition among Couples from Lab Settings to Daily Life using Smartwatches
Couples generally manage chronic diseases together, and the management takes
an emotional toll on both patients and their romantic partners. Consequently,
recognizing the emotions of each partner in daily life could provide insight
into their emotional well-being in chronic disease management. The emotions of
partners are currently inferred in the lab and in daily life using
self-reports, which are not practical for continuous emotion assessment, or
observer reports, which are manual, time-intensive, and costly. Currently,
there exists no comprehensive overview of works on emotion recognition among
couples. Furthermore, approaches for emotion recognition among couples have
(1) focused on English-speaking couples in the U.S., (2) used data collected
in the lab, and (3) performed recognition using observer ratings rather than
partners' self-reported / subjective emotions. In the body of work contained
in this thesis (8 papers: 5 published and 3 currently under review in various
journals), we fill the current literature gap on couples' emotion recognition,
develop emotion recognition systems using 161 hours of data from a total of
1,051 individuals, and make contributions toward taking couples' emotion
recognition from the lab, which is the status quo, to daily life. This thesis
contributes toward building automated emotion recognition systems that would
eventually enable partners to monitor their emotions in daily life and enable
the delivery of interventions to improve their emotional well-being.
Comment: PhD Thesis, 2022 - ETH Zurich
Performance Analysis of State-of-the-Art Deep Learning Models in the Visual-Based Apparent Personality Detection
This paper analyses the performance of pre-trained deep learning models as feature extractors for apparent personality trait detection (APD), using different statistical methods to find the best-performing pre-trained model. Accuracy and computational cost were used to measure model performance. Personality is measured using the Big Five personality schema. CNN-RNN networks were designed using the VGG19, ResNet152, and VGGFace pre-trained models to measure personality from scene data. The models were compared using the mean accuracy attained and the average time taken for training and testing. Descriptive statistics, graphs, and inferential statistics were applied in the model comparisons. The results show that the ResNet152-based model reported the highest mean accuracy on the test dataset (0.9077), followed by VGG19 with 0.9036; VGGFace recorded the lowest (0.8962). ResNet152 consumed more time than the other architectures in training and testing, since its number of parameters is considerably higher than that of the other two architectures. Statistical tests found no significant evidence that the VGG19- and ResNet152-based CNN-RNN models performed differently, leading to the conclusion that the VGG19 model performed well even with a considerably lower number of parameters. The findings reveal that satisfactory accuracy is obtained with a limited number of frames extracted from the videos, since the models achieved more than 90% accuracy.
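The pipeline this abstract describes — a frozen pre-trained CNN mapping each video frame to a feature vector, followed by a temporal head producing Big Five trait scores — can be sketched roughly as below. This is a structural sketch only, not the paper's model: the random projection stands in for the real pre-trained extractor (VGG19 / ResNet152 / VGGFace), the mean-pooling plus linear layer stands in for the actual RNN head, and all dimensions and weights are invented for illustration.

```python
import numpy as np

# Structural sketch of a CNN-RNN apparent-personality pipeline.
# A frozen "extractor" turns each frame into a feature vector; a
# placeholder temporal head pools the frame sequence and maps it to
# five trait scores in (0, 1). Everything here is illustrative.

rng = np.random.default_rng(0)

FEATURE_DIM = 512   # dimensionality of the frozen extractor's output
NUM_TRAITS = 5      # Big Five personality traits

def extract_features(frames, projection):
    """Stand-in for a frozen pre-trained CNN: one vector per frame."""
    return frames @ projection                   # (n_frames, FEATURE_DIM)

def predict_traits(frame_features, head_weights):
    """Placeholder temporal head: mean-pool frames, then a linear trait
    head with a sigmoid so each score lands in (0, 1)."""
    pooled = frame_features.mean(axis=0)         # (FEATURE_DIM,)
    return 1 / (1 + np.exp(-(pooled @ head_weights)))

# A short clip: 8 frames, each flattened to 1024 "pixels" (illustrative).
frames = rng.standard_normal((8, 1024))
projection = rng.standard_normal((1024, FEATURE_DIM)) / np.sqrt(1024)
head_weights = rng.standard_normal((FEATURE_DIM, NUM_TRAITS)) / np.sqrt(FEATURE_DIM)

features = extract_features(frames, projection)
traits = predict_traits(features, head_weights)  # five trait scores
```

The paper's finding that few frames suffice corresponds here to keeping `n_frames` small: because the extractor is frozen, only the lightweight temporal head has to be trained on the sampled frames.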