
    Recent Trends in Deep Learning Based Personality Detection

    Recently, the automatic prediction of personality traits has received considerable attention. In particular, personality trait prediction from multimodal data has emerged as a hot topic within affective computing. In this paper, we review significant machine learning models that have been employed for personality detection, with an emphasis on deep learning-based methods. The review provides an overview of the most popular approaches to automated personality detection, the available computational datasets, industrial applications, and state-of-the-art machine learning models, with a specific focus on multimodal approaches. Personality detection is a very broad and diverse topic: this survey focuses only on computational approaches and leaves out psychological studies of personality.
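    To make the multimodal setting concrete, the sketch below shows one common pattern from this literature: separate encoders per modality whose embeddings are fused late and regressed onto the five trait scores. It assumes PyTorch; the feature dimensions, layer sizes, and modality choices are illustrative, not taken from any specific model in the survey.

        # Minimal sketch (PyTorch assumed) of late multimodal fusion for Big Five
        # trait regression: per-modality encoders map pre-extracted features to a
        # shared embedding size, embeddings are concatenated, and a small head
        # predicts the five trait scores. All dimensions are illustrative.
        import torch
        import torch.nn as nn

        class ModalityEncoder(nn.Module):
            """Maps one modality's feature vector (e.g. text, audio, or visual
            descriptors) to a shared embedding size."""
            def __init__(self, in_dim: int, emb_dim: int = 128):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())

            def forward(self, x):
                return self.net(x)

        class LateFusionBigFive(nn.Module):
            """Concatenates modality embeddings and regresses the five traits
            (openness, conscientiousness, extraversion, agreeableness, neuroticism)."""
            def __init__(self, text_dim=300, audio_dim=88, visual_dim=512):
                super().__init__()
                self.text = ModalityEncoder(text_dim)
                self.audio = ModalityEncoder(audio_dim)
                self.visual = ModalityEncoder(visual_dim)
                self.head = nn.Sequential(
                    nn.Linear(3 * 128, 64), nn.ReLU(), nn.Linear(64, 5), nn.Sigmoid()
                )

            def forward(self, text_feat, audio_feat, visual_feat):
                fused = torch.cat(
                    [self.text(text_feat), self.audio(audio_feat), self.visual(visual_feat)],
                    dim=-1,
                )
                return self.head(fused)  # five trait scores in [0, 1]

        # Example: a batch of 4 samples with pre-extracted per-modality features.
        model = LateFusionBigFive()
        scores = model(torch.randn(4, 300), torch.randn(4, 88), torch.randn(4, 512))
        print(scores.shape)  # torch.Size([4, 5])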

    Multimodal Emotion Recognition among Couples from Lab Settings to Daily Life using Smartwatches

    Couples generally manage chronic diseases together, and the management takes an emotional toll on both patients and their romantic partners. Consequently, recognizing the emotions of each partner in daily life could provide insight into their emotional well-being in chronic disease management. Partners' emotions are currently inferred in the lab and in daily life using self-reports, which are not practical for continuous emotion assessment, or observer reports, which are manual, time-intensive, and costly. Currently, there exists no comprehensive overview of works on emotion recognition among couples. Furthermore, approaches for emotion recognition among couples have (1) focused on English-speaking couples in the U.S., (2) used data collected from the lab, and (3) performed recognition using observer ratings rather than partners' self-reported / subjective emotions. In the body of work contained in this thesis (8 papers: 5 published and 3 currently under review in various journals), we fill the current literature gap on couples' emotion recognition, develop emotion recognition systems using 161 hours of data from a total of 1,051 individuals, and make contributions towards taking couples' emotion recognition from the lab, which is the status quo, to daily life. This thesis contributes toward building automated emotion recognition systems that would eventually enable partners to monitor their emotions in daily life and enable the delivery of interventions to improve their emotional well-being. Comment: PhD Thesis, 2022 - ETH Zurich
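    As a rough illustration of what daily-life emotion recognition from smartwatch signals can look like (this is not the thesis's pipeline), the sketch below windows two sensor streams, computes simple statistical features per window, and trains a standard classifier on self-reported valence labels. The feature set, window length, and synthetic data are assumptions for demonstration only.

        # Illustrative sketch only: windowed smartwatch signals -> simple
        # statistical features -> an off-the-shelf classifier for self-reported
        # emotion labels (here, binary valence).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def window_features(heart_rate: np.ndarray, accel_mag: np.ndarray) -> np.ndarray:
            """Summary statistics for one observation window of sensor data."""
            return np.array([
                heart_rate.mean(), heart_rate.std(),
                accel_mag.mean(), accel_mag.std(),
            ])

        # Synthetic stand-in data: 200 windows, each with 60 heart-rate samples
        # and 60 accelerometer-magnitude samples, plus a binary valence label.
        rng = np.random.default_rng(0)
        X = np.stack([
            window_features(rng.normal(75, 5, 60), rng.normal(1.0, 0.2, 60))
            for _ in range(200)
        ])
        y = rng.integers(0, 2, size=200)  # 0 = negative, 1 = positive valence

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data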

    Performance Analysis of State-of-the-Art Deep Learning Models in the Visual-Based Apparent Personality Detection

    This paper analyses the performance of pre-trained deep learning models as feature extractors for apparent personality trait detection (APD), utilising different statistical methods to find the best-performing pre-trained model. Accuracy and computational cost were used to measure model performance. Personality is measured using the Big Five personality schema. CNN-RNN networks were designed using the VGG19, ResNet152, and VGGFace pre-trained models to measure personality from scene data. The models were compared using the mean accuracy attained and the average time taken for training and testing. Descriptive statistics, graphs, and inferential statistics were applied in the model comparisons. Results convey that the ResNet152 model reported the highest mean accuracy on the test dataset (0.9077), followed by VGG19 with 0.9036; VGGFace recorded the lowest (0.8962). ResNet152 consumed more time than the other architectures in training and testing, since its number of parameters is considerably higher than those of the other two architectures. Statistical test results provide no significant evidence to conclude that the VGG19- and ResNet152-based CNN-RNN models performed differently. This leads to the conclusion that, even with a comparably lower number of parameters, the VGG19 model performed well. The findings reveal that satisfactory accuracy is obtained with a limited number of frames extracted from videos, since the models achieved more than 90% accuracy.
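    The sketch below shows the general CNN-RNN pattern the abstract describes: a pre-trained CNN (here torchvision's VGG19, one of the three backbones compared) serves as a frozen per-frame feature extractor, and an LSTM aggregates a short frame sequence before regressing the five apparent-trait scores. The hidden size, sigmoid output, frozen backbone, and 8-frame input are illustrative assumptions, not the paper's exact configuration.

        # Hedged sketch of a CNN-RNN apparent-personality model with a
        # pre-trained VGG19 feature extractor feeding an LSTM.
        import torch
        import torch.nn as nn
        from torchvision import models

        class CnnRnnPersonality(nn.Module):
            def __init__(self, hidden_size: int = 256, num_traits: int = 5):
                super().__init__()
                vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
                self.cnn = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())
                for p in self.cnn.parameters():      # use VGG19 purely as a
                    p.requires_grad = False          # fixed feature extractor
                self.rnn = nn.LSTM(512 * 7 * 7, hidden_size, batch_first=True)
                self.head = nn.Sequential(nn.Linear(hidden_size, num_traits), nn.Sigmoid())

            def forward(self, frames):               # frames: (B, T, 3, 224, 224)
                b, t = frames.shape[:2]
                feats = self.cnn(frames.flatten(0, 1))   # (B*T, 25088) per-frame features
                feats = feats.view(b, t, -1)
                _, (h_n, _) = self.rnn(feats)            # last hidden state summarises the clip
                return self.head(h_n[-1])                # (B, 5) trait scores in [0, 1]

        # Example: a batch of 2 clips, 8 frames each, at VGG19's 224x224 input size.
        model = CnnRnnPersonality()
        print(model(torch.randn(2, 8, 3, 224, 224)).shape)  # torch.Size([2, 5])

    Swapping the backbone for ResNet152 or VGGFace in this setup changes only the feature extractor and its output dimension; the RNN head and training procedure stay the same, which is what makes the accuracy/time comparison in the paper straightforward.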

    Spectral Representation of Behaviour Primitives for Depression Analysis
