    RCEA: Real-time, Continuous Emotion Annotation for collecting precise mobile video ground truth labels

    Collecting accurate and precise emotion ground-truth labels for mobile video watching is essential for meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports or support real-time, continuous emotion annotation (RCEA) only in desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching and validated its usability and reliability in a controlled indoor study (N=12) and a later outdoor study (N=20). Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived as usable for annotating emotions while watching mobile videos, without increasing users' mental workload, and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and an associated annotation-fusion method suitable for collecting fine-grained emotion annotations while users watch mobile videos.

    Advances in Character Recognition

    This book presents advances in character recognition. It consists of 12 chapters that cover a wide range of topics on different aspects of character recognition. We hope this book will serve as a reference source for academic research, for professionals working in the character recognition field, and for all those interested in the subject.

    Enhancement of emotion recognition using feature fusion and the neighborhood components analysis

    Abstract Feature fusion is a common approach to improving the accuracy of a system. Several attempts have been made using this approach on the Mahnob-HCI database for affective recognition, with the best results reaching 76% for valence and 68% for arousal. This study aimed to improve the baselines for both valence and arousal on this database by fusing HRV-based features, computed with standard Heart Rate Variability analysis and both standardized (to zero mean/unit standard deviation) and normalized (to [-1, 1]), with cvxEDA-based features, calculated using a convex optimization approach. The features selected by the sequential forward floating search (SFFS) were enhanced with Neighborhood Component Analysis (NCA) and fed to a kNN classifier to solve a 3-class classification problem, validated using leave-one-out (LOO), leave-one-subject-out (LOSO), and 10-fold cross-validation. The standardized HRV-based features were never selected by SFFS, leaving a fusion of the normalized HRV-based and cvxEDA-based features only. The results were compared with previous single- and multi-modality studies. Applying NCA enhanced the features such that the valence performances set new baselines: 82.4% (LOO), 79.6% (10-fold cross-validation), and 81.9% (LOSO), exceeding the best achievements from both single- and multi-modality studies. For arousal, the performances were 78.3%, 78.7%, and 77.7% for LOO, LOSO, and 10-fold cross-validation respectively; these outperformed the best feature-fusion result but did not surpass the single-modality performance using cvxEDA-based features. Future work includes utilizing other feature extraction methods and using more sophisticated classifiers than the simple kNN.