3,548 research outputs found
Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability
© 2010-2012 IEEE. In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism, openness, attractiveness and likeability continuously in time and across varying situational contexts. Unlike existing work, we obtain visual-only and audio-only annotations continuously in time for the same set of subjects, for the first time in the literature, and compare them with their audio-visual annotations. We propose a time-continuous prediction approach that learns temporal relationships rather than treating each time instant separately. Our experiments show that the best prediction results are obtained when regression models are learned from audio-visual annotations and visual cues, and from audio-visual annotations and visual cues combined with audio cues at the decision level. Continuously generated annotations have the potential to provide insight into which impressions are formed and predicted more dynamically, varying with situational context, and which appear more static and stable over time. This research work was supported by the EPSRC MAPTRAITS Project (Grant Ref: EP/K017500/1) and the EPSRC HARPS Project under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).
Automatic Classification and Shift Detection of Facial Expressions in Event-Aware Smart Environments
Affective application developers often face a challenge in integrating the output of facial expression recognition (FER) software into interactive systems: although many algorithms have been proposed for FER, integrating the results of these algorithms into applications remains difficult. Due to inter- and within-subject variations, further post-processing is needed. Our work addresses this problem by introducing and comparing three post-processing classification algorithms for FER output, applied to an event-based interaction scheme to pinpoint the affective context within a time window. Our comparison is based on earlier published experiments with an interactive cycling simulation in which participants were provoked with game elements and their facial expression responses were analysed by all three algorithms, with a human observer as reference. The three post-processing algorithms we investigate are mean fixed-window, matched filter, and Bayesian changepoint detection. In addition, we introduce a novel method for detecting fast transitions of facial expressions, which we call emotional shift. The proposed detection pattern is suitable for affective applications, especially in smart environments, wherever users' reactions can be tied to events.
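The simplest of the three post-processing schemes named above, mean fixed-window classification, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the window length, threshold, and variable names are assumptions, and the per-frame scores stand in for whatever the FER software emits.

```python
import numpy as np

def mean_fixed_window(fer_scores, event_idx, window=30, threshold=0.5):
    """Classify the affective response to an event by averaging
    per-frame FER output over a fixed window starting at the event.

    fer_scores -- 1-D array of per-frame expression scores in [0, 1]
    event_idx  -- frame index at which the provoking event occurred
    Returns True if the mean score in the window meets the threshold.
    """
    segment = fer_scores[event_idx:event_idx + window]
    return float(np.mean(segment)) >= threshold

# Example: noisy per-frame 'happiness' probabilities around an event
rng = np.random.default_rng(0)
scores = np.clip(0.7 + 0.1 * rng.standard_normal(100), 0.0, 1.0)
print(mean_fixed_window(scores, event_idx=10))
```

Averaging over a fixed window smooths out the inter- and within-subject frame-level noise the abstract mentions; the matched-filter and Bayesian changepoint variants replace this plain mean with a template correlation or a posterior over change times, respectively.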
Critical Analysis on Multimodal Emotion Recognition in Meeting the Requirements for Next Generation Human Computer Interactions
Emotion recognition is the gap in today's Human Computer Interaction (HCI): the inability of these systems to effectively recognize, express and feel emotion limits their human interaction, and they still lack adequate sensitivity to human emotions. Multimodal emotion recognition attempts to address this gap by measuring emotional state from gestures, facial expressions, acoustic characteristics and textual expressions. Multimodal data acquired from video, audio, sensors etc. are combined using various techniques to classify basic human emotions such as happiness, joy, neutrality, surprise, sadness, disgust, fear and anger. This work presents a critical analysis of multimodal emotion recognition approaches in meeting the requirements of next generation human computer interactions. The study first explores and defines the requirements of next generation human computer interactions, then critically analyzes the existing multimodal emotion recognition approaches in addressing those requirements.
Machine Analysis of Facial Expressions
No abstract
Leveraging contextual-cognitive relationships into mobile commerce systems
A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
Mobile smart devices are becoming increasingly important within the on-line purchasing cycle. Thus the requirement for mobile commerce systems to become truly context-aware remains paramount if they are to be effective within the varied situations that mobile users encounter. Where traditionally a recommender system will focus upon the user-item relationship, i.e. what to recommend, this thesis proposes that, due to the complexity of mobile users' situational profiles, the how and when must also be considered for recommendations to be effective. Though non-trivial, it should be possible, through the understanding of a user's ability to complete certain cognitive processes, to determine the likelihood of engagement and therefore the success of the recommendation.
This research undertakes an investigation into physical and modal contexts and presents findings as to their relationships with cognitive processes. Through the introduction of the novel concept, disruptive contexts, situational contexts, including noise, distractions and user activity, are identified as having significant effects upon the relationship between user affective state and cognitive capability. Experimental results demonstrate that by understanding specific cognitive capabilities, e.g. a user’s perception of advert content and user levels of purchase-decision involvement, a system can determine potential user engagement and therefore improve the effectiveness of recommender systems’ performance.
A quantitative approach is followed, with a reliance upon statistical measures to inform the development, and subsequent validation, of a contextual-cognitive model that was implemented as part of a context-aware system. The development of SiDISense (Situational Decision Involvement Sensing system) demonstrated, through the use of smartphone sensors and machine learning, that it was viable to classify subjectively rated contexts and then infer levels of cognitive capability, and therefore the likelihood of positive user engagement. Through this success in furthering the understanding of contextual-cognitive relationships, novel and significant advances are now viable within the area of m-commerce.