3,548 research outputs found

    Automatic Classification and Shift Detection of Facial Expressions in Event-Aware Smart Environments

    Affective application developers often face a challenge in integrating the output of facial expression recognition (FER) software into interactive systems: although many algorithms have been proposed for FER, integrating their results into applications remains difficult. Due to inter- and within-subject variations, further post-processing is needed. Our work addresses this problem by introducing and comparing three post-processing classification algorithms for FER output, applied to an event-based interaction scheme to pinpoint the affective context within a time window. Our comparison is based on earlier published experiments with an interactive cycling simulation in which participants were provoked with game elements and their facial expression responses were analysed by all three algorithms, with a human observer as reference. The three post-processing algorithms we investigate are mean fixed-window, matched filter, and Bayesian changepoint detection. In addition, we introduce a novel method for detecting fast transitions of facial expressions, which we call emotional shifts. The proposed detection pattern is suitable for affective applications, especially in smart environments, wherever users' reactions can be tied to events.
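
    The abstract does not give the post-processors' details; as a minimal sketch of the simplest of the three, a mean fixed-window classifier over per-frame FER valence scores around an event timestamp might look like the following, where the function names, the 30-frame window, and the thresholds are illustrative assumptions rather than the paper's actual parameters:

        import numpy as np

        def mean_fixed_window(scores, event_idx, window=30, threshold=0.0):
            """Classify the affective response to an event by averaging
            per-frame FER valence scores in a fixed window after the event.

            scores    : 1-D array of per-frame valence estimates from FER software
            event_idx : frame index at which the provoking event occurred
            window    : number of frames to aggregate (roughly 1 s at 30 fps)
            threshold : decision boundary between negative and positive response
            """
            segment = scores[event_idx : event_idx + window]
            mean_response = float(np.mean(segment))
            return ("positive" if mean_response > threshold else "negative",
                    mean_response)

        def emotional_shift(scores, event_idx, window=30, min_jump=0.5):
            """Flag a fast expression transition ('emotional shift') when the
            mean response after the event departs sharply from the baseline
            immediately before it."""
            before = float(np.mean(scores[max(0, event_idx - window):event_idx]))
            after = float(np.mean(scores[event_idx : event_idx + window]))
            return abs(after - before) >= min_jump

    Tying the window to a known event timestamp is what makes the scheme event-aware: the classifier only has to decide what happened inside a short, anchored interval rather than scan the whole signal.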

    Critical Analysis on Multimodal Emotion Recognition in Meeting the Requirements for Next Generation Human Computer Interactions

    Emotion recognition is a key gap in today’s Human Computer Interaction (HCI). These systems’ inability to effectively recognize, express, and respond to emotion limits their interaction with humans; they still lack adequate sensitivity to human emotions. Multimodal emotion recognition attempts to address this gap by measuring emotional state from gestures, facial expressions, acoustic characteristics, and textual expressions. Multimodal data acquired from video, audio, sensors, etc. are combined using various techniques to classify basic human emotions such as happiness, joy, neutrality, surprise, sadness, disgust, fear, and anger. This work presents a critical analysis of multimodal emotion recognition approaches in meeting the requirements of next generation human computer interactions. The study first explores and defines the requirements of next generation human computer interactions and then critically analyzes the existing multimodal emotion recognition approaches in addressing those requirements.
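
    The abstract does not say which combination techniques are surveyed; one common scheme, decision-level (late) fusion that combines per-modality class probabilities by weighted averaging, can be sketched as follows (the modality names, weights, and label set are illustrative assumptions):

        import numpy as np

        # Illustrative label set drawn from the basic emotions named above.
        LABELS = ["happiness", "surprise", "sadness", "disgust",
                  "fear", "anger", "neutrality"]

        def late_fusion(modality_probs, weights=None):
            """Decision-level fusion: merge per-modality probability vectors
            (face, voice, text, ...) into a single emotion prediction.

            modality_probs : dict mapping modality name -> probability vector
                             over LABELS, produced by per-modality classifiers
            weights        : optional per-modality reliability weights
            """
            if weights is None:
                weights = {m: 1.0 for m in modality_probs}
            fused = np.zeros(len(LABELS))
            total = 0.0
            for modality, probs in modality_probs.items():
                w = weights.get(modality, 1.0)
                fused += w * np.asarray(probs)
                total += w
            fused /= total
            return LABELS[int(np.argmax(fused))], fused

        # Example: the face channel suggests surprise, the voice channel
        # happiness; weighting the face channel higher lets it dominate.
        face  = [0.10, 0.60, 0.05, 0.05, 0.05, 0.05, 0.10]
        voice = [0.50, 0.20, 0.05, 0.05, 0.05, 0.05, 0.10]
        label, _ = late_fusion({"face": face, "voice": voice},
                               weights={"face": 0.6, "voice": 0.4})

    Late fusion is only one point in the design space; feature-level (early) fusion and hybrid schemes trade robustness to a missing modality against the ability to model cross-modal correlations.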

    Machine Analysis of Facial Expressions

    No abstract

    User Adaptive and Context-Aware Smart Home Using Pervasive and Semantic Technologies


    Leveraging contextual-cognitive relationships into mobile commerce systems

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    Mobile smart devices are becoming increasingly important within the on-line purchasing cycle. Thus the requirement for mobile commerce systems to become truly context-aware remains paramount if they are to be effective within the varied situations that mobile users encounter. Where traditionally a recommender system will focus upon the user–item relationship, i.e. what to recommend, this thesis proposes that, due to the complexity of mobile users’ situational profiles, the how and when must also be considered for recommendations to be effective. Though non-trivial, it should be possible, through an understanding of a user’s ability to complete certain cognitive processes, to determine the likelihood of engagement and therefore the success of the recommendation. This research undertakes an investigation into physical and modal contexts and presents findings as to their relationships with cognitive processes. Through the introduction of the novel concept of disruptive contexts, situational contexts, including noise, distractions and user activity, are identified as having significant effects upon the relationship between user affective state and cognitive capability. Experimental results demonstrate that by understanding specific cognitive capabilities, e.g. a user’s perception of advert content and user levels of purchase-decision involvement, a system can determine potential user engagement and therefore improve the effectiveness of recommender systems’ performance. A quantitative approach is followed, with a reliance upon statistical measures to inform the development, and subsequent validation, of a contextual-cognitive model that was implemented as part of a context-aware system. The development of SiDISense (Situational Decision Involvement Sensing system) demonstrated, through the use of smart-phone sensors and machine learning, that it was viable to classify subjectively rated contexts and then infer levels of cognitive capability, and therefore the likelihood of positive user engagement. Through this success in furthering the understanding of contextual-cognitive relationships, novel and significant advances are now viable within the area of m-commerce.
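
    The abstract does not detail SiDISense's pipeline; a minimal sketch of the general approach it describes, training a classifier on smartphone-sensor features to label subjectively rated disruptive contexts and then gating engagement on the result, might look like the following (the feature set, labels, and training data are illustrative assumptions, not the system's actual design):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Illustrative per-sample features: [ambient noise level (dB),
        # accelerometer variance (proxy for user activity),
        # recent interruption count]. Labels are hypothetical ratings of
        # how disruptive the situational context is.
        X = np.array([
            [35.0, 0.02, 0],   # quiet, stationary, undisturbed
            [72.0, 0.90, 4],   # noisy, walking, frequently interrupted
            [60.0, 0.10, 1],
            [40.0, 0.85, 3],
        ])
        y = ["low", "high", "medium", "high"]

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, y)

        # Classify the current context, then decide whether the user is
        # cognitively free enough for a recommendation to land.
        context = clf.predict([[68.0, 0.75, 2]])[0]
        show_recommendation = (context == "low")

    The point of the two-stage design is that the classifier never predicts engagement directly; it predicts the disruptive context, and the inference about cognitive capability (and hence the recommendation decision) is layered on top.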