
    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans and based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including the understanding and emulation of certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.

    Affective feedback: an investigation into the role of emotions in the information seeking process

    User feedback is considered to be a critical element in the information seeking process, especially in relation to relevance assessment. Current feedback techniques determine content relevance with respect to the cognitive and situational levels of interaction between the user and the retrieval system. However, beyond real-life problems and information objects, users interact with intentions, motivations and feelings, which can be seen as critical aspects of cognition and decision-making. The study presented in this paper serves as a starting point for exploring the role of emotions in the information seeking process. Results show that emotions not only interweave with different physiological, psychological and cognitive processes, but also form distinctive patterns according to the specific task and the specific user.

    What does touch tell us about emotions in touchscreen-based gameplay?

    Nowadays, more and more people play games on touch-screen mobile phones. This phenomenon raises a very interesting question: does touch behaviour reflect the player's emotional state? If so, it would not only be a valuable evaluation indicator for game designers, but could also support real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal, and two levels of valence. The results were promising, reaching between 69% and 77% correct discrimination between the four emotional states. Higher results (~89%) were obtained for discriminating between two levels of arousal and two levels of valence.
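
    A minimal sketch of the kind of pipeline this abstract describes: summary statistics of finger strokes fed to a classifier that discriminates the four emotional states. The feature descriptors, the synthetic data, and the SVM choice are illustrative assumptions, not the paper's exact setup.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Each row summarises one stroke with hypothetical descriptors
    # (e.g. mean pressure, stroke length, stroke speed, contact duration).
    X = rng.normal(size=(400, 4))
    # Synthetic labels: 0=Excited, 1=Relaxed, 2=Frustrated, 3=Bored.
    y = rng.integers(0, 4, size=400)

    # Standardise features, then fit an RBF-kernel SVM, scored by
    # 5-fold cross-validation.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"4-class accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")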

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section (Chapters 17 to 22) presents applications related to affective computing.

    Automatic recognition of micro-expressions using local binary patterns on three orthogonal planes and extreme learning machine

    A dissertation submitted in fulfilment of the requirements for the degree of Master of Science to the Faculty of Science, University of the Witwatersrand, Johannesburg, September 2017. Recognition of micro-expressions is a growing research area because of its application in revealing subtle human intentions, especially in high-stakes situations. Owing to micro-expressions' short duration and low intensity, efforts to train humans to recognise them have resulted in very low performance. Temporal methods (on image sequences) and static methods (on apex frames) were explored for feature extraction. Supervised machine learning algorithms, namely Support Vector Machines (SVM) and Extreme Learning Machines (ELM), were used for classification. The ELM, which has the ability to learn fast, was compared with the SVM, which served as the baseline model. For experimentation, samples from the Chinese Academy of Sciences Micro-Expression (CASME II) database were used. Results revealed that temporal features outperformed static features for micro-expression recognition with both the SVM and ELM models. Static and temporal features gave average testing accuracies of 94.08% and 97.57% respectively for five classes of micro-expressions using the ELM model. A significance test carried out on these two average means suggested that temporal features outperformed static features using the ELM. A comparison of learning times also revealed that the ELM learns faster than the SVM: for the five selected micro-expression classes, an average training time of 0.3405 seconds was achieved for the SVM versus 0.0409 seconds for the ELM. Hence we can suggest that micro-expressions can be recognised successfully by using temporal features and a machine learning algorithm with a fast learning speed.
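
    The ELM's speed advantage reported here follows from its closed-form training: the hidden layer is a fixed random projection, and only a least-squares readout is solved. A minimal sketch, assuming synthetic data as a stand-in for LBP-TOP feature vectors, timed against an SVM baseline:

    import time
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_samples, n_features, n_hidden, n_classes = 500, 50, 200, 5

    X = rng.normal(size=(n_samples, n_features))
    y = rng.integers(0, n_classes, size=n_samples)
    Y = np.eye(n_classes)[y]                      # one-hot targets

    # ELM training: random projection + analytic least-squares solve.
    t0 = time.perf_counter()
    W = rng.normal(size=(n_features, n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # readout via pseudo-inverse
    elm_time = time.perf_counter() - t0

    # Baseline SVM, for the training-time comparison only.
    t0 = time.perf_counter()
    SVC(kernel="rbf").fit(X, y)
    svm_time = time.perf_counter() - t0

    pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
    print(f"ELM train acc: {(pred == y).mean():.2f}, "
          f"ELM: {elm_time:.4f}s vs SVM: {svm_time:.4f}s")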

    Speech-based recognition of self-reported and observed emotion in a dimensional space

    The differences between self-reported and observed emotion have only marginally been investigated in the context of speech-based automatic emotion recognition. We address this issue by comparing self-reported emotion ratings to observed emotion ratings, and look at how differences between these two types of ratings affect the development and performance of automatic emotion recognizers developed with these ratings. A dimensional approach to emotion modeling is adopted: the ratings are based on continuous arousal and valence scales. We describe the TNO-Gaming Corpus, which contains spontaneous vocal and facial expressions elicited via a multiplayer videogame and includes emotion annotations obtained via self-report and observation by outside observers. Comparisons show that there are discrepancies between self-reported and observed emotion ratings, which are also reflected in the performance of the emotion recognizers developed. Using Support Vector Regression in combination with acoustic and textual features, recognizers of arousal and valence are developed that can predict points in a 2-dimensional arousal-valence space. The results of these recognizers show that self-reported emotion is much harder to recognize than observed emotion, and that averaging ratings from multiple observers improves performance.
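
    A hedged sketch of the regression setup this abstract outlines: Support Vector Regression mapping feature vectors to points in the 2-D arousal-valence space. The random features below are placeholders for the acoustic and textual features extracted from the corpus.

    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 20))          # stand-in acoustic/textual features
    y = rng.uniform(-1, 1, size=(300, 2))   # columns: arousal, valence in [-1, 1]

    # One SVR per output dimension, wrapped for multi-output regression.
    model = MultiOutputRegressor(SVR(kernel="rbf")).fit(X, y)
    arousal, valence = model.predict(X[:1])[0]
    print(f"predicted arousal={arousal:.2f}, valence={valence:.2f}")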

    ACII 2009: Affective Computing and Intelligent Interaction. Proceedings of the Doctoral Consortium 2009


    Automatic Detection and Intensity Estimation of Spontaneous Smiles

    Both the occurrence and intensity of facial expression are critical to what the face reveals. While much progress has been made towards the automatic detection of expression occurrence, controversy exists about how best to estimate expression intensity. Broadly, one approach is to adapt classifiers trained on binary ground truth to estimate expression intensity. An alternative approach is to explicitly train classifiers for the estimation of expression intensity. We investigated this issue by comparing multiple methods for binary smile detection and smile intensity estimation using two large databases of spontaneous expressions. SIFT and Gabor were used for feature extraction; Laplacian Eigenmap and PCA were used for dimensionality reduction; and binary SVM margins, multiclass SVMs, and ε-SVR models were used for prediction. Both multiclass SVMs and ε-SVR classifiers explicitly trained on intensity ground truth outperformed binary SVM margins for smile intensity estimation. A surprising finding was that multiclass SVMs also outperformed binary SVM margins on binary smile detection. This suggests that training on intensity ground truth is worthwhile even for binary expression detection.
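
    An illustrative rendering of the comparison this abstract makes: the same discrete intensity labels used once as classes (multiclass SVM) and once as continuous targets (ε-SVR), scored by mean absolute error on held-out data. The features, label range, and dimensionality are synthetic placeholders, not the paper's Gabor/SIFT pipeline.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC, SVR

    rng = np.random.default_rng(3)
    X = rng.normal(size=(600, 30))
    intensity = rng.integers(0, 6, size=600)     # smile intensity levels 0..5

    X_tr, X_te, y_tr, y_te = train_test_split(X, intensity, random_state=0)

    svc = SVC(kernel="rbf").fit(X_tr, y_tr)      # treats levels as classes
    svr = SVR(kernel="rbf").fit(X_tr, y_tr)      # treats levels as a continuum

    # Compare mean absolute error on the held-out intensity levels,
    # rounding the regressor's continuous output to the nearest level.
    mae_svc = np.abs(svc.predict(X_te) - y_te).mean()
    mae_svr = np.abs(np.rint(svr.predict(X_te)) - y_te).mean()
    print(f"multiclass SVM MAE: {mae_svc:.2f}, eps-SVR MAE: {mae_svr:.2f}")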