5,031 research outputs found

    Formalization of Event Perception and Event Appraisal Process

    Integrating emotion into a virtual agent is an active research topic aimed at depicting human-like behavior in simulated environments, and many researchers have worked on it over the last few decades. In an emotion model, an agent's behavior depends on how the agent perceives an event with respect to its goal. Perceiving an event in light of past experience, the event's importance for achieving the goal, and the agent's own capabilities and resources is therefore a key process that directly influences decision making and action selection. Models proposed to date are either too complex to adapt or describe the event with very few parameters. In this paper, we propose an extension of the perception process in an existing emotion model, EMIA, and formalize the event perception and appraisal processes to make the model adaptable. This is carried out using five parameters for event description along with fuzzy logic, which makes the process more effective yet simple.
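
    The abstract names the ingredients (five event-description parameters combined through fuzzy logic) but not the rules. The sketch below is a minimal, hypothetical illustration of fuzzy appraisal: the parameter names, membership functions, and single low/medium/high rule base are all assumptions, not EMIA's actual formalization.

```python
# Minimal fuzzy-appraisal sketch. The five parameter names and the
# low/medium/high rule base are hypothetical stand-ins, not the
# EMIA formalization from the paper.

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def appraise(goal_relevance, desirability, past_experience, capability, resources):
    """Map five event parameters in [0, 1] to a crisp 'positivity' score."""
    x = (goal_relevance + desirability + past_experience + capability + resources) / 5.0
    low = tri(x, -0.5, 0.0, 0.5)
    med = tri(x, 0.0, 0.5, 1.0)
    high = tri(x, 0.5, 1.0, 1.5)
    # Weighted-average defuzzification of the three rule outputs.
    return (low * 0.1 + med * 0.5 + high * 0.9) / max(low + med + high, 1e-9)

print(appraise(0.9, 0.8, 0.6, 0.7, 0.5))  # ~0.66: a mildly positive event
```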

    A virtual diary companion

    Chatbots and embodied conversational agents show turn-based conversation behaviour. In current research we almost always assume that each utterance of a human conversational partner should be followed by an intelligent and/or empathetic reaction from the chatbot or embodied agent. They are assumed to be alert, trying to please the user. Other applications, which have not yet received much attention, require a more patient or relaxed attitude, waiting for the right moment to provide feedback to the human partner. Being able and willing to listen is one of the conditions for success. In this paper we offer some observations on listening-behaviour research and introduce one of our applications, the virtual diary companion.

    Continuous Stress Monitoring under Varied Demands Using Unobtrusive Devices

    This research aims to identify a feasible model to predict a learner's stress in an online learning platform. A cost-effective, unobtrusive and objective method to measure a learner's emotions is desirable, and the signals produced by the mouse and keyboard could enable such a solution for measuring individuals' affective states in the real world. It is also important that the measurement can be applied regardless of the type of task the user carries out. This preliminary research proposes a stress classification method using mouse and keystroke dynamics to classify the stress levels of 190 university students performing three different e-learning activities. The results show that the stress measurement based on mouse and keystroke dynamics is consistent with a stress measure based on changes in the time spent between two consecutive questions. A feedforward back-propagation neural network achieved the best classification performance.
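
    As a rough illustration of the pipeline described above, the sketch below trains a feedforward back-propagation network (scikit-learn's MLPClassifier) on synthetic stand-in data; the feature set and three-level stress labels are assumptions, since the abstract does not list the exact features used.

```python
# Hypothetical stand-in for the paper's pipeline: synthetic mouse/keystroke
# features and random stress labels replace the real study data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# One row per session: [mean key dwell time, mean flight time,
# mean mouse speed, click rate] -- assumed feature names.
X = rng.normal(size=(190, 4))
y = rng.integers(0, 3, size=190)  # stress level: low / medium / high

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# Feedforward network trained by back-propagation.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```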

    Using Emotions to Empower the Self-adaptation Capability of Software Services


    Software agents in music and sound art research/creative work: Current state and a possible direction

    Composers, musicians and computer scientists have begun to use software-based agents to create music and sound art in both linear and non-linear (non-predetermined form and/or content) idioms, with some robust approaches now drawing on various disciplines. This paper surveys recent work: agent technology is first introduced, a theoretical framework for its use in creating music and sound-art works is put forward, and an overview of common approaches is then given. After identifying areas of neglect in recent research, a possible direction for further work is briefly explored. Finally, a vision is proposed for a new hybrid model that integrates non-linear, generative, conversational and affective perspectives on interactivity.

    Agents for educational games and simulations

    This book consists mainly of revised papers presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers were carefully reviewed and selected from the submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Emotions in the face: biology or culture? – Using idiomatic constructions as indirect evidence to inform a psychological research controversy

    Research on the facial expression of emotions has become a bone of contention in psychological research. On the one hand, Ekman and his colleagues have argued for a universal set of six basic emotions that are recognized with a considerable degree of accuracy across cultures and displayed automatically in highly similar ways by people. On the other hand, more recent research in cognitive science has produced results supportive of a cultural-relativist position. In this paper the controversy is approached from a contrastive perspective on phraseological constructions, focusing on how emotional displays are codified in somatic idioms in several European (English, German, French, Spanish) and East Asian (Japanese, Korean, Chinese [Cantonese]) languages. Using somatic idioms such as make big eyes or die Nase rümpfen as a pool of evidence to shed linguistic light on the psychological controversy, the paper engages with the following general research question: Is there a significant difference between European and East Asian somatic idioms, or do these constructions rather speak for a universal apprehension of facial emotion displays? To answer this question, the paper compares somatic expressions selected from (idiom) dictionaries of the languages listed above. Moreover, native speakers of the East Asian languages were consulted to support the analysis of the respective data. All corresponding entries were analysed categorically, i.e. with regard to whether or not they encode a given facial area to denote a specific emotion. The results provide arguments both for and against the universalist and the cultural-relativist positions; overall, they speak for an opportunistic encoding of facial emotion displays.

    Do Deepfakes Adequately Display Emotions? A Study on Deepfake Facial Emotion Expression

    Recent technological advancements in Artificial Intelligence make it easy to create deepfakes: hyper-realistic videos in which images and video clips are processed so that fake footage appears authentic. Many of them are based on swapping faces without the consent of the person whose appearance and voice are used. As emotions are inherent in human communication, studying how deepfakes transfer emotional expressions from originals to fakes is relevant. In this work, we conduct an in-depth study of facial emotional expression in deepfakes using a well-known face-swap-based deepfake database. First, we extracted the frames (photograms) from its videos. Then, we analyzed the emotional expression in the original and faked versions of the video recordings for all performers in the database. Results show that emotional expressions are not adequately transferred between original recordings and the deepfakes created from them. The high variability in emotions and performers detected between original and fake recordings indicates that performer emotion expressiveness should be considered for better deepfake generation or detection. Primary data associated with this article: https://doi.org/10.34810/data262. This work was supported by the Ministry for Science and Innovation through the State Research Agency (MCIN/AEI/10.13039/501100011033) under grant number PID2020-117912RB-C22.
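
    The frame-extraction step described above can be sketched as follows. OpenCV is an assumed tool (the abstract does not name the study's actual toolchain), and the emotion classifier is left as a placeholder to be filled with any facial-expression model.

```python
# Sketch of per-frame (photogram) extraction for original/fake comparison.
# OpenCV usage is an assumption; emotion_scores is a deliberate placeholder.
import cv2

def extract_frames(path, every_n=10):
    """Yield every n-th frame from a video file."""
    cap = cv2.VideoCapture(path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            yield frame
        i += 1
    cap.release()

def emotion_scores(frame):
    """Placeholder: plug in any facial-expression classifier here."""
    raise NotImplementedError

# Hypothetical file names; compare emotion scores of an original/fake pair:
# originals = [emotion_scores(f) for f in extract_frames("original.mp4")]
# fakes     = [emotion_scores(f) for f in extract_frames("deepfake.mp4")]
```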