228 research outputs found

    Perception of non-verbal emotional listener feedback

    This paper reports on a listening test assessing the perception of short non-verbal emotional vocalisations emitted by a listener as feedback to the speaker. We clarify the concepts of backchannel and feedback, and investigate the use of affect bursts as a means of giving emotional feedback via the backchannel. Experiments with German and Dutch subjects confirm that the recognition of emotion from affect bursts in a dialogical context is similar to their perception in isolation. We also investigate the acceptability of affect bursts when used as listener feedback. Acceptability appears to be linked to display rules for emotion expression. While many ratings were similar between Dutch and German listeners, a number of clear differences were found, suggesting language-specific affect bursts.

    Signals of intensification and attenuation in orchestra and choir conduction

    Based on a model of communication according to which not only words but also body signals constitute lexicons (Poggi, 2007), the study presented aims at building a lexicon of conductors' multimodal behaviours requesting intensification and attenuation of sound intensity. In a corpus of concerts and rehearsals, the conductors' body signals requesting to play or sing forte, piano, crescendo, or diminuendo were analysed through an annotation scheme describing the body signals, their meanings, and their semiotic devices: generic codified (the same as in everyday language); specific codified (shared with laypeople but with specific meanings in conducting); direct iconic (resemblance between visual and acoustic modality); indirect iconic (evoking the technical movement through connected movements or emotion expressions). The work outlines a lexicon of the conductors' signals that, in gesture, head, face, gaze, posture, and body, convey attenuation and intensification in music.

    Close your eyes…and communicate

    Proceedings of the 3rd Nordic Symposium on Multimodal Communication. Editors: Patrizia Paggio, Elisabeth Ahlsén, Jens Allwood, Kristiina Jokinen, Costanza Navarretta. NEALT Proceedings Series, Vol. 15 (2011), 62–71. © 2011 The editors and contributors. Published by the Northern European Association for Language Technology (NEALT), http://omilia.uio.no/nealt. Electronically published at Tartu University Library (Estonia), http://hdl.handle.net/10062/22532.

    Schadenfreude: Malicious Joy in Social Media Interactions

    The paper presents a model of Schadenfreude, pleasure at another's misfortune, resulting in a typology of cases of this emotion. Four types are singled out: Compensation, Identification, Aversion, and Injustice Schadenfreude. The typology is first tested on a corpus of 472 comments drawn from three social media platforms: Facebook, Twitter, and Instagram. Then a specific corpus of comments is collected and analysed for a particular case of Injustice Schadenfreude: posts concerning Brexit, the United Kingdom's departure from the European Union. The analysis shows that spatial or factual closeness does not appear necessary for feeling Schadenfreude. Finally, a lexicometric automatic analysis is conducted on the general corpus of Italian comments, collected using several hashtags and enriched with comments about the Notre-Dame fire, showing how even complex emotions like Schadenfreude can be automatically extracted from social media.
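    The abstract states that Schadenfreude can be automatically extracted from social-media comments via a lexicometric analysis. As a rough illustration of what a lexicon-based pass over such a corpus might look like (a minimal sketch, not the authors' actual method), the Python snippet below scores comments against a small set of cue terms; the cue terms and the scoring rule are invented for the example.

```python
# Minimal, illustrative lexicon-based pass over social-media comments.
# The cue terms and scoring rule are assumptions for demonstration only.
import re
from collections import Counter

# Hypothetical Italian cue phrases often associated with Schadenfreude.
CUE_TERMS = {"ben gli sta", "se lo merita", "karma", "godo"}

def schadenfreude_score(comment: str) -> int:
    """Count occurrences of cue phrases in a lowercased comment."""
    text = comment.lower()
    return sum(len(re.findall(re.escape(term), text)) for term in CUE_TERMS)

comments = [
    "Ben gli sta, se lo merita tutto!",
    "Che tristezza questa notizia.",
]

# Rank comments by how many cue phrases they contain.
scores = Counter({c: schadenfreude_score(c) for c in comments})
for comment, score in scores.most_common():
    print(score, comment)
```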

    Does Siri Have a Soul? Exploring Voice Assistants Through Shinto Design Fictions

    It can be difficult to critically reflect on technology that has become part of everyday rituals and routines. To combat this, speculative and fictional approaches have previously been used in HCI to decontextualise the familiar and imagine alternatives. In this work we turn to Japanese Shinto narratives as a way to defamiliarise voice assistants, inspired by the similarity between how assistants appear to 'inhabit' objects and how kami do. By describing an alternate future where assistant presences live inside objects, this approach foregrounds some of the phenomenological quirks that can otherwise easily be lost. Divorced from the reality of daily life, it allows us to re-evaluate some of the interactions and design patterns that are common in the virtual assistants of the present. Comment: 11 pages, 2 images. To appear in the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20).

    How Soundtracks Shape What We See: Analyzing the Influence of Music on Visual Scenes Through Self-Assessment, Eye Tracking, and Pupillometry

    This article presents two studies that deepen the theme of how soundtracks shape our interpretation of audiovisuals. Embracing a multivariate perspective, Study 1 (N = 118) demonstrated, through an online between-subjects experiment, that two different music scores (melancholic vs. anxious) deeply affected the interpretation of an unknown movie scene in terms of empathy felt toward the main character, impressions of his personality, plot anticipations, and perception of the environment of the scene. With the melancholic music, participants felt empathy toward the character, viewing him as more agreeable and introverted and as more oriented to memories than to decisions, while perceiving the environment as cozier. An almost opposite pattern emerged with the anxious music. In Study 2 (N = 92), we replicated the experiment in our lab with the addition of eye-tracking and pupillometric measurements. The results of Study 1 were largely replicated; moreover, we showed that the anxious score, by increasing participants' vigilance and state of alert (wider pupil dilation), favoured greater attention to minor details, as in the case of another character who was very hard to notice (more time spent on his figure). The results highlight the pervasive nature of the influence of music on the interpretation of visual scenes.

    A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities

    Embodied avatars as virtual agents have many applications and provide benefits over disembodied agents, allowing non-verbal social and interactional cues to be leveraged in a manner similar to how humans interact with each other. We present an open embodied avatar built upon the Unreal Engine that can be controlled via a simple Python programming interface. The avatar has lip syncing (phoneme control), head gesture, and facial expression (using either facial action units or cardinal emotion categories) capabilities. We release code and models to illustrate how the avatar can be controlled like a puppet or used to create a simple conversational agent using public application programming interfaces (APIs). GitHub link: https://github.com/danmcduff/AvatarSim. Comment: International Conference on Multimodal Interaction (ICMI 2019).
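    The abstract describes a Python programming interface offering phoneme-level lip syncing, head gestures, and facial expressions driven by action units or emotion categories. The sketch below mimics such a control surface with a toy stand-in class; all names (AvatarClient, set_expression, and so on) are hypothetical and do not reflect the actual AvatarSim API.

```python
# Hypothetical sketch of puppeteering an embodied avatar from Python.
# Class, method, and parameter names are illustrative assumptions only.

class AvatarClient:
    """Toy stand-in for a networked avatar controller."""

    def set_expression(self, action_units=None, emotion=None, intensity=1.0):
        # Would send a facial-expression command: either FACS action units
        # or a cardinal emotion label with an intensity.
        print(f"expression: AUs={action_units} emotion={emotion} x{intensity}")

    def set_head_pose(self, pitch=0.0, yaw=0.0, roll=0.0):
        # Head gesture control, e.g. a small nod or shake.
        print(f"head pose: pitch={pitch} yaw={yaw} roll={roll}")

    def speak(self, phonemes):
        # Lip syncing driven by a timed phoneme sequence.
        for phoneme, duration in phonemes:
            print(f"viseme {phoneme} for {duration:.2f}s")


if __name__ == "__main__":
    avatar = AvatarClient()
    avatar.set_expression(emotion="joy", intensity=0.8)  # smile
    avatar.set_head_pose(pitch=-10.0)                    # slight nod
    avatar.speak([("HH", 0.08), ("EH", 0.12), ("L", 0.07), ("OW", 0.18)])  # "hello"
```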

    An audiovisual corpus of guided tours in cultural sites: data collection protocols in the CHROME project

    Get PDF
    Creating interfaces for cultural heritage access is considered a fundamental research field because of the many beneficial effects it has on society. In this era of significant advances towards natural interaction with machines and a deeper understanding of the nuances of social communication, it is important to investigate the communicative strategies human experts adopt when delivering content to the visitors of cultural sites, as this provides a strong theoretical background for the development of efficient conversational agents. In this work, we present the data collection and annotation protocols adopted for the ongoing creation of the reference material to be used in the Cultural Heritage Resources Orienting Multimodal Experiences (CHROME) project to accomplish that goal.