
    Designing gestures for affective input: an analysis of shape, effort and valence

    We discuss a user-centered approach to incorporating affective expressions in interactive applications, and argue for a design that addresses both body and mind. In particular, we have studied the problem of finding a set of affective gestures. Based on previous work in movement analysis and emotion theory [Davies, Laban and Lawrence, Russell], and a study of an actor expressing emotional states in body movements, we have identified three underlying dimensions of movements and emotions: shape, effort and valence. From these dimensions we have created a new affective interaction model, which we name the affective gestural plane model. We applied this model to the design of gestural affective input to a mobile service for affective messages.
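A model along these lines can be approximated in code. The sketch below is illustrative only: the `Gesture` fields, the 0.5 thresholds and the quadrant labels are assumptions for exposition, not the paper's actual model; it shows how shape and effort readings might be mapped onto a plane of emotional quadrants.

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    shape: float   # how open/expanded the movement is, 0 (closed) to 1 (open)
    effort: float  # energy of the movement, 0 (light) to 1 (strong)

def quadrant(g: Gesture) -> str:
    """Place a gesture in one quadrant of a hypothetical plane:
    shape is read as a valence cue, effort as an arousal cue."""
    valence = "positive" if g.shape >= 0.5 else "negative"
    arousal = "high" if g.effort >= 0.5 else "low"
    return f"{valence}-valence/{arousal}-arousal"
```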

    Illuminating music: impact of color hue for background lighting on emotional arousal in piano performance videos

    This study sought to determine if hues overlaid on a video recording of a piano performance would systematically influence perception of its emotional arousal level. The hues were artificially added to a series of four short video excerpts of different performances using video editing software. Over two experiments, 106 participants were sorted into four conditions, each viewing a different combination of musical excerpt (two excerpts with nominally high arousal and two with nominally low arousal) and hue (red or blue). Participants rated the emotional arousal depicted by each excerpt. Results indicated that overall arousal ratings were consistent with the nominal arousal of the selected excerpts. However, contrary to predictions, the hues added to the video produced no significant effect on arousal ratings. This could be due to the combined effects of other channels of information (e.g., the music and player movement) dominating the hypothesized influence of hue on perceived arousal (red was expected to enhance and blue to reduce the arousal of the performance). To our knowledge this is the first study to investigate the impact of these hues on perceived arousal of a music performance, and it has implications for musical performers and stage lighting. Further research investigating reactions during live performance and manipulating a wider range of lighting hues, saturation and brightness levels, and editing techniques is recommended to further scrutinize the veracity of the findings.

    CaRo 2.0: an interactive system for expressive music rendering

    In several application contexts in the multimedia field (education, extreme gaming), interaction with the user requires that the system be able to render music expressively. Expressiveness is the added value of a performance and is part of the reason that music is interesting to listen to. Understanding and modeling the communication of expressive content is important for many engineering applications in information technology (e.g., music information retrieval, as well as several applications in the affective computing field). In this paper, we present an original approach to modifying the expressive content of a performance in a gradual way, applying a smooth morphing among performances with different expressive content in order to adapt the expressive character of the audio to the user's desires. The system won the final stage of Rencon 2011. This performance RENdering CONtest is a research project that organizes contests for computer systems generating expressive musical performances.
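The core idea of smooth morphing between expressive renderings can be sketched as a parameter interpolation. This is a minimal sketch under the assumption that each performance is summarized by a small set of numeric expressive parameters; the parameter names and values below are invented for illustration and are not taken from the CaRo 2.0 system.

```python
def morph(perf_a: dict, perf_b: dict, t: float) -> dict:
    """Blend two sets of expressive parameters.
    t = 0.0 reproduces perf_a, t = 1.0 reproduces perf_b; intermediate
    values give a gradual transition between the two expressive characters."""
    return {name: (1.0 - t) * perf_a[name] + t * perf_b[name] for name in perf_a}

# Hypothetical parameter sets for a "bright" and a "dark" rendering
bright = {"tempo_factor": 1.15, "loudness": 0.9}
dark = {"tempo_factor": 0.85, "loudness": 0.4}
halfway = morph(bright, dark, 0.5)  # midpoint between the two characters
```

In an interactive setting, `t` would be driven continuously by the user's control input, so the rendering drifts smoothly toward the desired expressive character rather than switching abruptly.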

    Changing musical emotion: A computational rule system for modifying score and performance

    The CMERS system architecture has been implemented in the Scheme programming language within the Impromptu music programming environment, with the objective of providing researchers with a tool for testing the relationships between musical features and emotion. A musical work in CMERS is represented by a music object hierarchy that is based on GTTM's grouping structure and is generated automatically from phrase-boundary markup and a MIDI file. The Mode rule type of CMERS converts notes into those of the parallel mode; no change in pitch height occurs in the conversion. The odds of a correct result with CMERS are reported to be approximately five times greater than with DM. A repeated-measures analysis of variance for valence shows a significant difference between systems, F(1, 17) = 45.49, p < .0005, and a significant interaction between system and quadrant, F(3, 51) = 4.23, p = .01, indicating that CMERS is considerably more effective than DM at correctly influencing valence. © 2010 Massachusetts Institute of Technology
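A mode rule of the kind described, converting to the parallel mode without changing pitch height, can be illustrated as follows. This is a plausible sketch, not the CMERS implementation: it flattens the major 3rd, 6th and 7th scale degrees by one semitone, so notes move to the parallel natural minor while staying in the same register.

```python
def to_parallel_minor(midi_pitch: int, tonic_pitch_class: int) -> int:
    """Map a note in a major key to its parallel (natural) minor.
    The major 3rd, 6th and 7th degrees are lowered by one semitone;
    all other notes are unchanged, so overall pitch height is preserved."""
    degree = (midi_pitch - tonic_pitch_class) % 12  # chromatic degree above the tonic
    return midi_pitch - 1 if degree in (4, 9, 11) else midi_pitch
```

For example, in C major (tonic pitch class 0), E4 (MIDI 64) becomes E-flat 4 (63), while G4 (67) is left untouched; the melody stays in the same octave but takes on the minor-mode color associated with lower valence.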

    Towards a multi-layer architecture for multi-modal rendering of expressive actions

    Expressive content has multiple facets that can be conveyed by music, gesture, and actions. Different application scenarios can require different metaphors for expressiveness control. In order to meet the requirements for flexible representation, we propose a multi-layer architecture structured into three main levels of abstraction. At the top (user level) there is a semantic description, which is adapted to specific user requirements and conceptualization. At the other end are low-level features that describe parameters strictly related to the rendering model. In between these two extremes, we propose an intermediate layer that provides a description shared by the various high-level representations on one side, and that can be instantiated to the various low-level rendering models on the other side. In order to provide a common representation of different expressive semantics and different modalities, we propose a physically-inspired description specifically suited for expressive actions.
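The three-layer idea can be sketched as two mapping steps. All names and values below are made up for illustration: a semantic label (user level) maps to an intermediate, physically-inspired description, which a given rendering back-end then instantiates into its own low-level parameters.

```python
# Intermediate layer: a physically-inspired description shared across
# modalities (values here are invented for the sake of the example).
SEMANTIC_TO_PHYSICAL = {            # user level -> intermediate layer
    "joyful": {"mass": 0.2, "elasticity": 0.9, "friction": 0.1},
    "solemn": {"mass": 0.9, "elasticity": 0.2, "friction": 0.7},
}

def audio_parameters(physical: dict) -> dict:
    """One possible low-level instantiation, here for an audio renderer:
    light, elastic movement maps to faster tempo and detached articulation."""
    return {
        "tempo_factor": 1.0 + physical["elasticity"] - physical["mass"],
        "staccato": physical["friction"] < 0.5,
    }
```

A gesture or graphics renderer would define its own instantiation of the same intermediate description, which is what lets the semantic level stay independent of any particular rendering model.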

    Exploring the Affective Loop

    Research in psychology and neurology shows that both body and mind are involved when experiencing emotions (Damasio 1994, Davidson et al. 2003). People are also very physical when they try to communicate their emotions. Somewhere in between being consciously and unconsciously aware of it ourselves, we produce both verbal and physical signs to make other people understand how we feel. Simultaneously, this production of signs involves us in a stronger personal experience of the emotions we express. Emotions are also communicated in the digital world, but there is little focus on users' personal as well as physical experience of emotions in the available digital media. In order to explore whether and how we can expand existing media, we have designed, implemented and evaluated eMoto, a mobile service for sending affective messages to others. With eMoto, we explicitly aim to address both cognitive and physical experiences of human emotions. By combining affective gestures for input with affective expressions that make use of colors, shapes and animations for the background of messages, the interaction "pulls" the user into an affective loop. In this thesis we define what we mean by affective loop and present a user-centered design approach expressed through four design principles inspired by previous work within Human Computer Interaction (HCI) but adjusted to our purposes: embodiment (Dourish 2001) as a means to address how people communicate emotions in real life; flow (Csikszentmihalyi 1990) to reach a state of involvement that goes further than the current context; ambiguity of the designed expressions (Gaver et al. 2003) to allow for open-ended interpretation by the end-users instead of simplistic, one-emotion one-expression pairs; and natural but designed expressions to address people's natural couplings between cognitively and physically experienced emotions.
    We also present results from an end-user study of eMoto indicating that subjects became both physically and emotionally involved in the interaction, and that the designed "openness" and ambiguity of the expressions was appreciated and understood by our subjects. Through the user study, we identified four potential design problems that have to be tackled in order to achieve an affective loop effect: the extent to which users feel in control of the interaction; harmony and coherence between cognitive and physical expressions; timing of expressions and feedback in a communicational setting; and effects of users' personality on their emotional expressions and experiences of the interaction.

    Another You

    Another You is an animated 2D graduate thesis film. The entire film, including the credits, is 4 minutes 45 seconds long. The production phase ran from September 2018 to May 2019. The story is about a housewife who tries to escape from her daily life. Her escape takes place at the dinner table. The housewife has been busy cooking for and serving her family, but no one really cares about her, which makes her so angry that she abruptly leaves the table, shocking everyone. She runs upstairs to a hallway with many doors and moves from one door to the next, opening each and seeing scenes from her dreary life. At one point, she notices other doors across the hallway. Behind these doors, she sees herself in other lives, taking roads that she has missed taking in her own life. As she follows her fantasy selves through many different life possibilities, she is confronted by one of her fantasy selves, who leaves her alone. As her fantasy ends, she finds herself back at the dinner table. Everyone in her family begins to pay attention to her, finally showing her that they really do care for her. Another You is a 2D animation that was made using many software programs, including Adobe Photoshop, After Effects, Premiere, and TVPaint Animation. The final output format was 1080 HD with a high-quality stereophonic track.

    Musical audio-mining
