
    Modelling affect for horror soundscapes

    The feeling of horror within movies or games relies on the audience's perception of a tense atmosphere, often achieved through sound accompanying the on-screen drama, which guides the emotional experience throughout the scene or gameplay sequence. These progressions are often crafted through a priori knowledge of how a scene or gameplay sequence will play out and the emotional patterns a game director intends to transmit. The appropriate design of sound becomes even more challenging once the scenery and the general context are autonomously generated by an algorithm. Towards realizing sound-based affective interaction in games, this paper explores the creation of computational models capable of ranking short audio pieces based on crowdsourced annotations of tension, arousal and valence. Affect models are trained via preference learning on over a thousand annotations using support vector machines, whose inputs are low-level features extracted from the audio assets of a comprehensive sound library. The models constructed in this work predict the tension, arousal and valence elicited by sound with accuracies of approximately 65%, 66% and 72%, respectively.
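    The ranking approach described above can be sketched with the pairwise transform: each annotation "clip a is tenser than clip b" becomes a training example on the feature difference, and a linear model is fit so that its score respects the preference. This is a hypothetical illustration with fabricated stand-in data, and a plain perceptron stands in for the support vector machine used in the paper.

    ```python
    # Minimal sketch of preference learning via the pairwise transform.
    # All data below (feature vectors, latent tension scores) is fabricated.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 8))        # 20 audio clips x 8 low-level features (stand-in)
    true_w = rng.normal(size=8)
    latent = X @ true_w                 # latent "tension" used to fabricate annotations

    # Pairwise preferences: (a, b) means clip a was annotated as tenser than clip b.
    pairs = [(a, b) for a in range(20) for b in range(20) if latent[a] > latent[b] + 0.5]

    # Perceptron on feature differences: learn w so that w . (x_a - x_b) > 0.
    w = np.zeros(8)
    for _ in range(100):                # passes over the preference set
        for a, b in pairs:
            if (X[a] - X[b]) @ w <= 0:  # violated preference: nudge w towards it
                w += X[a] - X[b]

    # Fraction of annotated preferences the learned ranker reproduces.
    accuracy = np.mean([(X[a] - X[b]) @ w > 0 for a, b in pairs])
    print(f"pairwise accuracy: {accuracy:.2f}")
    ```

    The learned weight vector can then score and rank unseen clips directly, which is what makes the pairwise formulation a ranking model rather than a rating model.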

    Hearing the Past

    Recent developments in computer technology are providing historians with new ways to see (and to hear, touch, or smell) traces of the past. Place-based augmented reality applications are an increasingly common feature at heritage sites and museums, allowing historians to create immersive, multifaceted learning experiences. Now that computer vision can be directed at the past, research involving thousands of images can recreate lost or destroyed objects or environments, and discern patterns in vast datasets that could not be perceived by the naked eye. Seeing the Past with Computers is a collection of twelve thought-pieces on the current and potential uses of augmented reality and computer vision in historical research, teaching, and presentation. The experts gathered here reflect upon their experiences working with new technologies, share their ideas for best practices, and assess the implications of, and imagine future possibilities for, new methods of historical study. Among the experimental topics they explore are the use of augmented reality to empower students to challenge the presentation of historical material in their textbooks; the application of computer vision to unlock unusual cultural knowledge, such as the secrets of vaudevillian stage magic; hacking facial recognition technology to reveal victims of racism in a century-old Australian archive; and rebuilding the soundscape of an Iron Age village with aural augmented reality. This volume is a valuable resource for scholars and students of history and the digital humanities more broadly, and will inspire them to apply innovative methods that open new paths for conducting and sharing their own research.

    RankTrace : relative and unbounded affect annotation

    How should annotation data be processed so that it can best characterize the ground truth of affect? This paper attempts to address this critical question by testing various methods of processing annotation data on their ability to capture phasic elements of skin conductance. Towards this goal, the paper introduces a new affect annotation tool, RankTrace, that allows for the annotation of affect in a continuous yet unbounded fashion. RankTrace is tested on first-person annotation lines (traces) of tension elicited by a horror video game. The key findings of the paper suggest that the relative processing of traces via their mean gradient yields the best and most robust predictors of phasic manifestations of skin conductance.
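    The mean-gradient processing described above can be sketched as follows: the unbounded trace is split into time windows and each window is summarized by the average first difference, so only the relative direction of change matters, never the absolute trace values. This is a minimal illustration under that reading, not the RankTrace implementation, and the window size and example trace are invented.

    ```python
    # Minimal sketch of relative trace processing via the mean gradient.
    import numpy as np

    def mean_gradient(trace, window):
        """Split the trace into consecutive windows and return the mean
        first-order difference (gradient) within each window."""
        trace = np.asarray(trace, dtype=float)
        n_windows = len(trace) // window
        out = []
        for i in range(n_windows):
            seg = trace[i * window:(i + 1) * window]
            out.append(np.mean(np.diff(seg)))
        return np.array(out)

    # A rising-then-falling tension trace: positive then negative mean gradient.
    trace = [0, 1, 3, 6, 10, 9, 7, 4]
    print(mean_gradient(trace, 4))  # → [ 2. -2.]
    ```

    Because the output depends only on differences, two annotators using very different absolute scales on an unbounded slider produce comparable processed traces.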

    Experimental Analysis of Spatial Sound for Storytelling in Virtual Reality

    Spatial sound is useful in enhancing immersion and presence of the user in a virtual world. This audio design allows the game designer to place audio cues that appropriately match the visual cues in a virtual game environment. Localized audio cues placed in a story-based game environment also help to evoke an emotional response from the user and to construct the narrative of the game by directing the user's attention towards the guiding action events. This thesis explores the usefulness of spatial sound for improving the performance and experience of a user in a virtual game environment. Additionally, with the help of subjective and objective measures collected from a user study conducted on three different evaluation models, the thesis analyzes and establishes the potential of spatial sound as a powerful storytelling tool in a game environment designed for Virtual Reality.

    Survival horror games - an uncanny modality

    This study investigates the relationship between the perceived eeriness of a virtual character and the perceived human-likeness of attributes of its motion and sound. 100 participants were asked to rate 13 video clips of 12 different virtual characters and one human. The results indicate that attributes of motion and sound exaggerate the uncanny phenomenon and how frightening a character is perceived to be. Strong correlations were identified between a character's perceived eeriness and how human-like its voice sounded, how human-like its facial expressions appeared, and the synchronization of its sound with lip movement; characters rated as the least synchronized were perceived to be the most frightening. Based on these results, the paper defines an initial set of hypotheses for the fear-evoking aspects of character facial rendering and vocalization in survival horror games. These can be used by game designers seeking to increase the fear factor in the genre, and they will form the basis of further experiments that, it is hoped, will lead to a conceptual framework for the uncanny.

    Developing heightened listening: A creative tool for introducing primary school children to sound-based music

    Sound-based music (sbm), an umbrella term created by Landy (2007) for music where sound, rather than the musical note, is the main unit, rarely features in music curricula in schools and currently has a relatively small audience outside of academia. Building on previous research conducted at De Montfort University concerned with widening access to sbm, this thesis investigates whether sbm composition can provide an engaging experience for Key Stage 2 pupils (7-11 year olds), supported by the development of heightened listening skills. The research is interdisciplinary, spanning sbm studies, music technology and education, and involved case studies in eight schools with 241 children conducted from 2013 to 2015. Each case study included a series of workshops in which the pupils developed listening skills, recorded sounds and created sound-based compositions. Using a grounded theory approach, qualitative and quantitative data were gathered over three phases through questionnaires, teacher feedback, observations, recordings and pupils' work. The results indicate that the children engaged highly with the workshop activities, and the data also suggest that the heightened listening training helped to support the pupils in their compositional work. The main factor in this engagement appeared to be the opportunity to be creative, which reports since the 1990s have highlighted as essential for all children. Additionally, a range of complex local conditions influenced engagement in each case study, and there were indications that engagement decreased with age. Pupils chose a variety of approaches for composing sound-based work, ranging from incorporating detailed narratives to focusing purely on experimenting with the sound itself without reference to any external themes. The compositional pathway chosen by each pupil seemed to be partly influenced by previous musical experience.

    The ordinal nature of emotions

    Computationally representing everyday emotional states is a challenging task and, arguably, one of the most fundamental for affective computing. Standard practice in emotion annotation is to ask humans to assign an absolute value of intensity to each emotional behavior they observe. Psychological theories and evidence from multiple disciplines including neuroscience, economics and artificial intelligence, however, suggest that assigning reference-based (relative) values to subjective notions is better aligned with the underlying representations than assigning absolute values. Evidence also shows that we use reference points, or anchors, against which we evaluate values such as the emotional state of a stimulus, suggesting again that ordinal labels are a more suitable way to represent emotions. This paper draws together the theoretical reasons to favor relative over absolute labels for representing and annotating emotion, reviewing the literature across several disciplines. We go on to discuss good and bad practices for treating ordinal and other forms of annotation data, and make the case for preference learning methods as the appropriate approach for treating ordinal labels. We finally discuss the advantages of relative annotation with respect to both reliability and validity through a number of case studies in affective computing, and address common objections to the use of ordinal data. Overall, the thesis that emotions are by nature relative is supported by both theoretical arguments and evidence, and opens new horizons for the way emotions are viewed, represented and analyzed computationally.
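    The shift from absolute to ordinal labels that the paper argues for amounts to a simple data transformation: absolute intensity ratings are discarded in favor of the pairwise preferences they imply. A hedged sketch of that conversion, with an invented tolerance parameter to avoid forcing an order onto near-ties:

    ```python
    # Sketch: converting absolute intensity annotations into ordinal
    # (pairwise preference) labels. The `tol` threshold is a hypothetical
    # choice for treating near-equal ratings as ties rather than preferences.
    def to_preferences(ratings, tol=0.0):
        """Return (i, j) pairs meaning item i was rated strictly higher than j."""
        prefs = []
        for i, ri in enumerate(ratings):
            for j, rj in enumerate(ratings):
                if ri - rj > tol:
                    prefs.append((i, j))
        return prefs

    # Three observed behaviors rated 0.9, 0.4, 0.4: only the relative order survives.
    print(to_preferences([0.9, 0.4, 0.4], tol=0.05))  # → [(0, 1), (0, 2)]
    ```

    Once in this form, the labels carry no anchor-dependent absolute scale, and preference learning methods can be applied directly.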