
    Audio-Based Visualization of Expressive Body Movements in Music Performance: An Evaluation of Methodology in Three Electroacoustic Compositions

    An increase in collaboration amongst visual artists, performance artists, musicians, and programmers has given rise to the exploration of multimedia performance arts. A methodology for audio-based visualization has been created that integrates information from the sound with the visualization of physical expression, with the goal of magnifying the expressiveness of the performance. The emphasis is placed on exalting the music by using the audio to affect and enhance the video processing, while the video does not affect the audio at all; in this sense the music is considered autonomous of the video. The audio-based visualization can provide the audience with a deeper appreciation of the music. Unique implementations of the methodology have been created for three compositions, and a qualitative analysis of each implementation is employed to evaluate both the technological and aesthetic merits of each composition.
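    A minimal sketch of the one-directional mapping this abstract describes, in which audio features drive the video processing but never the reverse. The frame rates, feature choices, and brightness/blur mappings below are illustrative assumptions, not the author's implementation:

    import numpy as np

    SR = 44100          # audio sample rate (Hz)
    FPS = 30            # video frame rate
    HOP = SR // FPS     # audio samples per video frame

    def audio_features(frame: np.ndarray) -> tuple[float, float]:
        """Return (rms, spectral centroid in Hz) for one audio frame."""
        rms = float(np.sqrt(np.mean(frame ** 2)))
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / SR)
        centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
        return rms, centroid

    def video_params(rms: float, centroid: float) -> dict:
        """Map audio features onto video-processing parameters (audio -> video only)."""
        return {
            "brightness": min(1.0, rms * 10.0),                 # louder -> brighter
            "blur_radius": max(0.0, 8.0 - centroid / 1000.0),   # brighter timbre -> sharper image
        }

    # Demo on one second of synthetic audio (a 440 Hz tone with tremolo).
    t = np.arange(SR) / SR
    audio = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
    for i in range(0, len(audio) - HOP, HOP):
        params = video_params(*audio_features(audio[i:i + HOP]))
        # ...apply `params` to the corresponding video frame here...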

    Creating Bright Shadows: Visual Music Using Immersion, Stereography, and Computer Animation

    This thesis outlines the research and process of creating an immersive audiovisual work titled “Bright Shadows,” an 11-minute three-dimensional animation of dynamic, colorful abstractions choreographed to instrumental music. This piece belongs to a long tradition of visual art aspiring to musical analogy, called “visual music,” and draws heavily from the two-dimensional aesthetic stylings of time-based visual music works made in the early to mid-twentieth century. Among the topics discussed in this paper is an overview of the artistic and technical challenges associated with translating the visual grammar of these two-dimensional works to three-dimensional computer graphics while establishing a unique aesthetic style. This paper also presents a framework for creating a digital, synthetic space using a large-format immersive theater, stereoscopic imaging, and static framing of the digital environment.

    Paralinguistic vocal control of interactive media: how untapped elements of voice might enhance the role of non-speech voice input in the user's experience of multimedia.

    Much interactive media development, especially commercial development, implies the dominance of the visual modality, with sound as a limited supporting channel. The development of multimedia technologies such as augmented reality and virtual reality has further revealed a distinct partiality to visual media. Sound, however, and particularly voice, has many aspects that have yet to be adequately investigated. Exploration of these aspects may show that sound can, in some respects, be superior to graphics in creating immersive and expressive interactive experiences. With this in mind, this thesis investigates the use of non-speech voice characteristics as a complementary input mechanism for controlling multimedia applications. It presents a number of projects that employ the paralinguistic elements of voice as input to interactive media, including both screen-based and physical systems. These projects serve as a means of exploring the factors that seem likely to affect users’ preferences and interaction patterns during non-speech voice control. This exploration forms the basis for an examination of potential roles for paralinguistic voice input. The research includes the conceptual and practical development of the projects and a set of evaluative studies. The work submitted for the Ph.D. comprises practical projects (50 percent) and a written dissertation (50 percent). The thesis aims to advance understanding of how voice can be used, both on its own and in combination with other input mechanisms, in controlling multimedia applications. It offers a step forward in attempts to integrate the paralinguistic components of voice as a complementary input mode for speech input applications, creating a synergistic combination that might let the strengths of each mode overcome the weaknesses of the other.
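    As a rough illustration of the kind of paralinguistic input the thesis investigates, the sketch below estimates pitch and loudness from a short voice frame and exposes them as continuous controller values. The autocorrelation pitch tracker, the assumed voice range, and the control mapping are assumptions of mine, not taken from the thesis:

    import numpy as np

    SR = 16000  # assumed microphone sample rate (Hz)

    def pitch_and_loudness(frame: np.ndarray) -> tuple[float, float]:
        """Estimate (f0 in Hz, rms) of a voiced frame via autocorrelation."""
        frame = frame - frame.mean()
        rms = float(np.sqrt(np.mean(frame ** 2)))
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = SR // 400, SR // 70       # search lags covering 70-400 Hz, a typical voice range
        lag = lo + int(np.argmax(ac[lo:hi]))
        return SR / lag, rms

    def to_controls(f0: float, rms: float) -> dict:
        """Normalise the features into 0..1 controller values for an application."""
        return {
            "pitch_ctl": float(np.clip((f0 - 70) / (400 - 70), 0, 1)),
            "energy_ctl": float(np.clip(rms * 20, 0, 1)),
        }

    # Demo: a 512-sample frame of a synthetic 200 Hz "voice".
    t = np.arange(512) / SR
    print(to_controls(*pitch_and_loudness(0.3 * np.sin(2 * np.pi * 200 * t))))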

    Interloops in audiovisual works

    This portfolio presents eight original audiovisual works, plus six experimental studies that fed into their creation, alongside a written commentary that articulates the research that formed and manifests in the works. These artworks include elements of various forms of sound and visual art practices, including film, sculpture, music and sound, as well as incorporating processes of performance, installation and recordings. Aiming to achieve a balance and integration of the audio and the visual, they explore various possible forms of audiovisual coherences. Overall, through creative practice research and its critical discussion, the portfolio examines interrelationships between sound and image. It configures these as a process of audiovisual looping, here termed an ‘interloop’, in which each element continually affects the other, extending out towards the audience and the space of reception, and feeding back into the work itself. A form of conversation between the audio and visual elements is therefore established: an ongoing dialogue aimed at achieving a sense of synchronicity in the presentation of audiovisual works. The works in the portfolio are presented as fixed medium video, live performance documentations, web and software applications, sound sculpture, and scores. The portfolio submission and commentary are also available online at https://sites.google.com/view/lq-phd.

    Seeing sound: “How to generate visual artworks by analysing a music track and representing it in terms of emotion analysis and musical features?”

    Music and visual artwork are a valuable part of our daily life. Since both media induce human emotion, this thesis demonstrates how to convert music into visual artwork such as generative art. In particular, the project shows a method for connecting music emotion to a colour theme. The thesis describes a model of human emotion based on arousal and valence, and explains how colour affects our emotions. To connect music emotion to the colour theme, it presents a method for retrieving music information, including the arousal and valence of the music. To generate visual artwork from the music, it demonstrates the implementation of working software that integrates music emotion with musical characteristics such as frequency analysis. The thesis also presents how generative artwork can be applied to everyday products, discusses the learning outcomes of the project based on a practice-based research methodology, and introduces plans for further work involving AI.
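    The arousal/valence-to-colour connection the thesis describes can be pictured with a small sketch. The specific hue, saturation, and brightness mappings below are illustrative guesses, not the colour themes defined in the thesis:

    import colorsys

    def emotion_to_rgb(valence: float, arousal: float) -> tuple[int, int, int]:
        """Map valence/arousal in [-1, 1] onto an RGB colour."""
        v = (valence + 1) / 2    # 0..1
        a = (arousal + 1) / 2    # 0..1
        hue = 0.66 * (1 - v)     # negative valence -> blue (0.66), positive -> red (0.0)
        sat = 0.4 + 0.6 * a      # calm -> muted, excited -> vivid
        val = 0.5 + 0.5 * a      # calm -> dark, excited -> bright
        r, g, b = colorsys.hsv_to_rgb(hue, sat, val)
        return int(r * 255), int(g * 255), int(b * 255)

    print(emotion_to_rgb(valence=0.8, arousal=0.6))    # happy, energetic -> warm and vivid
    print(emotion_to_rgb(valence=-0.7, arousal=-0.5))  # sad, calm -> dark, muted blue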

    Plays of proximity and distance: Gesture-based interaction and visual music

    This thesis presents the relations between gestural interfaces and artworks that deal with the real-time, simultaneous performance of dynamic imagery and sound, the so-called visual music practices. These relations extend across historical, practical, and theoretical viewpoints, all of which this study aims to cover, at least partially. They are exemplified by two artistic projects developed by the author of this thesis, which serve as a starting point for analysing the issues around the two main topics. The principles, patterns, challenges, and concepts that structured the two artworks are extracted, analysed, and discussed, providing elements for comparison and evaluation that may be useful for future research on the topic.

    Investigating User Experiences Through Animation-based Sketching


    Emotional remapping of music to facial animation

    We propose a method to extract the emotional data from a piece of music and then use that data, via a remapping algorithm, to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspects of music and our parametric behavioral head model for face animation. We address the issue of affective communication remapping in general, i.e. the translation of affective content (e.g. emotions and mood) from one communication form to another. We report on the results of our MusicFace system, which uses these techniques to automatically create emotional facial animations from multi-instrument polyphonic music scores in MIDI format and a remapping rule set. © ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 2006 ACM SIGGRAPH symposium on Videogames, 143-149. Boston, Massachusetts: ACM. doi:10.1145/1183316.118333
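    The “remapping rule set” idea can be illustrated with a hypothetical sketch in which features extracted from a MIDI score are translated into facial-animation parameters. The feature names, thresholds, and parameter names below are invented for illustration and are not the published MusicFace rules:

    from dataclasses import dataclass

    @dataclass
    class MusicalFeatures:
        tempo_bpm: float       # extracted from the MIDI score
        mean_velocity: float   # average MIDI dynamics, 0..127
        minor_mode: bool       # True if the passage is in a minor key

    def remap_to_face(f: MusicalFeatures) -> dict:
        """Apply simple rules mapping musical features to face parameters in 0..1."""
        params = {
            "eyebrow_raise": min(1.0, f.tempo_bpm / 180.0),  # faster -> more alert
            "mouth_open": f.mean_velocity / 127.0,           # louder -> wider open
            "smile": 0.2 if f.minor_mode else 0.8,           # mode biases the expression
        }
        if f.minor_mode and f.tempo_bpm < 80:
            params["brow_furrow"] = 0.7                      # slow minor passage -> sorrowful
        return params

    print(remap_to_face(MusicalFeatures(tempo_bpm=70, mean_velocity=50, minor_mode=True)))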