8 research outputs found

    Специфичности мултимедијалне интерпретације камерне музике савремених српских аутора [Specific features of the multimedia interpretation of chamber music by contemporary Serbian authors]


    Interactive music visualization: implementation, realization and evaluation

    This thesis describes the full process of developing a music visualization, from implementation through realization to evaluation. The main goal is to understand how the audience's experience of a live performance can be enhanced through music visualization. Music visualization can convey the feelings in the music and build an intense atmosphere during a live performance, strengthening the connection between the live music and the audience through visuals. These visuals have to be related to the live music; they must also respond quickly to changes in the music and introduce novelty. The mapping between music and visuals is the focus of this project, with the aim of improving the relationship between the live performance and the spectators. The implementation translates music into graphic visualizations, so the project initially built on existing work; later, new ways of conveying music into visuals were introduced. Several attempts were made to discover the most effective mapping between music and visualization, so that people can fully connect with the performance. These attempts resulted in several music visualizations created for four live music performances, after which an online survey was produced to evaluate those performances. Finally, conclusions are presented based on the survey results, explaining which musical elements should be depicted in the visuals and how the visuals should respond to them. (Universidade da Madeira)
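    As a rough illustration of the kind of music-to-visuals mapping this thesis investigates, the sketch below shows one frame of musical features being translated into visual parameters. It is a minimal, hypothetical example in plain Python, not the thesis's implementation: the feature values would normally come from a real-time audio analysis stage, and the specific assignments (amplitude to size, pitch to colour, onsets to a flash that introduces novelty) are illustrative assumptions only.

        # Minimal sketch of a music-to-visuals mapping (illustrative only).
        # The feature values would normally come from a real-time analysis stage;
        # here they are plain numbers for clarity.

        def map_features_to_visuals(amplitude, pitch_hz, onset):
            """Translate one frame of musical features into visual parameters."""
            size = 10 + amplitude * 200                                     # louder music -> larger shape
            hue = min(max((pitch_hz - 55.0) / (880.0 - 55.0), 0.0), 1.0)   # pitch position mapped to colour
            flash = 1.0 if onset else 0.0                                   # note onsets trigger a sudden visual change
            return {"size": size, "hue": hue, "flash": flash}

        # Example frame: a moderately loud A4 (440 Hz) with a note onset.
        print(map_features_to_visuals(amplitude=0.6, pitch_hz=440.0, onset=True))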

    Design Strategies for Adaptive Social Composition: Collaborative Sound Environments

    In order to develop successful collaborative music systems, a variety of subtle interactions need to be identified and integrated. Gesture capture, motion tracking, real-time synthesis, environmental parameters and ubiquitous technologies can each be used effectively to develop innovative approaches to instrument design, sound installations, interactive music and generative systems. Current solutions tend to prioritise one or more of these approaches, refining a particular interface technology, software design or compositional approach developed for a specific composition, performer or installation environment. Within this diverse field a group of novel controllers, described as ‘Tangible Interfaces’, have been developed. These are intended for use by novices and in many cases follow a simple model of interaction, controlling synthesis parameters through simple user actions. Other approaches offer sophisticated compositional frameworks, but many of these are idiosyncratic and highly personalised; as such they are difficult to engage with and ineffective for groups of novices. The objective of this research is to develop effective design strategies for implementing collaborative sound environments, using key terms and vocabulary drawn from the available literature. This is articulated by combining an empathic design process with controlled sound perception and interaction experiments. The identified design strategies have been applied to the development of a new collaborative digital instrument. A range of technical and compositional approaches was considered to define this process, which can be described as Adaptive Social Composition. (Dan Livingston)
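    To make the contrast with more sophisticated compositional frameworks concrete, the simple interaction model attributed to tangible interfaces can be reduced to a direct binding between a user action and a synthesis parameter. The sketch below, in plain Python with entirely hypothetical action and parameter names, illustrates only that minimal model, not any system described in the thesis.

        # Sketch of the simple tangible-interface interaction model:
        # each user action is bound directly to one synthesis parameter.
        # Action and parameter names are hypothetical.

        bindings = {
            "rotate_puck": "filter_cutoff",    # turning an object opens or closes a filter
            "slide_puck": "tempo",             # moving it changes the tempo
            "tap_surface": "trigger_sample",   # tapping fires a sample
        }

        synth_state = {"filter_cutoff": 0.5, "tempo": 120.0, "trigger_sample": 0}

        def handle_action(action, value):
            """Apply a single user action to the synthesis parameter it is bound to."""
            parameter = bindings[action]
            synth_state[parameter] = value
            return parameter, value

        print(handle_action("rotate_puck", 0.8))   # -> ('filter_cutoff', 0.8)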

    ESCOM 2017 Proceedings


    Soma: live performance where congruent musical, visual, and proprioceptive stimuli fuse to form a combined aesthetic narrative

    Artists and scientists have long had an interest in the relationship between music and visual art. Today, many occupy themselves with correlated animation and music, called 'visual music'. Established tools and paradigms for performing live visual music, however, have several limitations: virtually no user interface exists with an expressivity comparable to live musical performance; mappings between music and visuals are typically reduced to the music's beat and amplitude being statically associated with the visuals, disallowing close audiovisual congruence, tension and release, and suspended expectation in narratives; collaborative performance, common in other live art, is mostly absent due to technical limitations; and preparing or improvising performances is complicated, often requiring software development. This thesis addresses these limitations through a transdisciplinary integration of findings from several research areas, detailing the resulting ideas and their implementation in a novel system. Musical instruments are used as the primary control data source, accurately encoding all musical gestures of each performer. The advanced embodied knowledge musicians have of their instruments allows increased expressivity, the full control data bandwidth allows high mapping complexity, and musicians' familiarity with collaborative performance may translate to visual music performance. The practice of Mutable Mapping, gradually creating, destroying and altering mappings, may allow for a narrative in mapping during performance. The art form of Soma, in which correlated auditory, visual and proprioceptive stimuli form a combined narrative, builds on the knowledge that performers and audiences are more engaged in performances requiring advanced motor knowledge, and when congruent percepts across modalities coincide. Preparing and improvising are simplified by re-adapting the Processing programming language for artists to behave as a plug-in API, encapsulating complexity in modules which may be dynamically layered during performance. Design research methodology is employed during development and evaluation, with the additional viewpoint of ethnography introduced during evaluation, engaging musicians, audience and visuals performers.
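    The idea of Mutable Mapping, in which the mappings between musical gestures and visuals are themselves created, altered and destroyed during the performance, can be pictured as a registry of mapping functions that the visuals performer edits at run time. The sketch below is a minimal, hypothetical illustration in plain Python; the thesis implements this on top of Processing plug-in modules, which are not reproduced here.

        # Sketch of 'mutable mapping': the set of gesture-to-visual mappings is
        # itself performable and can be created, altered and destroyed live.
        # All names are hypothetical.

        mappings = {}  # name -> function taking musical features, returning visual parameters

        def add_mapping(name, fn):
            mappings[name] = fn

        def remove_mapping(name):
            mappings.pop(name, None)

        def render_frame(features):
            """Layer the currently active mappings into one set of visual parameters."""
            visuals = {}
            for fn in mappings.values():
                visuals.update(fn(features))
            return visuals

        # Start with note velocity driving brightness, then add a pitch-to-colour
        # layer mid-performance, then destroy the brightness mapping.
        add_mapping("brightness", lambda f: {"brightness": f["velocity"]})
        print(render_frame({"velocity": 0.7, "pitch": 64}))
        add_mapping("colour", lambda f: {"hue": f["pitch"] / 127.0})
        print(render_frame({"velocity": 0.7, "pitch": 64}))
        remove_mapping("brightness")
        print(render_frame({"velocity": 0.7, "pitch": 64}))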

    Using Music to Interact with a Virtual Character

    We present a real-time system which allows musicians to interact with synthetic virtual characters as they perform. Using Max/MSP to parameterize keyboard and vocal input, meaningful features (pitch, amplitude, chord information, and vocal timbre) are extracted from the live performance in real time. These extracted musical features are then mapped to character behaviour in such a way that the musician's performance elicits a response from the virtual character. The system uses the ANIMUS framework to generate believable character expressions. Experimental results are presented for simple characters.
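    The pipeline described here, live input to feature extraction to mapping to character behaviour, can be sketched roughly as follows. This is a hypothetical illustration in plain Python: the actual system extracts features in Max/MSP and animates the character with the ANIMUS framework, neither of which is reproduced, and the behaviour parameters and thresholds below are assumptions made only for the sake of the example.

        # Rough sketch of the described pipeline: extracted musical features are
        # mapped to behaviour parameters of a virtual character. Names and values
        # are purely illustrative.

        def map_to_behaviour(pitch_hz, amplitude, chord_quality, vocal_brightness):
            """Map one frame of musical features to character behaviour parameters."""
            return {
                "arousal": amplitude,                                  # louder playing -> more energetic character
                "valence": 1.0 if chord_quality == "major" else 0.3,   # major chords read as happier
                "gaze_height": min(pitch_hz / 1000.0, 1.0),            # higher pitches draw the gaze upward
                "expression_sharpness": vocal_brightness,              # brighter vocal timbre -> sharper expression
            }

        print(map_to_behaviour(pitch_hz=440.0, amplitude=0.8,
                               chord_quality="major", vocal_brightness=0.5))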
