
    Introduction to Gestural Similarity in Music. An Application of Category Theory to the Orchestra

    Mathematics, and more generally the computational sciences, intervene in several aspects of music. Mathematics describes the acoustics of sounds, giving formal tools to physics, and the matter of music itself in terms of compositional structures and strategies. Mathematics can also be applied to the entire making of music, from the score to the performance, connecting compositional structures to the acoustical reality of sounds. Moreover, the precise concept of gesture plays a decisive role in understanding musical performance. In this paper, we apply some concepts of category theory to compare the gestures of orchestral musicians, and to investigate the relationship between orchestra and conductor, as well as between listeners and conductor/orchestra. To this aim, we introduce the concept of gestural similarity. The mathematical tools used can be applied to gesture classification and to interdisciplinary comparisons between music and the visual arts. Comment: the final version of this paper has been published by the Journal of Mathematics and Music.
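
    The paper's machinery is category-theoretic and is not reproduced here. Purely as an informal illustration of the intuition behind gestural similarity, the Python sketch below treats a gesture as a time-sampled trajectory and calls two gestures similar when their normalized direction profiles agree; the resampling, the cosine measure, and the threshold are assumptions made for this toy example, not the paper's definition.

```python
import numpy as np

def direction_profile(gesture: np.ndarray) -> np.ndarray:
    """Unit direction vectors between consecutive samples of a gesture.

    `gesture` is an (N, D) array of body-point positions sampled over time.
    """
    deltas = np.diff(gesture, axis=0)
    norms = np.linalg.norm(deltas, axis=1, keepdims=True)
    norms[norms == 0] = 1.0               # avoid division by zero for held positions
    return deltas / norms

def gesturally_similar(g1: np.ndarray, g2: np.ndarray, threshold: float = 0.9) -> bool:
    """Crude similarity test: mean cosine agreement of the two direction profiles.

    Both gestures are resampled to the same number of points, so gestures that
    trace the same shape at different speeds or scales still compare as similar.
    """
    n = min(len(g1), len(g2))
    idx1 = np.linspace(0, len(g1) - 1, n).astype(int)
    idx2 = np.linspace(0, len(g2) - 1, n).astype(int)
    d1, d2 = direction_profile(g1[idx1]), direction_profile(g2[idx2])
    cosines = np.sum(d1 * d2, axis=1)     # rows of d1 and d2 are unit vectors
    return float(np.mean(cosines)) >= threshold

# Example: an upward curved stroke vs. the same stroke performed larger.
t = np.linspace(0, 1, 50)
stroke_a = np.stack([t, t ** 2], axis=1)
stroke_b = np.stack([2 * t, 2 * t ** 2], axis=1)
print(gesturally_similar(stroke_a, stroke_b))   # True: same shape, different size
```

    The resampling and normalization above are only a rough stand-in for the invariances the paper treats formally, namely that gestures performed at different speeds or sizes can still count as similar.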

    Sensing and mapping for interactive performance

    This paper describes a trans-domain mapping (TDM) framework for translating meaningful activities from one creative domain onto another. The multi-disciplinary framework is designed to facilitate an intuitive and non-intrusive interactive multimedia performance interface that offers users or performers real-time control of multimedia events through their physical movements. It is intended as a highly dynamic real-time performance tool, sensing and tracking activities and changes in order to drive interactive multimedia performances. Starting from a straightforward definition of the TDM framework, the paper reports several implementations and multi-disciplinary collaborative projects built on it, including a motion- and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker, and discusses the mapping strategies used in each. Plausible future directions, developments, and explorations of the proposed framework, including stage augmentation and virtual and augmented reality, which involve sensing and mapping physical and non-physical changes onto multimedia control events, are also discussed.
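
    The abstract gives no implementation detail for the TDM framework, but the general shape of a trans-domain mapping, sensing an activity in one domain and emitting control events in another, can be sketched as a small sense-normalize-map-emit pipeline. The feature names, value ranges, and event format below are illustrative assumptions rather than the framework's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class MultimediaEvent:
    """A generic control event in the target domain (e.g. a musical trigger)."""
    target: str      # e.g. "synth.cutoff" or "video.brightness" (hypothetical names)
    value: float     # normalized 0..1

def normalize(raw: float, lo: float, hi: float) -> float:
    """Clamp and scale a raw sensor reading into the 0..1 range."""
    return min(1.0, max(0.0, (raw - lo) / (hi - lo)))

def trans_domain_map(motion_energy: float, dominant_hue: float) -> list[MultimediaEvent]:
    """Map physical-domain features onto multimedia control events.

    motion_energy: rough amount of movement in the camera frame (pixels changed)
    dominant_hue:  dominant colour hue in degrees, 0..360
    """
    events = [
        MultimediaEvent("audio.density", normalize(motion_energy, 0, 50_000)),
        MultimediaEvent("visuals.palette", normalize(dominant_hue, 0, 360)),
    ]
    # A sudden burst of motion also triggers a discrete musical event.
    if motion_energy > 40_000:
        events.append(MultimediaEvent("audio.trigger_sample", 1.0))
    return events

print(trans_domain_map(motion_energy=42_000, dominant_hue=200.0))
```

    A real deployment would feed `trans_domain_map` from a camera or sensor loop and dispatch the returned events to audio and video engines; the point of the sketch is only the separation between sensed features and mapped multimedia control events.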

    Computers in Support of Musical Expression


    Designing and Composing for Interdependent Collaborative Performance with Physics-Based Virtual Instruments

    Interdependent collaboration is a system of live musical performance in which performers can directly manipulate each other’s musical outcomes. Most collaborative musical systems implement electronic communication channels between players that allow for parameter mappings, remote transmission of actions and intentions, or exchanges of musical fragments; however, these channels interrupt the energy continuum between gesture and sound, breaking our cognitive representation of gesture-to-sound dynamics. Physics-based virtual instruments allow for acoustically and physically plausible behaviors that are related to (and can be extended beyond) our experience of the physical world, and they inherently maintain and respect a representation of the gesture-to-sound energy continuum. This research explores the design and implementation of custom physics-based virtual instruments for realtime interdependent collaborative performance. It leverages the inherently physically plausible behaviors of physics-based models to create dynamic, nuanced, and expressive interconnections between performers. Design considerations, criteria, and frameworks are distilled from the literature in order to develop three new physics-based virtual instruments and associated compositions intended for dissemination and live performance by the electronic music and instrumental music communities. Conceptual, technical, and artistic details and challenges are described, and reflections and evaluations by the composer-designer and performers are documented.
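
    As a rough illustration of why a physics-based model preserves the gesture-to-sound energy continuum while coupling performers, the sketch below simulates a small mass-spring-damper chain that two players excite from opposite ends, so each player's energy input audibly propagates into the other's side of the instrument. The chain size, constants, and excitation scheme are invented for the example and are not the instruments developed in this research.

```python
import numpy as np

# A chain of coupled masses: performer A excites mass 0, performer B the last mass.
N = 8                                   # number of masses in the shared instrument
k = 2.0e6                               # stiffness, chosen so the modes fall in the audible range
damping = 0.5                           # velocity-proportional loss
dt = 1.0 / 44100                        # audio-rate time step

pos = np.zeros(N)
vel = np.zeros(N)

def step(force_a: float, force_b: float) -> float:
    """Advance the shared physical model one sample and return a 'pickup' signal."""
    global pos, vel
    # Spring forces from left and right neighbours (fixed walls beyond the chain).
    left = np.concatenate(([0.0], pos[:-1]))
    right = np.concatenate((pos[1:], [0.0]))
    accel = k * (left - 2 * pos + right) - damping * vel
    accel[0] += force_a                 # performer A's excitation
    accel[-1] += force_b                # performer B's excitation
    vel = vel + accel * dt              # semi-implicit Euler: velocity first...
    pos = pos + vel * dt                # ...then position
    return pos[N // 2]                  # "listen" at the middle of the chain

# A plucks at t=0; B continuously damps the far end. Both shape the same sound.
signal = [step(1000.0 if i == 0 else 0.0, -50.0 * vel[-1]) for i in range(44100)]
print(f"peak output: {max(abs(s) for s in signal):.6f}")
```

    Because both players act on the same resonating structure, there is no separate "communication channel" to interrupt: performer B's damping gesture is heard directly in the decay of performer A's pluck.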

    Plays of proximity and distance: Gesture-based interaction and visual music

    This thesis presents the relations between gestural interfaces and artworks that deal with the real-time, simultaneous performance of dynamic imagery and sound, the so-called visual music practices. These relations extend across historical, practical, and theoretical viewpoints, all of which this study aims to cover, at least partially. They are exemplified by two artistic projects developed by the author of this thesis, which serve as a starting point for analysing the issues around the two main topics. The principles, patterns, challenges, and concepts that structured the two artworks are extracted, analysed, and discussed, providing elements for comparison and evaluation that may be useful for future research on the topic.

    Multiparametric interfaces for fine-grained control of digital music

    Digital technology provides a very powerful medium for musical creativity, and the way in which we interface and interact with computers has a huge bearing on our ability to realise our artistic aims. The standard input devices available for the control of digital music tools tend to afford a low quality of embodied control; they fail to realise our innate expressiveness and dexterity of motion. This thesis looks at ways of capturing more detailed and subtle motion for the control of computer music tools; it examines how this motion can be used to control music software, and evaluates musicians’ experience of using these systems. Two new musical controllers were created, based on a multiparametric paradigm where multiple, continuous, concurrent motion data streams are mapped to the control of musical parameters. The first controller, Phalanger, is a markerless video tracking system that enables the use of hand and finger motion for musical control. EchoFoam, the second system, is a malleable controller, operated through the manipulation of conductive foam. Both systems use machine learning techniques at the core of their functionality. These controllers are front ends to RECZ, a high-level mapping tool for multiparametric data streams. The development of these systems and the evaluation of musicians’ experience of their use constructs a detailed picture of multiparametric musical control. This work contributes to the developing intersection between the fields of computer music and human-computer interaction. The principal contributions are the two new musical controllers, and a set of guidelines for the design and use of multiparametric interfaces for the control of digital music. This work also acts as a case study of the application of HCI user experience evaluation methodology to musical interfaces. The results highlight important themes concerning multiparametric musical control. These include the use of metaphor and imagery, choreography and language creation, individual differences and uncontrol. They highlight how this style of interface can fit into the creative process, and advocate a pluralistic approach to the control of digital music tools where different input devices fit different creative scenarios.
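
    One way to picture the multiparametric paradigm described here: several continuous, concurrent input streams pass through a many-to-many mapping layer into musical parameters, rather than one controller dimension per parameter. The sketch below substitutes a fixed linear mapping matrix with simple smoothing for the machine-learning mappings actually used by Phalanger, EchoFoam, and RECZ, so the weights and parameter names are assumptions.

```python
import numpy as np

class MultiparametricMapper:
    """Many-to-many mapping from concurrent motion streams to musical parameters.

    A fixed weight matrix stands in for a learned mapping; one-pole smoothing
    keeps the musical parameters from jumping between input frames.
    """
    def __init__(self, weights: np.ndarray, smoothing: float = 0.9):
        self.weights = weights            # shape: (n_params, n_inputs)
        self.smoothing = smoothing
        self.state = np.zeros(weights.shape[0])

    def __call__(self, inputs: np.ndarray) -> np.ndarray:
        target = self.weights @ inputs    # every input stream affects every parameter
        self.state = self.smoothing * self.state + (1 - self.smoothing) * target
        return np.clip(self.state, 0.0, 1.0)

# Three input streams (e.g. hand height, spread, speed) drive four synth parameters.
mapper = MultiparametricMapper(np.array([
    [0.8, 0.1, 0.1],   # filter cutoff
    [0.0, 0.9, 0.1],   # grain density
    [0.2, 0.3, 0.5],   # reverb mix
    [0.1, 0.0, 0.9],   # amplitude
]))

for frame in ([0.2, 0.5, 0.1], [0.6, 0.5, 0.9], [0.9, 0.1, 0.7]):
    print(mapper(np.array(frame)).round(3))
```

    The interesting design questions raised in the thesis, such as which streams should influence which parameters and how performers experience that coupling, live in the choice of this mapping layer rather than in the arithmetic itself.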

    Motion Modeling for Expressive Interaction

    While human-human or human-object interactions involve very rich, complex, and nuanced gestures, gestures as they are captured for human-computer interaction remain relatively simplistic. Our approach is to consider the study of variation in motion input as a way of understanding expression and expressivity in human-computer interaction, and to propose computational solutions for capturing and using these expressive variations. The paper reports an attempt at outlining design guidelines for modeling systems that adapt to motion variations. We illustrate them through two case studies: the first model is used to estimate temporal and geometrical motion variations, while the second is used to track variations in motion dynamics. These case studies are illustrated in two applications.
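
    As a crude stand-in for the first case study (temporal and geometrical motion variation), the sketch below compares a performed gesture against a recorded template and reports how much larger and how much more slowly it was performed. The features and the offline comparison are assumptions made for illustration; the models in the paper estimate such variations adaptively, while the gesture unfolds.

```python
import numpy as np

def path_length(points: np.ndarray) -> float:
    """Total arc length of a sampled 2-D trajectory."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def variation_estimates(template: np.ndarray, live: np.ndarray) -> dict:
    """Estimate how a performed gesture varies from a recorded template.

    scale   > 1: the gesture was performed larger than the template;
    stretch > 1: it was performed more slowly (more frames for the same shape).
    """
    return {
        "scale": path_length(live) / path_length(template),
        "stretch": len(live) / len(template),
    }

# Template: a unit circle drawn in 100 frames.
theta = np.linspace(0, 2 * np.pi, 100)
template = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Live gesture: the same circle, twice as large, drawn in 200 frames (half speed).
theta_slow = np.linspace(0, 2 * np.pi, 200)
live = 2 * np.stack([np.cos(theta_slow), np.sin(theta_slow)], axis=1)

print(variation_estimates(template, live))   # scale ~2.0, stretch 2.0
```

    Estimating these quantities continuously from a partially completed gesture is the genuinely hard problem the paper's models address; this offline comparison deliberately sidesteps it.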

    Ashitaka: an audiovisual instrument

    This thesis looks at how sound and visuals may be linked in a musical instrument, with a view to creating such an instrument. Though it appears to be an area of significant interest, at the time of writing there is very little existing written or theoretical research available in this domain. Therefore, based on Michel Chion’s notion of synchresis in film, the concept of a fused, inseparable audiovisual material is presented. The thesis then looks at how such a material may be created and manipulated in a performance situation. A software environment named Heilan was developed in order to provide a base for experimenting with different approaches to the creation of audiovisual instruments. The software and a number of experimental instruments are discussed prior to a discussion and evaluation of the final ‘Ashitaka’ instrument. This instrument represents the culmination of the work carried out for this thesis, and is intended as a first step in identifying the issues and complications involved in the creation of such an instrument.
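
    One way to read the idea of a fused, inseparable audiovisual material is that a single gestural control stream drives sound and image through the same intermediate state, so neither can change without the other changing too. The toy mapping below illustrates that coupling with invented parameter names; it is not how Heilan or the Ashitaka instrument are implemented.

```python
import math
from dataclasses import dataclass

@dataclass
class AudioVisualState:
    """One fused state that both the synthesiser and the renderer read from."""
    tension: float      # 0..1, set by the performer's gesture (hypothetical parameter)
    phase: float = 0.0

    def advance(self, dt: float) -> tuple[float, float]:
        """Return (audio_frequency_hz, visual_deformation) for this frame.

        Both outputs derive from the same `tension` value, so a change in the
        gesture is always heard *and* seen, never one without the other.
        """
        self.phase += dt * (1.0 + 4.0 * self.tension)
        freq = 110.0 * (1.0 + 3.0 * self.tension)                            # brighter sound...
        deformation = 0.2 + 0.8 * self.tension * abs(math.sin(self.phase))   # ...and a more agitated shape
        return freq, deformation

state = AudioVisualState(tension=0.0)
for frame, tension in enumerate([0.1, 0.5, 0.9]):
    state.tension = tension
    print(frame, state.advance(dt=1 / 60))
```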

    Interactive Sonification Strategies for the Motion and Emotion of Dance Performances

    The Immersive Interactive SOnification Platform, or iISoP for short, is a research platform for the creation of novel multimedia art, as well as for exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP’s dancer sonification system is to “sonify the motion and emotion” of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mappings to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians contributed in a participatory design fashion as domain experts in the field of non-verbal affective communication. Knowledge extraction procedures took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. In phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.
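
    The framework itself was arrived at through the participatory process described above. Purely as an illustrative sketch of “sonifying motion and emotion”, the function below maps a valence/arousal estimate and a motion-speed feature onto tempo, mode, and register; the specific mappings and value ranges are assumptions, not those developed in the dissertation.

```python
from dataclasses import dataclass

MAJOR = [0, 2, 4, 5, 7, 9, 11]   # scale degrees in semitones
MINOR = [0, 2, 3, 5, 7, 8, 10]

@dataclass
class MusicalFrame:
    tempo_bpm: float
    scale: list[int]
    register: int        # MIDI note number of the scale's root

def sonify(valence: float, arousal: float, motion_speed: float) -> MusicalFrame:
    """Map emotion (valence/arousal in -1..1) and motion speed (0..1) to music.

    Higher arousal and faster motion -> faster tempo and higher register;
    negative valence -> minor scale, positive valence -> major scale.
    """
    tempo = 60 + 80 * (arousal + 1) / 2 + 40 * motion_speed
    scale = MAJOR if valence >= 0 else MINOR
    register = 48 + int(24 * (arousal + 1) / 2)      # roughly C3 up to C5
    return MusicalFrame(tempo_bpm=tempo, scale=scale, register=register)

# A calm, sad passage vs. an energetic, joyful one.
print(sonify(valence=-0.7, arousal=-0.5, motion_speed=0.1))
print(sonify(valence=0.8, arousal=0.9, motion_speed=0.8))
```

    In the actual system these decisions were negotiated with dancers and musicians through the iterative design phases rather than hard-coded as above.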

    Deep Visual Instruments: Realtime Continuous, Meaningful Human Control over Deep Neural Networks for Creative Expression

    In this thesis, we investigate Deep Learning models as an artistic medium for new modes of performative, creative expression. We call these Deep Visual Instruments: realtime interactive generative systems that exploit and leverage the capabilities of state-of-the-art Deep Neural Networks (DNN), while allowing Meaningful Human Control, in a Realtime Continuous manner. We characterise Meaningful Human Control in terms of intent, predictability, and accountability; and Realtime Continuous Control with regard to its capacity for performative interaction with immediate feedback, enhancing goal-less exploration. The capabilities of DNNs that we are looking to exploit and leverage in this manner are their ability to learn hierarchical representations modelling highly complex, real-world data such as images. Thinking of DNNs as tools that extract useful information from massive amounts of Big Data, we investigate ways in which we can navigate and explore what useful information a DNN has learnt, and how we can meaningfully use such a model in the production of artistic and creative works, in a performative, expressive manner. We present five studies that approach this from different but complementary angles. These include: a collaborative, generative sketching application using MCTS and discriminative CNNs; a system to gesturally conduct the realtime generation of text in different styles using an ensemble of LSTM RNNs; a performative tool that allows for the manipulation of hyperparameters in realtime while a Convolutional VAE trains on a live camera feed; a live video feed processing software that allows for digital puppetry and augmented drawing; and a method that allows for long-form storytelling within a generative model's latent space with meaningful control over the narrative. We frame our research with the realtime, performative expression provided by musical instruments as a metaphor, in which we think of these systems as not used by a user, but played by a performer.
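
    The emphasis throughout is on realtime continuous control of a generative model rather than one-shot sampling. A minimal version of that interaction loop is sketched below: performer input incrementally nudges a latent vector, which is smoothed and decoded every frame for immediate feedback. The `decode` stub stands in for any trained generative model, and the class name, step size, and smoothing factor are assumptions, not the systems built in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16
DECODER_W = rng.standard_normal((LATENT_DIM, 8))   # frozen stand-in for trained weights

def decode(z: np.ndarray) -> np.ndarray:
    """Placeholder for a trained generative model's decoder (e.g. a VAE or GAN)."""
    return np.tanh(z @ DECODER_W)                   # fake 8-value output "frame"

class LatentInstrument:
    """Continuous, performable control over a generative model's latent space."""

    def __init__(self, step: float = 0.15, smoothing: float = 0.8):
        self.z = np.zeros(LATENT_DIM)               # current (smoothed) latent position
        self.target = np.zeros(LATENT_DIM)
        self.step = step
        self.smoothing = smoothing

    def nudge(self, control: np.ndarray) -> np.ndarray:
        """`control` is a performer gesture mapped onto latent directions (-1..1)."""
        self.target += self.step * control          # small, predictable moves (intent)
        self.z = self.smoothing * self.z + (1 - self.smoothing) * self.target
        return decode(self.z)                       # immediate feedback every frame

instrument = LatentInstrument()
for _ in range(3):                                  # three "frames" of performance
    gesture = rng.uniform(-1, 1, LATENT_DIM)
    print(instrument.nudge(gesture).round(2))
```

    The incremental, smoothed update is one simple way to make the control predictable and continuous, in the spirit of an instrument that is played rather than a tool that is used.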