
    An Introduction to Interactive Music for Percussion and Computers

    Composers began combining acoustic performers with electronically produced sounds in the early twentieth century, and as computer-processing power increased, so did the potential for significant musical communication. Despite the body of research concerning electronic music, performing a composition with a computer partner remains intimidating for performers. The purpose of this paper is to provide an introductory method for interacting with a computer. The document first traces the parallel evolution of percussion and electronics to reveal how each medium influenced the other. The following section defines interaction and explains how it applies to musical communication between humans and computers. The next section introduces specific techniques used to cultivate human-computer interaction. The roles of performer, instrument, composer and conductor are then defined as they apply to the human performer and the computer; performers who are aware of these roles will develop richer communication that can enhance both the performer's and the audience's recognition of human-computer interaction. In the final section, works for percussion and computer are analyzed to reveal varying levels of interaction and the shifting roles of the performer. Three compositions illustrate this point: 120bpm from neither Anvil nor Pulley by Dan Trueman, It's Like the Nothing Never Was by Von Hansen, and Music for Snare Drum and Computer by Cort Lippe. Together they form a continuum of increasing interaction, moving from interaction within a fully defined score, to improvisation with digital synthesis, to the manipulation of computerized compositional algorithms using performer input. The unique ways each composer creates interaction expose the vast possibilities for performing with interactive music systems.
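    The last stage of that continuum, a compositional algorithm steered by performer input, can be reduced to a minimal sketch. The Python fragment below is purely illustrative, with invented names and behaviour; it is not how any of the cited pieces work. A Markov-style note generator adapts its transition weights toward pitches the performer emphasizes, so that what the computer plays next depends on what it has just "heard":

        import random

        # Hypothetical pitch set for the computer's generative voice (MIDI numbers).
        PITCHES = [60, 62, 64, 67, 69]  # C major pentatonic

        class ResponsiveGenerator:
            """Markov-style generator whose behaviour is steered by performer input."""

            def __init__(self):
                # Start with uniform transition weights between every pair of pitches.
                self.weights = {p: {q: 1.0 for q in PITCHES} for p in PITCHES}
                self.current = random.choice(PITCHES)

            def hear(self, performer_pitch, performer_velocity):
                """Performer input rewrites the algorithm: louder playing biases
                the generator toward the pitch the performer just played."""
                if performer_pitch in self.weights[self.current]:
                    self.weights[self.current][performer_pitch] += performer_velocity / 32.0

            def next_note(self):
                """Emit the computer's next note by weighted random choice."""
                options = self.weights[self.current]
                r = random.uniform(0, sum(options.values()))
                for pitch, weight in options.items():
                    r -= weight
                    if r <= 0:
                        self.current = pitch
                        break
                return self.current

        gen = ResponsiveGenerator()
        gen.hear(performer_pitch=67, performer_velocity=100)  # a loud G from the percussionist
        print([gen.next_note() for _ in range(8)])            # the computer's reply

    The point of the sketch is the feedback loop: the performer's input mutates the algorithm's state, and the algorithm's output invites a further response.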

    Musical and meta-musical conversations

    This collaboration emerged out of informal conversation between the authors about improvisation. Ben-Tal is a composer/researcher who has been using Music Information Retrieval (MIR) techniques and AI as tools for composition. Dolan is a performer/improviser and researcher on improvisation, creativity and expressive performance, with little knowledge of music technology. Dolan became intrigued but also highly sceptical about Ben-Tal's ideas of musical dialogues between human and computer as a basis for co-creation. They agreed to meet and trial the possibility of real-time improvisation between piano and computer. By his own admission, Dolan came to this first session assuming he would prove the inadequacy of such a set-up for joint improvisation based on an extended tonal-music idiom. He found himself equally surprised and alarmed when he experienced moments that felt, to him, like real dialogue with the machine. This proof-of-concept session provided the starting point for an ongoing collaboration: developing a unique duo-improvisation practice within the context of computationally creative tools, real-time interaction, tonal music and human-computer interaction. Central to this work are musical dialogues between Dolan on the piano and Ben-Tal's computing system as they improvise together. These are surrounded and complemented by conversations between the authors about the system, about improvisation, composition, performance, music and AI. This presentation starts from a description of the current improvisation set-up and the development that allowed the authors to arrive at this stage. The following section re-enacts some of the conversations the authors engaged in, illuminating the learning and discovery process they underwent together. It ends by drawing out important themes emerging from the musical and meta-musical conversations in relation to current debates around music and AI.

    Software agents in music and sound art research/creative work: Current state and a possible direction

    Composers, musicians and computer scientists have begun to use software-based agents to create music and sound art in both linear and non-linear (non-predetermined form and/or content) idioms, with some robust approaches now drawing on several disciplines. This paper surveys recent work: agent technology is first introduced, a theoretical framework for its use in creating music and sound-art works is put forward, and an overview of common approaches is then given. After identifying areas of neglect in recent research, a possible direction for further work is briefly explored. Finally, a vision is proposed for a new hybrid model that integrates non-linear, generative, conversational and affective perspectives on interactivity.
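    As a toy illustration of the conversational and affective perspectives named above (a sketch under assumed semantics, not a description of any system in the survey), two software agents might exchange short phrases while each tracks a crude one-dimensional affect variable:

        import random

        class MusicAgent:
            """Minimal software agent: perceives a phrase, updates an internal
            'affective' state, and responds with a phrase of its own."""

            def __init__(self, name, pitch_centre):
                self.name = name
                self.pitch_centre = pitch_centre
                self.arousal = 0.5  # assumed affect variable in [0, 1]

            def perceive(self, phrase):
                # Wide-ranging phrases raise arousal; narrow ones calm the agent.
                span = max(phrase) - min(phrase)
                self.arousal = min(1.0, max(0.0, self.arousal + (span - 5) * 0.02))

            def act(self):
                # The agent's reply widens in range as its arousal grows.
                span = int(2 + self.arousal * 10)
                return [self.pitch_centre + random.randint(-span, span) for _ in range(4)]

        # A conversation: each agent responds to the other's last phrase.
        alpha, beta = MusicAgent("alpha", 60), MusicAgent("beta", 67)
        phrase = alpha.act()
        for _ in range(4):
            beta.perceive(phrase)
            phrase = beta.act()
            alpha.perceive(phrase)
            phrase = alpha.act()
            print(alpha.name, phrase, "arousal:", round(alpha.arousal, 2))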

    Computers in Support of Musical Expression


    Interactive Spaces. Models and Algorithms for Reality-based Music Applications

    Reality-based interfaces link the user's physical space with the computer's digital content, bringing intuition, plasticity and expressiveness. Moreover, applications designed upon motion- and gesture-tracking technologies involve many psychological features, such as spatial cognition and implicit knowledge. These elements form the background of the three music applications presented here, which employ the characteristics of three different interactive spaces: a user-centered three-dimensional space, a two-dimensional floor camera space, and a small sensor-centered three-dimensional space. The basic idea is to deploy each application's spatial properties to convey musical knowledge, allowing users to act inside the designed space and to learn through it in an enactive way.
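    The idea of deploying spatial properties musically can be pictured as a coordinate-to-parameter mapping. The sketch below assumes a made-up room size and parameter ranges for a user-centered three-dimensional space; the three applications use richer, purpose-built designs:

        def map_position_to_sound(x, y, z, room=(4.0, 3.0, 2.5)):
            """Map a tracked position (metres) inside a hypothetical room
            to musical parameters for a user-centered 3D interactive space."""
            # Normalise each axis to [0, 1], clamping at the room boundaries.
            nx, ny, nz = (max(0.0, min(c / s, 1.0)) for c, s in zip((x, y, z), room))
            return {
                "pitch": 48 + int(nx * 36),      # left-right picks a MIDI pitch (C3..C6)
                "velocity": int(30 + ny * 90),   # front-back controls loudness
                "brightness": nz,                # height opens an assumed filter (0..1)
            }

        # A user standing mid-room with a raised hand:
        print(map_position_to_sound(2.0, 1.5, 2.0))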

    A Conceptual Framework for Motion Based Music Applications

    Imaginary projections are the core of the framework for motion-based music applications presented in this paper. Their design depends on the space covered by the motion-tracking device, but also on the musical feature involved in the application. They are a powerful tool because they allow not only the projection of a traditional acoustic instrument's image into the virtual environment, but also the expression of any spatially defined abstract concept. The system pipeline starts from the musical content and, through a geometrical interpretation, arrives at its projection in the physical space. Three case studies involving different motion-tracking devices and different musical concepts are analyzed; the three applications have already been programmed and tested by the authors. They aim, respectively, at expressive musical interaction (Disembodied Voices), tonal-music knowledge (Harmonic Walk) and twentieth-century music composition (Hand Composer).
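    To make the pipeline concrete, here is a minimal floor projection in the spirit of Harmonic Walk; it is an illustrative reconstruction, not the authors' implementation. The musical content (six diatonic triads of C major) receives a geometrical interpretation (a circle) which is then projected onto the floor space tracked by the camera, so that walking selects harmony:

        import math

        # Musical content: the six diatonic triads of C major, arranged on a circle.
        CHORDS = ["C", "Dm", "Em", "F", "G", "Am"]

        def chord_at(x, y, centre=(2.0, 2.0)):
            """Project a walker's floor position onto the circle of chords:
            the angle around the centre selects one of six harmonic regions."""
            angle = math.atan2(y - centre[1], x - centre[0]) % (2 * math.pi)
            sector = int(angle / (2 * math.pi / len(CHORDS)))
            return CHORDS[sector]

        # Walking once around the centre steps through all six regions.
        for step in range(6):
            a = (step + 0.5) * math.pi / 3
            print(chord_at(2.0 + math.cos(a), 2.0 + math.sin(a)))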

    PIWeCS: enhancing human/machine agency in an interactive composition system

    This paper focuses on the infrastructure and aesthetic approach used in PIWeCS: a Public Space Interactive Web-based Composition System. The concern was to increase the sense of dialogue between human and machine agency in an interactive work by adapting Paine's (2002) notion of a conversational model of interaction as a ‘complex system’. The machine side of PIWeCS is implemented by integrating intelligent-agent programming with Max/MSP, while human input arrives through a web infrastructure. The conversation is initiated and continued by participants through arrangements and compositions based on short performed samples of traditional New Zealand Maori instruments. The system allows a composition to be extended through electroacoustic manipulation of the source material.
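    The conversational model can be reduced to a turn-taking sketch like the one below; the sample names and the echo/variation behaviour are invented for illustration, and the actual system does this work through intelligent-agent programming inside Max/MSP:

        import random

        # Hypothetical bank of short performed samples of Maori instruments.
        SAMPLES = ["koauau_a", "koauau_b", "putorino_a", "poi_awhiowhio_a"]

        class MachineAgent:
            """Continues the conversation: replies to a human arrangement with a
            variation, echoing some of it and substituting the rest."""

            def reply(self, human_arrangement, echo=0.5):
                response = []
                for sample in human_arrangement:
                    if random.random() < echo:
                        response.append(sample)                  # echo the participant
                    else:
                        response.append(random.choice(SAMPLES))  # introduce new material
                random.shuffle(response)                         # re-order as a variation
                return response

        agent = MachineAgent()
        human_turn = ["koauau_a", "poi_awhiowhio_a", "koauau_a"]  # arrangement from the web interface
        print(agent.reply(human_turn))                            # the machine's turn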