
    MediaScape: towards a video, music, and sound metacreation

    We present a new media work, MediaScape, an initial foray into a fully interdisciplinary metacreativity. This paper defines metacreation, presents examples of metacreative art in music, sound art, and generative narrative, and discusses the potential of the "open documentary" as an immediate goal of metacreative video. Lastly, we describe MediaScape in detail and present some future directions.

    Collaborative composition for musical robots

    The goal of this research is to collaborate with a number of different artists to explore the capabilities of robotic musical instruments to cultivate new music. This paper describes the challenges faced in using musical robotics in rehearsals and on the performance stage. It also describes the design of custom software frameworks and tools for the variety of composers and performers interacting with the new instruments. Details of how laboratory experiments and rehearsals moved to the concert hall in diverse performance scenarios are described. Finally, a paradigm for teaching musical robotics as a multimedia composition course is discussed.

    Evaluation of Musical Creativity and Musical Metacreation Systems

    The field of computational creativity, including musical metacreation, strives to develop artificial systems capable of demonstrating creative behavior or producing creative artefacts. But claims of creativity are often assessed only subjectively, by the researcher, rather than objectively. This article provides theoretical motivation for more systematic evaluation of musical metacreation and computationally creative systems, and presents an overview of current methods used to assess human and machine creativity that may be adapted for this purpose. In order to highlight the need for a varied set of evaluation tools, a distinction is drawn among three types of creative systems: those that are purely generative, those that contain internal or external feedback, and those that are capable of reflection and self-reflection. To address the evaluation of each of these aspects, concrete examples of methods and techniques are suggested to help researchers (1) evaluate their systems' creative process and generated artefacts, and test their impact on the perceptual, cognitive, and affective states of the audience, and (2) build mechanisms for reflection into the creative system, including models of human perception and cognition, to endow creative systems with internal evaluative mechanisms that drive self-reflective processes. The first type of evaluation can be considered external to the creative system and may be employed by the researcher both to better understand the efficacy and impact of their system and to incorporate feedback into it. Here we take the stance that understanding human creativity can lend insight to computational approaches, and that knowledge of how humans perceive creative systems and their output can be incorporated into artificial agents as feedback, providing a sense of how a creation will impact the audience.
The second type centers on internal evaluation, in which the system is able to reason about its own behavior and generated output. We argue that creative behavior cannot occur without feedback and reflection by the creative/metacreative system itself. More rigorous empirical testing will allow computational and metacreative systems to become, by definition, more creative, and can be used to demonstrate the impact and novelty of particular approaches.

    Generating Structure – Towards Large-Scale Formal Generation

    Metacreative systems have been very successful at generating event-to-event musical details and short forms. However, large-scale formal decisions have tended to remain in the hands of a human, either through an operator guiding an improvisational system in performance, or through the assemblage of shorter generated sections into longer forms. This paper describes the open problem of large-scale generative form and the author's attempts to delegate such decisions to a metacreative system, surveying several of his generative systems and their approaches to structure. The author describes in greater detail his latest system, The Indifference Engine, and how it negotiates between agent intentions and performance information derived from a live performer.

    Embracing the Bias of the Machine: Exploring Non-Human Fitness Functions

    Autonomous aesthetic evaluation is the Holy Grail of generative music, and one of the great challenges of computational creativity. Unlike most other computational activities, there is no notion of optimality in evaluating creative output: subjective impressions are involved, and framing plays a significant role. When developing metacreative systems, a purely objective fitness function is not available: the designer must therefore decide how much of their own aesthetic to include. Can a generative system be free of the designer's bias? This paper presents a system that incorporates an aesthetic selection process allowing for both human-designed and non-human fitness functions.
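The general idea of combining human-designed and non-human fitness functions can be sketched as follows. This is a minimal illustration of the technique named in the abstract, not the paper's actual system: the two fitness terms, the melody representation, and the blending weight are all assumptions for demonstration.

```python
import random

def human_fitness(melody):
    """Designer-biased term: prefer small melodic intervals (smooth lines)."""
    steps = [abs(b - a) for a, b in zip(melody, melody[1:])]
    return 1.0 / (1.0 + sum(steps) / len(steps))

def machine_fitness(melody):
    """Non-human term: score against a purely statistical target
    (here, closeness of the mean pitch to MIDI note 60), with no
    appeal to human aesthetics."""
    mean = sum(melody) / len(melody)
    return 1.0 / (1.0 + abs(mean - 60))

def blended_fitness(melody, w=0.5):
    """Interpolate between the designer's aesthetic and the machine criterion."""
    return w * human_fitness(melody) + (1 - w) * machine_fitness(melody)

random.seed(1)
# Candidate melodies as lists of MIDI pitches.
population = [[random.randint(48, 72) for _ in range(8)] for _ in range(20)]
best = max(population, key=blended_fitness)
```

Sliding `w` from 1 to 0 moves the selection pressure from the designer's bias toward the machine's own criterion, which is one simple way to frame the question the abstract raises.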

    Generative Music for Live Musicians: An Unnatural Selection

    An Unnatural Selection is a generative musical composition for conductor, eight live musicians, robotic percussion, and Disklavier. It was commissioned by Vancouver's Turning Point Ensemble, and premiered in May 2014. Music for its three movements is generated live: the melodic, harmonic, and rhythmic material is based upon analysis of supplied corpora. The traditionally notated music is displayed as a score for the conductor, and individual parts are sent to eight iPads for the musicians to sight-read. The entire system is autonomous (although it does reference a pre-made score), using evolutionary algorithms to develop musical material. Video of the performance is available online. This paper describes the system used to create the work, and the heuristic decisions made in both the system design and the composition itself.
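An evolutionary loop for developing musical material, of the general kind the abstract names, can be sketched briefly. Everything below is an assumption for illustration: the target phrase stands in for corpus-derived material, and the fitness, mutation, and selection schemes are deliberately simple, not those of the actual work.

```python
import random

# Stand-in for a phrase derived from corpus analysis (MIDI pitches).
TARGET = [60, 62, 64, 65, 67, 65, 64, 62]

def fitness(phrase):
    """Closer pitch-by-pitch to the target material is fitter (max 0)."""
    return -sum(abs(p - t) for p, t in zip(phrase, TARGET))

def mutate(phrase, rate=0.3):
    """Nudge some pitches by a semitone."""
    return [p + random.choice([-1, 0, 1]) if random.random() < rate else p
            for p in phrase]

random.seed(0)
population = [[random.randint(55, 70) for _ in range(8)] for _ in range(30)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
```

Over the generations, the population drifts toward the corpus-derived target; in a live setting, the same loop could run per section with the target swapped for freshly analyzed material.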

    International Computer Music Conference 2007: Live Electronics


    The Generative Electronic Dance Music Algorithmic System (GEDMAS)

    The Generative Electronic Dance Music Algorithmic System (GEDMAS) is a generative music system that composes full Electronic Dance Music (EDM) compositions. The compositions are based on a corpus of musical data produced by detailed human transcription. This corpus data is used to analyze genre-specific characteristics associated with EDM styles. GEDMAS uses probabilistic and first-order Markov chain models to generate song form structures, chord progressions, melodies, and rhythms. The system is integrated with Ableton Live, and allows its user to select one or several songs from the corpus and generate a sixteen-track composition in a few clicks.
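A first-order Markov chord generator of the kind GEDMAS applies to chord progressions can be sketched in a few lines. The toy corpus of Roman-numeral progressions below is an assumption for illustration, not data from the actual system.

```python
import random
from collections import defaultdict

# Toy corpus of chord progressions (illustrative, not GEDMAS data).
corpus = [
    ["I", "vi", "IV", "V", "I"],
    ["I", "IV", "V", "I"],
    ["vi", "IV", "I", "V", "vi"],
]

# Count first-order transitions: the list for each chord holds every
# observed successor, so random.choice samples P(next | current).
transitions = defaultdict(list)
for progression in corpus:
    for cur, nxt in zip(progression, progression[1:]):
        transitions[cur].append(nxt)

def generate(start="I", length=8):
    """Walk the chain from a start chord to produce a progression."""
    chords = [start]
    while len(chords) < length:
        chords.append(random.choice(transitions[chords[-1]]))
    return chords

random.seed(4)
progression = generate()
```

Because the chain is first-order, each chord depends only on its immediate predecessor; GEDMAS combines such models with probabilistic form generation to assemble whole songs.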