39 research outputs found

    Generative theatre of totality

    Generative art can be used to create complex multisensory and multimedia experiences within predetermined aesthetic parameters, characteristic of the performing arts and remarkably well suited to addressing Moholy-Nagy's Theatre of Totality vision. In generative artworks the artist usually takes on the role of an experience-framework designer, and the system evolves freely within that framework and its defined aesthetic boundaries. Most generative art is concentrated in the visual arts, music, and literature; there appears to be no relevant work exploring its cross-medium potential, and most generative art outcomes are abstract and either visual or aural. The goal of this article is to propose a model for the creation of generative performances within the Theatre of Totality's scope, derived from stochastic Lindenmayer systems, with mapping techniques proposed to address the seven variables identified by Moholy-Nagy: light, space, plane, form, motion, sound and man ("man" is replaced in this article with "human", except when quoting the author), with all the inherent complexities
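
    The article builds its generative model on stochastic Lindenmayer systems whose derivations are mapped onto Moholy-Nagy's seven variables. The article's own grammar and mappings are not reproduced here; the following is only a minimal sketch of a stochastic L-system, with an assumed two-symbol alphabet, assumed rule probabilities, and an assumed symbol-to-variable mapping, written in Python for illustration.

        import random

        # Illustrative stochastic L-system: each rule maps a symbol to a list of
        # (probability, successor) pairs. The symbols, rules, and probabilities
        # below are assumptions for demonstration, not the article's grammar.
        RULES = {
            "A": [(0.6, "AB"), (0.4, "A")],
            "B": [(0.5, "BA"), (0.5, "B")],
        }

        # Hypothetical mapping from grammar symbols to two of Moholy-Nagy's variables.
        VARIABLE_MAP = {"A": "light", "B": "sound"}

        def rewrite(symbol):
            """Pick a successor for one symbol according to its rule probabilities."""
            choices = RULES.get(symbol)
            if choices is None:
                return symbol  # terminal symbol, copied unchanged
            r, acc = random.random(), 0.0
            for prob, successor in choices:
                acc += prob
                if r <= acc:
                    return successor
            return choices[-1][1]

        def derive(axiom, generations):
            """Apply the stochastic rewriting rules for a number of generations."""
            string = axiom
            for _ in range(generations):
                string = "".join(rewrite(s) for s in string)
            return string

        if __name__ == "__main__":
            score = derive("A", 5)
            # Each derived symbol becomes an event on its mapped performance variable.
            print(score, [VARIABLE_MAP[s] for s in score][:10])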

    MediaScape: towards a video, music, and sound metacreation

    We present a new media work, MediaScape, which is an initial foray into fully interdisciplinary metacreativity. This paper defines metacreation, presents examples of metacreative art within the fields of music, sound art, and the history of generative narrative, and discusses the potential of the “open-documentary” as an immediate goal of metacreative video. Lastly, we describe MediaScape in detail and present some future directions.

    Collaborative composition for musical robots

    The goal of this research is to collaborate with a number of different artists to explore the capabilities of robotic musical instruments to cultivate new music. This paper describes the challenges faced in using musical robotics in rehearsals and on the performance stage. It also describes the design of custom software frameworks and tools for the variety of composers and performers interacting with the new instruments. Details of how laboratory experiments and rehearsals moved to the concert hall in a variety of performance scenarios are described. Finally, a paradigm for teaching musical robotics as a multimedia composition course is discussed.

    Evaluation of Musical Creativity and Musical Metacreation Systems

    The field of computational creativity, including musical metacreation, strives to develop artificial systems that are capable of demonstrating creative behavior or producing creative artefacts. But the claim of creativity is often assessed only subjectively by the researcher and not objectively at all. This article provides theoretical motivation for more systematic evaluation of musical metacreation and computationally creative systems and presents an overview of current methods used to assess human and machine creativity that may be adapted for this purpose. In order to highlight the need for a varied set of evaluation tools, a distinction is drawn among three types of creative systems: those that are purely generative, those that contain internal or external feedback, and those that are capable of reflection and self-reflection. To address the evaluation of each of these aspects, concrete examples of methods and techniques are suggested to help researchers (1) evaluate their systems' creative process and generated artefacts, and test their impact on the perceptual, cognitive, and affective states of the audience, and (2) build mechanisms for reflection into the creative system, including models of human perception and cognition, to endow creative systems with internal evaluative mechanisms that drive self-reflective processes. The first type of evaluation can be considered external to the creative system and may be employed by the researcher both to better understand the efficacy and impact of their system and to incorporate feedback into the system. Here we take the stance that understanding human creativity can lend insight to computational approaches, and that knowledge of how humans perceive creative systems and their output can be incorporated into artificial agents as feedback, providing a sense of how a creation will impact the audience. The second type centers around internal evaluation, in which the system is able to reason about its own behavior and generated output. We argue that creative behavior cannot occur without feedback and reflection by the creative/metacreative system itself. More rigorous empirical testing will allow computational and metacreative systems to become more creative by definition and can be used to demonstrate the impact and novelty of particular approaches.
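
    The distinction the article draws among purely generative, feedback-driven, and reflective systems can be made concrete with a short sketch. The classes and method names below are assumptions introduced for illustration only, not an architecture taken from the article.

        import random

        class GenerativeSystem:
            """Purely generative: produces artefacts with no evaluation at all."""
            def generate(self):
                return [random.randint(60, 72) for _ in range(8)]  # e.g. MIDI pitches

        class FeedbackSystem(GenerativeSystem):
            """Adds feedback: an internal or external evaluation score steers generation."""
            def __init__(self, evaluate):
                self.evaluate = evaluate  # e.g. an audience rating or a learned model
            def generate(self, attempts=10):
                candidates = [super(FeedbackSystem, self).generate() for _ in range(attempts)]
                return max(candidates, key=self.evaluate)

        class ReflectiveSystem(FeedbackSystem):
            """Adds (self-)reflection: the system reasons about its own judgements."""
            def __init__(self, evaluate):
                super().__init__(evaluate)
                self.history = []  # (artefact, own score, audience response) records
            def reflect(self, artefact, audience_response):
                # Hypothetical reflection step: record the gap between the system's own
                # judgement and the audience's, which a fuller model could learn from.
                self.history.append((artefact, self.evaluate(artefact), audience_response))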

    Embracing the Bias of the Machine: Exploring Non-Human Fitness Functions

    Autonomous aesthetic evaluation is the Holy Grail of generative music and one of the great challenges of computational creativity. Unlike most other computational activities, there is no notion of optimality in evaluating creative output: subjective impressions are involved, and framing obviously plays a big role. When developing metacreative systems, no purely objective fitness function is available: the designer is thus faced with deciding how much of their own aesthetic to include. Can a generative system be free of the designer’s bias? This paper presents a system that incorporates an aesthetic selection process allowing for both human-designed and non-human fitness functions.
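
    The paper's actual fitness functions are not reproduced here. As a rough sketch of the idea, a selection process could blend a human-designed aesthetic measure with a non-human, corpus-derived one through an explicit bias weight; the measures and the weighting below are assumptions made for illustration.

        def human_fitness(melody):
            """Hypothetical designer-specified aesthetic: prefer small melodic leaps."""
            leaps = [abs(b - a) for a, b in zip(melody, melody[1:])]
            return 1.0 / (1.0 + sum(leaps) / max(len(leaps), 1))

        def machine_fitness(melody, corpus_histogram):
            """Hypothetical non-human criterion: agreement with interval statistics
            the machine extracted from a corpus (supplied as a histogram)."""
            intervals = [b - a for a, b in zip(melody, melody[1:])]
            if not intervals:
                return 0.0
            return sum(corpus_histogram.get(i, 0.0) for i in intervals) / len(intervals)

        def combined_fitness(melody, corpus_histogram, bias=0.5):
            """bias = 1.0 keeps only the designer's aesthetic; bias = 0.0 hands
            selection entirely to the machine-derived criterion."""
            return (bias * human_fitness(melody)
                    + (1 - bias) * machine_fitness(melody, corpus_histogram))

    Varying the bias between 0 and 1 makes the designer's influence on selection explicit rather than implicit.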

    Generative Music for Live Musicians: An Unnatural Selection

    An Unnatural Selection is a generative musical composition for conductor, eight live musicians, robotic percussion, and Disklavier. It was commissioned by Vancouver's Turning Point Ensemble and premiered in May 2014. Music for its three movements is generated live: the melodic, harmonic, and rhythmic material is based upon analysis of supplied corpora. The traditionally notated music is displayed as a score for the conductor, and individual parts are sent to eight iPads for the musicians to sight-read. The entire system is autonomous (although it does reference a pre-made score), using evolutionary algorithms to develop musical material. Video of the performance is available online. This paper describes the system used to create the work and the heuristic decisions made in both the system design and the composition itself.
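
    The actual generation system behind the piece is not reproduced here. The sketch below only illustrates the general approach the abstract describes, i.e. evolving melodic material toward interval statistics drawn from a supplied corpus; the representation, operators, corpus statistics, and parameters are all assumptions.

        import random

        # Assumed result of corpus analysis: relative frequencies of melodic intervals.
        CORPUS_INTERVALS = {0: 0.2, 1: 0.25, 2: 0.25, -1: 0.15, -2: 0.15}

        def random_melody(length=16):
            return [random.randint(60, 72) for _ in range(length)]  # MIDI pitches

        def fitness(melody):
            """Score how closely the melody's interval usage matches the corpus."""
            intervals = [b - a for a, b in zip(melody, melody[1:])]
            return sum(CORPUS_INTERVALS.get(i, 0.0) for i in intervals) / len(intervals)

        def mutate(melody, rate=0.1):
            return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate else p
                    for p in melody]

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def evolve(pop_size=40, generations=100):
            population = [random_melody() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                parents = population[: pop_size // 2]
                children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return max(population, key=fitness)

        if __name__ == "__main__":
            print(evolve())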

    International Computer Music Conference 2007: Live Electronics
