154 research outputs found

    Evaluation of Musical Creativity and Musical Metacreation Systems

    Get PDF
The field of computational creativity, including musical metacreation, strives to develop artificial systems that are capable of demonstrating creative behavior or producing creative artefacts. But claims of creativity are often assessed only subjectively, by the researcher, and not objectively at all. This article provides theoretical motivation for more systematic evaluation of musical metacreation and computationally creative systems and presents an overview of current methods used to assess human and machine creativity that may be adapted for this purpose. In order to highlight the need for a varied set of evaluation tools, a distinction is drawn among three types of creative systems: those that are purely generative, those that contain internal or external feedback, and those that are capable of reflection and self-reflection. To address the evaluation of each of these aspects, concrete examples of methods and techniques are suggested to help researchers (1) evaluate their systems' creative process and generated artefacts, and test their impact on the perceptual, cognitive, and affective states of the audience, and (2) build mechanisms for reflection into the creative system, including models of human perception and cognition, to endow creative systems with internal evaluative mechanisms that drive self-reflective processes. The first type of evaluation can be considered external to the creative system and may be employed by the researcher both to better understand the efficacy and impact of their system and to incorporate feedback into the system. Here we take the stance that understanding human creativity can lend insight into computational approaches, and that knowledge of how humans perceive creative systems and their output can be incorporated into artificial agents as feedback to provide a sense of how a creation will impact the audience. The second type centers on internal evaluation, in which the system is able to reason about its own behavior and generated output. We argue that creative behavior cannot occur without feedback and reflection by the creative/metacreative system itself. More rigorous empirical testing will allow computational and metacreative systems to become more creative by definition, and can be used to demonstrate the impact and novelty of particular approaches.
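To make the three-way distinction above concrete, the sketch below models purely generative, feedback-driven, and reflective systems as a minimal class hierarchy. All class and method names are hypothetical illustrations, not constructs from the article.

```python
# Hypothetical sketch of the three system types distinguished above; the toy
# melody representation and scoring hooks are illustrative only.
import random

class GenerativeSystem:
    """Purely generative: produces artefacts with no evaluation loop."""
    def generate(self):
        return [random.choice("CDEFGAB") for _ in range(8)]  # toy melody

class FeedbackSystem(GenerativeSystem):
    """Adds internal or external feedback: candidates are scored and filtered."""
    def __init__(self, score_fn):
        self.score_fn = score_fn  # e.g. a listener model or audience ratings
    def generate(self):
        candidates = [GenerativeSystem.generate(self) for _ in range(10)]
        return max(candidates, key=self.score_fn)

class ReflectiveSystem(FeedbackSystem):
    """Adds self-reflection: the system revises its own evaluation criterion."""
    def reflect(self, artefact, reception):
        # If audience reception diverges from the internal score, blend the two
        # signals into an updated scoring function (a crude internal evaluator).
        old = self.score_fn
        self.score_fn = lambda a: 0.5 * old(a) + 0.5 * reception(a)
```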

    Computational Creativity and Music Generation Systems: An Introduction to the State of the Art

    Get PDF
Computational Creativity is a multidisciplinary field that tries to obtain creative behaviors from computers. One of its most prolific subfields is Music Generation (also called Algorithmic Composition or Musical Metacreation), which uses computational means to compose music. Due to the multidisciplinary nature of this research field, it is sometimes hard to define precise goals and to keep track of which problems can be considered solved by state-of-the-art systems and which instead need further development. With this survey, we try to give a complete introduction for those who wish to explore Computational Creativity and Music Generation. To do so, we first give a picture of the research on the definition and the evaluation of creativity, both human and computational, which is needed to understand how computational means can be used to obtain creative behaviors and why this research matters within Artificial Intelligence studies. We then review the state of the art of Music Generation Systems, citing examples for all the main approaches to music generation and listing the open challenges identified by previous reviews on the subject. For each of these challenges, we cite works that have proposed solutions, describing what still needs to be done and some possible directions for further research.

    Designing Computationally Creative Musical Performance Systems

    Get PDF
This is work in progress in which we outline a design process for a computationally creative musical performance system using the Creative Systems Framework (CSF). The proposed system is intended to produce virtuosic interpretations, and subsequent synthesized renderings of these interpretations with a physical model of a bass guitar, using case-based reasoning and reflection. We introduce our interpretations of virtuosity and musical performance, outline the suitability of case-based reasoning in computationally creative systems, and introduce notions of computational creativity and the CSF. We design our system by formalising the components of the CSF and briefly outline a potential implementation. In doing so, we demonstrate how the CSF can be used as a tool to aid in designing computationally creative musical performance systems.

    Virtual Agents in Live Coding: A Short Review

    Get PDF
Although this special issue has been scheduled for 15 March 2021, it is still unpublished: https://econtact.ca/call.html (eContact! 21.1 — Take Back the Stage: Live coding, live audiovisual, laptop orchestra…). The use of AI in live coding has been little explored. This article contributes a short review of different perspectives on using virtual agents in the practice of live coding, looking at past and present work as well as pointing to future directions.

    Player Responses to a Live Algorithm: Conceptualising computational creativity without recourse to human comparisons?

    Get PDF
Live algorithms are computational systems made to perform in an improvised manner with human improvising musicians, typically using only live audio or MIDI streams as the medium of interaction. They are designed to establish meaningful musical interaction with their musical partners, without necessarily being conceived of as "virtual musicians". This paper investigates, with respect to a specific live algorithm designed by the author, how improvising musicians approach and discuss performing with that system. The study supports a working assumption that such systems constitute a distinct type of object from the traditional categories of instrument, composition and performer, one capable of satisfying some of the expectations of an engaging improvisatory performance experience despite being unambiguously distinct from a human musician. I investigate how the study participants' comments and actions support this view. Specifically: 1) participants interacting with the system had a stronger sense of the nature of the interaction than when they were passively observing it; 2) participants could not tell what the "rules" of the interactive behaviour were and did not feel they could predict the behaviour, but reported this as a positive, engaging aspect of the experience; their actions implied that the improvisation had purpose and invited engagement; 3) participants strictly avoided discussing the system in terms of virtual musicianship or of creating original output, and preferred to categorise the system as an instrument or a composition, despite describing its interaction as musically engaging; 4) participants felt the long-term structure was lacking. Such results, it is argued, lend weight to the idea that as CC applications in real creation scenarios grow, the creative contribution of computer systems becomes less grounded in comparisons with human standards.

    The Effect of Explicit Structure Encoding of Deep Neural Networks for Symbolic Music Generation

    Full text link
With recent breakthroughs in artificial neural networks, deep generative models have become one of the leading techniques for computational creativity. Despite very promising progress on image and short-sequence generation, symbolic music generation remains a challenging problem, since the structure of compositions is usually complicated. In this study, we attempt to solve the melody generation problem constrained by a given chord progression. This musical metacreation problem can also be incorporated into a plan recognition system with user inputs and predictive structural outputs. In particular, we explore the effect of explicit architectural encoding of musical structure by comparing two sequential generative models: LSTM (a type of RNN) and WaveNet (a dilated temporal CNN). As far as we know, this is the first study to apply WaveNet to symbolic music generation, as well as the first systematic comparison between temporal CNNs and RNNs for music generation. We conduct a survey to evaluate our generations and implement the Variable Markov Oracle for music pattern discovery. Experimental results show that encoding structure more explicitly with a stack of dilated convolution layers improves performance significantly, and that a global encoding of the underlying chord progression into the generation procedure improves it further.
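To make the architectural contrast concrete, here is a minimal PyTorch sketch of a WaveNet-style stack of causal dilated 1-D convolutions for chord-conditioned melody generation. The layer sizes, vocabulary and conditioning scheme are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a dilated temporal CNN for chord-conditioned melody
# generation; vocabulary sizes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class DilatedMelodyNet(nn.Module):
    def __init__(self, n_pitches=128, n_chords=24, channels=64, n_layers=6):
        super().__init__()
        self.pitch_emb = nn.Embedding(n_pitches, channels)
        self.chord_emb = nn.Embedding(n_chords, channels)
        # Causal 1-D convolutions with exponentially growing dilation, so the
        # receptive field can cover longer-range musical structure.
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=2, dilation=2 ** i)
            for i in range(n_layers)
        )
        self.out = nn.Conv1d(channels, n_pitches, kernel_size=1)

    def forward(self, pitches, chords):
        # pitches, chords: (batch, time) integer tensors
        x = (self.pitch_emb(pitches) + self.chord_emb(chords)).transpose(1, 2)
        for conv in self.convs:
            pad = conv.dilation[0] * (conv.kernel_size[0] - 1)
            # left-pad so the convolution is causal, then add a residual branch
            x = x + torch.relu(conv(nn.functional.pad(x, (pad, 0))))
        return self.out(x)  # (batch, n_pitches, time) next-step pitch logits
```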

    Automated manipulation of musical grammars to support episodic interactive experiences

    Get PDF
Music is used to enhance the experience of participants and visitors in a range of settings including theatre, film, video games, installations and theme parks. These experiences may be interactive, episodic with contrasting episodes, and of variable duration. Hence, the musical accompaniment needs to be dynamic and to transition between contrasting music passages. In these contexts, computer generation of music may be necessary for practical reasons, including distribution and cost. Automated and dynamic composition algorithms exist but are not well suited to a highly interactive, episodic context owing to transition-related problems including discontinuity, abruptness, extended repetitiveness, and lack of musical granularity and musical form. Addressing these problems requires algorithms capable of reacting to participant behaviour and episodic change in order to generate formic music that is continuous and coherent during transitions. This thesis presents the Form-Aware Transitioning and Recovering Algorithm (FATRA) for real-time, adaptive, form-aware music generation to provide continuous musical accompaniment in episodic contexts. FATRA combines stochastic grammar adaptation and grammar merging in real time. The Form-Aware Transition Engine (FATE), an implementation of FATRA, estimates the time of occurrence of upcoming narrative transitions and generates a harmonic sequence as narrative accompaniment, with a focus on coherent, form-aware transitioning between music passages of contrasting character. Using FATE, FATRA has been evaluated in three perceptual user studies: an audio-augmented real museum experience, a computer-simulated museum experience and a music-focused online study detached from narrative. Music transitions of FATRA were benchmarked against common approaches of the video game industry, i.e. crossfading and direct transitions. Participants were overall content with the music of FATE during their experience. Transitions of FATE were significantly favoured over the crossfading benchmark and competitive with the direct-transition benchmark, without statistical significance for the latter comparison. In addition, technical evaluation demonstrated capabilities of FATRA including form generation, repetitiveness avoidance and style/form recovery in case of falsely predicted narrative transitions. The technical results, along with the perceptual preference and competitiveness against the benchmark approaches, are deemed positive, and the structural advantages of FATRA, including form-aware transitioning, carry considerable potential for future research.
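As a rough illustration of grammar merging for a transition, the sketch below blends two weighted (stochastic) grammars of harmonic phrases as a transition parameter moves from one passage to the other. The data layout, rule sets and blending scheme are illustrative assumptions, not FATRA's actual mechanism.

```python
# Minimal sketch of weighted-grammar merging for a music transition; each
# passage is described by a toy probabilistic grammar mapping a nonterminal
# to weighted productions. Names and rules are illustrative only.
from collections import defaultdict

def merge_grammars(source, target, alpha):
    """Blend two stochastic grammars; alpha in [0, 1] moves from source to target."""
    merged = defaultdict(dict)
    for nonterminal in set(source) | set(target):
        rules = set(source.get(nonterminal, {})) | set(target.get(nonterminal, {}))
        weights = {
            rhs: (1 - alpha) * source.get(nonterminal, {}).get(rhs, 0.0)
                 + alpha * target.get(nonterminal, {}).get(rhs, 0.0)
            for rhs in rules
        }
        total = sum(weights.values()) or 1.0
        merged[nonterminal] = {rhs: w / total for rhs, w in weights.items()}
    return dict(merged)

# Example: interpolate harmony rules halfway through a transition.
calm = {"PHRASE": {("I", "IV", "V", "I"): 0.7, ("I", "vi", "IV", "V"): 0.3}}
tense = {"PHRASE": {("i", "bII", "V", "i"): 0.8, ("I", "IV", "V", "I"): 0.2}}
print(merge_grammars(calm, tense, alpha=0.5))
```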

    Harmonization and evaluation. Tweaking the parameters on human listeners

    Get PDF
Kansei models have been used to study the connotative meaning of music. In multimedia and mixed reality, automatically generated melodies are increasingly being used, and it is important to consider whether this music communicates feelings and, if so, which ones. Evaluating computer-generated melodies is not a trivial task. Given the difficulty of defining useful quantitative metrics for the quality of a generated musical piece, researchers often resort to human evaluation. In these evaluations, judges are often required to evaluate a set of generated pieces alongside benchmark pieces, the latter often composed by humans. While this kind of evaluation is relatively common, it is known that care should be taken when designing the experiment, as humans can be influenced by a variety of factors. In this paper, we examine the impact of the presence of harmony in the audio files that judges must evaluate, to see whether having an accompaniment can change the evaluation of generated melodies. To do so, we generate melodies with two different algorithms, harmonize them with an automatic tool that we designed for this experiment, and ask more than sixty participants to evaluate the melodies. Using statistical analyses, we show that harmonization does impact the evaluation process by emphasizing the differences among judgements.
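One way to picture the kind of analysis described above is a paired comparison of listener ratings for the same melodies with and without harmonization. The numbers below are dummy values for illustration only, and the specific test (a Wilcoxon signed-rank test via SciPy) is an assumption, not necessarily the paper's analysis.

```python
# Dummy paired ratings for illustration only (not data from the paper):
# each position is one melody, rated once without and once with harmonization.
import numpy as np
from scipy.stats import wilcoxon

melody_only = np.array([3.1, 2.8, 3.5, 2.2, 3.9, 3.0])
harmonized  = np.array([3.6, 2.4, 4.1, 1.8, 4.4, 3.2])

stat, p = wilcoxon(melody_only, harmonized)   # paired non-parametric test
print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")
# A wider spread of ratings under harmonization would be consistent with the
# claim that accompaniment emphasizes differences among judgements.
print("spread without:", melody_only.std(), "with:", harmonized.std())
```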

    Aspects of Self-awareness: An Anatomy of Metacreative Systems

    Get PDF
We formulate a model of computational metacreativity. It consists of various aspects of creative self-awareness that potentially contribute, in various combinations, to the metacreative capabilities of a creative system. Our model is inspired by a psychological view of metacreativity that promotes awareness of one's own thoughts during the creative process, and it draws from the field of self-adaptive software systems to explicate different viewpoints of metacreativity in creative systems. The model is designed to help in analyzing the metacreative capabilities of creative systems, and to guide the development of creative systems in a more autonomous and adaptive direction.
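Because the model draws on self-adaptive software systems, one way to make creative self-awareness concrete is a monitor/analyse/plan/execute style loop in which the system observes and adjusts its own generative behaviour. The sketch below is a hypothetical illustration under that assumption; none of the names come from the paper's model.

```python
# Hypothetical self-reflective loop, loosely in the spirit of self-adaptive
# software (monitor, analyse, plan, execute); all names are illustrative.
import random

def metacreative_loop(generate, evaluate, adapt, steps=10):
    """Run a creative process that monitors and adapts its own behaviour."""
    knowledge = {"history": [], "strategy": None}
    for _ in range(steps):
        artefact = generate(knowledge["strategy"])            # execute
        observation = evaluate(artefact)                      # monitor own output
        knowledge["history"].append((artefact, observation))  # shared knowledge
        knowledge["strategy"] = adapt(knowledge["history"])   # analyse + plan
    return knowledge

# Toy usage: random phrases, a preference for 8-note phrases, and a strategy
# that reuses the best phrase length observed so far.
gen = lambda s: [random.randint(60, 72) for _ in range(s or random.randint(4, 16))]
ev = lambda a: -abs(len(a) - 8)
ad = lambda h: len(max(h, key=lambda x: x[1])[0])
print(metacreative_loop(gen, ev, ad)["strategy"])
```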