    Techniques for generative melodies inspired by music cognition

    This article presents a series of algorithmic techniques for melody generation, inspired by models of music cognition. The techniques are designed for interactive composition, and so privilege brevity, simplicity, and flexibility over fidelity to the underlying models. The cognitive models canvassed span gestalt, preference rule, and statistical learning perspectives; this is a diverse collection with a common thread: the centrality of "expectations" to music cognition. We operationalize some recurrent themes across this collection as probabilistic descriptions of melodic tendency, codifying them as stochastic melody-generation techniques. The techniques are combined into a concise melody generator, with salient parameters exposed for ready manipulation in real time. These techniques may be especially relevant to algorithmic composers, the live-coding community, and to music psychologists and theorists interested in how computational interpretations of cognitive models "sound" in practice.
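One recurrent probabilistic description of melodic tendency in the cognitive models cited (gestalt and preference-rule accounts alike) is pitch proximity: small intervals are more expected than large leaps. A minimal sketch of how such a tendency could be codified as a stochastic melody-generation technique — this is an illustrative interpretation, not code from the article, and the parameter names (`sigma`, `pitch_range`) are hypothetical:

```python
import math
import random

def generate_melody(length=16, start=60, pitch_range=(55, 79), sigma=2.0, seed=None):
    """Stochastic melody sketch: each step samples an interval in semitones,
    with Gaussian-shaped weights that favour stepwise motion over leaps
    (a pitch-proximity tendency). Pitches are MIDI note numbers."""
    rng = random.Random(seed)
    lo, hi = pitch_range
    melody = [start]
    intervals = list(range(-7, 8))  # candidate intervals: down a fifth to up a fifth
    for _ in range(length - 1):
        cur = melody[-1]
        # Weight each candidate interval; zero weight if it would leave the range.
        weights = [
            math.exp(-(i * i) / (2 * sigma * sigma)) if lo <= cur + i <= hi else 0.0
            for i in intervals
        ]
        melody.append(cur + rng.choices(intervals, weights=weights)[0])
    return melody
```

Exposing `sigma` (leap tolerance) and `pitch_range` as live parameters mirrors the abstract's emphasis on salient parameters manipulable in real time.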

    An Interactive System for Generating Music from Moving Images

    Moving images contain a wealth of information pertaining to motion. Motivated by the interconnectedness of music and movement, we present a framework for transforming the kinetic qualities of moving images into music. We developed an interactive software system that takes video as input and maps its motion attributes into the musical dimension based on perceptually grounded principles. The system combines existing sonification frameworks with theories and techniques of generative music. To evaluate the system, we conducted a two-part experiment. First, we asked participants to make judgements on video-audio correspondence from clips generated by the system. Second, we asked participants to give ratings for audiovisual works created using the system. These experiments revealed that 1) the system is able to generate music with a significant level of perceptual correspondence to the source video's motion and 2) the system can effectively be used as an artistic tool for generative composition.
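The core of such a system is a mapping from extracted motion attributes to musical dimensions. The abstract does not specify the mapping, so the following is a hedged sketch of one perceptually plausible choice (faster motion maps to higher pitch and louder dynamics); the function name and ranges are illustrative assumptions, and motion magnitudes are taken as already extracted and normalized to 0..1:

```python
def motion_to_notes(magnitudes, pitch_range=(48, 84)):
    """Map normalized per-frame motion magnitudes (0..1) to (MIDI pitch,
    MIDI velocity) pairs: greater motion -> higher pitch and louder dynamics.
    This is one possible perceptual mapping, not the paper's actual scheme."""
    lo, hi = pitch_range
    notes = []
    for m in magnitudes:
        m = max(0.0, min(1.0, m))          # clamp to the expected range
        pitch = round(lo + m * (hi - lo))  # linear pitch mapping
        velocity = round(40 + m * 87)      # velocity 40..127, never silent
        notes.append((pitch, velocity))
    return notes
```

In a full pipeline, the magnitudes would come from a motion estimator (e.g. frame differencing or optical flow) and the note list would drive a synthesizer in real time.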