Techniques for generative melodies inspired by music cognition
This article presents a series of algorithmic techniques for melody generation, inspired by models of music cognition. The techniques are designed for interactive composition, and so privilege brevity, simplicity, and flexibility over fidelity to the underlying models. The cognitive models canvassed span gestalt, preference rule, and statistical learning perspectives; this is a diverse collection with a common thread: the centrality of “expectations” to music cognition. We operationalize some recurrent themes across this collection as probabilistic descriptions of melodic tendency, codifying them as stochastic melody-generation techniques. The techniques are combined into a concise melody generator, with salient parameters exposed for ready manipulation in real time. These techniques may be especially relevant to algorithmic composers, the live-coding community, and to music psychologists and theorists interested in how computational interpretations of cognitive models “sound” in practice.
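The stochastic approach the abstract describes can be sketched as a weighted random walk over scale degrees, where the weights encode cognitively motivated tendencies such as pitch proximity (small intervals are more expected) and post-skip reversal (after a large leap, motion tends to reverse direction). The scale, function names, and weightings below are illustrative assumptions, not the article's actual algorithm.

```python
import random

# C major scale as MIDI pitches; an illustrative choice, not from the article.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

def step_weight(prev_idx, cand_idx, last_interval):
    """Weight a candidate scale degree by two expectation-based tendencies:
    pitch proximity (nearby notes are likelier) and post-skip reversal
    (after a large leap, prefer motion in the opposite direction)."""
    interval = cand_idx - prev_idx
    weight = 1.0 / (1.0 + abs(interval))      # proximity: small steps preferred
    if abs(last_interval) >= 3 and interval * last_interval < 0:
        weight *= 2.0                          # reward reversing a leap
    return weight

def generate_melody(length=8, seed=None):
    """Generate a melody as a list of MIDI pitches via weighted sampling."""
    rng = random.Random(seed)
    idx = rng.randrange(len(SCALE))
    last_interval = 0
    melody = [SCALE[idx]]
    for _ in range(length - 1):
        candidates = list(range(len(SCALE)))
        weights = [step_weight(idx, c, last_interval) for c in candidates]
        new_idx = rng.choices(candidates, weights=weights)[0]
        last_interval, idx = new_idx - idx, new_idx
        melody.append(SCALE[idx])
    return melody
```

Because the tendencies are expressed as simple weighting functions, each one can be exposed as a real-time parameter (for example, scaling the reversal bonus) in the spirit of the interactive generator the article describes.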
Recommended from our members
Emergent Works
Nineteen sixty-four was a very important year for the Copyright Office. The copyright revision effort that would eventually become the Copyright Act of 1976 was in full swing; draft bills that largely resembled the final Act were introduced in July. The goal of this effort was to update the old 1909 Act to fully account for the incredible proliferation of mass media. Just as that effort was shifting to Congress, however, over at the Copyright Office, the first harbinger of a new set of problems was sounding: the Office issued its first registrations for computer programs.
An Interactive System for Generating Music from Moving Images
Moving images contain a wealth of information pertaining to motion. Motivated by the interconnectedness of music and movement, we present a framework for transforming the kinetic qualities of moving images into music. We developed an interactive software system that takes video as input and maps its motion attributes into the musical dimension based on perceptually grounded principles. The system combines existing sonification frameworks with theories and techniques of generative music. To evaluate the system, we conducted a two-part experiment. First, we asked participants to make judgements on video-audio correspondence from clips generated by the system. Second, we asked participants to give ratings for audiovisual works created using the system. These experiments revealed that 1) the system is able to generate music with a significant level of perceptual correspondence to the source video’s motion and 2) the system can effectively be used as an artistic tool for generative composition.
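A minimal sketch of the kind of motion-to-music mapping described above, assuming per-frame motion magnitudes have already been extracted from the video (for example via optical flow). The mapping rules, parameter names, and ranges here are illustrative assumptions, not the published system's actual design.

```python
def motion_to_notes(motion, pitch_range=(48, 84), vel_range=(40, 110)):
    """Map normalized per-frame motion magnitudes (0.0-1.0) to MIDI-like
    (pitch, velocity) note events. The mapping follows a simple perceptual
    intuition: faster motion yields higher pitch and louder dynamics."""
    lo_p, hi_p = pitch_range
    lo_v, hi_v = vel_range
    notes = []
    for m in motion:
        m = max(0.0, min(1.0, m))                  # clamp to the valid range
        pitch = round(lo_p + m * (hi_p - lo_p))    # scale motion to pitch
        velocity = round(lo_v + m * (hi_v - lo_v)) # scale motion to loudness
        notes.append((pitch, velocity))
    return notes

# A still passage followed by accelerating motion rises in pitch and loudness.
events = motion_to_notes([0.0, 0.5, 1.0])
```

In a full system, each musical dimension (pitch, dynamics, tempo, density) would get its own mapping curve, which is where the perceptually grounded principles the abstract mentions would come into play.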