Tension-driven Automatic Music Generation

Abstract

The Ancient Greeks are one of the first civilisations known to have created algorithms to compose music. Since then, algorithmic techniques have improved vastly alongside increasingly sophisticated computers. In the last two decades, much research in this area has focused on two goals: designing algorithms that generate music as close as possible to that of human composers, and implementing those algorithms to generate music automatically in interactive scenarios, such as video games. To meet these goals, automatically generated music should:

- focus on higher-level concepts, such as musical tension,
- have long-term structure, and
- be able to adapt to changes in real time.

Combining these three requirements is, however, a challenging task. This dissertation investigates three steps to overcome this challenge. First, we argue that Lerdahl's model of musical tension is suited to the automatic generation of tonal music that has long-term structure and matches a given tension profile. By means of an illustrative example, we review Lerdahl's model and implement a novel computational system to automate it. Second, we show that an effective generation strategy is to combine statistical methods with both rule-based methods and generative grammars to create a music generation system. Third, we implement the system and evaluate it through a collection of computational tests and empirical studies. Our evaluation shows that: (1) the system works effectively in real time, provided the input tension profiles do not contain too many steep transitions; (2) the hierarchical structure perceived by listeners in the generated music matches the patterns intended by the system; and (3) tension-changing input profiles are accurately matched by the generated music.
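The abstract refers to Lerdahl's model of musical tension, in which the tension of an event is, roughly, inherited along the branches of a prolongational tree: each event adds the tonal-pitch-space distance from the event that dominates it, so tension accumulates from the root down. A minimal sketch of this inheritance follows; the tree shape and distance values are illustrative assumptions, not figures from the dissertation.

```python
# Sketch of hierarchical tension inheritance in the spirit of Lerdahl's
# model: the tension of an event is the sum of tonal-pitch-space
# distances along its branch of the prolongational tree, from the event
# up to the root. Tree shape and distances are illustrative only.

# Each entry maps an event to (its dominating parent, the pitch-space
# distance from that parent). The root ("I") has no entry.
PROLONGATIONAL_TREE = {
    "V":  ("I", 5),   # dominant elaborates the tonic
    "ii": ("V", 7),   # supertonic elaborates the dominant
    "I2": ("I", 0),   # restated tonic, zero distance
}

def hierarchical_tension(event, tree=PROLONGATIONAL_TREE):
    """Sum inherited pitch-space distances from `event` up to the root."""
    total = 0
    while event in tree:
        parent, distance = tree[event]
        total += distance
        event = parent
    return total
```

Here `hierarchical_tension("ii")` inherits the V-to-ii distance plus the I-to-V distance, mirroring how tension accumulates down a branch; the surface contributions (dissonance, melodic attraction) that the full model adds on top are omitted from this sketch.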
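The generation strategy described above combines statistical methods with rule-based methods. One common way to realise such a combination, sketched here under assumptions of our own (the transition table and the rule are hypothetical, not the dissertation's actual model), is to sample from a Markov chain whose candidate transitions are first filtered by hand-written rules:

```python
import random

# Sketch of a statistical model constrained by rules: a first-order
# Markov chain over chord symbols whose candidate transitions are
# filtered by a rule before sampling. Transition probabilities and the
# rule are illustrative assumptions.

TRANSITIONS = {
    "I":  {"IV": 0.4, "V": 0.4, "vi": 0.2},
    "IV": {"V": 0.6, "I": 0.4},
    "V":  {"I": 0.7, "vi": 0.3},
    "vi": {"IV": 0.5, "V": 0.5},
}

def allowed(prev, nxt):
    # Example rule: forbid immediate repetition of the same chord.
    return nxt != prev

def next_chord(prev, rng):
    """Sample the next chord from rule-filtered transition weights."""
    candidates = {c: p for c, p in TRANSITIONS[prev].items()
                  if allowed(prev, c)}
    chords, weights = zip(*candidates.items())
    return rng.choices(chords, weights=weights)[0]

def generate(start, length, seed=0):
    """Generate a chord sequence of the given length from `start`."""
    rng = random.Random(seed)
    sequence = [start]
    for _ in range(length - 1):
        sequence.append(next_chord(sequence[-1], rng))
    return sequence
```

The design point is that the statistical model proposes and the rules dispose: filtering candidates before sampling keeps the output stylistically plausible while guaranteeing the hard constraints, which is the general shape of the hybrid strategy the abstract describes.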
