Dynamic Procedural Music Generation from NPC Attributes
Procedural content generation for video games (PCGG) has seen a steep increase in the past decade, aiming to foster emergent gameplay as well as to address the challenge of producing large amounts of engaging content quickly. Most work in PCGG has been focused on generating art and assets such as levels, textures, and models, or on narrative design to generate storylines and progression paths. Given the difficulty of generating harmonically pleasing and interesting music, procedural music generation for games (PMGG) has not seen as much attention during this time.
Music in video games is essential for establishing the developers' intended mood and environment. Given the deficit of PMGG content, this paper aims to address the demand for high-quality PMGG. It describes the system developed to solve this problem, which generates thematic music for non-player characters (NPCs) in real time based on developer-defined attributes, and which responds to the dynamic relationship between the player and the target NPC.
The system was evaluated by means of a user study: participants confronted four NPC bosses, each with its own uniquely generated dynamic track based on its attributes in relation to the player's. The survey gathered information on the perceived quality, dynamism, and helpfulness to gameplay of the generated music. Results showed that the generated music was generally pleasing and harmonious, and that while players could not detect the details of how, they were able to detect a general relationship between themselves and the NPCs as reflected by the music.
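The abstract does not specify how attributes map to musical parameters, but the general idea of deriving a track from developer-defined NPC attributes relative to the player's can be sketched as follows. All attribute names and formulas here are hypothetical illustrations, not the paper's actual system:

```python
# Hypothetical attribute-to-music mapping; the paper's real mapping is not
# given in the abstract. Illustrative sketch only.

def music_parameters(npc, player):
    """Derive coarse musical parameters from NPC attributes relative to the player."""
    threat = npc["strength"] / max(player["strength"], 1)  # relative threat level
    tempo_bpm = int(90 + min(threat, 2.0) * 30)            # faster when outmatched
    mode = "minor" if npc["aggression"] > 0.5 else "major" # darker for aggressive NPCs
    intensity = min(1.0, npc["health"] / npc["max_health"] + threat / 2)
    return {"tempo_bpm": tempo_bpm, "mode": mode, "intensity": round(intensity, 2)}

npc = {"strength": 12, "aggression": 0.8, "health": 80, "max_health": 100}
player = {"strength": 8}
print(music_parameters(npc, player))  # {'tempo_bpm': 135, 'mode': 'minor', 'intensity': 1.0}
```

Re-evaluating this mapping each frame as health and other attributes change is what would make the resulting track dynamic rather than fixed.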
Computational Creativity and Music Generation Systems: An Introduction to the State of the Art
Computational Creativity is a multidisciplinary field that tries to obtain creative behaviors from computers. One of its most prolific subfields is Music Generation (also called Algorithmic Composition or Musical Metacreation), which uses computational means to compose music. Due to the multidisciplinary nature of this research field, it is sometimes hard to define precise goals and to keep track of which problems can be considered solved by state-of-the-art systems and which instead need further development. With this survey, we try to give a complete introduction for those who wish to explore Computational Creativity and Music Generation. To do so, we first give a picture of the research on the definition and evaluation of creativity, both human and computational, needed to understand how computational means can be used to obtain creative behaviors, and of its importance within Artificial Intelligence studies. We then review the state of the art of Music Generation Systems, citing examples of all the main approaches to music generation and listing the open challenges identified by previous reviews on the subject. For each of these challenges, we cite works that have proposed solutions, describing what still needs to be done and some possible directions for further research.
Automatic Generation of Dynamic Musical Transitions in Computer Games
In video games, music must often change quickly from one piece to another due to player interaction, such as when moving between different areas. This quick change can sound jarring if the two pieces are very different from each other. Several transition techniques have been used in industry, such as the abrupt cut, crossfading, horizontal resequencing, and vertical reorchestration, among others. However, while several claims are made about their effectiveness (or lack thereof), none of these have been experimentally tested.
To investigate how effective each transition technique is, this dissertation empirically evaluates each one in a study informed by music psychology, based on several features identified as important for successful transitions. The obtained results led to a novel approach to musical transitions in video games: a multiple viewpoint system, with viewpoints modelled using Markov models. This algorithm allows the seamless generation of music that can serve as a transition between two composed pieces. While transitions in games normally occur at a zone boundary, the algorithm presented in this dissertation operates over a transition region, giving the generated music enough time to transition.
This novel approach was evaluated in a bespoke video game environment, where participants navigated through several pairs of different game environments and rated the resulting musical transitions. The results indicate that the generated transitions perform as well as crossfading, a technique commonly used in the industry. Since crossfading is not always appropriate, being able to use generated transitions gives composers another tool in their toolbox. Furthermore, the principled approach taken opens up avenues for further research.
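The core idea of Markov-model transition generation can be sketched in heavily simplified form. The dissertation uses a multiple viewpoint system over several musical features; the sketch below blends a single first-order pitch model per piece across the transition region, which illustrates only the basic blending idea:

```python
import random

# Simplified sketch: learn a first-order Markov model for each piece, then
# sample a transition melody from a mixture whose weight shifts from the
# source piece to the destination piece across the transition region.

def learn_markov(notes):
    """Count-based first-order transition table: note -> {next_note: probability}."""
    counts = {}
    for a, b in zip(notes, notes[1:]):
        counts.setdefault(a, {})
        counts[a][b] = counts[a].get(b, 0) + 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def blended_next(note, src, dst, t, rng):
    """Sample the next note from a (1 - t) * src + t * dst mixture, t in [0, 1]."""
    mix = {}
    for table, weight in ((src, 1 - t), (dst, t)):
        for b, p in table.get(note, {}).items():
            mix[b] = mix.get(b, 0.0) + weight * p
    if sum(mix.values()) <= 0:        # note unknown under the current blend:
        return note                   # hold it rather than fail
    choices, weights = zip(*mix.items())
    return rng.choices(choices, weights=weights)[0]

rng = random.Random(0)
piece_a = learn_markov([60, 62, 64, 62, 60, 62, 64])  # MIDI pitches, piece A
piece_b = learn_markov([64, 65, 67, 65, 64, 65, 67])  # piece B, shares pitch 64
note, melody = 60, []
for step in range(8):                 # t ramps from 0 to 1 across the region
    note = blended_next(note, piece_a, piece_b, step / 7, rng)
    melody.append(note)
print(melody)
```

Because the mixture weight is tied to the player's position in the transition region rather than to a fixed clock, the generated material can take as long as the player does to cross the boundary.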
Applying Transformational Music Theory to Dynamic Music Composition for Game Soundtracks: A practice-based investigation
Dynamic music systems are often found in video games for interactively altering music as a result of player input and game events. Real-time recomposition of music is a "holy grail" of dynamic soundtracks. However, common approaches based on managing multi-track audio and sequenced segments face challenges in responding musically to events. Audio tracks are limited due to a lack of granularity, and sequenced content is unwieldy due to the complexity of relationships. Many of these limitations can be addressed, in part, by the representation used for the musical material. In this thesis I engage with generative sequencing for dynamic soundtracks through application of algorithmic composition to manage complexity.
In this practice, I utilise a novel representational model to situate musical material in spatial relationships, drawing on concepts from transformational music theory. I have implemented this model as a new software library, ScaleVec, which is also explained in detail in this research. This alternate approach to dynamic soundtracks addresses shortcomings in the granularity of contemporary stem-based approaches by generating and rendering a soundtrack in real-time. Furthermore, the spatial representation facilitates a path-based model which manages complex relationships by mapping out where the soundtrack has been, where it is planning to go and where it may diverge in response to events.
Through a practice-led enquiry, I evaluate the quality of this approach by considering dynamic music for games according to four categories of adaptive behaviours: immediate (short-timescale), game-state (mid-timescale), plot (long-timescale) and variation (steady-state). These four timescales are applied as a conceptual framework for the contextualisation and discussion of two prototype soundtracks, Spaceship and Submarine. These two works utilise ScaleVec to control the musical output of the generative processes, presenting a parametric interface for the dynamic behaviours of the soundtrack. The reflection and discussion of my practice examine the perceived affordances, musical quality and the nature of my compositional process. The capacity for real-time recomposition through generative scoring, by managing musical spaces with ScaleVec, represents a paradigmatic shift which allows for deeper engagement with the interactive facets of video game music composition, thereby embracing the emergent nature of gameplay as an integral part of the soundtrack.
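ScaleVec's internals are not described in the abstract. As a hypothetical illustration of the underlying idea from transformational music theory, scales can be encoded as pitch-class vectors so that "distance" between harmonic regions becomes measurable and a path through the space can be planned:

```python
# Hypothetical sketch, not ScaleVec's actual implementation: represent each
# scale as a 12-dimensional binary pitch-class vector, so closely related
# keys are literally close in the space.

def scale_vector(pitch_classes):
    """Binary vector marking which of the 12 pitch classes are present."""
    return tuple(1 if pc in pitch_classes else 0 for pc in range(12))

def distance(a, b):
    """Hamming distance: number of tones that differ between two scales."""
    return sum(x != y for x, y in zip(a, b))

c_major = scale_vector({0, 2, 4, 5, 7, 9, 11})       # C D E F G A B
g_major = scale_vector({7, 9, 11, 0, 2, 4, 6})       # one sharp away
e_flat_major = scale_vector({3, 5, 7, 8, 10, 0, 2})  # three flats away

print(distance(c_major, g_major))        # 2: only F vs F# differ
print(distance(c_major, e_flat_major))   # 6: a much more distant key
```

A path-based dynamic soundtrack could then move between harmonic regions through a sequence of small steps in this space, rather than jumping directly, which matches the thesis's description of mapping out where the soundtrack has been and where it may go next.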