
    Co-operative coevolution for computational creativity: a case study in videogame design

    The term procedural content generation (PCG) refers to writing software which can synthesise content for a game (or other media such as film) without further intervention from a designer. PCG has become a rich area of research in recent years, finding new ways to apply artificial intelligence to generate high-quality game content such as levels, weapons or puzzles. Such research is generally constrained to a single type of content, however, with the assumption that the remainder of the game's design will be fixed by an external designer. Generating many aspects of a game's design simultaneously, perhaps ultimately generating the entirety of a game's design, is not a well-explored idea. The notion of automated game design is not well-established, and is not seen as a task distinct from simply performing many PCG tasks at the same time. In particular, the high-level design tasks guiding the creative direction of a game are all but completely absent from the PCG literature, because it is rare that a designer wishes to hand over such responsibility to a PCG system. We present here ANGELINA, an automated game designer that has developed games using a multi-faceted approach to content generation underpinned by a co-operative co-evolutionary approach which breaks down a game design into several distinct tasks, each of which is controlled by an evolutionary subsystem within ANGELINA. We show that this approach works well for automating game design, can be ported across many game engines and game genres, and can be enhanced and extended using novel computational creativity techniques to give the system a heightened sense of autonomy and independence.
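    To make the co-operative co-evolutionary pattern concrete, the following is a minimal sketch of the loop the abstract describes, not ANGELINA's actual implementation: each design task evolves in its own subpopulation, and an individual is scored by how well it combines with the current best members of the other subpopulations. The species names, genome representation, and fitness function are hypothetical stand-ins.

```python
import random

# Hypothetical design tasks, each handled by its own evolutionary subsystem.
SPECIES = ["map", "layout", "ruleset"]
POP_SIZE, GENERATIONS = 20, 50

def random_individual():
    # Placeholder genome: a vector of abstract design parameters.
    return [random.random() for _ in range(8)]

def mutate(genome):
    child = genome[:]
    child[random.randrange(len(child))] = random.random()
    return child

def evaluate_game(design):
    # Stand-in composite fitness: a real system would assemble and
    # play-test the game; here we just reward parameter agreement.
    values = [v for genome in design.values() for v in genome]
    return -abs(sum(values) / len(values) - 0.5)

populations = {s: [random_individual() for _ in range(POP_SIZE)] for s in SPECIES}
best = {s: populations[s][0] for s in SPECIES}

for _ in range(GENERATIONS):
    for s in SPECIES:
        def fitness(genome):
            design = dict(best)   # collaborate with the other species' best
            design[s] = genome
            return evaluate_game(design)
        scored = sorted(populations[s], key=fitness, reverse=True)
        best[s] = scored[0]
        parents = scored[: POP_SIZE // 2]
        populations[s] = parents + [mutate(random.choice(parents)) for _ in parents]

print({s: round(sum(g) / len(g), 3) for s, g in best.items()})
```

    The key property this illustrates is the decomposition: no single population ever evaluates a whole game on its own; fitness is always measured in the context of the collaborators supplied by the other subsystems.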

    New evaluation methods for automatic music generation

    Recent research in the field of automatic music generation lacks rigorous and comprehensive evaluation methods, creating plagiarism risks and a partial understanding of generation performance. To contribute to evaluation methodology in this field, I first introduce the originality report for measuring the extent to which an algorithm copies from the input music. It starts by constructing a baseline that determines the extent to which human composers borrow from themselves and each other in an existing music corpus. I then apply a similar analysis to the outputs of the MAIA Markov and Music Transformer generation algorithms, and compare the results to the baseline. Results indicate that the originality of Music Transformer's output is below the 95% confidence interval of the baseline, while MAIA Markov stays within that interval. Second, I conduct a listening study to comparatively evaluate music generation systems along six musical dimensions: stylistic success, aesthetic pleasure, repetition or self-reference, melody, harmony, and rhythm. A range of models are used to generate 30-second excerpts in the style of Classical string quartets and classical piano improvisations. Fifty participants with relatively high musical knowledge rate unlabelled samples of computer-generated and human-composed excerpts. I use non-parametric Bayesian hypothesis testing to interpret the results. The results show that the strongest deep learning method, Music Transformer, has performance equivalent to a non-deep learning method, MAIA Markov, and that a significant gap remains between any algorithmic method and human-composed excerpts. Third, I introduce six musical features: statistical complexity, transitional complexity, arc score, tonality ambiguity, time intervals and onset jitters, and investigate their correlations with the collected ratings. The results show that human-composed music remains at a consistent level of statistical complexity, while the computer-generated excerpts have either lower or higher statistical complexity and receive lower ratings. This thesis contributes to the evaluation methodology of automatic music generation through the originality report, comparative evaluation and musicological analysis.
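    The following is a hedged sketch of the kind of analysis an originality report implies: score how much a generated piece borrows verbatim from a corpus, and compare that against a leave-one-out baseline of how much human pieces borrow from each other. The pitch n-gram overlap measure, the normal-approximation confidence interval, and the toy data are assumptions standing in for the thesis's actual metric.

```python
# Sketch of an originality check: n-gram overlap as a stand-in similarity measure.

def ngrams(pitches, n=6):
    return {tuple(pitches[i:i + n]) for i in range(len(pitches) - n + 1)}

def borrowing(piece, corpus, n=6):
    # Fraction of the piece's pitch n-grams appearing verbatim in the corpus.
    piece_grams = ngrams(piece, n)
    corpus_grams = set().union(*(ngrams(p, n) for p in corpus))
    return len(piece_grams & corpus_grams) / max(len(piece_grams), 1)

def baseline(corpus, n=6):
    # Leave-one-out borrowing among human pieces, with a ~95% normal CI.
    scores = [borrowing(p, corpus[:i] + corpus[i + 1:], n)
              for i, p in enumerate(corpus)]
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / (len(scores) - 1)
    half_width = 1.96 * (var / len(scores)) ** 0.5
    return mean - half_width, mean + half_width

# Toy data: pitch sequences as MIDI note numbers.
human = [[60, 62, 64, 65, 67, 69, 71, 72] * 4,
         [60, 64, 67, 72, 67, 64, 60, 64] * 4,
         [62, 65, 69, 72, 69, 65, 62, 65] * 4]
generated = [60, 62, 64, 65, 67, 69, 71, 72] * 4   # copies a human piece heavily

lo, hi = baseline(human)
score = borrowing(generated, human)
print(f"baseline CI ({lo:.2f}, {hi:.2f}); generated piece scores {score:.2f}")
```

    A generated piece scoring above the upper bound of the baseline interval would be flagged as borrowing more than human composers typically do, which mirrors the abstract's finding that Music Transformer falls outside the baseline interval while MAIA Markov stays within it.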

    Assessing progress in building autonomously creative systems

    Determining conclusively whether a new version of software creatively exceeds a previous version or a third-party system is difficult, yet very important for scientific approaches in Computational Creativity research. We argue that software product and process must be assessed together when measuring progress, and we introduce a diagrammatic formalism which exposes various timelines of creative acts in the construction and execution of successive versions of artefact-generating software. The formalism enables estimations of progress or regress from system to system by comparing their diagrams and assessing changes in the quality, quantity and variety of creative acts undertaken; audience perception of behaviours; and the quality of artefacts produced. We present a case study in the building of evolutionary art systems, and we use the formalism to highlight various issues in measuring progress in the building of creative systems.
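    One way to picture the comparison the formalism enables is as a log of creative acts per system version, profiled by quantity and variety. The sketch below is a hypothetical encoding, not the paper's actual diagrams: the timeline and act categories are assumptions chosen for illustration.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class CreativeAct:
    timeline: str   # e.g. "development" or "execution" (assumed categories)
    kind: str       # e.g. "concept", "aesthetic", "artefact" (assumed categories)

def profile(acts):
    # Compare versions on how many creative acts occur and how varied they are.
    counts = Counter((a.timeline, a.kind) for a in acts)
    return {"quantity": len(acts), "variety": len(counts)}

v1 = [CreativeAct("execution", "artefact")] * 10
v2 = [CreativeAct("execution", "artefact")] * 10 + [
    CreativeAct("development", "concept"),
    CreativeAct("execution", "aesthetic"),
]

print("v1:", profile(v1))
print("v2:", profile(v2))   # v2 shows progress on variety, not just quantity
```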