
    AI Methods in Algorithmic Composition: A Comprehensive Survey

    Algorithmic composition is the partial or total automation of the process of music composition using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence. This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
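
    As a hedged illustration of one family of techniques the survey names (probabilistic methods), the sketch below trains a first-order Markov chain on a toy note list and samples a new melody from it. The corpus, note names and function names are invented for the example; nothing here is taken from the survey itself.

```python
# Illustrative only: a first-order Markov chain over note names,
# one of the probabilistic methods mentioned in the survey abstract.
import random
from collections import defaultdict

def train_markov(notes):
    """Count transitions between consecutive notes of a toy corpus."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=None):
    """Random-walk through the transition table to produce a new melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:          # dead end: restart from the opening note
            options = [start]
        melody.append(rng.choice(options))
    return melody

if __name__ == "__main__":
    corpus = ["C4", "D4", "E4", "C4", "E4", "F4", "G4", "E4", "D4", "C4"]
    model = train_markov(corpus)
    print(generate(model, "C4", 12, seed=1))
```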

    Interfacing Jazz: A Study in Computer-Mediated Jazz Music Creation And Performance

    This dissertation focuses on the study and development of computer-mediated interfaces and algorithms for music performance and creation. It is mainly centred on traditional jazz accompaniment and explores meta-control over musical events to enrich the experience of playing jazz for musicians and non-musicians alike, both individually and collectively. It aims to complement existing research on the automatic generation of jazz music and on new interfaces for musical expression by presenting a group of specially designed algorithms and control interfaces that implement intelligent, musically informed processes to automatically produce sophisticated and stylistically correct musical events. These algorithms and control interfaces are designed to take simplified and intuitive input from the user and to coherently manage group playing by establishing integrated control over global common parameters. Using these algorithms, two proposals for different applications are presented in order to illustrate the benefits and potential of this meta-control approach for extending existing paradigms for musical applications, as well as for creating new ones. These proposals focus on two main perspectives where computer-mediated music can benefit from this approach, namely musical performance and creation, both of which can also be viewed from an educational perspective. A core framework, implemented in the Max programming environment, integrates all the functionalities of the instrument algorithms and control strategies, as well as global control, synchronisation and communication between all the components. This platform acts as a base from which different applications can be created. For this dissertation, two main application concepts were developed. The first, PocketBand, takes a single-user, one-man-band approach, in which a single interface allows one user to play up to three instruments. This prototype application, for a multi-touch tablet, was the test bed for several experiments with the user interface and playability issues that helped define and improve the mediated-interface concept and the instrument algorithms. The second prototype aims at creating a collective experience. It is a multi-user installation for a multi-touch table, called MyJazzBand, that allows up to four users to play together as members of a virtual jazz band. Both applications allow users to experience and effectively participate as jazz band musicians, whether they are musically trained or not. The applications can be used for educational purposes, whether as a real-time accompaniment system for any jazz instrument practitioner or singer, as a source of information on harmonic procedures, or as a practical tool for creating quick arrangement drafts or music lesson contents. I will also demonstrate that this approach reflects a growing trend in commercial music software, which has already begun to explore and implement mediated interfaces and intelligent music algorithms.
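
    The meta-control idea described above lends itself to a small sketch: one high-level parameter drives several instrument-level accompaniment settings at once, so a non-musician can steer the whole band coherently. This is only an assumed, simplified reading of the approach; the parameter name `energy`, the instruments and all concrete values are hypothetical and are not taken from PocketBand or MyJazzBand.

```python
# Hypothetical sketch of meta-control: one global parameter fans out to
# per-instrument accompaniment parameters. All names and values are invented.
def accompaniment_settings(energy: float) -> dict:
    """Map one high-level 'energy' value (0.0-1.0) to per-instrument settings."""
    energy = max(0.0, min(1.0, energy))
    return {
        "drums": {"pattern": "brushes" if energy < 0.4 else "sticks",
                  "fill_probability": 0.1 + 0.5 * energy},
        "bass":  {"style": "two-feel" if energy < 0.5 else "walking",
                  "notes_per_bar": 2 if energy < 0.5 else 4},
        "piano": {"voicing_density": 2 + round(3 * energy),   # chord tones per voicing
                  "comp_hits_per_bar": 1 + round(3 * energy)},
    }

if __name__ == "__main__":
    for e in (0.2, 0.8):      # a quiet ballad feel vs. an up-tempo feel
        print(e, accompaniment_settings(e))
```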

    Algorithmic composition of music in real-time with soft constraints

    Music has been the subject of formal approaches for a long time, ranging from Pythagoras’ elementary research on tonal systems to J. S. Bach’s elaborate formal composition techniques. Especially in the 20th century, much music was composed based on formal techniques: algorithmic approaches for composing music were developed by composers like A. Schoenberg as well as in the scientific community. So far, a variety of mathematical techniques have been employed for composing music, e.g. probability models, artificial neural networks or constraint-based reasoning. Recently, interactive music systems have become popular: existing songs can be replayed with musical video games, and original music can be composed interactively with easy-to-use applications running, for example, on mobile devices. However, applications which algorithmically generate music in real time based on user interaction are mostly experimental and limited in either interactivity or musicality. There are many enjoyable applications, but there are also many opportunities for improvements and novel approaches. The goal of this work is to provide a general and systematic approach for specifying and implementing interactive music systems. We introduce an algebraic framework for interactively composing music in real time with a reasoning technique called ‘soft constraints’: this technique allows modeling and solving a large range of problems and is suited particularly well to problems with soft and concurrent optimization goals. Our framework is based on well-known theories for music and soft constraints and allows specifying interactive music systems by declaratively defining ‘how the music should sound’ with respect to both user interaction and musical rules. Based on this core framework, we introduce an approach for interactively generating music similar to existing melodic material. With this approach, musical rules can be defined by playing notes (instead of writing code) in order to make interactively generated melodies comply with a certain musical style. We introduce an implementation of the algebraic framework in .NET and present several concrete applications: ‘The Planets’ is an application controlled by a table-based tangible interface where music can be composed interactively by arranging planet constellations. ‘Fluxus’ is an application geared towards musicians which allows training melodic material that can be used to define musical styles for applications geared towards non-musicians. Based on musical styles trained with the Fluxus sequencer, we introduce a general approach for transforming spatial movements into music and present two concrete applications: the first is controlled by a touch display, the second by a motion tracking system. Finally, we investigate how interactive music systems can be used in the area of pervasive advertising in general and how our approach can be used to realize ‘interactive advertising jingles’.
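
    A minimal sketch of the soft-constraint idea, assuming a much simpler setting than the algebraic .NET framework described above: each candidate next note is scored by weighted, possibly conflicting preferences (musical rules and a user-interaction target), and the best compromise is selected. All constraint functions, weights and pitch ranges are invented for the example.

```python
# Illustrative soft-constraint note selection: weighted preferences are
# summed per candidate pitch and the best-scoring compromise is chosen.
C_MAJOR = {60, 62, 64, 65, 67, 69, 71, 72}   # MIDI pitches of one octave

def in_scale(pitch, prev):            # musical rule: stay in the scale
    return 1.0 if pitch in C_MAJOR else 0.0

def stepwise(pitch, prev):            # musical rule: prefer small intervals
    return 1.0 / (1.0 + abs(pitch - prev))

def follow_user(target):              # user interaction: pull toward a target pitch
    def constraint(pitch, prev):
        return 1.0 / (1.0 + abs(pitch - target))
    return constraint

def next_note(prev, constraints):
    """Pick the candidate pitch with the best weighted constraint score."""
    candidates = range(55, 80)
    def score(p):
        return sum(w * c(p, prev) for w, c in constraints)
    return max(candidates, key=score)

if __name__ == "__main__":
    constraints = [(1.0, in_scale), (0.6, stepwise), (0.8, follow_user(67))]
    print(next_note(60, constraints))  # a compromise between rules and user input
```

    The point of the sketch is the scoring scheme: none of the preferences is hard, so conflicting goals (stay close to the previous note, move toward the user's target) are traded off by their weights rather than causing failure.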

    Proceedings of the 2015 WA Chapter of MSA Symposium on Music Performance and Analysis

    This publication, entitled Proceedings of the 2015 WA Chapter MSA Symposium on Music Performance and Analysis, is a double-blind peer-reviewed conference proceedings published by the Western Australian Chapter of the Musicological Society of Australia, in conjunction with the Western Australian Academy of Performing Arts, Edith Cowan University, edited by Jonathan Paget, Victoria Rogers, and Nicholas Bannan. The original symposium was held at the University of Western Australia, School of Music, on 12 December 2015. With the advent of performer-scholars within Australian universities, the intersections between analytical knowledge and performance are constantly being re-evaluated and reinvented. This collection of papers presents several strands of analytical discourse, including: (1) the analysis of music recordings, particularly in terms of historical performance practices; (2) reinventions of the 'page-to-stage' paradigm, employing new analytical methods; (3) analytical knowledge applied to pedagogy, particularly concerning improvisation; and (4) so-called 'practice-led' research.

    Adaptive music: Automated music composition and distribution

    Creativity, or the ability to produce new useful ideas, is commonly associated with human beings, but there are many other examples in nature where this phenomenon can be observed. Inspired by this fact, in engineering, and particularly in the computational sciences, many different models have been developed to tackle a number of problems. Music, a form of art broadly present throughout human history, is the main field addressed in this thesis, taking advantage of the kind of ideas that bring diversity and creativity to nature and computation. We present Melomics, an algorithmic composition method based on evolutionary search, with a genetic encoding of the solutions, which are interpreted in a complex developmental process that leads to music in standard formats. This bioinspired compositional system has exhibited a high creative power and versatility to produce music of different types, which on many occasions has proven to be indistinguishable from music made by human composers. The system has also enabled the emergence of a set of completely novel applications: from effective tools that help anyone easily obtain the precise music they need, to radically new uses such as adaptive music for therapy, amusement or many other purposes. It is clear to us that there is much research work yet to do in this field, and that countless new and as yet unimaginable uses will derive from it.
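
    A toy sketch in the spirit of evolutionary search for composition, not Melomics itself: Melomics' genetic encoding and developmental interpretation are far richer, and the fitness function, scale and operators below are deliberately simple stand-ins invented for the example.

```python
# Illustrative evolutionary loop: a melody genome is evolved against a
# made-up fitness that rewards stepwise motion and a closing tonic.
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]      # C major, MIDI pitches

def random_genome(length=16):
    return [random.choice(SCALE) for _ in range(length)]

def fitness(genome):
    """Simple proxy: count stepwise moves, bonus for ending on the tonic."""
    smooth = sum(1 for a, b in zip(genome, genome[1:]) if abs(a - b) <= 2)
    cadence = 4 if genome[-1] == 60 else 0
    return smooth + cadence

def mutate(genome, rate=0.1):
    return [random.choice(SCALE) if random.random() < rate else n for n in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=200, pop_size=30):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    random.seed(0)
    print(evolve())
```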

    Creative Support Musical Composition System: a study on Multiple Viewpoints Representations in Variable Markov Oracle

    The mid-20th century witnessed the emergence of an area of study focused on the automatic generation of musical content by computational means. Early examples focus on offline processing of musical data; more recently, the community has moved towards interactive, real-time musical systems. Furthermore, a recent trend stresses the importance of assistive technology, which promotes a user-in-the-loop approach by offering multiple suggestions to a given creative problem. In this context, my research aims to foster new software tools for creative support systems, where algorithms can participate collaboratively in the composition flow. In greater detail, I seek a tool that learns from variable-length musical data to provide real-time feedback during the composition process. In light of the multidimensional and hierarchical structure of music, I aim to study representations that abstract its temporal patterns, to foster the generation of multiple ranked solutions for a given musical context. Ultimately, the subjective nature of the choice is left to the user, to whom a limited number of 'optimal' solutions is provided. A symbolic music representation manifested as Multiple Viewpoint Models, combined with the Variable Markov Oracle (VMO) automaton, is used to test the interaction between the multi-dimensionality of the representation and the optimality of the VMO model in providing style-coherent, novel and diverse solutions. To evaluate the system, an experiment was conducted to validate the tool in an expert-based scenario with composition students, using the Creativity Support Index test.
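
    The 'multiple viewpoints' part of the approach can be sketched compactly: the same note sequence is described by several derived viewpoints at once (pitch, interval, contour, duration ratio). The VMO automaton and the ranking of solutions are not reproduced here, and the viewpoint names and toy melody are chosen for illustration only.

```python
# Illustrative multiple-viewpoint extraction from a tiny (pitch, duration) melody.
notes = [("C4", 1.0), ("E4", 0.5), ("G4", 0.5), ("E4", 1.0)]   # (pitch, duration)

MIDI = {"C4": 60, "E4": 64, "G4": 67}

def viewpoints(sequence):
    """Derive several parallel descriptions of one note sequence."""
    pitches = [MIDI[p] for p, _ in sequence]
    durations = [d for _, d in sequence]
    return {
        "pitch": pitches,
        "interval": [b - a for a, b in zip(pitches, pitches[1:])],
        "contour": ["up" if b > a else "down" if b < a else "same"
                    for a, b in zip(pitches, pitches[1:])],
        "duration_ratio": [b / a for a, b in zip(durations, durations[1:])],
    }

if __name__ == "__main__":
    for name, values in viewpoints(notes).items():
        print(name, values)
```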

    Automated manipulation of musical grammars to support episodic interactive experiences

    Music is used to enhance the experience of participants and visitors in a range of settings including theatre, film, video games, installations and theme parks. These experiences may be interactive, contrastingly episodic and of variable duration. Hence, the musical accompaniment needs to be dynamic and to transition between contrasting music passages. In these contexts, computer generation of music may be necessary for practical reasons, including distribution and cost. Automated and dynamic composition algorithms exist but are not well suited to a highly interactive, episodic context owing to transition-related problems including discontinuity, abruptness, extended repetitiveness and lack of musical granularity and musical form. Addressing these problems requires algorithms capable of reacting to participant behaviour and episodic change in order to generate formic music that is continuous and coherent during transitions. This thesis presents the Form-Aware Transitioning and Recovering Algorithm (FATRA) for real-time, adaptive, form-aware music generation to provide continuous musical accompaniment in an episodic context. FATRA combines stochastic grammar adaptation and grammar merging in real time. The Form-Aware Transition Engine (FATE) implementation of FATRA estimates the time of occurrence of upcoming narrative transitions and generates a harmonic sequence as narrative accompaniment, with a focus on coherent, form-aware transitions between music passages of contrasting character. Using FATE, FATRA has been evaluated in three perceptual user studies: an audio-augmented real museum experience, a computer-simulated museum experience and a music-focused online study detached from narrative. Music transitions of FATRA were benchmarked against common approaches of the video game industry, i.e. crossfading and direct transitions. The participants were overall content with the music of FATE during their experience. Transitions of FATE were significantly favoured over the crossfading benchmark and competitive with the direct-transitions benchmark, without statistical significance for the latter comparison. In addition, technical evaluation demonstrated capabilities of FATRA including form generation, repetitiveness avoidance and style/form recovery in the case of falsely predicted narrative transitions. The technical results, along with perceptual preference and competitiveness against the benchmark approaches, are deemed positive, and the structural advantages of FATRA, including form-aware transitioning, carry considerable potential for future research.
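
    As a hedged illustration of the two ingredients named above, stochastic grammar expansion and grammar merging, the sketch below expands a probability-weighted chord grammar and blends the rule weights of two contrasting grammars to suggest a gradual transition between episodes. FATRA's actual grammars, weights and merge policy are not given in the abstract, so every concrete rule here is invented.

```python
# Illustrative stochastic grammar expansion and weight-blending merge.
import random

def expand(symbol, grammar, rng, depth=0):
    """Recursively expand a symbol using probability-weighted productions."""
    if symbol not in grammar or depth > 8:
        return [symbol]                       # terminal (a chord label)
    productions, weights = zip(*grammar[symbol])
    chosen = rng.choices(productions, weights=weights, k=1)[0]
    out = []
    for s in chosen:
        out.extend(expand(s, grammar, rng, depth + 1))
    return out

def merge(g1, g2, alpha=0.5):
    """Blend two grammars' rule weights; alpha moves from the old to the new style."""
    merged = {}
    for sym in set(g1) | set(g2):
        rules = {}
        for prod, w in g1.get(sym, []):
            rules[tuple(prod)] = rules.get(tuple(prod), 0.0) + (1 - alpha) * w
        for prod, w in g2.get(sym, []):
            rules[tuple(prod)] = rules.get(tuple(prod), 0.0) + alpha * w
        merged[sym] = [(list(p), w) for p, w in rules.items()]
    return merged

calm  = {"S": [(["I", "IV", "I"], 0.7), (["I", "vi", "IV", "I"], 0.3)]}
tense = {"S": [(["i", "bVI", "V"], 0.8), (["i", "V", "i"], 0.2)]}

if __name__ == "__main__":
    rng = random.Random(3)
    for alpha in (0.0, 0.5, 1.0):             # gradual shift between two episodes
        print(alpha, expand("S", merge(calm, tense, alpha), rng))
```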