
    Sensoring a Generative System to Create User-Controlled Melodies

    The automatic generation of music is an emerging field of research that has attracted the attention of many researchers, and there is consequently a broad spectrum of state-of-the-art work in this area. Many systems have been designed to facilitate collaboration between humans and machines in the generation of valuable music. This research proposes an intelligent system that generates melodies under the supervision of a user, who guides the process through a mechanical device. The device captures the movements of the user and translates them into a melody. The system is based on a Case-Based Reasoning (CBR) architecture, enabling it to learn from previous compositions and to improve its performance over time. The device allows the user to adapt the composition to their preferences, adjusting the pace of a melody to a specific context or generating lower- or higher-pitched notes. Additionally, the device can automatically resist some of the user's movements, so that the user learns how to create a good melody. Several experiments were conducted to analyze the quality of the system and the melodies it generates. According to the users' validation, the proposed system can generate music that follows a concrete style. Most users also considered the partial control exerted by the device essential to the quality of the generated music.
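    As a rough illustration of the case-based step the abstract describes, the sketch below (hypothetical, not the authors' implementation) retrieves the stored melody whose tempo and register best match a device reading and adapts it by transposition; all names and case data are assumptions.

```python
# A minimal sketch, not the authors' code: the device reading sets a target tempo
# and pitch register, the nearest stored case is retrieved, and its melody is
# adapted by transposition. The case base below is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Case:
    tempo_bpm: float      # pace of the stored melody
    register: float       # mean MIDI pitch of the stored melody
    melody: list[int]     # MIDI note numbers

CASE_BASE = [
    Case(90.0, 60.0, [60, 62, 64, 65, 67]),
    Case(120.0, 72.0, [72, 71, 69, 67, 72]),
]

def retrieve(target_tempo: float, target_register: float) -> Case:
    """Pick the case whose tempo and register are closest to the device reading."""
    return min(
        CASE_BASE,
        key=lambda c: abs(c.tempo_bpm - target_tempo) + abs(c.register - target_register),
    )

def adapt(case: Case, target_register: float) -> list[int]:
    """Transpose the retrieved melody toward the register the user asked for."""
    shift = round(target_register - case.register)
    return [n + shift for n in case.melody]

# Example: the user slows the device down and pushes toward lower notes.
case = retrieve(target_tempo=80.0, target_register=55.0)
print(adapt(case, target_register=55.0))
```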

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    Objective - What musical content is to be generated (e.g. melody, polyphony, accompaniment or counterpoint)? For what destination and for what use: to be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file)?
    Representation - What concepts are to be manipulated (e.g. waveform, spectrogram, note, chord, meter and beat)? What format is to be used (e.g. MIDI, piano roll or text)? How will the representation be encoded (e.g. scalar, one-hot or many-hot)?
    Architecture - What type(s) of deep neural network are to be used (e.g. feedforward network, recurrent network, autoencoder or generative adversarial network)?
    Challenge - What are the limitations and open challenges (e.g. variability, interactivity and creativity)?
    Strategy - How do we model and control the process of generation (e.g. single-step feedforward, iterative feedforward, sampling or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
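    As a small illustration of the "representation" dimension discussed in the survey, the sketch below encodes a short monophonic melody as a one-hot piano roll; the pitch range and melody are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of one representation choice the survey discusses:
# encoding a monophonic melody as a one-hot piano roll (time steps x pitches).
# The pitch range and melody here are illustrative assumptions.

import numpy as np

PITCH_RANGE = range(60, 72)                  # one octave, MIDI 60..71
melody = [60, 62, 64, 65, 67, 65, 64, 62]    # one note per time step

roll = np.zeros((len(melody), len(PITCH_RANGE)), dtype=np.int8)
for t, note in enumerate(melody):
    roll[t, note - PITCH_RANGE.start] = 1    # one-hot: exactly one active pitch per step

print(roll.shape)    # (8, 12): 8 time steps, 12 pitches in range
```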

    AI Methods in Algorithmic Composition: A Comprehensive Survey

    Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence. This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
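    To make one of the technique families the survey covers concrete, here is a minimal sketch of a probabilistic method: a first-order Markov chain over pitches trained on a toy sequence. The training melody and all parameters are illustrative assumptions, not drawn from the survey.

```python
# A minimal sketch of a probabilistic approach to algorithmic composition:
# a first-order Markov chain over pitches, trained on a toy melody.
# All data below is an illustrative assumption.

import random
from collections import defaultdict

training_melody = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]

# Count pitch-to-pitch transitions.
transitions = defaultdict(list)
for a, b in zip(training_melody, training_melody[1:]):
    transitions[a].append(b)

def generate(start: int, length: int, seed: int = 0) -> list[int]:
    """Sample a melody by walking the learned transition table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1]) or [start]   # fall back if state unseen
        out.append(rng.choice(choices))
    return out

print(generate(start=60, length=8))
```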

    A Standardised Procedure for Evaluating Creative Systems: Computational Creativity Evaluation Based on What it is to be Creative

    Computational creativity is a flourishing research area, with a variety of creative systems being produced and developed. Creativity evaluation has not kept pace with system development, however, and there is an evident lack of systematic evaluation of the creativity of these systems in the literature. This is partially due to difficulties in defining what it means for a computer to be creative; indeed, there is no consensus on this for human creativity, let alone its computational equivalent. This paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS). SPECS is a three-step process: stating what it means for a particular computational system to be creative, deriving tests based on these statements, and performing those tests. To assist this process, the paper offers a collection of key components of creativity, identified empirically from discussions of human and computational creativity. Using this approach, the SPECS methodology is demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
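    The three SPECS steps could be organised in code roughly as sketched below; the criteria and tests are illustrative placeholders, not the key components proposed in the paper.

```python
# A minimal, hypothetical sketch of the three SPECS steps:
# (1) state creativity criteria for the system, (2) derive a test per criterion,
# (3) perform the tests against the system's output. The criteria and tests here
# are illustrative placeholders only.

# Step 1: statements of what "creative" means for this particular system.
criteria = {
    "novelty": "output should differ from the training material",
    "value": "output should be judged musically acceptable by listeners",
}

# Step 2: derive one test per statement (trivial stand-ins here).
def test_novelty(output, training_set):
    return output not in training_set

def test_value(listener_ratings):
    return sum(listener_ratings) / len(listener_ratings) >= 3.0   # e.g. 1-5 scale

# Step 3: perform the tests and report per-criterion results.
results = {
    "novelty": test_novelty([60, 64, 67], [[60, 62, 64]]),
    "value": test_value([4, 3, 5]),
}
print(results)
```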

    An explicitly structured control model for exploring search space: chorale harmonisation in the style of J.S. Bach

    In this research, we present a computational model which performs four-part harmonisation in the style of J.S. Bach. Harmonising Bach chorales is a hard AI problem, comparable to natural language understanding. In our approach, we explore the issue of gaining control over the search space in an explicit way for the chorale harmonisation task. In general, control over the search space may come from both domain-dependent and domain-independent control knowledge. Our explicit control emphasises domain-dependent control knowledge, which enables us to map a clearer relationship between the control applied and its effects. Two examples of domain-dependent control are a plan of tasks to be done and heuristics stating properties of the domain. Examples of domain-independent control are notions such as temperature values in an annealing method, mutation rates in Genetic Algorithms, and weights in Artificial Neural Networks.

    The appeal of the knowledge-based approach lies in the accessibility of the control if required. Our system exploits this concept extensively. Control is explicitly expressed by weaving different atomic definitions (i.e. the rules, tests and measures) together with appropriate control primitives. Each expression constructed is called a control definition, which is hierarchical by nature.

    One drawback of the knowledge-based approach is that, as the system grows bigger, the cost of exploiting the newly added knowledge grows exponentially, leading to an intractable search space. To reduce this intractability, we partially search the space at the meta-level. This meta-level architecture reduces the complexity of the search by exploiting search at the meta-level, which has a smaller search space.

    The experiments show that an explicitly structured control offers greater flexibility in controlling the search space, as it allows the control definitions to be manipulated and modified with great flexibility. This is a crucial element in performing partial search over a big search space. As the control can be examined, the system also potentially supports elaborate explanations of the system's activities and reflections at the meta-level.
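    The sketch below illustrates, in a hypothetical and much simplified form, the notion of explicit domain-dependent control: atomic definitions (a rule and a test) are woven into a control definition that prunes candidate chords during search. The rules and chords are assumptions, not the thesis's actual knowledge base.

```python
# A minimal sketch of explicit, domain-dependent control: atomic definitions
# (a rule and a test) are combined into a control definition that prunes
# candidate chords. The rules and chords below are illustrative assumptions.

def no_parallel_fifths(prev_chord, chord):
    """Rule: reject candidates forming parallel perfect fifths with the previous chord."""
    for i in range(len(prev_chord)):
        for j in range(i + 1, len(prev_chord)):
            was_fifth = (prev_chord[j] - prev_chord[i]) % 12 == 7
            is_fifth = (chord[j] - chord[i]) % 12 == 7
            if was_fifth and is_fifth:
                return False
    return True

def in_vocal_range(prev_chord, chord, low=40, high=81):
    """Test: every voice stays inside a plausible SATB compass (MIDI note numbers)."""
    return all(low <= n <= high for n in chord)

# A simple control definition: apply the atomic definitions in a fixed plan order.
control_definition = [no_parallel_fifths, in_vocal_range]

def prune(prev_chord, candidates):
    """Keep only the candidate chords that pass every definition in the control definition."""
    return [c for c in candidates if all(rule(prev_chord, c) for rule in control_definition)]

prev = [48, 55, 64, 72]                               # C major voicing, bass to soprano
candidates = [[50, 57, 65, 74], [43, 59, 62, 67]]     # first has a parallel fifth, second does not
print(prune(prev, candidates))                        # only the second candidate survives
```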

    A Functional Taxonomy of Music Generation Systems

    Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.

    Algorithmic composition of music in real-time with soft constraints

    Music has been the subject of formal approaches for a long time, ranging from Pythagoras' elementary research on tonal systems to J. S. Bach's elaborate formal composition techniques. Especially in the 20th century, much music was composed based on formal techniques: algorithmic approaches for composing music were developed by composers like A. Schoenberg as well as in the scientific community. So far, a variety of mathematical techniques have been employed for composing music, e.g. probability models, artificial neural networks or constraint-based reasoning. More recently, interactive music systems have become popular: existing songs can be replayed with musical video games, and original music can be interactively composed with easy-to-use applications running, e.g., on mobile devices. However, applications which algorithmically generate music in real-time based on user interaction are mostly experimental and limited in either interactivity or musicality. There are many enjoyable applications, but there are also many opportunities for improvements and novel approaches.

    The goal of this work is to provide a general and systematic approach for specifying and implementing interactive music systems. We introduce an algebraic framework for interactively composing music in real-time with a reasoning technique called 'soft constraints': this technique allows modeling and solving a large range of problems and is suited particularly well for problems with soft and concurrent optimization goals. Our framework is based on well-known theories for music and soft constraints and allows specifying interactive music systems by declaratively defining 'how the music should sound' with respect to both user interaction and musical rules. Based on this core framework, we introduce an approach for interactively generating music similar to existing melodic material. With this approach, musical rules can be defined by playing notes (instead of writing code) in order to make interactively generated melodies comply with a certain musical style.

    We introduce an implementation of the algebraic framework in .NET and present several concrete applications: 'The Planets' is an application controlled by a table-based tangible interface where music can be interactively composed by arranging planet constellations. 'Fluxus' is an application geared towards musicians which allows training melodic material that can be used to define musical styles for applications geared towards non-musicians. Based on musical styles trained with the Fluxus sequencer, we introduce a general approach for transforming spatial movements into music and present two concrete applications: the first is controlled by a touch display, the second by a motion tracking system. Finally, we investigate how interactive music systems can be used in the area of pervasive advertising in general and how our approach can be used to realize 'interactive advertising jingles'.
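    A minimal sketch of the soft-constraint idea behind such a framework: candidate next notes are scored by several weighted soft constraints (one reflecting user interaction, two reflecting musical rules) and the best-scoring note is chosen even if no note satisfies everything. The constraints and weights are illustrative assumptions, not the thesis's model.

```python
# A minimal sketch of soft-constraint note selection: each candidate next note is
# scored by several soft constraints, the scores are combined with weights, and
# the best-scoring note wins. Constraints and weights are illustrative assumptions.

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def follows_gesture(note, target_pitch):
    """User interaction: prefer notes close to the pitch implied by the gesture."""
    return 1.0 / (1.0 + abs(note - target_pitch))

def in_scale(note):
    """Musical rule: prefer notes belonging to the current scale."""
    return 1.0 if note % 12 in C_MAJOR else 0.0

def smooth_motion(note, prev_note):
    """Musical rule: prefer small melodic intervals."""
    return 1.0 / (1.0 + abs(note - prev_note))

def choose_next(prev_note, target_pitch, candidates, weights=(2.0, 1.0, 1.0)):
    """Pick the candidate note with the highest weighted sum of constraint scores."""
    w_gesture, w_scale, w_motion = weights
    def score(n):
        return (w_gesture * follows_gesture(n, target_pitch)
                + w_scale * in_scale(n)
                + w_motion * smooth_motion(n, prev_note))
    return max(candidates, key=score)

# Example: the user gestures toward a higher register while playing around middle C.
print(choose_next(prev_note=60, target_pitch=67, candidates=range(55, 72)))
```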

    Playing with Cases: Rendering Expressive Music with Case-Based Reasoning

    This article surveys long-term research on the problem of rendering expressive music by means of AI techniques, with an emphasis on case-based reasoning (CBR). Following a brief overview discussing why people prefer listening to expressive music instead of non-expressive synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the article we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on Tempo-Express, a case-based reasoning system developed at our institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work that complements the audio information with information about the gestures of the musician. Music is played through our bodies; therefore, capturing the gestures of the performer is a fundamental aspect that has to be taken into account in future expressive music rendering. This article is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011. This research is partially supported by the Ministry of Science and Innovation of Spain under the project NEXT-CBR (TIN2009-13692-C03-01) and the Generalitat de Catalunya AGAUR Grant 2009-SGR-1434.
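    As a rough, hypothetical illustration of the case-based idea behind a system like Tempo-Express (not its actual implementation), the sketch below retrieves a stored performance case with a similar tempo change and reuses its per-note duration ratios instead of scaling the performance uniformly; the cases and values are assumptions.

```python
# A minimal sketch (not Tempo-Express itself) of case-based tempo transformation:
# given a requested tempo change, retrieve a stored case with a similar change and
# reuse its per-note duration ratios rather than scaling uniformly.
# The cases and values below are illustrative assumptions.

# Each case: (source_bpm, target_bpm, per-note duration ratios from a human performance)
CASES = [
    (100, 120, [0.80, 0.85, 0.78, 0.83]),
    (100, 80,  [1.30, 1.22, 1.28, 1.25]),
]

def retrieve(source_bpm, target_bpm):
    """Pick the case whose tempo change is closest to the requested one."""
    requested = target_bpm / source_bpm
    return min(CASES, key=lambda c: abs(c[1] / c[0] - requested))

def transform(note_durations, source_bpm, target_bpm):
    """Apply the retrieved case's per-note ratios (cycled if needed) to the input."""
    _, _, ratios = retrieve(source_bpm, target_bpm)
    return [d * ratios[i % len(ratios)] for i, d in enumerate(note_durations)]

# Example: slow a short phrase from 100 to 80 BPM with non-uniform, case-derived timing.
print(transform([0.5, 0.5, 1.0, 0.5, 0.5], source_bpm=100, target_bpm=80))
```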