Modelling the live-electronics in electroacoustic music using particle systems
Contemporary music is largely influenced by technology. Empowered by the currently available tools and resources, composers can not only compose with sounds but also compose the sounds themselves.
Personal computers running intuitive, interactive audio applications and development tools allow a vast range of real-time manipulations of live instrumental input, as well as real-time sound generation through synthesis techniques. Consequently, achieving the desired sonority and interaction between electronic and acoustic sounds in real time depends deeply on the choice and technical implementation of the audio processes and logical structures that perform the electronic part of the composition.
Due to the artistic and technical complexity of developing and implementing such a work, a common strategy historically adopted by composers is to develop the composition in collaboration with a technology expert, who in this context is known as a musical assistant. From this perspective, the work of the musical assistant can be considered one of translating musical, artistic and aesthetic concepts into mathematical algorithms and audio processes.
The work presented in this dissertation addresses the problem of choosing, combining and manipulating the audio processes and logical structures that constitute the live-electronics (i.e. the electronic part of a mixed-music composition) of a contemporary electroacoustic composition, using particle systems to model and simulate the dynamic behaviors that reflect the conceptual and aesthetic principles envisaged by the composer for a given musical piece.
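Particle systems of the kind invoked here are typically collections of simple agents with position, velocity and lifetime, updated frame by frame. As a rough, purely illustrative sketch (the class names, emission rate and dynamics below are assumptions, not the dissertation's actual model), aggregate quantities such as particle density could then be polled to drive audio parameters:

```python
import random

class Particle:
    """A particle with position, velocity, and a remaining lifetime."""
    def __init__(self, x, vx, life):
        self.x, self.vx, self.life = x, vx, life

    def step(self, dt):
        self.x += self.vx * dt
        self.life -= dt

class ParticleSystem:
    """Emits, updates, and culls particles each frame; aggregate state
    (e.g. density) can be mapped onto audio controls."""
    def __init__(self, emit_rate=5, seed=0):
        self.rng = random.Random(seed)  # seeded for reproducibility
        self.emit_rate = emit_rate
        self.particles = []

    def step(self, dt):
        # Emit new particles with random velocity and lifetime.
        for _ in range(self.emit_rate):
            self.particles.append(
                Particle(0.0, self.rng.uniform(-1, 1), self.rng.uniform(0.5, 2.0)))
        # Advance all particles, then drop the expired ones.
        for p in self.particles:
            p.step(dt)
        self.particles = [p for p in self.particles if p.life > 0]

    def density(self):
        return len(self.particles)

ps = ParticleSystem()
for _ in range(10):       # simulate one second at 10 frames per second
    ps.step(0.1)
```

A quantity like `ps.density()` could, for instance, be mapped to the grain density of a granular synthesis process.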
The research work begins with a thorough identification and analysis of the agents, processes and structures present in the live-electronics system of a mixed-music composition. From this analysis, a logical formalization of a typical live-electronics system is proposed and then adapted to integrate a particle-based modelling strategy.
From this formalization, a theoretical and practical framework for developing and implementing live-electronics systems for mixed-music compositions using particle systems is proposed. The framework is tested and validated in the development of several mixed-music compositions by different composers, in real professional contexts.
From the analysis of the case studies and the logical formalization, together with the feedback given by the composers, it can be concluded that the proposed particle-system modelling method is effective in assisting the conceptual translation of
musical and aesthetic ideas into implementable audio processing software.
Modelling the live-electronics in electroacoustic music using particle systems
Developing the live-electronics for a contemporary electroacoustic piece is a complex process that normally involves the transfer of artistic and aesthetic concepts between the composer and the musical assistant. Translating musical, artistic and aesthetic concepts into technical terms, by means of algorithms and mathematical parameters, is seldom an easy or straightforward task. Using a particle system to describe the dynamics and characteristics of compositional parameters can prove an effective way of achieving a meaningful relationship between compositional aspects and their technical implementation. This paper describes a method for creating and modelling a particle system based on compositional parameters, and for mapping those parameters onto digital audio processes. An implementation of this method is described, as well as its use in the development of the work O Farfalhar das Folhas (The rustling of leaves) (2010), for one flutist, one clarinetist, violin, violoncello, piano and live-electronics, by Flo Menezes.
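The paper's actual mapping is not reproduced here; the following hypothetical sketch only illustrates the general idea of reducing particle state to a handful of control values for audio processes (the control names `grain_density`, `filter_cutoff_hz` and `pan`, and all ranges, are invented for this example):

```python
def map_particles_to_audio(particles, cutoff_range=(200.0, 8000.0)):
    """Map aggregate particle statistics to hypothetical audio-process controls.

    particles: list of (position, velocity) pairs, positions in [-1, 1].
    Returns a dict of control values; the mapping itself is illustrative only.
    """
    if not particles:
        return {"grain_density": 0.0, "filter_cutoff_hz": cutoff_range[0], "pan": 0.0}
    n = len(particles)
    mean_pos = sum(p for p, _ in particles) / n
    mean_speed = sum(abs(v) for _, v in particles) / n
    lo, hi = cutoff_range
    return {
        "grain_density": float(n),                                  # more particles -> denser granulation
        "filter_cutoff_hz": lo + min(mean_speed, 1.0) * (hi - lo),  # faster motion -> brighter sound
        "pan": max(-1.0, min(1.0, mean_pos)),                       # mean position -> stereo pan
    }

controls = map_particles_to_audio([(0.2, 0.5), (-0.4, 0.8)])
```

In a real patch these values would be sent each frame to the audio engine (e.g. as control messages) rather than computed once.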
Deep Learning Techniques for Music Generation -- A Survey
This paper is a survey and an analysis of different ways of using deep
learning (deep artificial neural networks) to generate musical content. We
propose a methodology based on five dimensions for our analysis:
Objective - What musical content is to be generated? Examples are: melody,
polyphony, accompaniment or counterpoint. - For what destination and for what
use? To be performed by a human (in the case of a musical score) or by a
machine (in the case of an audio file).
Representation - What are the concepts to be manipulated? Examples are:
waveform, spectrogram, note, chord, meter and beat. - What format is to be
used? Examples are: MIDI, piano roll or text. - How will the representation be
encoded? Examples are: scalar, one-hot or many-hot.
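The encoding choices above can be made concrete with a toy example: a single melody note fits a one-hot vector, while a chord, which activates several pitches at once, needs a many-hot vector (the 12-element pitch-class vocabulary below is an assumption for illustration):

```python
def one_hot(index, size):
    """One-hot: exactly one active element, e.g. a single pitch in a melody."""
    v = [0] * size
    v[index] = 1
    return v

def many_hot(indices, size):
    """Many-hot: several active elements, e.g. the pitches of a chord."""
    v = [0] * size
    for i in indices:
        v[i] = 1
    return v

# A toy vocabulary of 12 pitch classes (C=0 ... B=11).
melody_note = one_hot(0, 12)             # a single C
c_major_chord = many_hot([0, 4, 7], 12)  # C, E, G sounding together
```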
Architecture - What type(s) of deep neural network is (are) to be used?
Examples are: feedforward network, recurrent network, autoencoder or generative
adversarial networks.
Challenge - What are the limitations and open challenges? Examples are:
variability, interactivity and creativity.
Strategy - How do we model and control the process of generation? Examples
are: single-step feedforward, iterative feedforward, sampling or input
manipulation.
For each dimension, we conduct a comparative analysis of various models and
techniques, and we propose a tentative multidimensional typology. This
typology is bottom-up, based on the analysis of many existing deep-learning-based
systems for music generation selected from the relevant literature. These
systems are described and used to exemplify the various choices of
objective, representation, architecture, challenge and strategy. The last
section includes some discussion and prospects.
Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
Compositions created with constraint programming
This chapter surveys music constraint programming systems, and how composers have used them. The chapter motivates and explains how users of such systems describe intended musical results with constraints. This approach to algorithmic composition is similar to the way declarative and modular compositional rules have successfully been used in music theory for centuries as a device to describe composition techniques. In a systematic overview, this survey highlights the respective strengths of different approaches and systems from a composer's point of view, complementing other more technical surveys of this field. This text describes the music constraint systems PMC, Score-PMC, PWMC (and its successor Cluster Engine), Strasheela and Orchidée -- most are libraries of the composition systems PWGL or OpenMusic. These systems are shown in action by discussing the composition process of specific works by Jacopo Baboni-Schilingi, Magnus Lindberg, Örjan Sandred, Torsten Anders, Johannes Kretz and Jonathan Harvey
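None of the systems named above is reproduced here, but the declarative flavour of music constraint programming can be suggested with a minimal sketch: rules are stated as independent predicates, and a backtracking search finds a melody satisfying all of them (the pitch range and the two rules are invented for this example):

```python
def solve_melody(length, pitches, constraints, prefix=()):
    """Tiny backtracking search: extend `prefix` one pitch at a time,
    keeping only extensions that satisfy every constraint predicate.
    Each constraint takes the partial melody and returns True/False."""
    if len(prefix) == length:
        return list(prefix)
    for p in pitches:
        candidate = prefix + (p,)
        if all(c(candidate) for c in constraints):
            result = solve_melody(length, pitches, constraints, candidate)
            if result is not None:
                return result
    return None  # dead end: caller backtracks to the previous choice

# Declarative, modular rules in the spirit of compositional constraints:
no_repeat = lambda m: len(m) < 2 or m[-1] != m[-2]
step_motion = lambda m: len(m) < 2 or abs(m[-1] - m[-2]) <= 2  # at most a whole step

melody = solve_melody(5, range(60, 72), [no_repeat, step_motion])
```

Adding or removing a rule changes the search space without touching the solver, which is the modularity the chapter attributes to constraint-based composition.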
AI Methods in Algorithmic Composition: A Comprehensive Survey
Algorithmic composition is the partial or total automation of the process of music composition
by using computers. Since the 1950s, different computational techniques related to
Artificial Intelligence have been used for algorithmic composition, including grammatical
representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint
programming and evolutionary algorithms. This survey aims to be a comprehensive
account of research on algorithmic composition, presenting a thorough view of the field for
researchers in Artificial Intelligence.
This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
Intelligent assistant for music practice
Generally, the present disclosure is directed to techniques that automatically provide feedback and suggestions to musicians. In particular, in some implementations, the systems and methods of the present disclosure can include or otherwise leverage one or more machine-learned models to provide real-time feedback to musicians based on audio and/or video of the musician playing music. The techniques of this disclosure use various input features (e.g., the musician's practice piece, references from a database of musical scores, and data from sensors such as microphones and cameras) to analyze the musician's playing and provide real-time feedback or suggestions for corrections to be made, e.g., changing the tempo, correcting a sharp or flat note (acting as an intelligent tuner), or suggesting practice pieces.
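As one concrete fragment of what an "intelligent tuner" might compute, the following sketch (an assumption for illustration, not the disclosure's implementation) converts a detected frequency into the nearest equal-tempered note and its deviation in cents:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz, a4_hz=440.0):
    """Return (note name, octave, cents offset) for a detected frequency,
    relative to 12-tone equal temperament with A4 as reference."""
    semitones = 12 * math.log2(freq_hz / a4_hz)  # signed distance from A4
    nearest = round(semitones)                   # closest equal-tempered note
    cents = (semitones - nearest) * 100          # deviation from that note
    midi = 69 + nearest                          # A4 = MIDI note 69
    return NOTE_NAMES[midi % 12], midi // 12 - 1, cents

name, octave, cents = nearest_note(446.0)  # a slightly sharp A4
```

A feedback system could then flag any note whose absolute cents offset exceeds a chosen tolerance.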
Generation of Two-Voice Imitative Counterpoint from Statistical Models
Generating new music based on rules of counterpoint has been deeply studied in music informatics. In this article we go further, exploring a method for generating new music in the style of Palestrina that combines statistical generation and pattern discovery. A template piece is used for pattern discovery, and the patterns are selected and organized according to a probabilistic distribution, using horizontal viewpoints to describe melodic properties of events. Once the template is covered with patterns, two-voice counterpoint in a florid style is generated into those patterns using a first-order Markov model. The template method addresses the problems of coherence and imitation not tackled in previous research on counterpoint generation. To construct the Markov model, vertical slices of pitch and rhythm are compiled over a large corpus of dyads from Palestrina masses. The template enforces restrictions that filter the possible paths through the generation process. A double-backtracking algorithm handles cases where no solution is found at some point within a generation path. Results are evaluated by both information content and listener evaluation, and the paper concludes with a proposed relationship between musical quality and information content. Part of this research was presented at SMC 2016 in Hamburg, Germany.
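The article's corpus and model are not reproduced here; as a minimal sketch of the first-order Markov idea over dyad slices (the toy corpus of (lower pitch, upper pitch) pairs below is invented), transitions between consecutive slices are counted and then sampled:

```python
import random
from collections import defaultdict

def train_markov(sequence):
    """First-order Markov model: record which state follows each state."""
    table = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        table[a].append(b)  # duplicates preserve transition frequencies
    return table

def generate(table, start, length, seed=0):
    """Random walk through the transition table; stops early at a dead end
    (the article's double-backtracking handles such dead ends instead)."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and table[out[-1]]:
        out.append(rng.choice(table[out[-1]]))
    return out

# Toy corpus of dyads standing in for vertical slices of Palestrina masses.
corpus = [(60, 67), (62, 65), (60, 67), (64, 67), (62, 65), (60, 64)]
model = train_markov(corpus)
line = generate(model, (60, 67), 8)
```

Every step of the walk is a transition observed in the corpus, which is the sense in which the generated counterpoint stays within the modelled style.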