73 research outputs found

    Amergent Music: behavior and becoming in technoetic & media arts

    Technoetic and media arts are environments of mediated interaction and emergence, where meaning is negotiated by individuals through a personal examination and experience—or becoming—within the mediated space. This thesis examines these environments from a musical perspective and considers how sound functions as an analog to this becoming. Five distinct, original musical works explore how the emergent dynamics of mediated, interactive exchange can be leveraged in the construction of musical sound. In the context of this research, becoming can be understood relative to Henri Bergson’s description of the appearance of reality—something that is making or unmaking but is never made. Music conceived according to a linear model is essentially fixed in time. It is unable to recognize or respond to the becoming of interactive exchange, which is marked by frequent and unpredictable transformation. This research abandons linear musical approaches and looks to generative music as a way to reconcile the dynamics of mediated interaction with a musical listening experience. The specifics of this relationship are conceptualized in the structaural coupling model, which borrows from Maturana and Varela’s “structural coupling.” The person interacting and the generative musical system are compared to autopoietic unities, each responding to mutual perturbations while maintaining independence and autonomy. Musical autonomy is sustained through generative techniques and organized within a psychogeographical framework. In the way that cities invite use and communicate boundaries, the individual sounds of a musical work create an aural context that is legible to the listener, rendering the consequences or implications of any choice audible. This arrangement of sound, as it relates to human presence in a technoetic environment, challenges many existing assumptions, including the idea that “the sound changes.” Change can instead be viewed as a movement predicated on behavior. Amergent music is brought forth through kinds of change, or sonic movement, that are more robustly explored as a dimension of musical behavior. Listeners hear change, but it is the result of behavior that arises from within an autonomous musical system relative to the perturbations sensed within its environment. Amergence propagates through the effects of emergent dynamics coupled to the affective experience of continuous sonic transformation.
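
    The structaural coupling described above can be illustrated with a minimal sketch: a generative process that maintains its own autonomous state and merely adjusts its behaviour when perturbed, rather than mapping input directly onto sound. All names, values, and mappings below are hypothetical and are not taken from the thesis.

import random

class GenerativeVoice:
    def __init__(self, pitches=(60, 62, 64, 67, 69)):  # a pentatonic pool (MIDI numbers)
        self.pitches = list(pitches)
        self.energy = 0.5  # internal state that evolves on its own

    def perturb(self, amount):
        # A perturbation from the environment (e.g. listener activity) nudges,
        # but never dictates, the internal state.
        self.energy = min(1.0, max(0.0, self.energy + amount))

    def tick(self):
        # Autonomous drift of the internal state.
        self.energy = min(1.0, max(0.0, self.energy + random.uniform(-0.05, 0.05)))
        # Behaviour: denser material at higher energy; silence is also a behaviour.
        if random.random() < 0.3 + 0.6 * self.energy:
            return random.choice(self.pitches)
        return None

voice = GenerativeVoice()
voice.perturb(0.2)  # the listener does something in the mediated space
print([voice.tick() for _ in range(8)])  # the system answers with behaviour, not a fixed response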

    Modelling the live-electronics in electroacoustic music using particle systems

    Contemporary music is largely influenced by technology. Empowered by currently available tools and resources, composers can not only compose with sounds but also compose the sounds themselves. Personal computers equipped with intuitive, interactive audio applications and development tools allow a vast range of real-time manipulations of live instrumental input, as well as real-time sound generation through synthesis techniques. Consequently, achieving the desired sonority and interaction between electronic and acoustic sounds in real time depends heavily on the choice and technical implementation of the audio processes and logical structures that realize the electronic part of the composition. Because developing and implementing such a complex artistic work is both artistically and technically demanding, a strategy historically adopted by composers is to develop the composition in collaboration with a technology expert, known in this context as a musical assistant. In this perspective, the work of the musical assistant can be seen as one of translating musical, artistic, and aesthetic concepts into mathematical algorithms and audio processes. The work presented in this dissertation addresses the problem of choosing, combining, and manipulating the audio processes and logical structures that make up the live-electronics (i.e., the electronic part of a mixed music composition) of a contemporary electroacoustic composition, by using particle systems to model and simulate the dynamic behaviors that reflect the conceptual and aesthetic principles envisaged by the composer for a given piece. The research begins with a thorough identification and analysis of the agents, processes, and structures present in the live-electronics system of a mixed music composition. From this analysis, a logical formalization of a typical live-electronics system is proposed and then adapted to integrate a particle-based modelling strategy. From this formalization, a theoretical and practical framework for developing and implementing live-electronics systems for mixed music compositions using particle systems is proposed. The framework is tested and validated in the development of several mixed music compositions by different composers, in a real professional context. From the analysis of the case studies and the logical formalization, together with the feedback given by the composers, it is concluded that the proposed particle-system modelling method is effective in assisting the translation of musical and aesthetic ideas into implementable audio processing software.
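
    The core idea of driving live-electronics parameters from particle dynamics can be sketched roughly as follows. The particle attributes, the update rule, and the parameter mappings (filter cutoff, grain density, wet/dry) are illustrative assumptions, not the dissertation's actual framework.

import random
from dataclasses import dataclass

@dataclass
class Particle:
    pos: float = 0.0   # normalized position in [0, 1)
    vel: float = 0.0   # per-step displacement
    life: float = 1.0  # decays towards 0, then the particle is recycled

class ParticleSystem:
    def __init__(self, n=16):
        self.particles = [Particle(random.random(), random.uniform(-0.01, 0.01))
                          for _ in range(n)]

    def step(self, dt=0.05):
        for p in self.particles:
            p.pos = (p.pos + p.vel) % 1.0
            p.life -= dt
            if p.life <= 0:  # respawn keeps the texture evolving
                p.pos, p.vel, p.life = random.random(), random.uniform(-0.01, 0.01), 1.0

    def control_values(self):
        # Hypothetical mapping: mean position -> filter cutoff, mean speed ->
        # grain density, proportion of "young" particles -> wet/dry balance.
        n = len(self.particles)
        mean_pos = sum(p.pos for p in self.particles) / n
        mean_speed = sum(abs(p.vel) for p in self.particles) / n
        young = sum(p.life > 0.5 for p in self.particles) / n
        return {"cutoff_hz": 200 + 5000 * mean_pos,
                "grain_density": 5 + 2000 * mean_speed,
                "wet_dry": young}

psys = ParticleSystem()
for _ in range(10):
    psys.step()
print(psys.control_values())  # these values would be sent on to the audio engine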

    Agent-Based Graphic Sound Synthesis and Acousmatic Composition

    For almost a century, composers and engineers have been attempting to create systems that allow drawings and imagery to behave as intuitive and efficient musical scores. Despite the intuitive interactions that these systems afford, they remain somewhat underutilised by contemporary composers. The research presented here explores the concepts of agency and artificial ecosystems as a means of creating and exploring new graphic sound synthesis algorithms. These algorithms are designed to investigate the creation of organic musical gesture and texture using granular synthesis. The output of this investigation consists of an original software artefact, The Agent Tool, alongside a suite of acousmatic musical works that the software was designed to facilitate. When designing new musical systems for creative exploration with vast parametric controls, careful constraints should be put in place to encourage focused development. In this instance, an evolutionary computing model is utilised as part of an iterative development cycle. Each iteration of the system’s development coincides with a composition presented in this portfolio, and the features developed in each iteration subsequently serve the author’s compositional practice and inspiration. As the software package is designed to be flexible and open ended, each composition represents a refinement of features and controls for the creation of musical gesture and texture. This document discusses the creative inspirations behind each composition alongside the features and agents that were created. The research is contextualised through a review of established literature on graphic sound synthesis, evolutionary musical computing, and ecosystemic approaches to sound synthesis and control.
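
    A hypothetical sketch of the agent-based approach described above: agents wander over a greyscale image and the pixel values they visit are mapped to granular synthesis parameters. The image, movement rule, and parameter mapping are assumptions for illustration and are not taken from The Agent Tool.

import random

WIDTH, HEIGHT = 64, 64
# A stand-in greyscale image: brightness rises smoothly from left to right.
image = [[x / (WIDTH - 1) for x in range(WIDTH)] for _ in range(HEIGHT)]

class Agent:
    def __init__(self):
        self.x, self.y = random.randrange(WIDTH), random.randrange(HEIGHT)

    def step(self):
        # Random walk over the image, wrapping at the edges.
        self.x = (self.x + random.choice((-1, 0, 1))) % WIDTH
        self.y = (self.y + random.choice((-1, 0, 1))) % HEIGHT

    def grain(self):
        b = image[self.y][self.x]            # brightness 0..1 under the agent
        return {"duration_ms": 10 + 90 * b,  # brighter pixel -> longer grain
                "pitch_midi": 48 + round(36 * b),
                "amplitude": 0.2 + 0.8 * b}

agents = [Agent() for _ in range(8)]
for _ in range(4):                        # four synthesis frames
    for a in agents:
        a.step()
    grains = [a.grain() for a in agents]  # would be rendered by a granular engine
print(grains[:2])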

    L-Music: An Approach to Assisted Music Composition Using L-Systems

    Generative music systems have been researched for a long time, and the scientific corpus of this field is now making its way into the hands of everyday musicians and composers. With these tools, the creative process of writing music can be augmented or completely replaced by machines. The work in this document aims to contribute to research on assisted music composition systems. To that end, a review of the state of the art was performed, and we found a plethora of methodologies, each providing its own interesting results (to name a few: neural networks, statistical models, and formal grammars). We identified Lindenmayer Systems, or L-Systems, as the most interesting and least explored approach for developing an assisted music composition prototype, aptly named L-Music, due to their ability to produce complex outputs from simple structures. L-Systems were initially proposed as parallel string-rewriting grammars to model algae growth. Their applications soon turned graphical (e.g., drawing fractals), and eventually they were applied to music generation. Given that our prototype is assistive, user interface and user experience design also received due consideration. The implemented interface is straightforward and simple to use, with a structured visual hierarchy and flow. It enables musicians and composers to select their desired instruments, select L-Systems for generating music or create their own custom ones, and edit musical parameters (e.g., scale and octave range) to further control the output of L-Music: musical fragments that a musician or composer can then use in their own works. Three musical interpretations of L-Systems were implemented: a random interpretation, a scale-based interpretation, and a polyphonic interpretation. All three approaches produced interesting musical ideas, which we found to be potentially usable by musicians and composers in their own creative works. Although positive results were obtained, the prototype leaves room for many improvements in future work. Further musical interpretations can be added, as well as more musical parameters for the user to edit. We also identified giving the user control over the musical meaning of an L-System as an interesting future challenge.
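
    The scale-based interpretation lends itself to a short sketch: expand an L-System by parallel rewriting, then read each symbol as a step on a scale. The axiom, rules, and symbol-to-scale mapping below are illustrative choices, not necessarily those used in L-Music.

def expand(axiom, rules, iterations):
    # Parallel rewriting: every symbol is replaced in each pass.
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "AB", "B": "A"}           # the classic algae L-System
string = expand("A", rules, 5)          # "ABAABABAABAAB"

# Scale-based reading: walk up or down a C-major scale, one step per symbol.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI pitches
degree, melody = 0, []
for ch in string:
    if ch == "A":
        degree = (degree + 1) % len(C_MAJOR)   # step up
    elif ch == "B":
        degree = (degree - 1) % len(C_MAJOR)   # step down
    melody.append(C_MAJOR[degree])

print(melody)  # a pitch fragment a composer could take as a starting point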

    Application of Intermediate Multi-Agent Systems to Integrated Algorithmic Composition and Expressive Performance of Music

    We investigate the properties of a new multi-agent system (MAS) for computer-aided composition called IPCS (pronounced “ipp-siss”), the Intermediate Performance Composition System, which generates expressive performance as part of its compositional process and produces emergent melodic structures through a novel multi-agent process. IPCS consists of a small-to-medium-sized collection of agents (2 to 16) in which each agent can perform monophonic tunes and learn monophonic tunes from other agents. Each agent has an affective state (an “artificial emotional state”) that affects how it performs music to other agents; e.g., a “happy” agent will perform “happier” music. An agent’s performance not only involves compositional changes to the music but also adds smaller changes based on expressive music performance algorithms for humanization. Every agent is initialized with a tune containing the same single note, and over the interaction period longer tunes are built through agent interaction. Agents only learn tunes performed to them by other agents if the affective content of the tune is similar to their current affective state; learned tunes are concatenated to the end of their current tune. Each agent in the society thus learns its own growing tune during the interaction process. Agents develop “opinions” of other agents that perform to them, depending on how much the performing agent helps their tunes grow, and these opinions affect who they interact with in the future. IPCS is not a mapping from multi-agent interaction onto musical features; it uses music itself as the medium through which the agents communicate emotions. In spite of the lack of explicit melodic intelligence in IPCS, the system is shown to generate non-trivial melodic pitch sequences as a result of emotional communication between agents. The melodies also have a hierarchical structure that results from the emergent social structure of the multi-agent system. The interactive humanizations produce micro-timing and loudness deviations in the melody which are shown to express its hierarchical generative structure without the need for the structural analysis software frequently used in computer music humanization.
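
    The interaction scheme described above can be sketched, under simplifying assumptions, as agents that perform their tune to one another and only learn what they hear when their affective states are close. The similarity threshold, the affect model, and the transposition-based “performance” are placeholders, not the IPCS algorithms.

import random

class Agent:
    def __init__(self, affect):
        self.affect = affect  # e.g. valence in [0, 1]
        self.tune = [60]      # every agent starts with the same single note
        self.opinions = {}    # id(other) -> how often that agent helped the tune grow

    def perform(self):
        # Performance is coloured by affect: "happier" agents transpose upward.
        shift = round(4 * (self.affect - 0.5))
        return [p + shift for p in self.tune]

    def listen(self, other):
        if abs(self.affect - other.affect) < 0.2:  # affective similarity gate
            self.tune.extend(other.perform())      # learn by concatenation
            self.opinions[id(other)] = self.opinions.get(id(other), 0) + 1

agents = [Agent(random.random()) for _ in range(6)]
for _ in range(20):                   # interaction period
    listener, performer = random.sample(agents, 2)
    listener.listen(performer)
print([len(a.tune) for a in agents])  # tunes grow unevenly, shaped by affect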

    AI Methods in Algorithmic Composition: A Comprehensive Survey

    Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming, and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence. This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
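
    As a small, generic illustration of one of the technique families listed above (probabilistic methods), the sketch below trains a first-order Markov chain on a toy pitch sequence and samples a new melody from it. It is an example of the technique class only, not code from any of the surveyed systems.

import random

training = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]  # a toy melody (MIDI pitches)

# First-order transition table: pitch -> list of observed successors.
transitions = {}
for a, b in zip(training, training[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start, length):
    melody, current = [start], start
    for _ in range(length - 1):
        # Sample a successor; fall back to the whole corpus for unseen pitches.
        current = random.choice(transitions.get(current, training))
        melody.append(current)
    return melody

print(generate(60, 16))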