
    Reef Elegy: An Auditory Display of Hawaii's 2019 Coral Bleaching Data

    This paper describes an auditory display of Hawaii's 2019 coral bleaching data by means of spatial audio and parameter-mapping methods. Selected data fields spanning 78 days are mapped to sound surrogates of coral reefs' natural soundscapes, which are progressively altered in their constituent elements as the corresponding coral locations undergo bleaching. For some of these elements, this process traces a trajectory from a dense to a sparser, reduced soundscape, while for others it translates into a move away from harmonic tones and towards complex spectra. The experiment is accompanied by a short evaluation study to contextualize it in an established aesthetic perspective space and to probe its potential for public engagement in the discourse around climate change. Comment: To appear in Proceedings of the 28th International Conference on Auditory Display (ICAD 2023).
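    The parameter-mapping strategy described above can be sketched in a few lines. The function name, the chosen parameters (event density, inharmonicity), and the mapping ranges below are illustrative assumptions, not the paper's actual mapping.

    ```python
    # Hypothetical parameter-mapping sonification sketch: a bleaching severity
    # value (0 = healthy, 1 = fully bleached) is mapped to synthesis parameters.
    # Names and ranges are assumptions for illustration only.

    def map_severity_to_params(severity):
        """Map a bleaching severity in [0, 1] to sound parameters."""
        if not 0.0 <= severity <= 1.0:
            raise ValueError("severity must be in [0, 1]")
        # Denser soundscape when healthy; sparser as bleaching progresses.
        event_density_hz = 10.0 * (1.0 - severity)
        # Harmonic tones drift towards inharmonic, complex spectra.
        inharmonicity = severity ** 2
        return {"event_density_hz": event_density_hz,
                "inharmonicity": inharmonicity}

    daily_severity = [0.0, 0.25, 0.5, 1.0]  # toy 4-day series
    trajectory = [map_severity_to_params(s) for s in daily_severity]
    ```

    A real display would drive a synthesis engine from such a trajectory, one parameter set per mapped day.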

    Satisficing Goals and Methods in Human-Machine Music Improvisations: Experiments with Dory

    Current interactive music systems for human-machine improvisation often employ sophisticated machine learning algorithms, achieving competence in style imitation and in interaction with human performers within defined musical domains. In the context of free musical improvisation, however, it is probably not desirable to interact with a musical partner whom one can largely anticipate or predict, because this might hinder the critical re-examination of one's improvisational practice, to the detriment of an open-endedness that is crucial in this particular form of musical expression. The author's contention is that, just as one would strive to collaborate with highly original and diverse musical personalities when freely improvising, a similar scenario would be desirable when collaborating with a computer system. By settling for "good enough" solutions to the problems posed by the design of the latter, and by negotiating expectations of the attainable, a more unpredictable and contradictory agent might arise. This article presents the conceptual framework and design of the author's system, together with an evaluation of three performances using it.

    Computational Music Aesthetics: a survey and some thoughts

    While computational aesthetic evaluation has been applied to images and visual output, it is not as widely employed for generative music systems. Computational aesthetic evaluation is not to be conflated with numerical evaluation of the system's output; such a notion risks offering a reduced and impoverished interpretation of the aesthetic experience, which is innately dialogical, arising between the creator or the user, the sociological context, and the creative process or product. This paper reviews common computational aesthetic measures that have been used for musical applications, while arguing for a pragmatist perspective and a framework foregrounding the primacy of intentionality and agency in inducing aesthetic responses.

    Computational Systems for Music Improvisation

    Computational music systems that afford improvised creative interaction in real time are often designed for a specific improviser and performance style. As such, the field is diverse, fragmented and lacks a coherent framework. Through analysis of examples in the field, we identify key areas of concern in the design of new systems, which we use as categories in the construction of a taxonomy. From our broad overview of the field, we select significant examples to analyse in greater depth. This analysis serves to derive principles that may help designers scaffold their work on existing innovation. We explore successful evaluation techniques from other fields and describe how they may be applied to iterative design processes for improvisational systems. We hope that by developing a more coherent design and evaluation process, we can support the next generation of improvisational music systems.

    Tōkyō kion-on: Query-based generative sonification of atmospheric data

    Presented at the 27th International Conference on Auditory Display (ICAD 2022), 24-27 June 2022, virtual conference. Amid growing environmental concerns, interactive displays of data constitute an important tool for exploring and understanding the impact of climate change on the planet's ecosystemic integrity. This paper presents Tōkyō kion-on, a query-based sonification model of Tokyo's air temperature from 1876 to 2021. The system uses a recurrent neural network architecture, an LSTM with attention, trained on a small dataset of Japanese melodies and conditioned upon the atmospheric data. After describing the model's implementation, a brief comparative illustration of the musical results is presented, along with a discussion of how the exposed hyper-parameters can promote active and non-linear exploration of the data.
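    The query-based, data-side part of such a system can be illustrated with a minimal sketch: a user query selects a year range, and the normalized temperatures from that range become a conditioning signal for the generative model. The function name and toy temperature values below are assumptions; the actual system feeds such a signal into an LSTM with attention.

    ```python
    # Illustrative sketch of query-based conditioning (data preparation only).
    # All names and values are hypothetical, not the system's actual code.

    def query_condition(temps_by_year, start, end):
        """Return min-max normalized temperatures for the queried year range."""
        years = sorted(y for y in temps_by_year if start <= y <= end)
        selected = [temps_by_year[y] for y in years]
        lo, hi = min(selected), max(selected)
        span = (hi - lo) or 1.0  # avoid division by zero for flat queries
        return [(t - lo) / span for t in selected]

    # Toy mean air temperatures (degrees Celsius) for a few years.
    temps = {1876: 13.6, 1950: 15.0, 2020: 16.5, 2021: 16.2}
    cond = query_condition(temps, 1900, 2021)
    ```

    Changing the queried range changes the conditioning vector, which is one way a listener can explore the data non-linearly.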

    High-order surrogacy for the audiovisual display of dance

    Presented at the 26th International Conference on Auditory Display (ICAD 2021), 25-28 June 2021, virtual conference. The current pandemic (COVID-19) has had considerable impact on many fronts, not least on the physical presence of humans, affecting how we relate to one another and to the natural environment. To investigate these two interactions, the notion of surrogacy, originally described by Smalley as the remoteness between source and sonic gesture, is considered and extended to include bodily gesture, for the rendering of contemporary dance performances into abstract audiovisual compositions/objects. To this end, for a given dance performance, sonification of the motion-capture data is combined with video-frame processing of the video recording. In this study, we focus on higher-order surrogacy and associate it with 1) a soundscape-ecology-inspired approach to sonification, whereby three species of sounds coexist and adapt in the environment according to the symbiotic paradigm of mutualism, and 2) a wave-space method to sonify their coevolution. Aesthetic implications of this procedure in the context of multimodal, telematic/remote and virtual systems are discussed, as disembodied presence emerges as a dominant trope in our daily experience.

    MCMA: A Symbolic Multitrack Contrapuntal Music Archive

    We present the Multitrack Contrapuntal Music Archive (MCMA, available at https://mcma.readthedocs.io), a symbolic dataset of pieces specifically curated to comprise, for any given polyphonic work, independent voices. So far, MCMA consists only of pieces from the Baroque repertoire, but we aim to extend it to other contrapuntal music. MCMA is FAIR-compliant and geared towards musicological tasks such as (computational) analysis or education, as it brings contrapuntal interactions to the fore through explicit and independent representation of voices. Furthermore, it affords more apt use of recent advances in the field of natural language processing (e.g., neural machine translation). For example, MCMA can be particularly useful in the context of language-based machine learning models for music generation. Despite its current modest size, we believe MCMA to be an important addition to online contrapuntal music databases, and we thus open it to contributions from the wider community, in the hope that MCMA can continue to grow beyond our efforts. In this article, we provide the rationale for this corpus, suggest possible use cases, offer an overview of the compiling process (data sourcing and processing), and present a brief statistical analysis of the corpus at the time of writing. Finally, we discuss future work that we endeavor to undertake.
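    The explicit, voice-independent representation advocated above lends itself directly to language-style modeling: each voice is a token sequence, and ordered pairs of voices become source/target examples. The token format and helper below are hypothetical, not MCMA's actual schema or API.

    ```python
    # Hypothetical sketch: a polyphonic piece stored as independent voices,
    # each a list of pitch_duration tokens, from which source/target pairs
    # for a translation-style model can be derived.

    piece = {
        "voice_1": ["G4_8", "A4_8", "B4_4"],  # upper part (toy tokens)
        "voice_2": ["G2_4", "D3_4"],          # lower part (toy tokens)
    }

    def voice_pairs(piece):
        """Yield every ordered pair of distinct voices as (source, target)."""
        names = sorted(piece)
        return [(piece[a], piece[b]) for a in names for b in names if a != b]

    pairs = voice_pairs(piece)  # two ordered pairs for a two-voice piece
    ```

    Because every voice is stored independently, no voice-separation step is needed before building such training pairs.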

    Modeling Baroque Two-Part Counterpoint with Neural Machine Translation

    We propose a system for contrapuntal music generation based on a Neural Machine Translation (NMT) paradigm. We consider Baroque counterpoint and are interested in modeling the interaction between any two given parts as a mapping between a given source material and an appropriate target material. As in translation, the former imposes some constraints on the latter but does not define it completely. We collate and edit a bespoke dataset of Baroque pieces, use it to train an attention-based neural network model, and evaluate the generated output via BLEU score and musicological analysis. We show that our model is able to respond with some idiomatic trademarks, such as imitation and appropriate rhythmic offset, although it falls short of having learned stylistically correct contrapuntal motion (e.g., avoidance of parallel fifths) or stricter imitative rules, such as canon.
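    BLEU-style evaluation of generated counterpoint can be illustrated with a clipped n-gram precision over note tokens, treating pitch/duration tokens like words. This is a simplified sketch with invented token sequences, not the paper's evaluation code (full BLEU also combines several n-gram orders and a brevity penalty).

    ```python
    # Clipped n-gram precision in the spirit of BLEU, over toy music tokens.
    from collections import Counter

    def ngram_precision(candidate, reference, n):
        """Clipped n-gram precision of a candidate token sequence."""
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        # Each candidate n-gram is credited at most as often as it appears
        # in the reference ("clipping").
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = sum(cand.values())
        return overlap / total if total else 0.0

    # Toy pitch_duration token sequences for a generated and a target part.
    generated = ["C4_8", "D4_8", "E4_4", "C4_8", "D4_8"]
    target    = ["C4_8", "D4_8", "E4_4", "F4_4", "D4_8"]
    p1 = ngram_precision(generated, target, 1)  # unigram precision
    ```

    Such surface-level overlap scores are exactly why the paper pairs BLEU with musicological analysis: a high n-gram precision says nothing about voice-leading rules like the avoidance of parallel fifths.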