
    DMRN+17: Digital Music Research Network One-day Workshop 2022

    DMRN+17: Digital Music Research Network One-day Workshop 2022, Queen Mary University of London, Tuesday 20th December 2022. The Digital Music Research Network (DMRN) aims to promote research in the area of digital music by bringing together researchers from UK and overseas universities and industry for its annual workshop. The workshop will include invited and contributed talks and posters, and will be an ideal opportunity for networking with others working in the area. Keynote speaker: Sander Dieleman. Title: On generative modelling and iterative refinement. Bio: Sander Dieleman is a Research Scientist at DeepMind in London, UK, where he has worked on the development of AlphaGo and WaveNet. He obtained his PhD from Ghent University in 2016, where he conducted research on feature learning and deep learning techniques for learning hierarchical representations of musical audio signals. His current research interests include representation learning and generative modelling of perceptual signals such as speech, music and visual data. DMRN+17 is sponsored by the UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM), a leading PhD research programme focused on music/audio technology and the creative industries, based at Queen Mary University of London.

    AI (r)evolution -- where are we heading? Thoughts about the future of music and sound technologies in the era of deep learning

    Artificial Intelligence (AI) technologies such as deep learning are evolving very quickly, bringing many changes to our everyday lives. To explore the future impact and potential of AI in the field of music and sound technologies, a doctoral day was held jointly by Queen Mary University of London (QMUL, UK) and Sciences et Technologies de la Musique et du Son (STMS, France). Prompt questions about current trends in AI and music were generated by academics from QMUL and STMS, and students from the two institutions then debated these questions. This report presents a summary of the student debates on the topics of Data, Impact, and the Environment; Responsible Innovation and Creative Practice; Creativity and Bias; and From Tools to the Singularity. The students represent the future generation of AI and music researchers; the academics represent the incumbent establishment. The student debates reported here capture visions, dreams, concerns, uncertainties, and contentious issues for the future of AI and music as the establishment is rightfully challenged by the next generation.

    Reflection Across AI-based Music Composition

    Reflection is fundamental to creative practice. However, the plurality of ways in which people reflect when using AI-Generated Content (AIGC) is underexplored. This paper takes AI-based music composition as a case study to explore how artist-researcher composers reflected when integrating AIGC into their music composition process. The AI tools explored range from Markov chains for music generation to variational auto-encoders for modifying timbre. We used a novel method in which our composers paused every hour to reflect on screenshots of their composing, using this documentation to write first-person accounts showcasing their subjective viewpoints on their experience. We triangulate the first-person accounts with interviews and questionnaire measures to contribute descriptions of how the composers reflected. For example, we found that many composers reflect on future directions in which to take their music whilst curating AIGC. Our findings contribute to supporting future explorations of reflection in creative HCI contexts. A minimal sketch of the simplest tool family named here, Markov chain melody generation, follows.
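    To make the named tools concrete, the sketch below shows first-order Markov chain melody generation. It is not the study's implementation: the function names (build_transitions, generate), the toy training melody, and the MIDI-note state space are assumptions made for illustration only.

        import random

        def build_transitions(notes):
            """Count note-to-note transitions and normalise them into probabilities."""
            counts = {}
            for current, nxt in zip(notes, notes[1:]):
                counts.setdefault(current, {}).setdefault(nxt, 0)
                counts[current][nxt] += 1
            return {
                state: {nxt: n / sum(following.values()) for nxt, n in following.items()}
                for state, following in counts.items()
            }

        def generate(transitions, start, length=16):
            """Sample a melody by repeatedly drawing the next note from the learned distribution."""
            melody = [start]
            for _ in range(length - 1):
                options = transitions.get(melody[-1])
                if not options:
                    break  # dead end: this note never had a successor in the training data
                choices, weights = zip(*options.items())
                melody.append(random.choices(choices, weights=weights)[0])
            return melody

        # Toy training melody as MIDI note numbers (assumed for illustration);
        # a real system would estimate the transition table from a corpus of pieces.
        training = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]
        table = build_transitions(training)
        print(generate(table, start=60))

    A first-order chain like this captures only pairwise note statistics; the variational auto-encoders the composers also used instead learn a continuous latent representation, applied in the study to timbre rather than note sequences.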