67 research outputs found

    Precipitating lights

    Rethinking the musical ensemble: a model for collaborative learning in higher education music technology

    Kudac (Kingston University Digital Arts Collective) is an electronic improvisation ensemble that brings staff and students together for weekly musicking with technology – incorporating resources ranging from conventional instruments, to computers, to hacked circuit boards. A central element of the ensemble from its inception has been its democratic approach – staff and students explore the musical possibilities and challenges together and gradually mould their practice through a free exchange. In this article we consider the contribution of this ensemble in several overlapping domains: in relation to the individual students, in the context of a higher education music department, and at the intersection of research and teaching. We first survey the structure and activities of the ensemble, contextualizing this with reference to existing research in the fields of laptop performance, free improvisation and musical identity formation. We use this as a platform for tracing how such an ensemble may aid the social construction and shaping of creative identities at both an individual and collective level. We then examine the opportunities and challenges for a music department hosting such an ensemble before highlighting areas for future study

    Taking the models back to music practice : evaluating generative transcription models built using deep learning

    We extend our evaluation of generative models of music transcriptions that were first presented in Sturm, Santos, Ben-Tal, and Korshunova (2016). We evaluate the models in five different ways: 1) at the population level, comparing statistics of 30,000 generated transcriptions with those of over 23,000 training transcriptions; 2) at the practice level, examining the ways in which specific generated transcriptions are successful as music compositions; 3) as a “nefarious tester”, seeking the music knowledge limits of the models; 4) in the context of assisted music composition, using the models to create music within the conventions of the training data; and finally, 5) taking the models to real-world music practitioners. Our work attempts to demonstrate new approaches to evaluating the application of machine learning methods to modelling and making music, and the importance of taking the results back to the realm of music practice to judge their usefulness. Our datasets and software are open and available at https://github.com/IraKorshunova/folk-rnn
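The population-level evaluation in point 1, comparing statistics of generated transcriptions with those of the training data, can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual pipeline: it compares token-frequency distributions of two toy corpora via total variation distance, and the corpora and function names are invented for this sketch.

```python
from collections import Counter

def token_distribution(transcriptions):
    """Relative frequency of each whitespace-separated token across a corpus."""
    counts = Counter(tok for t in transcriptions for tok in t.split())
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two distributions (0 = identical, 1 = disjoint)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Toy corpora standing in for ABC-notation transcriptions.
training = ["G A B c | B A G2", "E F G A | G F E2"]
generated = ["G A B c | B A G2", "c B A G | A B c2"]

d = total_variation(token_distribution(training), token_distribution(generated))
```

In practice one would compare many such statistics (pitch, interval, and duration distributions, for instance) over tens of thousands of transcriptions rather than a single token-level distance.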

    Designing interaction for co-creation

    This paper describes several compositions of live, interactive electronics in which mutual listening between the performer and the computer forms the basis for the interaction. The electronics combine algorithmically defined musical logic with input from the performer, becoming a musical partner that mixes the system's creativity with the composer's and the performer's. The paper places this approach within the context of computational creativity on the one hand and live electronics on the other. Keywords: algorithmic composition, live electronics, interaction, machine listening

    Weak interactions, strong bonds : live electronics as a complex system

    Musical and meta-musical conversations

    This collaboration emerged out of informal conversation between the authors about improvisation. Ben-Tal is a composer/researcher who has been using Music Information Retrieval (MIR) techniques and AI as tools for composition. Dolan is a performer/improviser and researcher on improvisation, creativity and expressive performance with little knowledge of music technology. Dolan became intrigued but also highly sceptical about Ben-Tal's ideas of musical dialogues between human and computer as a basis for co-creation. They agreed to meet and trial the possibility of real-time improvisation between piano and computer. By his own admission, Dolan came to this first session assuming he would prove the inadequacy of such a set-up for joint improvisation based on an extended tonal music idiom. He found himself equally surprised and alarmed when he experienced moments that felt, to him, like real dialogue with the machine. This proof-of-concept session provided the starting point for an ongoing collaboration: developing a unique duo-improvisation within the context of computationally creative tools, real-time interaction, tonal music and human-computer interaction. Central to this work are musical dialogues between Dolan on the piano and Ben-Tal's computing system as they improvise together. These are surrounded and complemented by conversations between the authors about the system, about improvisation, composition, performance, music and AI. This presentation starts from a description of the current improvisation set-up and the development that allowed us to arrive at this stage. The following section re-enacts some of the conversations that the authors engaged in, illuminating the learning and discovery process they underwent together. We end by drawing out important themes emerging from the musical and meta-musical conversations in relation to current debates around music and AI

    VR : Time Machine

    Time Machine is an immersive Virtual Reality installation that explains – in simple terms – the Striatal Beat Frequency (SBF) model of time perception. The installation was created as a collaboration between neuroscientists within the field of time perception along with a team of digital designers and audio composers/engineers. This paper outlines the process, as well as the lessons learned, while designing the virtual reality experience that aims to simplify a complex idea for a novice audience. The authors describe in detail the process of creating the world, the user experience mechanics and the methods of placing information in the virtual place in order to enhance the learning experience. The work was showcased at the 4th International Conference on Time Perspective, where the authors collected feedback from the audience. The paper concludes with a reflection on the work and some suggestions for the next iteration of the project

    How music AI is useful : engagements with composers, performers, and audiences

    Critical but often overlooked research questions in artificial intelligence (AI) applied to music involve the impact of the results for music. How and to what extent does such research contribute to the domain of music? How are the resulting models useful for music practitioners? In this article, we describe how we are addressing such questions by engaging composers, musicians, and audiences with our research. We first describe two websites we have created that make our AI models accessible to a wide audience. We then describe a professionally recorded album that we released to expert reviewers to gauge the plausibility of AI-generated material. Finally, we describe the use of our AI models as tools for co-creation. Evaluating AI research and music models in these ways illuminates their impact on music making in a range of styles and practices

    Comparing Perceptual and Computational Complexity for Short Rhythmic Patterns

    According to Leibniz, ‘Music is the hidden arithmetical exercise of a mind unconscious that it is calculating.’ The perception or experience of time is an essential aspect of listeners’ engagement with music. As such, listeners’ experience of rhythmic patterns and their aesthetic response can enhance our understanding of the perception of time. Studies by Berlyne suggest that aesthetic evaluations are low for stimuli that are too simple or too complex, with a preference for an intermediate level of complexity. In musical terms, we would expect listeners to respond negatively to music that is purely repetitive or to music that seems incomprehensibly random, and to prefer music that manages to balance familiarity with variation. We present a study that aims to match listeners’ evaluation of rhythmic complexity with computational measures of complexity. We selected five measures derived from information theory: Shannon entropy, entropy rate, excess entropy, transient information, and Kolmogorov complexity. Rhythmic sequences, covering a wide spectrum of complexity levels according to these measures, were generated algorithmically as binary sequences. These sequences were synthesized as drum patterns, with 1s as hits and 0s as rests. Thirty-two participants were asked to guess whether the last beat of each sequence was supposed to be a drum hit or a rest. We averaged the participants’ scores in order to assign an implicit rating of rhythm complexity to each sequence. We also obtained an explicit rating of complexity by asking the participants to rate the perceived difficulty of guessing the last beat for each sequence. Finally, the participants completed the Gold-MSI questionnaire and a shortened version of Raven’s matrices, in order to investigate the effects of musicality and visual pattern identification on the perception of rhythm complexity.
The Kolmogorov complexity of the sequences was correlated with the scores on the explicit task (r=.973, p<.001), and the entropy rate of the sequences was correlated with the scores on both implicit (r=.670, p=.012) and explicit tasks (r=.909, p<.001). There was also a Kolmogorov complexity-by-musicality interaction (F=5.498, p=.026), confirming the influence of musical expertise in the perception of rhythm complexity. There was no effect of the scores on the Raven's matrices, showing that auditory sequence perception and visual pattern identification seem to be different abilities. These results show that information-theoretical concepts capture some salient features of rhythm perception, and provide the framework for further studies on the aesthetic perception of rhythm
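The information-theoretic measures named in the abstract can be approximated with a short sketch. This is a minimal illustration, not the study's implementation: block entropy stands in for the entropy-rate estimate, zlib-compressed length serves as a crude computable proxy for the (uncomputable) Kolmogorov complexity, and the example drum patterns are invented.

```python
import math
import zlib
from collections import Counter

def shannon_entropy(seq):
    """Entropy (bits) of the symbol distribution of a binary string like '1010'."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_rate(seq, k=3):
    """Block-entropy estimate of the entropy rate: entropy of length-k blocks / k."""
    blocks = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / k

def kolmogorov_proxy(seq):
    """Compressed length in bytes: a crude stand-in for Kolmogorov complexity."""
    return len(zlib.compress(seq.encode()))

pattern = "1010101010101010"    # highly regular: low complexity
random_ish = "1101001000101101" # irregular: higher complexity
```

On patterns like these, the regular sequence yields a lower block-entropy estimate; the compression proxy only discriminates reliably on longer sequences, since zlib's header overhead dominates very short inputs.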