
    Musical Micro-Timing for Live Coding

    Micro-timing is an essential part of human music-making, yet it is absent from most computer music systems. Partly to address this gap, we present a novel system for generating music with style-specific micro-timing within the Sonic Pi live coding language. We use a probabilistic approach to control the exact timing according to patterns discovered in new analyses of existing micro-timing data (jembe drumming and Viennese waltz). This implementation also required the introduction of musical metre into Sonic Pi. The new metre and micro-timing systems are inherently flexible, and thus open to a wide range of creative possibilities including (but not limited to): creating new micro-timing profiles for additional styles; expanded definitions of metre; and the free mixing of one micro-timing style with the musical content of another. The code is freely available as a Sonic Pi plug-in and released open source at https://github.com/MaxTheComputerer/sonicpi-metre
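    As an illustration of the general approach described above, here is a minimal sketch of probabilistic micro-timing, not the plug-in's actual code. It assumes a hypothetical style profile giving a mean timing offset and spread (in beats) for each metric position, and samples an offset for every nominal onset; the profile values are invented placeholders, not measured jembe or Viennese waltz data.

        # Minimal sketch (not the authors' implementation): applying a
        # style-specific micro-timing profile to nominally even onsets.
        import random

        # Hypothetical profile: (mean offset, spread) in beats for each of
        # four subdivisions within a beat.
        SWUNG_PROFILE = [
            (0.00, 0.005),   # downbeat: played close to the grid
            (0.03, 0.010),   # 2nd subdivision: slightly late on average
            (0.01, 0.008),   # 3rd subdivision
            (0.04, 0.012),   # 4th subdivision: latest on average
        ]

        def microtime(onsets_in_beats, profile, subdivisions_per_beat=4):
            """Shift each nominal onset by an offset sampled from the
            profile entry for its metric position."""
            timed = []
            for onset in onsets_in_beats:
                position = round(onset * subdivisions_per_beat) % subdivisions_per_beat
                mean, spread = profile[position]
                timed.append(onset + random.gauss(mean, spread))
            return timed

        if __name__ == "__main__":
            grid = [i / 4 for i in range(16)]   # four beats of 16th notes
            print([round(t, 3) for t in microtime(grid, SWUNG_PROFILE)])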

    A multiple case study of high school perspectives making music with code in Sonic Pi

    The purpose of this study was to investigate perceptions of high school students who made music with code in Sonic Pi. This qualitative multiple case study focused on individuals in an extracurricular club at a public charter high school who volunteered to participate on-site and remotely asynchronously via the Canvas learning management system. This study was guided by five research questions: (1) What musical ideas, if any, do participants report learning or demonstrate through making music with code in Sonic Pi? (2) How does making music with code impact participants’ perceptions of their music making? (3) How does making music with code impact participants’ perceptions of their ability to learn to make music? (4) How does making music with code impact participants’ interest in music courses? (5) How does making music with code impact participants’ interest in computer science courses? Participants completed research study materials, including a series of tutorials for Sonic Pi. Data included answers to questionnaires and surveys, multimedia artifacts including the source code and exported audio of participants’ music making, and interviews of participants that were coded and analyzed in two cycles, utilizing descriptive coding, values coding, and longitudinal coding. Participants’ code and multimedia artifacts revealed a close alignment with the four properties of sound: pitch, duration, intensity/amplitude, and timbre. Participants’ artifacts revealed themes and demonstrated ideas extending beyond the four properties, including form, non-traditional music notation, and randomization. Participants all agreed their coded artifacts are music. Additionally, participants’ varied responses about musicianship and composers suggest that making music is something anyone can engage in, regardless of how one identifies. All participants agreed that Sonic Pi is a useful tool for learning and understanding musical concepts and that Western staff notation is not required knowledge for making music. Participants’ interests in music or computer science courses were impacted by their prior experiences in music and/or coding. This study concludes with a discussion of themes based on the findings

    Sculpting Unrealities: Using Machine Learning to Control Audiovisual Compositions in Virtual Reality

    This thesis explores the use of interactive machine learning (IML) techniques to control audiovisual compositions within the emerging medium of virtual reality (VR). Accompanying the text is a portfolio of original compositions and open-source software. These research outputs represent the practical elements of the project that help to shed light on the core research question: how can IML techniques be used to control audiovisual compositions in VR? In order to find some answers to this question, it was broken down into its constituent elements. To situate the research, an exploration of the contemporary field of audiovisual art locates the practice between the areas of visual music and generative AV. This exploration of the field results in a new method of categorising the constituent practices. The practice of audiovisual composition is then explored, focusing on the concept of equality. It is found that, throughout the literature, audiovisual artists aim to treat audio and visual material equally. This is interpreted as a desire for balance between the audio and visual material. This concept is then examined in the context of VR. A feeling of presence is found to be central to this new medium and is identified as an important consideration for the audiovisual composer in addition to the senses of sight and sound. Several new terms are formulated which provide the means by which the compositions within the portfolio are analysed. A control system based on IML techniques, called the Neural AV Mapper, is developed. This is used to develop a compositional methodology through the creation of several studies. The outcomes from these studies are incorporated into two live performance pieces, Ventriloquy I and Ventriloquy II. These pieces showcase the use of IML techniques to control audiovisual compositions in a live performance context. The lessons learned from these pieces are incorporated into the development of the ImmersAV toolkit. This open-source software toolkit was built specifically to allow for the exploration of the IML control paradigm within VR. The toolkit provides the means by which the immersive audiovisual compositions, Obj_#3 and Ag Fás Ar Ais Arís, are created. Obj_#3 takes the form of an immersive audiovisual sculpture that can be manipulated in real-time by the user. The title of the thesis references the physical act of sculpting audiovisual material. It also refers to the ability of VR to create alternate realities that are not bound to the physics of real life. This exploration of unrealities emerges as an important aspect of the medium. The final piece in the portfolio, Ag Fás Ar Ais Arís, takes the knowledge gained from the earlier work and pushes the boundaries to maximise the potential of the medium and the material
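    A rough sketch of the interactive-machine-learning mapping paradigm the thesis works within, offered only as illustration (it is not the Neural AV Mapper or ImmersAV code): the artist records example pairs of controller input and audiovisual parameters, and a simple inverse-distance-weighted regression interpolates between them during performance. All names and values below are hypothetical.

        # Minimal sketch of IML-style mapping by example (not the thesis code).
        import math

        class ExampleMapper:
            def __init__(self):
                self.examples = []   # list of (input_vector, output_vector)

            def record(self, controller_input, av_params):
                """Store one demonstration pair supplied by the artist."""
                self.examples.append((list(controller_input), list(av_params)))

            def map(self, controller_input):
                """Inverse-distance-weighted blend of the recorded outputs."""
                weights, blended = [], None
                for inp, out in self.examples:
                    d = math.dist(controller_input, inp)
                    if d < 1e-9:
                        return list(out)     # exact match: return it directly
                    w = 1.0 / (d * d)
                    weights.append(w)
                    if blended is None:
                        blended = [w * o for o in out]
                    else:
                        blended = [b + w * o for b, o in zip(blended, out)]
                return [b / sum(weights) for b in blended]

        if __name__ == "__main__":
            mapper = ExampleMapper()
            # Hypothetical poses -> (filter cutoff, reverb mix, hue, brightness)
            mapper.record([0.0, 0.0, 0.0], [200.0, 0.1, 0.0, 0.2])
            mapper.record([1.0, 1.0, 1.0], [4000.0, 0.8, 0.7, 1.0])
            print(mapper.map([0.5, 0.5, 0.5]))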

    Multiparametric interfaces for fine-grained control of digital music

    Digital technology provides a very powerful medium for musical creativity, and the way in which we interface and interact with computers has a huge bearing on our ability to realise our artistic aims. The standard input devices available for the control of digital music tools tend to afford a low quality of embodied control; they fail to realise our innate expressiveness and dexterity of motion. This thesis looks at ways of capturing more detailed and subtle motion for the control of computer music tools; it examines how this motion can be used to control music software, and evaluates musicians’ experience of using these systems. Two new musical controllers were created, based on a multiparametric paradigm where multiple, continuous, concurrent motion data streams are mapped to the control of musical parameters. The first controller, Phalanger, is a markerless video tracking system that enables the use of hand and finger motion for musical control. EchoFoam, the second system, is a malleable controller, operated through the manipulation of conductive foam. Both systems use machine learning techniques at the core of their functionality. These controllers are front ends to RECZ, a high-level mapping tool for multiparametric data streams. The development of these systems and the evaluation of musicians’ experience of their use constructs a detailed picture of multiparametric musical control. This work contributes to the developing intersection between the fields of computer music and human-computer interaction. The principal contributions are the two new musical controllers, and a set of guidelines for the design and use of multiparametric interfaces for the control of digital music. This work also acts as a case study of the application of HCI user experience evaluation methodology to musical interfaces. The results highlight important themes concerning multiparametric musical control. These include the use of metaphor and imagery, choreography and language creation, individual differences and uncontrol. They highlight how this style of interface can fit into the creative process, and advocate a pluralistic approach to the control of digital music tools where different input devices fit different creative scenarios
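    By way of illustration only (this is not Phalanger, EchoFoam or RECZ), a minimal sketch of the multiparametric idea: several concurrent, continuous motion streams are routed to musical parameters through per-route ranges and response curves. Stream and parameter names are hypothetical.

        # Minimal sketch: routing concurrent motion streams to musical parameters.

        def scale(value, in_lo, in_hi, out_lo, out_hi, curve=1.0):
            """Map an input range onto a parameter range, clamped, with an
            optional power curve for non-linear response."""
            t = min(max((value - in_lo) / (in_hi - in_lo), 0.0), 1.0)
            return out_lo + (t ** curve) * (out_hi - out_lo)

        # Each route: input stream -> (parameter, input range, output range, curve)
        ROUTES = {
            "index_finger_y": ("filter_cutoff_hz", (0.0, 1.0), (100.0, 8000.0), 2.0),
            "hand_spread":    ("grain_density",    (0.0, 1.0), (1.0, 50.0),     1.0),
            "foam_pressure":  ("amplitude",        (0.0, 1.0), (0.0, 1.0),      0.5),
        }

        def map_frame(motion_frame):
            """Convert one frame of concurrent motion data into synth parameters."""
            params = {}
            for stream, value in motion_frame.items():
                if stream in ROUTES:
                    name, (ilo, ihi), (olo, ohi), curve = ROUTES[stream]
                    params[name] = scale(value, ilo, ihi, olo, ohi, curve)
            return params

        if __name__ == "__main__":
            print(map_frame({"index_finger_y": 0.7, "hand_spread": 0.3, "foam_pressure": 0.9}))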

    Applications of Discriminative, Generative and Predictive Deep Learning Processes to Solo Saxophone Practice

    Modelling of audio data through deep learning provides a means of creating novel sounds, processes, ideas and tools for musical creativity, yet its actual usefulness is relatively under-explored. Only a handful of researcher-practitioners are using AI models in their musical works, and artistic research into applications of deep learning modelling to instrumental practice and improvisation currently occupies an even smaller niche. The research presented in this thesis and accompanying portfolio is an examination of potential creative applications of statistical modelling of audio data, through deep learning processes, to instrumental music practice; these processes are classification of a live input, generation of raw audio samples and sequential prediction of pitch. The goal of this work is, through the development of processes and creation of musical works, to generate knowledge concerning the practicality of modelling the systematic aspects of an instrumental improvised practice, the creative usefulness of such models to the practitioner, and the musical and technical ‘behaviours’ of specific classes of deep learning architecture with respect to the data on which the models are trained. These concerns are addressed through a practice-based research methodology consisting of multiple steps: recording original audio datasets; pre-processing audio data as appropriate to model architecture and task; training statistical models; artistic experimentation and development of software, resulting in novel processes for musical creativity; and creation of artistic outputs, resulting in a portfolio of recordings and notated scores. This project finds that deep learning can play useful roles in both technical and creative musical processes: classification can not only form the basis of interactive systems for improvisation but also be suggestive of new compositional structures; outputs of generative models of raw audio not only return valuable information about the training data but also generate useful source material for technical instrumental practice, improvisation and composition; notated outputs from symbolic-domain predictive models can also be richly suggestive of compositional ideas and structures for electroacoustic improvisation. This rich diversity of applications positions AI as creative assistant, teacher and deeply personalised tool for the instrumental practitioner. When considering the utility of this work to others, there are specifics this project does not cover: appropriate choices of data representations, data-preprocessing techniques, model architectures and their training parameters will vary according to task, instrument, genre and taste, as will, of course, the character of others’ creative outputs. However, the abundance of affordances and future directions this work uncovers gives confidence in its utility for other instrumental practitioners and researchers. Given the pace of ongoing development of deep learning methods for modelling of audio and their still-limited adoption by creative practitioners, I hope that this thesis will motivate further explorations of the unique creative potential of these technologies by instrumental practitioners, improvisers and practice-based researchers in the wider field of AI for musical creativity
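    As a concrete illustration of one of the three process types named above, sequential prediction of pitch, here is a minimal, generic next-pitch model in PyTorch. It is not the author's architecture, and the toy training phrase is invented; real use would train on symbolic sequences extracted from the practitioner's own recordings.

        # Minimal sketch: next-pitch prediction over MIDI note numbers.
        import torch
        import torch.nn as nn

        class PitchPredictor(nn.Module):
            def __init__(self, n_pitches=128, embed_dim=32, hidden_dim=64):
                super().__init__()
                self.embed = nn.Embedding(n_pitches, embed_dim)
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.out = nn.Linear(hidden_dim, n_pitches)

            def forward(self, pitch_seq):
                x = self.embed(pitch_seq)    # (batch, time, embed_dim)
                h, _ = self.lstm(x)          # (batch, time, hidden_dim)
                return self.out(h)           # logits over the next pitch

        if __name__ == "__main__":
            # Toy "practice phrase" as MIDI note numbers.
            phrase = torch.tensor([[60, 62, 63, 65, 67, 65, 63, 62, 60, 62, 63, 65]])
            inputs, targets = phrase[:, :-1], phrase[:, 1:]

            model = PitchPredictor()
            optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
            loss_fn = nn.CrossEntropyLoss()

            for step in range(200):          # tiny overfitting loop for the demo
                logits = model(inputs)
                loss = loss_fn(logits.reshape(-1, 128), targets.reshape(-1))
                optimiser.zero_grad()
                loss.backward()
                optimiser.step()

            with torch.no_grad():
                next_pitch = model(inputs)[0, -1].argmax().item()
            print("predicted next pitch:", next_pitch)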

    Tracing the Compositional Process. Sound art that rewrites its own past: formation, praxis and a computer framework

    The domain of this thesis is electroacoustic computer-based music and sound art. It investigates a facet of composition which is often neglected or ill-defined: the process of composing itself and its embedding in time. Previous research mostly focused on instrumental composition or, when electronic music was included, the computer was treated as a tool which would eventually be subtracted from the equation. The aim was either to explain a resultant piece of music by reconstructing the intention of the composer, or to explain human creativity by building a model of the mind. Our aim instead is to understand composition as an irreducible unfolding of material traces which takes place in its own temporality. This understanding is formalised as a software framework that traces creation time as a version graph of transactions. The instantiation and manipulation of any musical structure implemented within this framework is thereby automatically stored in a database. Not only can it be queried ex post by an external researcher, providing a new quality for the empirical analysis of the activity of composing, but it is an integral part of the composition environment. Therefore it can recursively become a source for the ongoing composition and introduce new ways of aesthetic expression. The framework aims to unify creation and performance time, fixed and generative composition, human and algorithmic “writing”, a writing that includes indeterminate elements which condense as concurrent vertices in the version graph. The second major contribution is a critical epistemological discourse on the question of observability and the function of observation. Our goal is to explore a new direction of artistic research which is characterised by a mixed methodology of theoretical writing, technological development and artistic practice. The form of the thesis is an exercise in becoming process-like itself, wherein the epistemic thing is generated by translating the gaps between these three levels. This is my idea of the new aesthetics: that through the operation of a re-entry one may establish a sort of process “form”, yielding works which go beyond a categorical either “sound-in-itself” or “conceptualism”. Exemplary processes are revealed by deconstructing a series of existing pieces, as well as through the successful application of the new framework in the creation of new pieces
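    To make the version-graph idea concrete, here is a minimal sketch (not the thesis's framework) of recording each manipulation of a musical structure as a transaction vertex with parent links, so that concurrent branches coexist and the creation history can be queried afterwards. All names are hypothetical.

        # Minimal sketch: a version graph of transactions over a musical structure.
        import time

        class Vertex:
            def __init__(self, vertex_id, parents, description, payload):
                self.id = vertex_id
                self.parents = parents        # multiple parents let concurrent branches merge
                self.description = description
                self.payload = payload        # snapshot of the musical structure
                self.timestamp = time.time()

        class VersionGraph:
            def __init__(self):
                self.vertices = {}
                self._next_id = 0

            def commit(self, parents, description, payload):
                """Record one transaction and return its new vertex."""
                v = Vertex(self._next_id, list(parents), description, payload)
                self.vertices[v.id] = v
                self._next_id += 1
                return v

            def history(self, vertex):
                """Walk back through creation time from a vertex to the root(s)."""
                seen, stack, out = set(), [vertex], []
                while stack:
                    v = stack.pop()
                    if v.id in seen:
                        continue
                    seen.add(v.id)
                    out.append(v)
                    stack.extend(self.vertices[p] for p in v.parents)
                return out

        if __name__ == "__main__":
            g = VersionGraph()
            root = g.commit([], "create empty piece", {"events": []})
            a = g.commit([root.id], "add drone", {"events": ["drone"]})
            b = g.commit([root.id], "add pulse (concurrent branch)", {"events": ["pulse"]})
            merged = g.commit([a.id, b.id], "merge branches", {"events": ["drone", "pulse"]})
            for v in g.history(merged):
                print(v.id, v.description)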

    Carboxylic Acids Under Vibrational Scrutiny: Experimental Reference Data to Benchmark Quantum Chemical Calculations

