
    and we continue

    “and we continue” is an interactive online performance that tells a story about the behavior of complex systems through the lens of water. Each participant starts out as an iconic representation of one of various forms of water, such as Ice or Cloud, and explores its individual existence. Later, real-time interactions between participants are explored, along with the influence of actors outside the system, creating unpredictability. In the last stage, participants come together to form a system that acts as an individual once again. The story is told through music, video and text, all of which react to the participants’ actions. Each of these three media, together with all participant interactions, plays a part in the story of water and complexity, highlighting the shifting time scales at which humans influence Earth’s water systems and underscoring the unpredictable consequences of individual actions within such systems.

    Performance Practice of Real-Time Notation

    This paper addresses the performance practice issues encountered when the notation of a work loosens its bounds in the world of the fixed and knowable, and explores the realms of chance, spontaneity, and interactivity. These issues include the problem of rehearsal, the problem of ensemble synchronization, the extreme limits of sight-reading, strategies for dealing with failure in performance, new freedoms for the performer and composer, and new opportunities offered by the ephemerality and multiplicity of real-time notation.

    Live Coding, Live Notation, Live Performance

    This paper/demonstration explores relationships between code and notation, including representation, visualisation and performance. Performative aspects of live coding are increasingly being investigated as the live coding movement continues to grow and develop. Although live instrumental performance is sometimes included as an accompaniment to live coding, it is often not a fully integrated part of the performance, relying on improvisation and/or basic indicative forms of notation with varying levels of sophistication and universality. Technologies are developing which enable the use of fully explicit music notations as well as more graphic ones, allowing more fully integrated systems of code in and as performance which can also include notations of arbitrary complexity. This in turn allows the full skills of instrumental musicians to be utilised and synchronised in the process. This presentation/demonstration presents work and performances already undertaken with these technologies, including technologies for body sensing and data acquisition that translate the movements of dancers and musicians into synchronously performable notation, integrated by live and prepared coding. The author, together with clarinetist Ian Mitchell, presents a short live performance utilising these techniques, discusses methods for the dissemination and interpretation of live-generated notations, and investigates how they take advantage of instrumental musicians’ training-related neuroplasticity skills.

    The Artists who Say Ni!: Incorporating the Python programming language into creative coding for the realisation of musical works

    Even though Python is a very popular programming language with a wide range of applications, in the domain of music, and specifically electronic music, it is much less used than languages and programming environments built explicitly for musical creation, such as SuperCollider, Pure Data, Csound, Max, and ChucK. Since 2010 a Python module for DSP called Pyo has been available. This module provides a complete set of DSP algorithms, unit generators, filters, effects, and other tools for the creation of electronic music and sound, yet its community is rather limited. As a Python module, Pyo can be combined with a wide variety of native and external Python modules for musical or extra-musical tasks, facilitating the realisation of interdisciplinary artworks focusing on music and sound. Starting a creative journey with this module, I was led to Pythonic techniques for tasks other than music, such as mining tweets from Twitter or creating code poetry, which I incorporated into my musical activity. This practice-based research explores the creation of musical works based on Python by focusing on three works. The first is a live coding poetry opera whose libretto is written in Python. The second is a live algorithmic composition for an acoustic ensemble based on input from Twitter. The last is a combination of live coding with live patching on a hardware modular synthesiser system. The main objective of this thesis is to determine the creative potential of Python in music and mixed-media art by posing questions that are answered through these works. In doing so, this research aims to provide a conceptual framework for artistic creation that can serve as inspiration to other musicians and artists.
The title of this thesis is based on one of the most popular lines of the Monty Python comedy troupe, “the Knights who say Ni!”, since Guido van Rossum, the initial developer of the Python programming language, named the language after Monty Python.
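The unit-generator model that Pyo builds on can be illustrated without the library itself. The sketch below is not Pyo code but a minimal pure-Python stand-in (the class name and defaults are my own) showing the core idea: an oscillator object that produces one sample per call, with `freq` and `mul` parameters loosely mirroring those of Pyo's `Sine`.

```python
import math

class SineOsc:
    """A minimal unit generator: one sample per call.

    A pure-Python stand-in, not Pyo's actual Sine class;
    freq/mul loosely mirror Pyo's common parameter names.
    """
    def __init__(self, freq=440.0, mul=1.0, sr=44100):
        self.freq, self.mul, self.sr = freq, mul, sr
        self.phase = 0.0  # normalised phase in [0, 1)

    def next(self):
        sample = self.mul * math.sin(2 * math.pi * self.phase)
        self.phase = (self.phase + self.freq / self.sr) % 1.0
        return sample

# 441 Hz at a 44100 Hz sample rate gives exactly one cycle per 100 samples
osc = SineOsc(freq=441.0, mul=0.5, sr=44100)
block = [osc.next() for _ in range(100)]
peak = max(abs(s) for s in block)
```

In Pyo itself the same idea is expressed declaratively (an object graph rendered by an audio server) rather than by pulling samples in a Python loop, which is what makes it fast enough for real-time use.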

    Physical modelling meets machine learning: performing music with a virtual string ensemble

    This dissertation describes a new method of computer performance of bowed string instruments (violin, viola, cello) using physical simulations and intelligent feedback control. Computer synthesis of music performed by bowed string instruments is a challenging problem. Unlike instruments whose notes originate with a single discrete excitation (e.g., piano, guitar, drum), bowed string instruments are controlled with a continuous stream of excitations (i.e. the bow scraping against the string). Most existing synthesis methods utilize recorded audio samples, which perform quite well for single-excitation instruments but not continuous-excitation instruments. This work improves the realism of synthesis of violin, viola, and cello sound by generating audio through modelling the physical behaviour of the instruments. A string's wave equation is decomposed into 40 modes of vibration, which can be acted upon by three forms of external force: a bow scraping against the string, a left-hand finger pressing down, and/or a right-hand finger plucking. The vibration of each string exerts force against the instrument bridge; these forces are summed and convolved with the instrument body impulse response to create the final audio output. In addition, right-hand haptic output is created from the force of the bow against the string. Physical constants from ten real instruments (five violins, two violas, and three cellos) were measured and used in these simulations. The physical modelling was implemented in a high-performance library capable of simulating audio on a desktop computer one hundred times faster than real-time. The program also generates animated video of the instruments being performed. To perform music with the physical models, a virtual musician interprets the musical score and generates actions which are then fed into the physical model.
The resulting audio and haptic signals are examined with a support vector machine, which adjusts the bow force in order to establish and maintain a good timbre. This intelligent feedback control is trained with human input, but after the initial training is completed the virtual musician performs autonomously. A PID controller is used to adjust the position of the left-hand finger to correct any flaws in the pitch. Some performance parameters (initial bow force, force correction, and lifting factors) require an initial value for each string and musical dynamic; these are calibrated automatically using the previously-trained support vector machines. The timbre judgements are retained after each performance and are used to pre-emptively adjust bowing parameters to avoid or mitigate problematic timbre for future performances of the same music. The system is capable of playing sheet music with approximately the same ability level as a human music student after two years of training. Due to the number of instruments measured and the generality of the machine learning, music can be performed with ensembles of up to ten stringed instruments, each with a distinct timbre. This provides a baseline for future work in computer control and expressive music performance of virtual bowed string instruments.
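The modal decomposition described above can be illustrated in miniature: each of the 40 modes behaves like a damped mass-spring oscillator driven by the external forces, and the output sums the modes' contributions at the bridge. Below is a hedged sketch of a single mode under an impulsive pluck; the integration scheme and all constants are illustrative assumptions, not the dissertation's measured instrument data.

```python
import math

class StringMode:
    """One vibrational mode of a string, modelled as a damped
    mass-spring oscillator driven by an external force.

    A simplified sketch of the modal approach; constants are
    illustrative, not measured from real instruments.
    """
    def __init__(self, freq_hz, damping, sr=44100):
        self.w = 2.0 * math.pi * freq_hz  # angular frequency (rad/s)
        self.damping = damping            # velocity damping (1/s)
        self.dt = 1.0 / sr
        self.pos = 0.0                    # modal displacement
        self.vel = 0.0                    # modal velocity

    def tick(self, force=0.0):
        # semi-implicit Euler step: spring force, damping, external drive
        acc = force - (self.w ** 2) * self.pos - self.damping * self.vel
        self.vel += acc * self.dt
        self.pos += self.vel * self.dt
        return self.pos

# excite a G3-ish mode with a one-sample impulse and let it ring for 1 s
mode = StringMode(freq_hz=196.0, damping=8.0)
out = [mode.tick(force=1.0 if n == 0 else 0.0) for n in range(44100)]
early = max(abs(x) for x in out[:4410])   # amplitude in the first 0.1 s
late = max(abs(x) for x in out[-4410:])   # amplitude in the last 0.1 s
```

A full simulation would run 40 such modes per string, feed in continuous bow and finger forces rather than an impulse, and convolve the summed bridge force with a body impulse response.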

    Artistic and Musical Applications of Internet Search Technologies: Prospects and a Case Study

    This paper explores the idea of internet search as a technology to underpin artistic creation. Concepts of interactivity in art and music are explored, and an overview of different types of internet-based art is presented. A number of ways in which internet search has the potential to underpin artistic and musical activity are then discussed, drawing on ideas such as the collective readymade and the aesthetics of mass and unexpected connections to give the discussion a theoretical basis. Finally, a case study is given, in which the author discusses one of their own multimedia artworks that makes substantial use of internet search.

    Adaptive music: Automated music composition and distribution

    Creativity, or the ability to produce new and useful ideas, is commonly associated with human beings, but there are many other examples in nature where this phenomenon can be observed. Inspired by this fact, engineering, and particularly the computational sciences, have developed many different models to tackle a range of problems. Music, a form of art present throughout human history, is the main field addressed in this thesis, taking advantage of the kinds of ideas that bring diversity and creativity to nature and computation. We present Melomics, an algorithmic composition method based on evolutionary search, with a genetic encoding of the solutions, which are interpreted in a complex developmental process that leads to music in standard formats. This bioinspired compositional system has exhibited high creative power and the versatility to produce music of different types, which on many occasions has proven indistinguishable from music made by human composers. The system has also enabled a set of completely novel applications: from effective tools that help anyone easily obtain the precise music they need, to radically new uses such as adaptive music for therapy, amusement and many other purposes. It is clear to us that much research work remains to be done in this field, and that countless as-yet-unimagined uses will derive from it.
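Melomics' genetic encoding and developmental process are far richer than anything shown here, but the evolutionary-search loop itself (a population of genomes, a fitness function, selection, and mutation) can be sketched on a toy melody representation. The scale, fitness rule, and parameters below are all illustrative assumptions, not part of the Melomics system.

```python
import random

random.seed(1)  # deterministic for the example

SCALE = {0, 2, 4, 5, 7, 9, 11}  # major-scale pitch classes (illustrative)

def fitness(melody):
    # Toy fitness: reward in-scale notes, penalise leaps wider than a third.
    in_scale = sum(1 for p in melody if p % 12 in SCALE)
    leaps = sum(1 for a, b in zip(melody, melody[1:]) if abs(a - b) > 4)
    return in_scale - leaps

def mutate(melody):
    # Nudge one random note by a step or two, clamped to a two-octave range.
    child = list(melody)
    i = random.randrange(len(child))
    child[i] = max(0, min(24, child[i] + random.choice([-2, -1, 1, 2])))
    return child

# Evolutionary loop: elitist selection plus mutation (no crossover here).
population = [[random.randrange(25) for _ in range(8)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
```

In a system like Melomics the genome is not the melody itself: it is decoded through a developmental stage into a full score, so mutations act on a compact representation while fitness is judged on the rendered music.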

    Proceedings of the 11th Workshop on Ubiquitous Music (UbiMus 2021)

    The 11th UbiMus — Ubiquitous Music Workshop (https://dei.fe.up.pt/ubimus/) was held at the Center for High Artistic Performance, home of the Orquestra Jazz Matosinhos (OJM) in Portugal, on September 6–8, 2021. It was organized by the Sound and Music Computing (SMC) Group of the Faculty of Engineering, University of Porto and INESC TEC, Portugal, and by OJM in collaboration with NAP, Federal University of Acre, Brazil. Due to mobility restrictions resulting from the Covid-19 pandemic, this year’s workshop adopted a hybrid format to accommodate the remote participation of delegates and authors who could not attend in Matosinhos.

    Transforming musical performance: activating the audience as digital collaborators

    Digital technologies have transformed the performance practice, recording and distribution technologies, economy and sonic landscape of music in a process of change that began in the early 1980s. Recent technological developments have opened up the possibility of embodied interaction between audiences and performers, reframing music performance as a collaborative improvisatory space that affords Interactive Musical Participation. The research in this practice-based thesis looks at the relationship and experience of audience members and musicians exploring Interactive Musical Participation within the wide stylistic framework of contemporary jazz. It also studies the potential for the creation of compositional, technological and performance protocols to enable successful Interactive Musical Participation. This has been achieved through a process of mapping the methodology behind the composition, technical infrastructure, performances and post-performance analysis of a series of musical artefacts. Cook (2001 and 2009) suggests that researchers in this field should “Make a piece, not an instrument or controller”, and this dictum has influenced the development of the technical infrastructure for this research. The easily accessible and low-cost digital audio workstations Ableton Live (2017) and Logic Pro X (Apple, 2019), as well as the Open Sound Control (OSC) protocol (Opensoundcontrol.org), have been utilised to deliver the programming and networking requirements. A major innovation stemming from this project has been the development of the Deeper Love Soundpad App, a sample playback app for Apple smartphones and iPads, developed in collaboration with Dr. Rob Toulson. The theoretical background to this research has been informed by actor-network theory, the sociological approach developed by Bruno Latour (2005), Michel Callon (1986) and John Law (1992).
Actor-network theory (ANT) provides a framework for understanding the mechanics of power and organisation within heterogeneous, non-hierarchical networks. Mapping and analysing the ANT networks and connections created by the research performances has provided valuable data on Interactive Musical Participation.
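Open Sound Control, the networking glue mentioned above, has a simple binary wire format: a null-padded address string, a null-padded type-tag string, then big-endian arguments. A minimal encoder sketch per the OSC 1.0 specification follows; the `/fader/1` address is a hypothetical example, not one taken from the thesis.

```python
import struct

def _pad(data: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a multiple of 4 bytes.
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a single OSC 1.0 message (int32 and float32 arguments only)."""
    tags = ","
    payload = b""
    for arg in args:
        if isinstance(arg, bool):
            raise TypeError("booleans not handled in this sketch")
        if isinstance(arg, int):
            tags += "i"
            payload += struct.pack(">i", arg)  # big-endian int32
        elif isinstance(arg, float):
            tags += "f"
            payload += struct.pack(">f", arg)  # big-endian float32
        else:
            raise TypeError("only int/float shown in this sketch")
    return _pad(address.encode("ascii")) + _pad(tags.encode("ascii")) + payload

# a hypothetical message an audience device might send to a DAW over UDP
packet = osc_message("/fader/1", 0.75)
```

Sending the packet is then just `socket.sendto(packet, (host, port))` over UDP, which is how environments such as Ableton Live (via Max for Live) typically receive OSC.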

    Designing performance systems for audience inclusion

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 154-168). By Akito Van Troyer.
    We define the concept of the Hyperaudience and a unique approach towards designing real-time interactive performance systems: the design of these systems encourages audience participation and augments the experience of audience members through interconnected networks. In doing so, it embraces concepts found in ubiquitous computing, affective computing, interactive arts, music, theatrical tradition, and pervasive gaming. In addition, five new systems are demonstrated to develop a framework for thinking about audience participation and orchestrating social co-presence in and beyond the performance space. Finally, the principles and challenges that shaped the design of these five systems are defined by measuring, comparing, and evaluating their expressiveness and communicability.