Disembodiment: Reproduction, Transcription, And Trace
This article poses the question: what is so great about the body? Recent scholarship has emphasized the concept of an embodied cognition and reminded us of the significance of embodiment in musical performance. Yet, vital as these observations may be, they offer only a limited view of what ‘touch’ can mean. Following the semiotic notion of the index as a sign with a real connection to its object, writers and artists such as Friedrich Kittler, Ai Weiwei, Kenneth Goldsmith and Nicolas Donin have reflected on how the reproductions of the gramophone needle, the calligrapher's brush, the blogger's keyboard, and the programmer's code can trace meaningful points of contact. Examples from my own practice illustrate some of the many possible ways that digital traces can be touching.
Mapping the Klangdom Live: Cartographies for piano with two performers and electronics
The use of high-density loudspeaker arrays (HDLAs) has recently experienced rapid growth in a wide variety of technical and aesthetic approaches. Still less explored, however, are applications to interactive music with live acoustic instruments. How can immersive spatialization accompany an instrument with its own rich spatial diffusion pattern, like the grand piano, in the context of a score-based concert work? Potential models include treating the spatialized electronic sound in analogy to the diffusion pattern of the instrument, with spatial dimensions parametrized as functions of timbral features. Another approach is to map the concert hall as a three-dimensional projection of the instrument’s internal physical layout, a kind of virtual sonic microscope. Or, the diffusion of electronic spatial sound can be treated as an independent polyphonic element, complementary to but not dependent upon the instrument’s own spatial characteristics. Cartographies (2014), for piano with two performers and electronics, explores each of these models individually and in combination, as well as their technical implementation with the Meyer Sound Matrix3 system of the Südwestrundfunk Experimentalstudio in Freiburg, Germany, and the 43.4-channel Klangdom of the Institut für Musik und Akustik at the Zentrum für Kunst und Medien in Karlsruhe, Germany. The process of composing, producing, and performing the work raises intriguing questions, and invaluable hints, for the composition and performance of live interactive works with HDLAs in the future.
Corpus-Based Transcription as an Approach to the Compositional Control of Timbre
Timbre space is a cognitive model useful for addressing the problem of structuring timbre in electronic music. The recent concept of corpus-based concatenative sound synthesis is proposed as an approach to timbral control in both real- and deferred-time applications. Using CataRT and related tools in the FTM and Gabor libraries for Max/MSP, we describe a technique for real-time analysis of a live signal to pilot corpus-based synthesis, along with examples of compositional realizations in works for instruments, electronics, and sound installation. To extend this technique to computer-assisted composition for acoustic instruments, we develop tools using the Sound Description Interchange Format (SDIF) to export sonic descriptors to OpenMusic, where they may be further manipulated and transcribed into an instrumental score. This presents a flexible technique for the compositional organization of noise-based instrumental sounds.
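The unit-selection step at the core of corpus-based concatenative synthesis can be sketched independently of CataRT: each analysis frame of the target signal is matched to the corpus unit whose descriptor vector is nearest. The function and the toy descriptor values below are illustrative assumptions, not CataRT's actual API:

```python
import numpy as np

def select_units(target_descriptors, corpus_descriptors):
    """For each target frame, return the index of the corpus unit
    whose descriptor vector is nearest in Euclidean distance."""
    # Standardize each descriptor dimension so no feature dominates
    mean = corpus_descriptors.mean(axis=0)
    std = corpus_descriptors.std(axis=0) + 1e-9
    corpus = (corpus_descriptors - mean) / std
    target = (target_descriptors - mean) / std
    # Pairwise distances, shape (n_targets, n_units)
    dists = np.linalg.norm(target[:, None, :] - corpus[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy corpus: 4 units described by (spectral centroid Hz, loudness)
corpus = np.array([[200.0, 0.1], [800.0, 0.5], [3000.0, 0.2], [5000.0, 0.9]])
target = np.array([[790.0, 0.48], [4900.0, 0.85]])
print(select_units(target, corpus))  # nearest corpus unit per target frame
```

In practice the selected units' audio grains would then be concatenated (with envelopes and crossfades) to resynthesize the target's timbral trajectory from the corpus.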
Introducing CatOracle: Corpus-based concatenative improvisation with the Audio Oracle algorithm
CATORACLE responds to the need to join high-level control of audio timbre with the organization of musical form in time. It is inspired by two powerful existing tools: CataRT for corpus-based concatenative synthesis based on the MUBU for MAX library, and PYORACLE for computer improvisation, combining for the first time audio descriptor analysis with the learning and generation of musical structures. Harnessing a user-defined list of audio features, live or prerecorded audio is analyzed to construct an “Audio Oracle” as a basis for improvisation. CatOracle also extends features of classic concatenative synthesis to include live interactive audio mosaicking and score-based transcription using the BACH library for MAX. The project suggests applications not only to live performance of written and improvised electroacoustic music, but also to computer-assisted composition and musical analysis.
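The structure-learning component behind the Audio Oracle builds on the factor oracle automaton. As a simplified sketch over discrete symbols (an Audio Oracle instead compares descriptor frames under a distance threshold, which is not shown here), the classic incremental construction is:

```python
def build_factor_oracle(seq):
    """Incrementally build a factor oracle over a symbol sequence.
    Returns forward transitions and suffix links; the suffix links
    are what an improviser follows to recombine learned material."""
    trans = [{} for _ in range(len(seq) + 1)]  # trans[state][symbol] -> state
    sfx = [-1] * (len(seq) + 1)                # suffix links, sfx[0] = -1
    for i, sym in enumerate(seq, start=1):
        trans[i - 1][sym] = i                  # forward transition
        k = sfx[i - 1]
        # Propagate the new symbol back along existing suffix links
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][sym]
    return trans, sfx

trans, sfx = build_factor_oracle("abbab")
print(sfx)  # suffix links into earlier states with a shared context
```

Generation then alternates between following forward transitions (literal continuation) and jumping along suffix links (recombination), a balance typically governed by a continuity parameter.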
Musique instrumentale concrète: Timbral transcription in What the Blind See and Without Words
Transcription is an increasingly influential compositional model in the 21st century. Bridging techniques of musique concrète and musique concrète instrumentale, my work since 2007 has focused on using timbral descriptors to transcribe audio recordings for live instrumental ensemble and electronics. The sources and results vary, including transformation of noise-rich playing techniques, transcription of improvised material produced by performer-collaborators, and fusion of instrumental textures with ambient field recordings. However, the technical implementation employs a shared toolkit: sample databases are recorded, analysed, and organised into an audio mosaic with the CataRT package for corpus-based concatenative synthesis. Then OpenMusic is used to produce a corresponding instrumental transcription to be incorporated into the finished score. This chapter presents the approach in two works for ensemble and electronics, What the Blind See (2009) and Without Words (2012), as well as complementary real-time technologies including close miking and live audio mosaicking. In the process, transcription is considered as a renewed expressive resource for the extended lexicon of electronically augmented instrumental sound.
Spherical correlation as a similarity measure for 3-D radiation patterns of musical instruments
We investigate the use of spherical cross-correlation as a similarity measure of sound radiation patterns, with potential applications for their study, organization, and manipulation. This work is motivated by the application of corpus-based synthesis techniques to spatial projection based on the radiation patterns of orchestral instruments. To this end, we wish to derive spatial descriptors to complement other audio features available for the organization of the sample corpus. Considering two directivity functions on the sphere, their spherical correlation can be computed from their spherical harmonic coefficients. In addition, one can search for the 3-D rotation matrix which maximizes the cross-correlation, i.e. which offers the optimal spherical shape matching. The mathematical foundations of these tools are well established in the literature; however, their practical use in the field of acoustics remains relatively limited and challenging. As a proof of concept, we apply these techniques both to simulated radiation data and to measurements derived from an existing database of 3-D directivity patterns of orchestral instruments. Using these examples we present several test cases to compare the results of spherical correlation to mathematical and acoustical expectations. A range of visualization methods are applied to analyze the test cases, including multi-dimensional scaling, employed as an efficient technique for data reduction and navigation. This article is an extended version of a study previously published in [Carpentier and Einbond. 16th Congrès Français d’Acoustique (CFA), Marseille, France, April 2022, pp. 1–6. https://openaccess.city.ac.uk/id/eprint/28202/].
Spherical correlation as a similarity measure for 3D radiation patterns of musical instruments
This work is part of an artistic-research residency where composer Aaron Einbond seeks to apply audio descriptor analysis and corpus-based synthesis techniques to the spatial manipulation of instrumental radiation patterns for projection with a compact spherical loudspeaker array. Starting from a database of 3D directivity patterns of orchestral instruments, measured with spherical microphone arrays in anechoic conditions, we wish to derive spatial descriptors in order to classify the corpus. This paper investigates the use of spherical cross-correlation as a similarity measure between radiation patterns. Considering two directivity patterns f and g as bandlimited, square integrable functions on the 2-sphere, their correlation can be computed from their spherical harmonic spectra via a spatial inverse discrete Fourier transform. The magnitudes of these Fourier coefficients provide a rotation-invariant representation of the functions on the sphere. One can therefore search for the transformation matrix m, in the 3D rotation group SO(3), which maximizes the cross-correlation, i.e. which offers the optimal spherical shape matching between f and g. The mathematical foundations of these tools are well established in the literature; however, their practical use in the field of acoustics remains limited and challenging. In this study, we apply these techniques to both simulated and measured radiation data, attempting to answer a number of practical questions: How does the similarity measure behave when f and g are not rotated cousins? How can we adapt the cross-correlation formalism established for complex-valued harmonics to real-valued harmonics, as the latter are predominantly used in the field of Ambisonics? Can we compute the correlation of spherical spectra of different bandwidths? What is the impact of the finite sampling distribution used for integration on the SO(3) space? How do we normalize the cross-correlation function? And most importantly, is the cross-correlation an efficient measure for the classification of 3D radiation patterns?
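Two of the quantities involved can be sketched directly from spherical-harmonic coefficient vectors: the inner product of two patterns via Parseval's theorem, and the rotation-invariant power per degree that the abstract mentions. The toy coefficient vectors and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sh_inner(f_lm, g_lm):
    """Inner product of two square-integrable functions on the sphere
    from their spherical-harmonic coefficients (Parseval's theorem):
    sum over (l, m) of f_lm * conj(g_lm)."""
    return np.vdot(g_lm, f_lm)  # vdot conjugates its first argument

def normalized_correlation(f_lm, g_lm):
    """Correlation at the identity rotation, normalized to [-1, 1]."""
    return np.real(sh_inner(f_lm, g_lm)) / (
        np.linalg.norm(f_lm) * np.linalg.norm(g_lm))

def degree_power(c_lm, l_max):
    """Rotation-invariant power spectrum: energy per SH degree l."""
    powers, k = [], 0
    for l in range(l_max + 1):
        n = 2 * l + 1                     # number of orders m for degree l
        powers.append(np.sum(np.abs(c_lm[k:k + n]) ** 2))
        k += n
    return np.array(powers)

# Toy patterns up to degree 1 (coefficients ordered l=0; then l=1, m=-1,0,1)
f = np.array([1.0, 0.0, 0.5, 0.0])
g = np.array([1.0, 0.5, 0.0, 0.0])  # same per-degree energy, rotated shape
print(normalized_correlation(f, g))          # < 1 at this relative rotation
print(degree_power(f, 1), degree_power(g, 1))  # identical: rotation-invariant
```

Searching over SO(3) for the rotation maximizing the correlation (the shape-matching step) requires Wigner-D rotation of the coefficients and is beyond this sketch.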
Composing the Assemblage: Probing Aesthetic and Technical Dimensions of Artistic Creation with Machine Learning
In this article we address the role of machine learning (ML) in the composition of two new musical works for acoustic instruments and electronics through auto-ethnographic reflection on the experience. Our study poses the key question of how ML shapes, and is in turn shaped by, the aesthetic commitments characterizing distinctive compositional practices. Further, we ask how artistic research in these practices can be informed by critical themes from humanities scholarship on material engagement and critical data studies. Through these frameworks, we consider in what ways the interaction with ML algorithms as part of the compositional process differs from that with other music technology tools. Rather than focus on narrowly conceived ML algorithms, we take into account the heterogeneous assemblage brought into play: from composers, performers, and listeners, to loudspeakers, microphones, and audio descriptors. Our analysis focuses on a deconstructive critique of data as contingent on the decisions and material conditions involved in the data creation process. It also explores how interaction among the human and nonhuman collaborators in the ML assemblage has significant similarities to, as well as differences from, existing models of material engagement. Tracking the creative process of composing these works, we uncover the aesthetic implications of the many nonlinear collaborative decisions involved in composing the assemblage.
Instrumental Radiation Patterns as Models for Corpus-Based Spatial Sound Synthesis: Cosmologies for Piano and 3D Electronics
The Cosmologies project aims to situate the listener inside a virtual grand piano by enabling computer processes to learn from the spatial presence of the live instrument and performer. We propose novel techniques that leverage measurements of natural acoustic phenomena to inform spatial sound composition and synthesis. Measured radiation patterns of acoustic instruments are applied interactively in response to a live input to synthesize spatial forms in real time. We implement this with software tools for the first time connecting audio descriptor analysis and corpus-based synthesis to spatialization using Higher-Order Ambisonics and machine learning. The resulting musical work, Cosmologies for piano and 3D electronics, explodes the space inside the grand piano out to the space of the concert hall, allowing the listener to experience its secret inner life.
Fine-tuned Control of Concatenative Synthesis with CATART Using the BACH Library for MAX
The electronic musician’s toolkit is increasingly characterized by fluidity between software, techniques, and genres. By combining two of the most exciting recent packages for MAX, CATART corpus-based concatenative synthesis (CBCS) and BACH: AUTOMATED COMPOSER’S HELPER, we propose a rich tool for real-time creation, storage, editing, re-synthesis, and transcription of concatenative sound. The modular structures of both packages can be advantageously recombined to exploit the best of their real-time and computer-assisted composition (CAC) capabilities. After loading a sample corpus in CATART, each grain, or unit, played from CATART is stored as a notehead in the bach.roll object along with its descriptor data and granular synthesis parameters including envelope and spatialization. The data is attached to the note itself (pitch, velocity, duration) or stored in user-defined slots that can be adjusted by hand or batch-edited using lambda-loops. Once stored, the contents of bach.roll can be dynamically edited and auditioned using CATART for playback. The results can be output as a sequence for synthesis, or used for CAC score-generation through a process termed Corpus-Based Transcription: rhythms are output with bach.quantize and further edited in bach.roll before export as a MUSICXML file to a notation program to produce a performer-readable score. Together these techniques look toward a concatenative DAW with promising capabilities for composers, improvisers, installation artists, and performers.
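The rhythmic quantization step in Corpus-Based Transcription (handled by bach.quantize in the workflow above) can be illustrated with a simplified stand-in: snapping grain onset times in seconds to the nearest position on a beat grid, expressed as exact fractions of a quarter note. The function name, tempo, and grid resolution are hypothetical illustration, not bach's API:

```python
from fractions import Fraction

def quantize_onsets(onsets_sec, tempo_bpm=60, grid=Fraction(1, 4)):
    """Snap grain onset times (seconds) to the nearest grid position,
    returned as exact positions in quarter-note beats."""
    beat_dur = 60.0 / tempo_bpm    # seconds per quarter note
    step = float(grid) * beat_dur  # grid step in seconds
    return [Fraction(round(t / step)) * grid for t in onsets_sec]

# Grain onsets from a concatenative playback sequence (seconds)
onsets = [0.02, 0.27, 0.49, 0.74, 1.51]
print(quantize_onsets(onsets))  # beat positions, e.g. 1/4, 1/2, 3/4, ...
```

A full quantizer must additionally choose tuplet subdivisions and measure boundaries to minimize notational complexity; exact rational positions like these are what a MusicXML exporter would then render as note values.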