Back To The Cross-Modal Object: A Look Back At Early Audiovisual Performance Through The Lens Of Objecthood
This paper looks at two early digital audiovisual performance works, the solo work Overbow and the work of the group Sensors_Sonics_Sights (S.S.S), and describes the compositional and performance strategies behind each one. We draw upon the concept of audiovisual objecthood proposed by Kubovy and Schutz to think about the different ways in which linkages between vision and audition can be established, and how audiovisual objects can be composed from the specific attributes of auditory and visual perception. The model is used as a means to analyze these live audiovisual works performed using sensor-based instruments. The fact that gesture is not the only visual component in these performances, and is also the common source articulating sound and visual output, extends the classical two-way audiovisual object into a three-way relationship between gesture, sound, and image, fulfilling a potential of cross-modal objects.
NIME Identity from the Performer's Perspective
The term "NIME" - New Interfaces for Musical Expression - has come to signify both technical and cultural characteristics. Not all new musical instruments are NIMEs, and not all NIMEs are defined as such solely by the ephemeral condition of being new. So, what are the typical characteristics of NIMEs, and what are their roles in performers' practice? Is there a typical NIME repertoire? This paper aims to address these questions with a bottom-up approach. We reflect on the answers of 78 NIME performers to an online questionnaire discussing their performance experience with NIMEs. The results of our investigation explore the role of NIMEs in the performers' practice and identify the values that are common among performers. We find that most NIMEs are viewed as exploratory tools created by and for performers, and that they are constantly in development and almost never in a finished state. The findings of our survey also reflect upon virtuosity with NIMEs, whose peculiar performance practice results in learning trajectories that often do not lead to the development of virtuosity as it is commonly understood in traditional performance.
On Mapping EEG Information into Music
With the rise of ever-more affordable EEG equipment available to musicians, artists and researchers, designing and building a Brain-Computer Music Interface (BCMI) system has recently become a realistic achievement. This chapter discusses previous research in the fields of mapping, sonification and musification in the context of designing a BCMI system and will be of particular interest to those who seek to develop their own. Design of a BCMI requires unique considerations due to the characteristics of the EEG as a human interface device (HID). This chapter analyses traditional strategies for mapping control from brain waves alongside previous research in bio-feedback musical systems. Advances in music technology have helped provide more complex approaches with regard to how music can be affected and controlled by brainwaves. This, paralleled with developments in our understanding of brainwave activity, has helped push brain-computer music interfacing into innovative realms of real-time musical performance, composition and applications for music therapy.
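The kind of brainwave-to-music mapping strategy the chapter surveys can be sketched minimally as follows. This example is not from the chapter itself: the function names, the 8-12 Hz alpha band choice, the periodogram band-power estimate, and the synthetic "EEG" signal are all illustrative assumptions, standing in for a real EEG stream and a real synthesizer.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    # Crude band-power estimate from the periodogram (assumption: a
    # real BCMI would typically use a windowed/averaged estimator).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def alpha_to_velocity(signal, fs):
    # Map relative alpha-band (8-12 Hz) power within 1-40 Hz to a
    # MIDI note velocity in 0..127 -- one simple "control" mapping.
    alpha = band_power(signal, fs, 8.0, 12.0)
    total = band_power(signal, fs, 1.0, 40.0)
    rel = alpha / total if total > 0 else 0.0
    return int(round(min(max(rel, 0.0), 1.0) * 127))

# Synthetic stand-in for a 2-second EEG window: a strong 10 Hz
# (alpha) oscillation plus low-level noise.
fs = 256
t = np.arange(fs * 2) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
vel = alpha_to_velocity(eeg, fs)
```

With an alpha-dominated window like this one, the mapped velocity lands near the top of the MIDI range; a relaxed-versus-alert performer would move the value up and down continuously.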
BCI for Music Making: Then, Now, and Next
Brain-computer music interfacing (BCMI) is a growing field with a history of experimental applications derived from the cutting edge of BCI research as adapted to music making and performance. BCMI offers some unique possibilities over traditional music making, including applications for emotional music selection and emotionally driven music creation for individuals as communicative aids (either in cases where users might have physical or mental disabilities that otherwise preclude them from taking part in music making or in music therapy cases where emotional communication between a therapist and a patient by means of traditional music making might otherwise be impossible). This chapter presents an overview of BCMI and its uses in such contexts, including existing techniques as they are adapted to musical control, from P300 and SSVEP (steady-state visually evoked potential) in EEG (electroencephalogram) to asymmetry, hybrid systems, and joint fMRI (functional magnetic resonance imaging) studies correlating affective induction (by means of music) with neurophysiological cues. Some suggestions for further work are also volunteered, including the development of collaborative platforms for music performance by means of BCMI.
Cinema Fabriqué: A Gestural Environment for Realtime Video Performance
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003. Includes bibliographical references (p. [107]-[108]). This thesis presents an environment that enables a single person to improvise video and audio programming in real time through gesture control. The goal of this system is to provide the means to compose and edit video stories for a live audience with an interface that is exposed and engaging to watch. Many of the software packages used today for realtime audio-visual performance were not built with this use in mind, and have been repurposed or modified with plug-ins to meet the performer's needs. Also, these applications are typically controlled by standard keyboard, mouse, or MIDI inputs, which were not designed for precise video control or live spectacle. As an alternative I built a system called Cinema Fabriqué, which integrates video editing and effects software and hand gesture tracking methods into a single system for audio-visual performance. By Justin Manor. S.M.
Sirens/Cyborgs: Sound Technologies and the Musical Body
This dissertation investigates the political stakes of women's work with sound technologies engaging the body since the 1970s by drawing on frameworks and methodologies from music history, sound studies, feminist theory, performance studies, critical theory, and the history of technology. Although the body has been one of the principal subjects of new musicology since the early 1990s, its role in electronic music is still frequently shortchanged. I argue that the way we hear electro-bodily music has been shaped by extra-musical, often male-controlled contexts. I offer a critique of the gendered and racialized foundations of terminology such as "extended," "non-human," and "dis/embodied," which follows these repertories. In the work of American composers Joan La Barbara, Laurie Anderson, Wendy Carlos, Laetitia Sonami, and Pamela Z, I trace performative interventions in technoscientific paradigms of the late twentieth century.
The voice is perceived as the locus of the musical body and has long been feminized in musical discourse. The first three chapters explore how this discourse is challenged by compositions featuring the processed, broadcast, and synthesized voices of women. I focus on how these works stretch the limits of traditional vocal epistemology and, in turn, engage the bodies of listeners. In the final chapter on musical performance with gesture control, I question the characterization of hand/arm gesture as a "natural" musical interface and return to the voice, now sampled and mapped onto movement. Drawing on cyborg feminist frameworks which privilege hybridity and multiplicity, I show that the above composers audit the dominant technoscientific imaginary by constructing musical bodies that are never essentially manifested nor completely erased.