The sound motion controller: a distributed system for interactive music performance
We developed an interactive system for music performance that controls sound parameters responsively according to the user's movements. The system is conceived as a mobile application, equipped with beat tracking and expressive parameter modulation, that interacts with motion sensors and effector units connected to a music output such as synthesizers or sound effects. We describe the various ways the system can be used and what it achieves, with the aim of increasing the expressiveness of music performance and supporting music interaction. The results obtained outline a first level of integration and point toward future cognitive and technological research.
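The abstract does not detail the motion-to-sound mapping; a minimal sketch of the general pattern it describes, responsively driving a sound parameter from motion-sensor data, might look like the following Python fragment. The OSC address, port, smoothing factor, and value ranges are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: map accelerometer magnitude to a filter-cutoff parameter
# sent over OSC to a synthesizer. The OSC address "/synth/cutoff", the port,
# and the smoothing constant are illustrative assumptions, not from the paper.
import math
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 57120)  # e.g. a SuperCollider server

smoothed = 0.0
ALPHA = 0.2  # exponential-smoothing factor to tame sensor jitter

def on_accel_sample(ax: float, ay: float, az: float) -> None:
    """Called for each accelerometer reading (in g). Maps overall motion
    energy to a 200-5000 Hz cutoff and sends it as an OSC message."""
    global smoothed
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    smoothed = ALPHA * magnitude + (1 - ALPHA) * smoothed
    # Clamp to an assumed 0-3 g range, then scale to a cutoff frequency.
    norm = min(max(smoothed, 0.0), 3.0) / 3.0
    client.send_message("/synth/cutoff", 200.0 + norm * 4800.0)
```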
A wireless, real-time, social music performance system for mobile phones
The paper reports on the Cellmusic system: a real-time, wireless, distributed composition and performance system designed for domestic mobile devices. During a performance, each mobile device communicates with the others and may create sonic events in a passive (non-interactive) mode or may influence the output of other devices. Cellmusic distinguishes itself from other mobile phone performance environments in that it is intended for performance in ad hoc locations, with services and performances automatically and dynamically adapting to the number of devices within a given proximity. It is designed to run on a number of mobile phone platforms to allow as wide a distribution as possible, again distinguishing itself from other mobile performance systems, which primarily run on a single device. Rather than performances being orchestrated or managed, the intention is that users will access the system and create a performance in the same way they use mobile phones to interact socially at different times throughout the day. This does not, however, preclude the system being used in a more traditional performance environment. Its accessibility and portability make it an ideal platform for sonic artists who choose to explore a variety of physical environments, such as parks and other public spaces.
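The abstract leaves the proximity-adaptation mechanism unspecified; one common way such a system can count nearby devices on an ad hoc network is periodic UDP broadcast announcement, sketched below under stated assumptions (the port, message format, and timeout are invented for illustration, not taken from the Cellmusic paper).

```python
# Hedged sketch: ad hoc peer discovery via UDP broadcast, so each device
# can adapt its behaviour to the number of peers currently in range.
# Port, message format, and staleness timeout are illustrative assumptions.
import json
import socket
import time
import uuid

PORT = 9999
DEVICE_ID = str(uuid.uuid4())

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
sock.settimeout(0.1)

peers = {}  # device id -> time we last heard from it

def tick() -> int:
    """Announce ourselves, collect announcements, and return the number
    of peers heard from within the last 5 seconds."""
    sock.sendto(json.dumps({"id": DEVICE_ID}).encode(),
                ("255.255.255.255", PORT))
    try:
        while True:
            data, _ = sock.recvfrom(1024)
            msg = json.loads(data)
            if msg["id"] != DEVICE_ID:
                peers[msg["id"]] = time.time()
    except socket.timeout:
        pass  # no more announcements waiting this round
    cutoff = time.time() - 5.0
    return sum(1 for t in peers.values() if t > cutoff)
```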
Wireless Audio Interactive Knot
Thesis (S.M.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2001. Includes bibliographical references (leaves 44-45). Author: Adam Douglas Smith. The Sound Transformer is a new type of musical instrument. It looks a little like a saxophone, but when you sing or "kazoo" into it, astonishing transforms and mutations come out. What actually happens is that the input sound is sent via an 802.11 wireless link to a network server that transforms the sound and sends it back to the instrument's speaker. In other words, instead of a resonant acoustic body or a local computer synthesizer, this architecture allows sound to be sourced or transformed by an infinite array of online services and channeled through a gesturally expressive handheld. Emerging infrastructures (802.11, Bluetooth, 3G and 4G, etc.) seem to point toward this new class of instrument. But can such an architecture really work? In particular, given the delays incurred by decoupling the sound transformation from the instrument over a wireless network, are interactive music applications feasible? My thesis is that they are. To prove this, I built a platform called WAI-KNOT (Wireless Audio Interactive Knot) to examine the latency issues and other design elements, and to test their viability and impact on real music making. The Sound Transformer is a WAI-KNOT application.
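Whether this architecture works hinges on the round-trip delay the wireless hop adds on top of local audio buffering. A rough way to probe that question, not drawn from the thesis itself, is to time audio-frame-sized packets through an echo server; host, port, frame size, and trial count below are assumptions.

```python
# Hedged sketch: measure round-trip time of audio-frame-sized UDP packets
# to an echo server, as a rough proxy for the network share of an
# instrument's action-to-sound latency. Host, port, and frame size are
# illustrative assumptions, not values from the WAI-KNOT thesis.
import socket
import statistics
import time

SERVER = ("192.168.1.10", 9000)  # hypothetical echo server on the WLAN
FRAME_BYTES = 256 * 2            # 256 samples of 16-bit mono audio
TRIALS = 200

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

rtts = []
for i in range(TRIALS):
    # Tag each frame with a sequence number so stale echoes are ignored.
    payload = i.to_bytes(4, "big") + bytes(FRAME_BYTES - 4)
    t0 = time.perf_counter()
    sock.sendto(payload, SERVER)
    try:
        data, _ = sock.recvfrom(4096)
    except socket.timeout:
        continue  # count as a dropped frame
    if data[:4] == payload[:4]:
        rtts.append((time.perf_counter() - t0) * 1000.0)

print(f"median RTT: {statistics.median(rtts):.2f} ms, "
      f"jitter (stdev): {statistics.stdev(rtts):.2f} ms, "
      f"dropped: {TRIALS - len(rtts)}")
```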
Amarok Pikap: interactive percussion playing automobile
Alternative interfaces that imitate the audio structure of authentic musical instruments are often equipped with sound generation techniques whose physical attributes resemble those of the instruments they imitate. The Amarok Pikap project utilizes an interactive system on the surface of an automobile that has been specially modified with various electronic sensors attached to its bodywork. In percussive instrument modeling, the surfaces struck to produce sounds are commonly distinctive ones such as electronic pads or keys. In this article we carry out a status analysis to examine to what extent a percussion-playing interface using FSR and piezo sensors can represent an authentic musical instrument, and how a new interactive musical interface may draw the interest of the public to a promotional event for an automobile campaign: Amarok Pikap. The structure that forms the design is also subjected to a technical analysis.
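The article names FSR and piezo sensors but the abstract does not spell out the triggering logic; a common pattern for turning such readings into percussion events (assumed here, not taken from the article) is threshold detection with a refractory period, so one strike fires exactly one note.

```python
# Hedged sketch: threshold-plus-refractory onset detection on a stream of
# piezo sensor readings, a usual way a struck surface becomes a single
# percussion trigger. Threshold and timing values are assumptions.
THRESHOLD = 0.15      # normalized sensor level that counts as a hit
REFRACTORY_S = 0.05   # ignore re-triggers for 50 ms after a hit

class PiezoTrigger:
    def __init__(self, sample_rate_hz: float):
        self.dt = 1.0 / sample_rate_hz
        self.cooldown = 0.0

    def process(self, level: float):
        """Feed one normalized sensor reading (0.0-1.0). Returns a
        velocity in 0-127 when a new hit is detected, else None."""
        self.cooldown = max(0.0, self.cooldown - self.dt)
        if level >= THRESHOLD and self.cooldown == 0.0:
            self.cooldown = REFRACTORY_S
            return min(127, int(level * 127))
        return None

# Usage: feed samples as they arrive from the sensor's ADC.
trigger = PiezoTrigger(sample_rate_hz=1000.0)
for reading in (0.0, 0.02, 0.6, 0.4, 0.1, 0.0):
    velocity = trigger.process(reading)
    if velocity is not None:
        print("hit, velocity", velocity)  # fires once for the 0.6 peak
```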
ERG final report
Text of the report is in Swedish
Seeking out the spaces between: Using improvisation for collaborative composition and interactive technology
Copyright © 2010 ISAST. This article presents findings from experiments into piano performance with live electronics undertaken by the author since early 2007. The use of improvisation has infused every step of the process, both as a methodology for obtaining meaningful results with interactive technology and as a way to generate and characterize a collaborative musical space with composers. The technology used has included pre-built MIDI interfaces such as the PianoBar, actuators such as miniature DC motors, and sensor interfaces including iCube and the Wii controller. Collaborators have included researchers at the Centre for Digital Music (QMUL), Richard Barrett, Pierre Alexandre Tremblay and Atau Tanaka. In seeking to create responsive "performance environments" at the piano, I explore live, performative control of electronics to create better connections for both performer (providing the same level of interpretive freedom as a "pure" instrumental performance) and audience (communicating clearly to them). I have been lucky to witness first-hand many live interactive performances and to work with various empathetic composers and performers in flexible working environments. Collaborating with experienced technologists and musicians, I have witnessed time and again what, for me, is a fundamental truth in interactive instrumental performance: as a living, spontaneous form it must be nurtured and informed by the performer's physicality and imagination as much as by the creativity or knowledge of the composer and/or technologist. Specifically in the case of sensors, their dependence on the detail of each person's body and reactions is so refined as to necessitate, I would argue, an entirely collaborative approach, and therefore one that involves at least directed improvisation and, more likely, fairly extensive improvised exploration. The fundamentally personal and intimate nature of sensor readings (the amount of tension created by each performer, the shape of the ancillary gestures, or the level of emotional involvement, especially relevant when using galvanic skin response or EEG) makes creating pieces with sensors extremely difficult for a composer to do in isolation. Improvisation therefore provides a way for performer and composer to generate a common musical and gestural language. Related to these issues is the fact that the technical and notational parameters of interactive music are not yet (and may never be) standardized, creating a very real and practical need for improvisation to figure at least somewhere in the process. This study is funded by the Arts and Humanities Research Council.
Action-Sound Latency: Are Our Tools Fast Enough?
Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Copyright remains with the author(s). The importance of low and consistent latency in interactive music systems is well established. So how do commonly used tools for creating digital musical instruments and other tangible interfaces perform in terms of latency from user action to sound output? This paper examines several common configurations in which a microcontroller (e.g. Arduino) or wireless device communicates with a computer-based sound generator (e.g. Max/MSP, Pd). We find that, perhaps surprisingly, almost none of the tested configurations meet generally accepted guidelines for latency and jitter. To address this limitation, the paper presents a new embedded platform, Bela, which is capable of complex audio and sensor processing at submillisecond latency.
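A host-side flavor of the paper's measurement question can be sketched as follows: timing a byte round trip through a serial link to a microcontroller running a trivial echo program. This isolates only the serial link's contribution to action-to-sound latency and is not the authors' methodology; the port name and baud rate are assumptions.

```python
# Hedged sketch: measure host-side round-trip latency and jitter through a
# serial link to a microcontroller running a trivial byte-echo program.
# This captures only the serial link's share of action-to-sound latency;
# port name and baud rate are assumptions. Requires pyserial.
import statistics
import time
import serial  # pip install pyserial

port = serial.Serial("/dev/ttyACM0", 115200, timeout=1.0)
time.sleep(2.0)  # allow the board to reset after the port opens

rtts = []
for i in range(100):
    t0 = time.perf_counter()
    port.write(bytes([i % 256]))
    echoed = port.read(1)  # blocks until the byte comes back or timeout
    if echoed:
        rtts.append((time.perf_counter() - t0) * 1000.0)

print(f"median: {statistics.median(rtts):.2f} ms, "
      f"jitter (stdev): {statistics.stdev(rtts):.2f} ms")
```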
MindMusic: Brain-Controlled Musical Improvisation
MindMusic explores a new form of creative expression through brain-controlled musical improvisation. Using EEG technology and a musical improviser system, Impro-Visor (Keller, 2018), MindMusic engages users in musical improvisation sessions controlled by their brainwaves. Brain-controlled musical improvisation offers a unique blend of mindfulness meditation, EEG biofeedback, and real-time music generation, and stands to assist with stress reduction and widen access to musical creativity.
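The abstract does not specify how brainwaves become control signals; one standard approach (an assumption here, not necessarily MindMusic's) is to estimate EEG band power, e.g. in the alpha band, and normalize it into a control value for the improviser.

```python
# Hedged sketch: estimate alpha-band (8-13 Hz) power from a window of EEG
# samples with Welch's method and normalize it into a 0-1 control value
# that could steer an improviser's parameters. The sample rate and the
# normalization band are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # Hz, a typical consumer-EEG sample rate (assumed)

def alpha_control(eeg_window: np.ndarray) -> float:
    """eeg_window: 1-D array of recent EEG samples (a few seconds).
    Returns alpha power as a fraction of total 1-30 Hz power."""
    freqs, psd = welch(eeg_window, fs=FS,
                       nperseg=min(len(eeg_window), FS * 2))
    band = (freqs >= 8) & (freqs <= 13)
    total = (freqs >= 1) & (freqs <= 30)
    return float(psd[band].sum() / max(psd[total].sum(), 1e-12))

# Usage with synthetic data: a 10 Hz sinusoid plus noise scores high.
t = np.arange(FS * 4) / FS
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(f"alpha control value: {alpha_control(fake_eeg):.2f}")
```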