5 research outputs found

    That Syncing Feeling: Networked Strategies for Enabling Ensemble Creativity in iPad Musicians

    The group experience of synchronisation is a key aspect of ensemble musical performance. This paper presents several strategies for syncing performance information across networked iPad-instruments to enable creativity among an ensemble of improvising musicians. Acoustic instrumentalists sync without mechanical intervention, and electronic instruments frequently synchronise rhythm using MIDI or OSC connections. In contrast, our system syncs other aspects of performance, such as tonality, instrument functions, and gesture classifications, to support and enhance improvised performance. Over a series of performances with Ensemble Metatone, an iPad-and-percussion group, various syncing scenarios have been explored that support, extend, and disrupt ensemble creativity.
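    A minimal sketch of the kind of ensemble state-syncing the abstract describes, assuming an OSC transport via the python-osc library. The "/metatone/tonality" address, the peer list, and the payload format are illustrative assumptions, not the actual protocol used by the Ensemble Metatone apps:

```python
# Sketch only: broadcast a shared tonality change to the other networked
# instruments over OSC. Assumes the python-osc library is installed;
# the OSC address and peer addresses below are hypothetical.
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical addresses of the other iPad instruments on the local network.
PEERS = [("192.168.0.11", 9000), ("192.168.0.12", 9000)]

def sync_tonality(root: str, scale: str) -> None:
    """Send the new shared tonality to every instrument in the ensemble."""
    for host, port in PEERS:
        SimpleUDPClient(host, port).send_message("/metatone/tonality",
                                                 [root, scale])

sync_tonality("C", "mixolydian")  # every connected app adopts C mixolydian
```

    Syncing whole-performance state such as tonality, rather than a rhythmic clock, is what distinguishes this approach from conventional MIDI synchronisation.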

    DIY in Early Live Electroacoustic Music: John Cage, Gordon Mumma, David Tudor, and the Migration of Live Electronics from the Studio to Performance

    This research examines early live electronic works by Gordon Mumma, David Tudor, and John Cage, three influential American experimental music composers who designed, built, and recontextualized electronics for live performance, and the Do-It-Yourself (DIY) aesthetic embodied by their instruments and the compositions written for them. This dissertation presents original research into the earliest composers of live electronic works and the DIY approach their independent systems required. Previous research on DIY perspectives in music often touches on the grass-roots nature of contemporary electroacoustic systems, but there is not yet research specific to the DIY approach taken by these three composers, who collaborated on the earliest live electronic systems used in performance in the late 1960s and 1970s. Composers today continue to be influenced by the works of Mumma, Tudor, and Cage, following the same DIY traditions in experimenting with and implementing circuitry and in adapting emerging technologies to instrument design. The DIY tradition persists in the circuit design and engineering techniques still implemented in systems customized and tailored specifically for music performance. These individualistic, self-built systems reflect the composer's skill in, and adaptability to, nascent technologies. Innovation and experimentalism have become standard procedure for today's composers, who are driven to create, as well as to adapt, electronics for performance. The underlying DIY aesthetic of electroacoustic systems can be traced back to the instruments and systems built for live performance in the late 1960s and 1970s (known as live electronics), a period in which electronics migrated from the studio to the stage. The efforts of Mumma, Tudor, and Cage remain influential on composers and performers today, and it is important to recognize how the concept of DIY existed in their works, as well as to push forward a new area of research into the significance of DIY in music and technology.

    Establishing a laptop orchestra in South Africa : an emic-centred inquiry into computer music performance

    Dissertation (MMus (Music Technology)), University of Pretoria, 2022.
    A few months into the final year of my undergraduate degree, an opportunity emerged to oversee and coordinate the technical and organisational aspects of UPLOrc (University of Pretoria Laptop Orchestra), an ensemble of laptops consisting of undergraduate and postgraduate students whose focus is to explore collective live coding practices. In addition to coordinating the activities of UPLOrc, in April 2020 I was invited to collaborate with SuperContinent, a networked live coding ensemble whose members are located across various continents, at a minimum distance of more than 500 kilometres apart. A qualitatively-driven mixed-methods research paradigm guided the collection of data from multiple sources in order to obtain a broader understanding of the complexities involved in live coding in collaborative contexts. A netnographic methodology was chosen for the qualitative component of this research, incorporating an intersecting secondary quantitative component in the form of a survey administered to members of the networked performance community. The research is presented from an emic (insider's) perspective in the form of an autoethnographic account of my experiences as a performer and instructor of live-coded music. Adopting the perspective of an insider initiated a process of critical self-reflection in which I attempted to understand my role as a student, teacher, and collaborator in both performance and educational contexts. The procedures implemented in this research, prompted by my collaboration, communication, active participation, and performance with the members of both ensembles over a two-year period, have allowed me to realise the purpose and power of collaborative networked live coding in terms of its potential for cultivating transformative spaces for musical creativity. In addition, conducting this research has provided me with the opportunity to begin building an identity as a live coder: an identity that is multifaceted, complex, and constantly negotiated no matter the context in which it operates.

    Apps, Agents, and Improvisation: Ensemble Interaction with Touch-Screen Digital Musical Instruments

    This thesis concerns the making and performing of music with new digital musical instruments (DMIs) designed for ensemble performance. While computer music has advanced to the point where a huge variety of digital instruments are common in educational, recreational, and professional music-making, these instruments rarely seek to enhance the ensemble context in which they are used. Interaction models that map individual gestures to sound have been previously studied, but the interactions of ensembles within these models are not well understood. In this research, new ensemble-focussed instruments have been designed and deployed in an ongoing artistic practice. These instruments have also been evaluated to find out whether, and if so how, they affect the ensembles and the music that is made with them. Throughout this thesis, six ensemble-focussed DMIs are introduced for mobile touch-screen computers. A series of improvised rehearsals and performances leads to the identification of a vocabulary of continuous performative touch-gestures and a system for tracking these collaborative performances in real time using tools from machine learning. The tracking system is posed as an intelligent agent that can continually analyse the gestural states of performers and trigger a response in the performers' user interfaces at appropriate moments. The hypothesis is that the agent interaction and UI response can enhance improvised performances, allowing performers to better explore creative interactions with each other, produce better music, and have a more enjoyable experience. Two formal studies are described where participants rate their perceptions of improvised performances with a variety of designs for agent-app interaction. The first, with three expert performers, informed refinements for a set of apps. The most successful interface was redesigned and investigated further in a second study with 16 non-expert participants. In the final interface, each performer freely improvised with a limited number of notes; at moments of peak gestural change, the agent presented users with the opportunity to try different notes. This interface is shown to produce longer performances as well as improved perceptions of musical structure, group interaction, enjoyment, and overall quality. Overall, this research examined ensemble DMI performance in unprecedented scope and detail, with more than 150 interaction sessions recorded. Informed by the results of lab and field studies using quantitative and qualitative methods, four generations of ensemble-focussed interface have been developed and refined. The results of the most recent studies assure us that the intelligent agent interaction does enhance improvised performances.
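    As a rough illustration of the agent behaviour described above (not the thesis's actual classifier or trigger logic), the sketch below watches a stream of per-performer gesture classes and flags moments of peak gestural change, i.e. frames where a large fraction of the ensemble switches gesture at once; the 0.5 threshold is an invented value:

```python
# Illustrative sketch: an agent receives one gesture class per performer
# per time step and reports when enough performers change gesture at the
# same time to warrant a response in the performers' user interfaces.
class GestureChangeAgent:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold          # fraction of performers (assumed)
        self.prev: list[str] | None = None  # previous frame of gesture classes

    def on_frame(self, classes: list[str]) -> bool:
        """classes[i] is performer i's current gesture class; returns True
        when the agent would trigger a UI response."""
        trigger = (
            self.prev is not None
            and sum(a != b for a, b in zip(self.prev, classes)) / len(classes)
            >= self.threshold
        )
        self.prev = classes
        return trigger

agent = GestureChangeAgent()
agent.on_frame(["swipe", "tap", "swirl"])  # False: first frame, no history
agent.on_frame(["tap", "tap", "combo"])    # True: 2 of 3 performers changed
```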

    Proceedings of the International Conference on New Interfaces for Musical Expression

    Editors: Alexander Refsum Jensenius, Anders Tveit, Rolf Inge Godøy, Dan Overholt

    Table of Contents

    - Tellef Kvifte: Keynote Lecture 1: Musical Instrument User Interfaces: the Digital Background of the Analog Revolution - page 1
    - David Rokeby: Keynote Lecture 2: Adventures in Phy-gital Space - page 2
    - Sergi Jordà: Keynote Lecture 3: Digital Lutherie and Multithreaded Musical Performance: Artistic, Scientific and Commercial Perspectives - page 3

    Paper session A (Monday 30 May, 11:00–12:30)
    - Dan Overholt: The Overtone Fiddle: an Actuated Acoustic Instrument - page 4
    - Colby Leider, Matthew Montag, Stefan Sullivan and Scott Dickey: A Low-Cost, Low-Latency Multi-Touch Table with Haptic Feedback for Musical Applications - page 8
    - Greg Shear and Matthew Wright: The Electromagnetically Sustained Rhodes Piano - page 14
    - Laurel Pardue, Christine Southworth, Andrew Boch, Matt Boch and Alex Rigopulos: Gamelan Elektrika: An Electronic Balinese Gamelan - page 18
    - Jeong-Seob Lee and Woon Seung Yeo: Sonicstrument: A Musical Interface with Stereotypical Acoustic Transducers - page 24

    Poster session B (Monday 30 May, 13:30–14:30)
    - Scott Smallwood: Solar Sound Arts: Creating Instruments and Devices Powered by Photovoltaic Technologies - page 28
    - Niklas Klügel, Marc René Frieß and Georg Groh: An Approach to Collaborative Music Composition - page 32
    - Nicolas Gold and Roger Dannenberg: A Reference Architecture and Score Representation for Popular Music Human-Computer Music Performance Systems - page 36
    - Mark Bokowiec: V'OCT (Ritual): An Interactive Vocal Work for Bodycoder System and 8 Channel Spatialization - page 40
    - Florent Berthaut, Haruhiro Katayose, Hironori Wakama, Naoyuki Totani and Yuichi Sato: First Person Shooters as Collaborative Multiprocess Instruments - page 44
    - Tilo Hähnel and Axel Berndt: Studying Interdependencies in Music Performance: An Interactive Tool - page 48
    - Sinan Bokesoy and Patrick Adler: 1city 1001vibrations: development of an interactive sound installation with robotic instrument performance - page 52
    - Tim Murray-Browne, Di Mainstone, Nick Bryan-Kinns and Mark D. Plumbley: The medium is the message: Composing instruments and performing mappings - page 56
    - Seunghun Kim, Luke Keunhyung Kim, Songhee Jeong and Woon Seung Yeo: Clothesline as a Metaphor for a Musical Interface - page 60
    - Pietro Polotti and Maurizio Goina: EGGS in action - page 64
    - Berit Janssen: A Reverberation Instrument Based on Perceptual Mapping - page 68
    - Lauren Hayes: Vibrotactile Feedback-Assisted Performance - page 72
    - Daichi Ando: Improving User-Interface of Interactive EC for Composition-Aid by means of Shopping Basket Procedure - page 76
    - Ryan McGee, Yuan-Yi Fan and Reza Ali: BioRhythm: a Biologically-inspired Audio-Visual Installation - page 80
    - Jon Pigott: Vibration, Volts and Sonic Art: A practice and theory of electromechanical sound - page 84
    - George Sioros and Carlos Guedes: Automatic Rhythmic Performance in Max/MSP: the kin.rhythmicator - page 88
    - Andre Goncalves: Towards a Voltage-Controlled Computer: Control and Interaction Beyond an Embedded System - page 92
    - Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto and Shigeki Sagayama: Polyhymnia: An automatic piano performance system with statistical modeling of polyphonic expression and musical symbol interpretation - page 96
    - Juan Pablo Carrascal and Sergi Jorda: Multitouch Interface for Audio Mixing - page 100
    - Nate Derbinsky and Georg Essl: Cognitive Architecture in Mobile Music Interactions - page 104
    - Benjamin D. Smith and Guy E. Garnett: The Self-Supervising Machine - page 108
    - Aaron Albin, Sertan Senturk, Akito Van Troyer, Brian Blosser, Oliver Jan and Gil Weinberg: Beatscape, a mixed virtual-physical environment for musical ensembles - page 112
    - Marco Fabiani, Gaël Dubus and Roberto Bresin: MoodifierLive: Interactive and collaborative expressive music performance on mobile devices - page 116
    - Benjamin Schroeder, Marc Ainger and Richard Parent: A Physically Based Sound Space for Procedural Agents - page 120
    - Francisco Garcia, Leny Vinceslas, Esteban Maestre and Josep Tubau: Acquisition and study of blowing pressure profiles in recorder playing - page 124
    - Anders Friberg and Anna Källblad: Experiences from video-controlled sound installations - page 128
    - Nicolas d'Alessandro, Roberto Calderon and Stefanie Müller: ROOM#81: Agent-Based Instrument for Experiencing Architectural and Vocal Cues - page 132

    Demo session C (Monday 30 May, 13:30–14:30)
    - Yasuo Kuhara and Daiki Kobayashi: Kinetic Particles Synthesizer Using Multi-Touch Screen Interface of Mobile Devices - page 136
    - Christopher Carlson, Eli Marschner and Hunter Mccurry: The Sound Flinger: A Haptic Spatializer - page 138
    - Ravi Kondapalli and Benzhen Sung: Daft Datum: an Interface for Producing Music Through Foot-Based Interaction - page 140
    - Charles Martin and Chi-Hsia Lai: Strike on Stage: a percussion and media performance - page 142

    Paper session D (Monday 30 May, 14:30–15:30)
    - Baptiste Caramiaux, Patrick Susini, Tommaso Bianco, Frédéric Bevilacqua, Olivier Houix, Norbert Schnell and Nicolas Misdariis: Gestural Embodiment of Environmental Sounds: an Experimental Study - page 144
    - Sebastian Mealla, Aleksander Valjamae, Mathieu Bosi and Sergi Jorda: Listening to Your Brain: Implicit Interaction in Collaborative Music Performances - page 149
    - Dan Newton and Mark Marshall: Examining How Musicians Create Augmented Musical Instruments - page 155

    Paper session E (Monday 30 May, 16:00–17:00)
    - Zachary Seldess and Toshiro Yamada: Tahakum: A Multi-Purpose Audio Control Framework - page 161
    - Dawen Liang, Guangyu Xia and Roger Dannenberg: A Framework for Coordination and Synchronization of Media - page 167
    - Edgar Berdahl and Wendy Ju: Satellite CCRMA: A Musical Interaction and Sound Synthesis Platform - page 173

    Paper session F (Tuesday 31 May, 09:00–10:50)
    - Nicholas J. Bryan and Ge Wang: Two Turntables and a Mobile Phone - page 179
    - Nick Kruge and Ge Wang: MadPad: A Crowdsourcing System for Audiovisual Sampling - page 185
    - Patrick O'Keefe and Georg Essl: The Visual in Mobile Music Performance - page 191
    - Ge Wang, Jieun Oh and Tom Lieber: Designing for the iPad: Magic Fiddle - page 197
    - Benjamin Knapp and Brennon Bortz: MobileMuse: Integral Music Control Goes Mobile - page 203
    - Stephen Beck, Chris Branton, Sharath Maddineni, Brygg Ullmer and Shantenu Jha: Tangible Performance Management of Grid-based Laptop Orchestras - page 207

    Poster session G (Tuesday 31 May, 13:30–14:30)
    - Smilen Dimitrov and Stefania Serafin: Audio Arduino: an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos - page 211
    - Seunghun Kim and Woon Seung Yeo: Musical control of a pipe based on acoustic resonance - page 217
    - Anne-Marie Hansen, Hans Jørgen Andersen and Pirkko Raudaskoski: Play Fluency in Music Improvisation Games for Novices - page 220
    - Izzi Ramkissoon: The Bass Sleeve: A Real-time Multimedia Gestural Controller for Augmented Electric Bass Performance - page 224
    - Ajay Kapur, Michael Darling, James Murphy, Jordan Hochenbaum, Dimitri Diakopoulos and Trimpin: The KarmetiK NotomotoN: A New Breed of Musical Robot for Teaching and Performance - page 228
    - Adrian Barenca Aliaga and Giuseppe Torre: The Manipuller: Strings Manipulation and Multi-Dimensional Force Sensing - page 232
    - Alain Crevoisier and Cécile Picard-Limpens: Mapping Objects with the Surface Editor - page 236
    - Jordan Hochenbaum and Ajay Kapur: Adding Z-Depth and Pressure Expressivity to Tangible Tabletop Surfaces - page 240
    - Andrew Milne, Anna Xambó, Robin Laney, David B. Sharp, Anthony Prechtl and Simon Holland: Hex Player: A Virtual Musical Controller - page 244
    - Carl Haakon Waadeland: Rhythm Performance from a Spectral Point of View - page 248
    - Josep M Comajuncosas, Enric Guaus, Alex Barrachina and John O'Connell: Nuvolet: 3D gesture-driven collaborative audio mosaicing - page 252
    - Erwin Schoonderwaldt and Alexander Refsum Jensenius: Effective and expressive movements in a French-Canadian fiddler's performance - page 256
    - Daniel Bisig, Jan Schacher and Martin Neukom: Flowspace: A Hybrid Ecosystem - page 260
    - Marc Sosnick and William Hsu: Implementing a Finite Difference-Based Real-time Sound Synthesizer using GPUs - page 264
    - Axel Tidemann: An Artificial Intelligence Architecture for Musical Expressiveness that Learns by Imitation - page 268
    - Luke Dahl, Jorge Herrera and Carr Wilkerson: TweetDreams: Making music with the audience and the world using real-time Twitter data - page 272
    - Lawrence Fyfe, Adam Tindale and Sheelagh Carpendale: JunctionBox: A Toolkit for Creating Multi-touch Sound Control Interfaces - page 276
    - Andrew Johnston: Beyond Evaluation: Linking Practice and Theory in New Musical Interface Design - page 280
    - Phillip Popp and Matthew Wright: Intuitive Real-Time Control of Spectral Model Synthesis - page 284
    - Pablo Molina, Martin Haro and Sergi Jordà: BeatJockey: A new tool for enhancing DJ skills - page 288
    - Jan Schacher and Angela Stoecklin: Traces: Body, Motion and Sound - page 292
    - Grace Leslie and Tim Mullen: MoodMixer: EEG-based Collaborative Sonification - page 296
    - Ståle A. Skogstad, Kristian Nymoen, Yago de Quay and Alexander Refsum Jensenius: OSC Implementation and Evaluation of the Xsens MVN suit - page 300
    - Lonce Wyse, Norikazu Mitani and Suranga Nanayakkara: The effect of visualizing audio targets in a musical listening and performance task - page 304
    - Adrian Freed, John Maccallum and Andrew Schmeder: Composability for Musical Gesture Signal Processing using new OSC-based Object and Functional Programming Extensions to Max/MSP - page 308
    - Kristian Nymoen, Ståle A. Skogstad and Alexander Refsum Jensenius: SoundSaber: A Motion Capture Instrument - page 312
    - Øyvind Brandtsegg, Sigurd Saue and Thom Johansen: A modulation matrix for complex parameter sets - page 316

    Demo session H (Tuesday 31 May, 13:30–14:30)
    - Yu-Chung Tseng, Che-Wei Liu, Tzu-Heng Chi and Hui-Yu Wang: Sound Low Fun - page 320
    - Edgar Berdahl and Chris Chafe: Autonomous New Media Artefacts (AutoNMA) - page 322
    - Min-Joon Yoo, Jin-Wook Beak and In-Kwon Lee: Creating Musical Expression using Kinect - page 324
    - Staas de Jong: Making grains tangible: microtouch for microsound - page 326
    - Baptiste Caramiaux, Frederic Bevilacqua and Norbert Schnell: Sound Selection by Gestures - page 329

    Paper session I (Tuesday 31 May, 14:30–15:30)
    - Hernán Kerlleñevich, Manuel Eguia and Pablo Riera: An Open Source Interface based on Biological Neural Networks for Interactive Music Performance - page 331
    - Nicholas Gillian, R. Benjamin Knapp and Sile O'Modhrain: Recognition Of Multivariate Temporal Musical Gestures Using N-Dimensional Dynamic Time Warping - page 337
    - Nicholas Gillian, R. Benjamin Knapp and Sile O'Modhrain: A Machine Learning Toolbox For Musician Computer Interaction - page 343

    Paper session J (Tuesday 31 May, 16:00–17:00)
    - Elena Jessop, Peter Torpey and Benjamin Bloomberg: Music and Technology in Death and the Powers - page 349
    - Victor Zappi, Dario Mazzanti, Andrea Brogni and Darwin Caldwell: Design and Evaluation of a Hybrid Reality Performance - page 355
    - Jérémie Garcia, Theophanis Tsandilas, Carlos Agon and Wendy Mackay: InkSplorer: Exploring Musical Ideas on Paper and Computer - page 361

    Paper session K (Wednesday 1 June, 09:00–10:30)
    - Pedro Lopes, Alfredo Ferreira and Joao Madeiras Pereira: Battle of the DJs: an HCI perspective of Traditional, Virtual, Hybrid and Multitouch DJing - page 367
    - Adnan Marquez-Borbon, Michael Gurevich, A. Cavan Fyans and Paul Stapleton: Designing Digital Musical Interactions in Experimental Contexts - page 373
    - Jonathan Reus: Crackle: A mobile multitouch topology for exploratory sound interaction - page 377
    - Samuel Aaron, Alan F. Blackwell, Richard Hoadley and Tim Regan: A principled approach to developing new languages for live coding - page 381
    - Jamie Bullock, Daniel Beattie and Jerome Turner: Integra Live: a new graphical user interface for live electronic music - page 387

    Paper session L (Wednesday 1 June, 11:00–12:30)
    - Jung-Sim Roh, Yotam Mann, Adrian Freed and David Wessel: Robust and Reliable Fabric, Piezoresistive Multitouch Sensing Surfaces for Musical Controllers - page 393
    - Mark Marshall and Marcelo Wanderley: Examining the Effects of Embedded Vibrotactile Feedback on the Feel of a Digital Musical Instrument - page 399
    - Dimitri Diakopoulos and Ajay Kapur: HIDUINO: A firmware for building driverless USB-MIDI devices using the Arduino microcontroller - page 405
    - Emmanuel Flety and Côme Maestracci: Latency improvement in sensor wireless transmission using IEEE 802.15.4 - page 409
    - Jeff Snyder: The Snyderphonics Manta, a Novel USB Touch Controller - page 413

    Poster session M (Wednesday 1 June, 13:30–14:30)
    - William Hsu: On Movement, Structure and Abstraction in Generative Audiovisual Improvisation - page 417
    - Claudia Robles Angel: Creating Interactive Multimedia Works with Bio-data - page 421
    - Paula Ustarroz: TresnaNet: musical generation based on network protocols - page 425
    - Matti Luhtala, Tiina Kymäläinen and Johan Plomp: Designing a Music Performance Space for Persons with Intellectual Learning Disabilities - page 429
    - Tom Ahola, Teemu Ahmaniemi, Koray Tahiroglu, Fabio Belloni and Ville Ranki: Raja: A Multidisciplinary Artistic Performance - page 433
    - Emmanuelle Gallin and Marc Sirguy: Eobody3: A ready-to-use pre-mapped & multi-protocol sensor interface - page 437
    - Rasmus Bååth, Thomas Strandberg and Christian Balkenius: Eye Tapping: How to Beat Out an Accurate Rhythm using Eye Movements - page 441
    - Eric Rosenbaum: MelodyMorph: A Reconfigurable Musical Instrument - page 445
    - Karmen Franinovic: Flo)(ps: Between Habitual and Explorative Action-Sound Relationships - page 448
    - Margaret Schedel, Rebecca Fiebrink and Phoenix Perry: Wekinating 000000Swan: Using Machine Learning to Create and Control Complex Artistic Systems - page 453
    - Carles F. Julià, Daniel Gallardo and Sergi Jordà: MTCF: A framework for designing and coding musical tabletop applications directly in Pure Data - page 457
    - David Pirrò and Gerhard Eckel: Physical modelling enabling enaction: an example - page 461
    - Thomas Mitchell and Imogen Heap: SoundGrasp: A Gestural Interface for the Performance of Live Music - page 465
    - Tim Mullen, Richard Warp and Adam Jansch: Minding the (Transatlantic) Gap: An Internet-Enabled Acoustic Brain-Computer Music Interface - page 469
    - Stefano Papetti, Marco Civolani and Federico Fontana: Rhythm'n'Shoes: a wearable foot tapping interface with audio-tactile feedback - page 473
    - Cumhur Erkut, Antti Jylhä and Reha Dişçioğlu: A structured design and evaluation model with application to rhythmic interaction displays - page 477
    - Marco Marchini, Panos Papiotis, Alfonso Perez and Esteban Maestre: A Hair Ribbon Deflection Model for Low-Intrusiveness Measurement of Bow Force in Violin Performance - page 481
    - Jonathan Forsyth, Aron Glennon and Juan Bello: Random Access Remixing on the iPad - page 487
    - Erika Donald, Ben Duinker and Eliot Britton: Designing the EP trio: Instrument identities, control and performance practice in an electronic chamber music ensemble - page 491
    - Cavan Fyans and Michael Gurevich: Perceptions of Skill in Performances with Acoustic and Electronic Instruments - page 495
    - Hiroki Nishino: Cognitive Issues in Computer Music Programming - page 499
    - Roland Lamb and Andrew Robertson: Seaboard: a new piano keyboard-related interface combining discrete and continuous control - page 503
    - Gilbert Beyer and Max Meier: Music Interfaces for Novice Users: Composing Music on a Public Display with Hand Gestures - page 507
    - Birgitta Cappelen and Anders-Petter Andersson: Expanding the role of the instrument - page 511
    - Todor Todoroff: Wireless Digital/Analog Sensors for Music and Dance Performances - page 515
    - Trond Engum: Real-time control and creative convolution: exchanging techniques between distinct genres - page 519
    - Andreas Bergsland: The Six Fantasies Machine: an instrument modelling phrases from Paul Lansky's Six Fantasies - page 523

    Demo session N (Wednesday 1 June, 13:30–14:30)
    - Jan Trützschler von Falkenstein: Gliss: An Intuitive Sequencer for the iPhone and iPad - page 527
    - Jiffer Harriman, Locky Casey, Linden Melvin and Mike Repper: Quadrofeelia: A New Instrument for Sliding into Notes - page 529
    - Johnty Wang, Nicolas D'Alessandro, Sidney Fels and Bob Pritchard: SQUEEZY: Extending a Multi-touch Screen with Force Sensing Objects for Controlling Articulatory Synthesis - page 531
    - Souhwan Choe and Kyogu Lee: SWAF: Towards a Web Application Framework for Composition and Documentation of Soundscape - page 533
    - Norbert Schnell, Frederic Bevilacqua, Nicolas Rasamimanana, Julien Blois, Fabrice Guedy and Emmanuel Flety: Playing the "MO": Gestural Control and Re-Embodiment of Recorded Sound and Music - page 535
    - Bruno Zamborlin, Marco Liuni and Giorgio Partesana: (LAND)MOVES - page 537
    - Bill Verplank and Francesco Georg: Can Haptics make New Music? Fader and Plank Demos - page 53