
    Large-scale mobile audio environments for collaborative musical interaction.

    New application spaces and artistic forms can emerge when users are freed from constraints. Human-computer interfaces typically confine users to a fixed location, severely limiting mobility. To overcome this constraint in the context of musical interaction, we present a system for managing large-scale collaborative mobile audio environments, driven by user movement. Multiple participants navigate through physical space while sharing overlaid virtual elements. Each user is equipped with a mobile computing device, GPS receiver, orientation sensor, microphone, and headphones, or some combination of these technologies. We investigate methods of location tracking, wireless audio streaming, and state management between mobile devices and centralized servers. The result is a system that allows mobile users to share virtual scenes, each rendered subjectively in 3-D audio. The audio elements of these scenes can be organized into large-scale spatial audio interfaces, allowing for immersive mobile performance, locative audio installations, and many new forms of collaborative sonic activity.
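    As a concrete illustration of the subjective rendering step, the sketch below computes, per listener, the azimuth and distance gain of each shared virtual source from that user's GPS-derived position and compass heading. This is a minimal sketch under our own assumptions (planar local coordinates, a simple 1/r rolloff); it is not the authors' implementation, and all names are hypothetical.

        import math

        # Hypothetical sketch: per-user spatialization of shared virtual sources.
        # Positions are in a local planar frame (metres), e.g. projected from GPS.

        def render_params(listener_pos, heading_deg, source_pos):
            """Return (azimuth_deg, gain) of one source for one listener.

            Azimuth is measured clockwise from the listener's heading, so it
            can drive a binaural panner; gain follows a simple 1/r rolloff.
            """
            dx = source_pos[0] - listener_pos[0]
            dy = source_pos[1] - listener_pos[1]
            dist = math.hypot(dx, dy)
            bearing = math.degrees(math.atan2(dx, dy))   # 0 deg = north
            azimuth = (bearing - heading_deg) % 360.0    # relative to gaze
            gain = 1.0 / max(dist, 1.0)                  # clamp the near field
            return azimuth, gain

        # Every participant hears the same scene from their own pose.
        scene = {"bell": (10.0, 5.0), "drone": (-3.0, 20.0)}
        users = {"alice": ((0.0, 0.0), 90.0), "bob": ((5.0, 5.0), 0.0)}
        for user, (pos, heading) in users.items():
            for name, src in scene.items():
                az, g = render_params(pos, heading, src)
                print(f"{user} hears {name}: azimuth {az:.1f} deg, gain {g:.2f}")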

    User-specific audio rendering and steerable sound for distributed virtual environments

    We present a method for user-specific audio rendering of a virtual environment shared by multiple participants. The technique differs from methods such as amplitude differencing, HRTF filtering, and wave field synthesis. Instead, we model virtual microphones within the 3-D scene, each of which captures audio to be rendered to a loudspeaker. Spatialization of sound sources is accomplished via acoustic physical modelling, yet our approach also allows for localized signal processing within the scene. To control the flow of sound within the scene, users can steer audio in specific directions. This paradigm leads to many novel applications in which groups of individuals share one continuous interactive sonic space. Keywords: multi-user, spatialization, 3-D arrangement of DSP, steerable audio.
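    The virtual-microphone model can be pictured with a short sketch: every microphone in the scene mixes all sources, weighted by distance rolloff and by how well each source's steering vector points toward that microphone. The cosine directivity, the 1/r law, and all names here are our assumptions for illustration, not the paper's actual formulation.

        import numpy as np

        # Hypothetical sketch of virtual-microphone capture with steerable
        # sources: each microphone channel is a weighted mix of the sources.

        def mic_gain(mic_pos, src_pos, src_steer):
            """Gain of one source into one virtual microphone.

            src_steer is a unit vector: sound is 'aimed' along it, so
            microphones in that direction receive more energy (cosine
            directivity, floored at zero), attenuated by 1/r with distance.
            """
            v = np.asarray(mic_pos, float) - np.asarray(src_pos, float)
            dist = np.linalg.norm(v)
            if dist < 1e-6:
                return 1.0
            directivity = max(np.dot(v / dist, src_steer), 0.0)
            return directivity / max(dist, 1.0)

        def capture(mic_positions, sources):
            """Mix source signals into one channel per virtual microphone."""
            return [sum(mic_gain(m, s["pos"], s["steer"]) * s["signal"]
                        for s in sources)
                    for m in mic_positions]

        # A source steered toward +x dominates the microphone on that side.
        t = np.linspace(0.0, 1.0, 8000)
        src = {"pos": (0, 0, 0), "steer": np.array([1.0, 0.0, 0.0]),
               "signal": np.sin(2 * np.pi * 440 * t)}
        left, right = capture([(2, 0, 0), (-2, 0, 0)], [src])
        print(float(np.abs(left).max()), float(np.abs(right).max()))  # ~0.5, 0.0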

    A spatial interface for audio and music production

    In an effort to find an interface better suited to musical performance, we have developed a novel approach. At its heart is the concept of physical interaction with sound in space, where sound processing occurs at various 3-D locations and the sending of sound signals from one area to another is based on physical models of sound propagation. Control is based on a gestural vocabulary familiar to users, involving natural spatial interaction such as translating, rotating, and pointing in 3-D. This research presents a framework for real-time control of 3-D audio and describes how to construct audio scenes that accomplish various musical tasks. The generality and effectiveness of this approach have enabled us to re-implement several conventional applications, with the benefit of a substantially more powerful interface, and have further led to the conceptualization of several novel applications.
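    The propagation model alluded to above can be approximated very simply: routing sound from one scene location to another applies a delay proportional to distance and a 1/r amplitude drop, so spatial gestures such as translating a node produce audible changes. The following is a minimal sketch under those assumptions, not the authors' framework:

        import numpy as np

        SPEED_OF_SOUND = 343.0   # m/s
        SAMPLE_RATE = 48000      # Hz; assumed values for illustration

        # Hypothetical sketch: a physically inspired connection between two
        # 3-D scene locations, where distance sets both delay and attenuation.

        def propagate(signal, src_pos, dst_pos, rate=SAMPLE_RATE):
            """Delay and attenuate a signal travelling from src_pos to dst_pos."""
            dist = np.linalg.norm(np.asarray(dst_pos, float) -
                                  np.asarray(src_pos, float))
            delay = int(round(dist / SPEED_OF_SOUND * rate))
            gain = 1.0 / max(dist, 1.0)   # clamp to avoid blow-up up close
            return np.concatenate([np.zeros(delay), signal * gain])

        # A 'translate' gesture that moves the destination closer shortens
        # the delay and raises the level, making the gesture audible.
        burst = np.ones(100)
        near = propagate(burst, (0, 0, 0), (3.43, 0, 0))   # ~10 ms of delay
        far = propagate(burst, (0, 0, 0), (34.3, 0, 0))    # ~100 ms of delay
        print(len(near) - len(burst), len(far) - len(burst))  # 480 4800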

    SoundPark: Exploring Ubiquitous Computing through a Mixed Reality Multi-player Game Experiment.

    We describe a ubiquitous computing architecture through a multi-player game application based on the objective of collecting audio clips and depositing them in a staging area. Central to the game are the themes of highly coupled interaction and communication between players with different roles, and an engaging blend of interaction with both the physical and virtual worlds. To this end, numerous technologies, including locative sensing, miniature computing, and portable displays, had to be integrated with a game middleware and an audio scene rendering engine. The result provides a compelling example of the future distributed systems that this paper describes.
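    One plausible reading of the core game loop is sketched below: a clip is picked up when a player's tracked position comes within range of it, and everything the player carries is scored on reaching the staging area. The thresholds, names, and state machine are our guesses for illustration, not the SoundPark middleware itself.

        import math

        PICKUP_RADIUS = 5.0          # metres; an assumed threshold
        STAGING_POS = (0.0, 0.0)     # assumed location of the staging area

        # Hypothetical sketch of SoundPark-style game state: players roam a
        # tracked physical space, collect virtual audio clips, and deposit them.

        class Game:
            def __init__(self, clip_positions):
                self.clips = dict(clip_positions)   # clips still in the field
                self.carrying = {}                  # player -> list of clips
                self.deposited = []

            def update(self, player, pos):
                """Advance game state from one player's new tracked position."""
                bag = self.carrying.setdefault(player, [])
                for name, cpos in list(self.clips.items()):
                    if math.dist(pos, cpos) <= PICKUP_RADIUS:
                        bag.append(name)            # pick up: leaves the field
                        del self.clips[name]
                if bag and math.dist(pos, STAGING_POS) <= PICKUP_RADIUS:
                    self.deposited += bag           # deposit the whole bag
                    self.carrying[player] = []

        game = Game({"birdsong": (20.0, 4.0), "traffic": (-8.0, 12.0)})
        game.update("p1", (18.0, 3.0))   # within range of "birdsong"
        game.update("p1", (1.0, 1.0))    # reaches the staging area
        print(game.deposited)            # ['birdsong']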