3 research outputs found

    Composing and realising a game-like performance for Disklavier and electronics

    “Climb!” is a musical composition that combines the ideas of a classical virtuoso piece and a computer game. We present a case study of the composition process and realisation of “Climb!”, written for Disklavier and a digital interactive engine that was co-developed alongside the musical score. Specifically, the engine combines a system for recognising and responding to musical trigger phrases with a dynamic digital score renderer. This tool chain allows the composer’s original score to include notational elements, such as trigger phrases, that are automatically extracted to configure the engine for live performance. We reflect holistically on the development process to date and highlight the emerging challenges and opportunities. For example, these include the potential for further developing the workflow around the scoring process and the ways in which support for musical triggers has shaped the compositional approach.
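
    The trigger-phrase mechanism lends itself to a small illustration. The sketch below is not the engine described above, only a minimal Python sketch under one plausible assumption: a trigger phrase is matched as an exact pitch sequence against a live stream of MIDI note-on events. All names (TriggerPhraseMatcher, feed) are hypothetical.

        from collections import deque

        class TriggerPhraseMatcher:
            """Fires a callback when the most recent notes match a phrase."""

            def __init__(self, phrase, on_match):
                self.phrase = tuple(phrase)   # MIDI note numbers, e.g. (60, 64, 67)
                self.on_match = on_match
                self.recent = deque(maxlen=len(self.phrase))

            def feed(self, note):
                """Call once per note-on event arriving from the piano."""
                self.recent.append(note)
                if tuple(self.recent) == self.phrase:
                    self.on_match(self.phrase)

        # Usage: advance the piece when the pitches C4-E4-G4 are played in order.
        matcher = TriggerPhraseMatcher([60, 64, 67],
                                       lambda p: print("trigger matched:", p))
        for note in [62, 60, 64, 67]:   # simulated incoming note-on events
            matcher.feed(note)

    A real engine would need tolerance for wrong notes, timing, and octave variation; exact matching is just the simplest starting point.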

    Lessons Learned in Exploring the Leap Motion™ Sensor for Gesture-based Instrument Design

    The Leap Motion™ sensor offers fine-grained gesture recognition and hand tracking. Since its release, there have been several uses of the device for instrument design, musical interaction, and expression control, documented through online video. However, there has been little formal documented investigation of the potential and challenges of the platform in this context. This paper presents lessons learned from work in progress on the development of musical instruments and control applications using the Leap Motion™ sensor. Two instruments, Air-Keys and Air-Pads, are presented, and the potential for augmenting a traditional keyboard is explored. The results show that the platform is promising in this context, but various challenges, both physical and logical, must be overcome.
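
    As an illustration of the kind of mapping such open-air instruments rely on, here is a minimal Python sketch. It is not the published Air-Keys implementation: the sensor API is abstracted away (assume a client delivers palm coordinates in millimetres per frame), and the function names, spans, and thresholds are assumptions.

        def position_to_note(x_mm, span_mm=400.0, num_keys=12, base_note=60):
            """Quantise palm x (centred on the sensor) to a MIDI note number."""
            # Normalise x into [0, 1) across the playable span, clamping the edges.
            t = min(max((x_mm + span_mm / 2) / span_mm, 0.0), 0.999)
            return base_note + int(t * num_keys)

        def is_press(y_mm, threshold_mm=150.0):
            """Treat a downward move below the height threshold as a key press."""
            return y_mm < threshold_mm

        # A palm at x = -30 mm, y = 120 mm presses MIDI note 65 (F4).
        print(position_to_note(-30.0), is_press(120.0))   # -> 65 True

    The physical challenges the paper alludes to show up even here: without a surface to touch, the press threshold and key boundaries must be communicated to the player by other feedback.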

    Not All Gestures Are Created Equal: Gesture and Visual Feedback in Interaction Spaces.

    As multi-touch mobile computing devices and open-air gesture sensing technology become increasingly commoditized and affordable, they are also becoming more widely adopted. This adoption creates a need for interaction designs made specifically for gesture-based interfaces. However, a deeper understanding of the interplay between gesture input and visual and sonic output is needed to make meaningful advances in design. This thesis addresses this crucial step by investigating the interrelation between gesture-based input and visual representation and feedback in gesture-driven creative computing. This thesis underscores that not all gestures are created equal and that multiple factors affect their performance. For example, a drag gesture in a visual programming scenario performs differently than in a target acquisition task. The work presented here (i) examines the role of visual representation and mapping in gesture input, (ii) quantifies user performance differences in gesture input to examine the effect of multiple factors on gesture interactions, and (iii) develops tools and platforms for exploring visual representations of gestures. A range of gesture spaces and scenarios, from continuous sound control with open-air gestures to mobile visual programming with discrete gesture-driven commands, was assessed. Findings from this thesis reveal a rich space of complex interrelations between gesture input and visual feedback and representation. The contributions of this thesis also include the development of an augmented musical keyboard with 3-D continuous gesture input and projected visualization, as well as a touch-driven visual programming environment for interactively constructing dynamic interfaces. These designs were evaluated in a series of user studies in which gesture-to-sound mapping was found to have a significant effect on user performance, along with other factors such as the choice of visual representation and device size. A number of counter-intuitive findings point to potentially complex interactions between factors such as device size, task, and scenario, which exposes the need for further research. For example, the size of the device was found to have contradictory effects in two different scenarios. Furthermore, this work presents a multi-touch gestural environment to support the prototyping of gesture interactions.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/113456/1/yangqi_1.pd
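
    The finding that gesture-to-sound mapping significantly affects user performance can be made concrete with a hedged Python sketch: two plausible mappings from hand height to loudness. These are not the thesis's actual mappings; the ranges and curve parameter are assumptions chosen for illustration.

        import math

        def linear_amplitude(height_mm, lo=100.0, hi=500.0):
            """Hand height -> amplitude in [0, 1] with a linear response."""
            return min(max((height_mm - lo) / (hi - lo), 0.0), 1.0)

        def exponential_amplitude(height_mm, lo=100.0, hi=500.0, curve=3.0):
            """Same input range, but most of the change happens near the top."""
            t = linear_amplitude(height_mm, lo, hi)
            return (math.exp(curve * t) - 1.0) / (math.exp(curve) - 1.0)

        # The same mid-height gesture sounds very different under each mapping,
        # the kind of difference a user study can quantify.
        print(round(linear_amplitude(300.0), 2),        # 0.5
              round(exponential_amplitude(300.0), 2))   # 0.18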