13 research outputs found

    Embodied Musical Interaction

    Music is a natural partner to human-computer interaction, offering tasks and use cases for novel forms of interaction. The richness of the relationship between a performer and their instrument in expressive musical performance can offer valuable insights to human-computer interaction (HCI) researchers interested in applying such deep interaction to other fields. The connection between music and HCI is longstanding but not automatic, and its history arguably points to as many differences as overlaps. Music research and HCI research both encompass broad issues and draw on a wide range of methods. In this chapter I discuss how the concept of embodied interaction can be one way to think about music interaction. I propose that the three “paradigms” of HCI and three design accounts from the interaction design literature can serve as a lens through which to consider types of music HCI. I use this conceptual framework to discuss three different musical projects: Haptic Wave, Form Follows Sound, and BioMuse.

    Music: Ars Bene Movandi


    Improving Postural Stability by Means of Novel Multimodal Biofeedback System Based on an Inertial Measurement Unit

    In this paper we propose a system based on multimodal biofeedback for improving postural stability. The core elements of the system are: a sensor unit based on accelerometers and rate gyroscopes; a coding unit able to A/D-convert the signals from the sensor unit and generate three different biofeedback restitutions; and a multimodal biofeedback system embedding (a) a sound system for audio biofeedback including 4 speakers, (b) a belt with four vibrotactile actuators for tactile biofeedback, and (c) a video display terminal for visual biofeedback. The study reports results from an application of the sound biofeedback to 15 subjects in a dedicated protocol.
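    A minimal sketch of how such a pipeline might look, assuming a complementary filter for fusing the accelerometer and rate-gyroscope signals into a tilt estimate. All names, rates, thresholds, and mappings below are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical sketch: fuse accelerometer and rate-gyroscope samples into
# a trunk-tilt estimate, then map it to three feedback "restitutions"
# (audio, tactile, visual). Parameters are illustrative, not from the paper.

SAMPLE_RATE_HZ = 100.0
DT = 1.0 / SAMPLE_RATE_HZ
ALPHA = 0.98  # complementary-filter weight on the integrated gyro path

def estimate_tilt(prev_tilt_deg, gyro_dps, accel_ap_g, accel_vert_g):
    """Complementary filter: trust the gyro short-term, the accelerometer long-term."""
    accel_tilt_deg = math.degrees(math.atan2(accel_ap_g, accel_vert_g))
    gyro_tilt_deg = prev_tilt_deg + gyro_dps * DT  # integrate angular rate
    return ALPHA * gyro_tilt_deg + (1.0 - ALPHA) * accel_tilt_deg

def render_feedback(tilt_deg, dead_zone_deg=2.0, full_scale_deg=10.0):
    """Map one tilt estimate to audio, tactile, and visual channels."""
    sway = max(-1.0, min(1.0, tilt_deg / full_scale_deg))  # normalize to [-1, 1]
    audio_pan = sway  # pan the audio feedback across the speakers
    tactor = None if abs(tilt_deg) < dead_zone_deg else ("front" if sway > 0 else "back")
    visual_offset = sway  # bar deflection shown on the video display
    return audio_pan, tactor, visual_offset
```

    The complementary filter is a common lightweight alternative to a Kalman filter for fusing accelerometer and gyroscope data on embedded hardware, which is why it is used for the sketch here.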

    Computational models for musical sound sources

    As a result of progress in information technologies, algorithms for sound generation and transformation are now ubiquitous in multimedia systems, even though their performance and quality are rarely satisfactory. For the specific needs of music production and multimedia art, sound models are needed that are versatile, responsive to users' expectations, and of high audio quality. Moreover, for human-machine interaction, model flexibility is a major issue. We review some of the most important computational models being used in musical sound production, and we show that models based on the physics of actual or virtual objects can meet most of these requirements, thus allowing the user to rely on high-level descriptions of the sounding entities.
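    As a minimal illustration of the physics-based approach the abstract advocates (not a model from the paper), the classic Karplus-Strong algorithm synthesizes a plucked string from a delay line and a low-pass loss filter:

```python
import random
from collections import deque

def karplus_strong(frequency_hz, duration_s, sample_rate=44100, damping=0.996):
    """Synthesize a plucked string with the Karplus-Strong physical model."""
    period = int(sample_rate / frequency_hz)  # delay-line length sets the pitch
    delay = deque(random.uniform(-1.0, 1.0) for _ in range(period))  # noise burst = pluck
    samples = []
    for _ in range(int(duration_s * sample_rate)):
        first = delay.popleft()
        # Averaging the two oldest samples is a one-pole low-pass filter
        # modelling frequency-dependent losses in the vibrating string.
        delay.append(damping * 0.5 * (first + delay[0]))
        samples.append(first)
    return samples

pluck = karplus_strong(220.0, 1.0)  # one second of an A3 pluck
```

    Here the single damping parameter is the kind of high-level control the abstract describes: it corresponds to how quickly the virtual string loses energy, rather than to any low-level signal parameter.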