27 research outputs found

    Accessible interactive digital signage for visually impaired

    In this workshop we discuss the potential of cross-modal haptic-auditory feedback for empowering visually impaired people to experience Interactive Digital Signage.

    MyoSpat: a system for manipulating sound and light projections through hand gestures

    MyoSpat is an interactive audio-visual system that aims to augment musical performances by empowering musicians to directly manipulate sound and light projections through hand gestures. We present the second iteration of the system, which draws on research findings that emerged from an evaluation of the first version. MyoSpat 2 is designed and developed using the Myo gesture control armband as input device and Pure Data (Pd) as gesture recognition and audio-visual engine. The system is informed by human-computer interaction (HCI) principles: tangible computing and embodied, sonic and music interaction design (MiXD). This paper reports a description of the system and its audio-visual feedback design. Finally, we present an evaluation of the system, its potential use in different multimedia contexts, and its role in exploring embodied, sonic and music interaction principles.
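
    As an illustrative sketch only (not the authors' implementation): the kind of mapping described above can be reproduced in miniature by forwarding a gesture-derived value to a Pure Data patch over OSC. The port and the /gesture/roll address below are assumptions, and python-osc stands in for whatever transport the real system uses.

        # Hypothetical sketch: send a normalised hand-roll value to a Pd patch
        # assumed to be listening for OSC on localhost:9000.
        import math
        from pythonosc.udp_client import SimpleUDPClient

        client = SimpleUDPClient("127.0.0.1", 9000)  # assumed Pd receive port

        def send_roll(roll_radians: float) -> None:
            """Scale a roll angle from [-pi, pi] to [0, 1] and send it to the patch."""
            normalised = (roll_radians + math.pi) / (2 * math.pi)
            client.send_message("/gesture/roll", normalised)  # placeholder OSC address

        send_roll(0.5)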

    The Effect of Co-adaptive Learning & Feedback in Interactive Machine Learning

    In this paper, we consider the effect of co-adaptive learning on the training and evaluation of real-time, interactive machine learning systems, referring to specific examples in our work on action-perception loops, feedback for virtual tasks, and training of regression and temporal models. Through these studies we have encountered challenges when designing and assessing expressive, multimodal interactive systems. We discuss those challenges for machine learning and human-computer interaction, and propose directions for future research.

    Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning & Concatenative Synthesis

    This paper presents a method for mapping embodied gesture, acquired with electromyography and motion sensing, to a corpus of small sound units organised by derived timbral features using concatenative synthesis. Gestures and sounds can be associated directly using individual units and static poses, or by using a sound tracing method that leverages our intuitive associations between sound and embodied movement. We propose a method for augmenting corpus density to enable expressive variation on the original gesture-timbre space.
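
    A minimal sketch of the unit-selection step implied above, under stated assumptions: the corpus is represented as one row of timbral descriptors per sound unit, and an incoming gesture is mapped to a point in that feature space. Descriptor choice, dimensionality and the data are placeholders; this is not the paper's implementation.

        # Hypothetical sketch: nearest-neighbour selection of a corpus unit
        # from a point in gesture-timbre space (placeholder random data).
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        corpus_features = np.random.rand(500, 13)   # one row of timbral descriptors per unit
        index = NearestNeighbors(n_neighbors=1).fit(corpus_features)

        def select_unit(target_point: np.ndarray) -> int:
            """Return the index of the corpus unit closest to the target point."""
            _, idx = index.kneighbors(target_point.reshape(1, -1))
            return int(idx[0, 0])

        unit_id = select_unit(np.random.rand(13))    # playback of the unit happens elsewhere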

    HarpCI, Empowering Performers to Control and Transform Harp Sounds in Live Performance

    The goal of our research is to provide harpists with the tools to control and transform the sounds of their instrument in a natural and musical way. We consider the development of music with live electronics, with particular reference to the harp repertoire, and include interviews with six harpists who use technology in their professional performance practice. We then present HarpCI, a case study that explores how gestures can be used to control and transform sound and light projection in live performance with the electric harp. HarpCI draws on research from the areas of Human-Computer Interaction (HCI) and Music Interaction Design (MiXD) to extend the creative possibilities available to the performer, and demonstrates our approach to bridging the gap between the performer/composer and the harp on one side, and the technology on the other. We discuss the use of guitar pedals with the electric harp and the limitations they impose, and then introduce the MyoSpat system as a potential solution to this issue. MyoSpat aims to give musicians control over auditory and visual aspects of the performance through easy-to-learn, intuitive and natural hand gestures. It also aims to enhance the compositional process for instrument and live electronics through a new way of notating music for gesturally controlled interactive systems. The system uses the Myo® armband gestural controller, a device for controlling live sound processing that is non-invasive to the performer and their instrumental technique. The combination of these elements allows the performer to experience a tangible connection between gesture and sound production. Finally, we describe the experience of Eleanor Turner, who composed and performed The Wood and the Water using MyoSpat, and we conclude by presenting the outcomes from HarpCI workshops delivered at Cardiff Metropolitan University for the Camac Harp Weekend, at Royal Birmingham Conservatoire's Integra Lab, and at Southampton University.

    Myo Mapper: a Myo armband to OSC mapper

    Myo Mapper is a free and open-source cross-platform application for mapping data from the Myo armband gestural device into Open Sound Control (OSC) messages. It represents a 'quick and easy' solution for exploring the Myo's potential for realising new interfaces for musical expression. Together with details of the software, this paper reports some applications in which Myo Mapper has been successfully used and a qualitative evaluation. We then propose guidelines for using Myo data in interactive artworks, based on insights gained from the works described and from the evaluation. Findings show that Myo Mapper empowers artists and non-expert developers to easily take advantage of high-level features of Myo data for realising interactive artistic works. It also facilitates the recognition of poses and gestures beyond those included with the product, by using third-party interactive machine learning software.
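
    For context, a minimal listener sketch: because Myo Mapper emits plain OSC, its output can be inspected with a few lines of python-osc. The port below and the catch-all handler are assumptions; the actual address patterns and port are whatever is configured in Myo Mapper itself.

        # Hypothetical sketch: print every OSC message arriving from Myo Mapper.
        from pythonosc.dispatcher import Dispatcher
        from pythonosc.osc_server import BlockingOSCUDPServer

        def print_message(address, *args):
            """Log each incoming message so the available Myo data streams can be inspected."""
            print(address, args)

        dispatcher = Dispatcher()
        dispatcher.set_default_handler(print_message)   # catch all addresses
        server = BlockingOSCUDPServer(("127.0.0.1", 5432), dispatcher)  # assumed output port
        server.serve_forever()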

    Improvising through the senses: a performance approach with the indirect use of technology

    This article explores and proposes new ways of performing in a technology-mediated environment. We present a case study that examines feedback-loop relationships between a dancer and a pianist. Rather than using data from sensor technologies to directly control and affect musical parameters, we captured data from a dancer's arm movements and mapped them onto a bespoke device that stimulates the pianist's tactile sense through vibrations. The pianist identifies and interprets the tactile sensory experience, with his improvised performance responding to changes in the haptic information received. Our system presents a new way of technology-mediated performer interaction through tactile feedback channels, enabling the user to establish new creative pathways. We present a classification of vibrotactile interaction as a means of communication, and we conclude that users experience multi-point vibrotactile feedback as one holistic experience rather than as a collection of discrete feedback points.

    Designing Gestures for Continuous Sonic Interaction

    We present a system that allows users to try different ways of training neural networks and temporal models to associate gestures with time-varying sound. We created a software framework for this and evaluated it in a workshop-based study. We build upon research in sound tracing and mapping-by-demonstration to ask participants to design gestures for performing time-varying sounds using a multimodal device combining inertial measurement (IMU) and muscle sensing (EMG). We presented users with two classical techniques from the literature, static position regression and Hidden Markov Model-based temporal modelling, and propose a new technique for capturing gesture anchor points on the fly as training data for neural-network-based regression, called Windowed Regression. Our results show trade-offs between accurate, predictable reproduction of source sounds and exploration of the gesture-sound space. Several users were attracted to our Windowed Regression technique. This paper will be of interest to musicians engaged in going from sound design to gesture design, and offers a workflow for interactive machine learning.
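
    As a rough illustration of the regression-by-demonstration idea behind the techniques compared above (not the paper's Windowed Regression code): windows of IMU/EMG features recorded during a demonstration are paired with the synthesis parameters active at that moment, and a neural network learns the mapping. Window length, feature counts and the data here are placeholders.

        # Hypothetical sketch: train a regressor from flattened sensor windows
        # to synthesis parameters, then map a live window to parameters.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        WINDOW, N_FEATURES, N_PARAMS = 10, 12, 4     # placeholder sizes

        # Placeholder demonstration data standing in for recorded sound tracings.
        X_train = np.random.rand(200, WINDOW * N_FEATURES)
        y_train = np.random.rand(200, N_PARAMS)

        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
        model.fit(X_train, y_train)

        live_window = np.random.rand(WINDOW, N_FEATURES)
        params = model.predict(live_window.ravel()[None, :])  # one parameter vector per prediction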