23 research outputs found

    A New Medium for Remote Music Tuition

    It is common to learn to play an orchestral musical instrument through regular one-to-one lessons with an experienced musician as a tutor. Students may work with the same tutor for many years, meeting regularly to receive real-time, iterative feedback on their performance. However, musicians travel regularly to audition, teach and perform, and this can sometimes make it difficult to maintain regular contact. In addition, an experienced tutor for a specific instrument or musical style may not be available locally. General instrumental tuition may not be available at all in geographically distributed communities. One solution is to use technology such as videoconferencing to facilitate a remote lesson; however, this fundamentally changes the teaching interaction. For example, as a result of the change in communication medium, the availability of non-verbal cues and perception of relative spatiality are reduced. We describe a study using video-ethnography, qualitative video analysis and conversation analysis to make a fine-grained examination of student–tutor interaction during five co-present woodwind lessons and one video-mediated lesson. Our findings are used to propose an alternative technological solution – an interactive digital score. Rather than the face-to-face configuration enforced by videoconferencing, interacting through a shared digital score, augmented by visual representation of the social cues found to be commonly used in co-present lessons, will better support naturalistic student–tutor interaction during the remote lesson experience. Our findings may also be applicable to other fields where knowledge and practice of a physical skill sometimes need to be taught remotely, such as surgery or dentistry.

    Robot Comedy Lab: experimenting with the social dynamics of live performance

    Copyright © 2015 Katevas, Healey and Harris. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. This document is protected by copyright and was first published by Frontiers. All rights reserved. It is reproduced with permission. This work was funded by EPSRC (EP/G03723X/1) through the Media and Arts Technology Program, an RCUK Center for Doctoral Training.

    Sensing Social Behavior With Smart Trousers

    Nonverbal signals play an important role in social interaction. Body orientation, posture, hand, and leg movements all contribute to successful communication, though research has typically focused on cues transmitted from the torso alone. Here, we explore lower body movements and address two issues. First, the empirical question of what social signals they provide. Second, the technical question of how these movements could be sensed unobtrusively and in situations where traditional methods prove challenging. To approach these issues, we propose a soft, wearable sensing system for clothing. Bespoke “smart” trousers with embedded textile pressure sensors are designed and deployed in seated, multiparty conversations. Using simple machine learning techniques and evaluating individual and community models, our results show that it is possible to distinguish basic conversational states. With the trousers picking up speaking, listening, and laughing, they present an appropriate modality for ubiquitously sensing human behavior.
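
    As a rough illustration of the kind of pipeline such a study implies, the sketch below classifies windows of pressure readings into conversational states. It is a minimal sketch only: the window features, sensor layout, labels and classifier are assumptions made for illustration, not details taken from the paper.

        # Hypothetical sketch: classify conversational states (speaking, listening,
        # laughing) from short windows of textile pressure-sensor readings.
        # Assumes numpy and scikit-learn; all features and labels are illustrative.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def window_features(frames):
            """frames: (n_samples, n_sensors) pressure readings for one time window."""
            return np.concatenate([
                frames.mean(axis=0),                           # average pressure per sensor
                frames.std(axis=0),                            # variability (posture shifts, fidgeting)
                np.abs(np.diff(frames, axis=0)).mean(axis=0),  # frame-to-frame movement
            ])

        def evaluate(windows, labels):
            """windows: iterable of (n_samples, n_sensors) arrays; labels: one state per window."""
            X = np.stack([window_features(w) for w in windows])
            y = np.asarray(labels)
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            return cross_val_score(clf, X, y, cv=5).mean()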

    Participation during First Social Encounters in Schizophrenia

    This work was funded by the Engineering and Physical Sciences Research Council Doctoral Training Programme (EP/P502683/1).

    Co-ordinating Non-mutual Realities: The Asymmetric Impact of Delay on Video-Mediated Music Lessons

    During a music lesson, participants need to co-ordinate both their turns at talk and their turns at playing. Verbal and musical contributions are shaped by their organisation within the turn-taking system. When lessons are conducted remotely by video conference, these mechanisms are disrupted by the asymmetric effects of delay on the interaction; in effect, a “non-mutual reality” comprised of two different conversations at each end of the link. Here we compare detailed case studies of a co-present and a remote music lesson, in order to show how this effect arises, and how it impacts conduct during the lesson.
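
    To see why the delay is experienced asymmetrically, consider a toy model (the latency and reaction-time values below are invented for illustration, not measurements from the study): each participant hears their own contribution immediately but the other's only after the one-way transmission delay, so the silence each side perceives at a turn transition differs.

        # Toy model of how a one-way transmission delay yields "non-mutual" timelines.
        # The 0.3 s latency and 0.1 s reaction time are illustrative values only.
        ONE_WAY_DELAY = 0.3  # seconds of one-way transmission latency (assumed)

        def gap_heard_by_first_speaker(responder_reaction_time, one_way_delay=ONE_WAY_DELAY):
            """Silence the first speaker hears before the reply arrives, when the
            responder actually started replying `responder_reaction_time` seconds
            after the first speaker's turn reached them."""
            return one_way_delay + responder_reaction_time + one_way_delay

        # The tutor finishes a phrase; the student replies after 0.1 s of local silence.
        print(gap_heard_by_first_speaker(0.1))  # tutor hears 0.7 s of silence,
                                                # while the student experienced only 0.1 s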

    Refining musical performance through overlap

    © 2018, Hacettepe University. All rights reserved. Whilst the focus of attention in an instrumental music lesson is refinement of the student’s musical performance, conversation plays an essential role; not just as a way to analyse the student’s musical contributions, but to organise them within the lesson flow. Participants may respond to talk through performance and vice versa, or even spend periods of time exchanging purely musical contributions. The short musical fragments exchanged by the participants are managed within lesson dialogue in ways analogous to conversational turn-taking. Problems in the student’s performance are refined through both student self-initiated and tutor other-initiated repair, initiated by embodied action and play. A fundamental part of turn-taking is managing the transition to a new speaker. The presence of musical contributions allows for additional types of transition, for example from a turn at talk to a musical contribution. In conversation, there is generally a preference for a short pause at the transition to a new speaker, and overlap tends to be minimised when it occurs. Through detailed qualitative video analysis of a one-to-one clarinet lesson, we find differences in the preferences regarding overlap when purely musical contributions are being exchanged, and that the duration of overlap during these exchanges of musical fragments is significant.
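
    As a small illustration of the measurement this implies, the sketch below computes how long two annotated contributions (spoken or played) sound at the same time, given start and end times in seconds. The interval format and example values are hypothetical; the paper’s analysis itself is qualitative video analysis, not this script.

        # Hypothetical helper: overlap between two annotated contributions,
        # each given as (start, end) times in seconds.
        def overlap_duration(a, b):
            """Length of time the two contributions sound simultaneously."""
            start = max(a[0], b[0])
            end = min(a[1], b[1])
            return max(0.0, end - start)

        # e.g. the student's fragment begins before the tutor's demonstration ends:
        tutor_fragment = (12.4, 15.1)
        student_fragment = (14.6, 17.0)
        print(overlap_duration(tutor_fragment, student_fragment))  # 0.5 seconds of overlap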

    Speakers Raise their Hands and Head during Self-Repairs in Dyadic Conversations

    People often encounter difficulties in building shared understanding during everyday conversation. The most common symptoms of these difficulties are self-repairs, when a speaker restarts, edits or amends their utterances mid-turn. Previous work has focused on the verbal signals of self-repair, i.e. speech disfluencies (filled pauses, truncated words and phrases, word substitutions or reformulations), and computational tools now exist that can automatically detect these verbal phenomena. However, face-to-face conversation also exploits rich non-verbal resources, and previous research suggests that self-repairs are associated with distinct hand movement patterns. This paper extends those results by exploring head and hand movements of both speakers and listeners using two motion parameters: height (vertical position) and 3D velocity. The results show that speech sequences containing self-repairs are distinguishable from fluent ones: speakers raise their hands and head more (and move more rapidly) during self-repairs. We obtain these results by analysing data from a corpus of 13 unscripted dialogues, and we discuss how these findings could support the creation of improved cognitive artificial systems for natural human-machine and human-robot interaction.
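
    A minimal sketch of the two motion parameters named in the abstract, assuming tracked 3D positions in an (n_frames, 3) array with a vertical y axis and a fixed sampling rate; the data layout and the 100 fps rate are assumptions for illustration, not details from the paper.

        # Sketch: the two motion parameters above, computed from a tracked 3D
        # position sequence (e.g. a hand or head marker). Layout assumed: metres,
        # shape (n_frames, 3), y vertical, sampled at `fps` frames per second.
        import numpy as np

        def height(positions):
            """Vertical position of the tracked point, per frame."""
            return positions[:, 1]

        def speed_3d(positions, fps=100.0):
            """Magnitude of the 3D velocity between consecutive frames, in m/s."""
            deltas = np.diff(positions, axis=0)           # per-frame displacement
            return np.linalg.norm(deltas, axis=1) * fps   # distance / frame duration

        # Comparing the means of height() and speed_3d() inside self-repair segments
        # against fluent segments is then a matter of slicing by annotated times.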

    Drawing as transcription: how do graphical techniques inform interaction analysis?

    This is an Open Access Article. It is published by Aarhus University Library under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/ Drawing as a form of analytical inscription can provide researchers with highly flexible methods for exploring embodied interaction. Graphical techniques can combine spatial layouts, trajectories of action and anatomical detail, as well as rich descriptions of movement and temporal effects. This paper introduces some of the possibilities and challenges of adapting graphical techniques from life drawing and still life for interaction research. We demonstrate how many of these techniques are used in interaction research by illustrating the postural configurations and movements of participants in a ballet class. We then discuss a prototype software tool that is being developed to support interaction analysis specifically in the context of a collaborative data analysis session.

    Interactive Audio Augmented Reality in Participatory Performance

    Interactive Audio Augmented Reality (AAR) facilitates collaborative storytelling and human interaction in participatory performance. Spatial audio enhances the auditory environment and supports real-time control of media content and the experience. Nevertheless, AAR applied to interactive performance practices remains under-explored. This study examines how audio human-computer interaction can prompt and support actions, and how AAR can contribute to developing new kinds of interactions in participatory performance. The study investigates an AAR participatory performance based on the theater and performance practice of theater maker Augusto Boal. It draws from aspects of multi-player audio-only games and interactive storytelling. A user experience study of the performance shows that people are engaged with interactive content and interact and navigate within the spatial audio content using their whole body. Asymmetric audio cues, playing distinctive content for each participant, prompt verbal and non-verbal communication. The performative aspect was well-received and participants took on roles and responsibilities within their group during the experience.
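
    As a loose illustration of the “asymmetric audio cues” idea, the sketch below gives each participant a different mono cue and places it with simple constant-power stereo panning. The panning approach, cue assignment and azimuths are assumptions made for illustration; they are not the system described above.

        # Illustrative sketch only: each listener hears their own cue, rendered to
        # stereo with constant-power panning by the cue's azimuth for that listener.
        # Assumes numpy; angles and the cue assignment are invented for the example.
        import numpy as np

        def pan_stereo(mono, azimuth_deg):
            """Constant-power pan of a mono signal; -90 = hard left, +90 = hard right."""
            theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)
            return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)

        def render_asymmetric_cues(cue_per_listener, azimuth_per_listener):
            """cue_per_listener: {listener_id: mono ndarray}; each participant is sent
            only their own cue, placed at their own azimuth to the virtual source."""
            return {lid: pan_stereo(cue, azimuth_per_listener[lid])
                    for lid, cue in cue_per_listener.items()}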