
    Processing of symbolic music notation via multimodal performance data: Brian Ferneyhough's Lemma-Icon-Epigram for solo piano, phase 1

    In the "Performance Notes" to his formidable solo piano work Lemma-Icon-Epigram, British composer Brian Ferneyhough proposes a top-down learning strategy: its first phase would consist in an "overview of gestural patterning" before delving into the notorious rhythmic intricacies of this most complex notation. In the current paper, we propose a methodology for inferring such patterning from multimodal performance data. In particular, we have a) conducted a qualitative analysis of the correlations between the performance data (an audio recording, 12-axis acceleration and gyroscope signals captured by inertial sensors, Kinect video, and MIDI) and the implicit annotation of pitch during a 'sight-reading' performance; b) observed and documented the correspondence between patterns in the gestural signals and patterns in the score annotations; and c) produced joint tablature-like representations, which inscribe the gestural patterning back into the notation while reducing the pitch material of the original by 70-80%. In addition, we have incorporated this representation into videos and interactive multimodal tablatures using INScore. Our work draws on recent studies in the fields of gesture modeling and interaction. It extends the authors' previous work on an embodied model of navigation of complex notation and on GesTCom, an application for offline and real-time gestural control of complex notation. Future prospects include the probabilistic modeling of gesture-to-notation mappings, towards the design of interactive systems which learn along with the performer while cutting through textual complexity.
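
    As a minimal sketch of one step such a methodology could take, the Python fragment below segments an acceleration-magnitude stream into candidate gestural units at low-energy troughs; the function name, window size, and threshold are our own illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def segment_gesture(acc, fs=100.0, win=0.25, thresh=0.5):
    """Split an acceleration-magnitude signal into candidate gestural
    units at low-energy troughs (an illustrative stand-in for the
    qualitative analysis described above)."""
    n = int(win * fs)
    # short-time energy, one value per analysis window
    energy = np.array([np.mean(acc[i:i + n] ** 2)
                       for i in range(0, len(acc) - n, n)])
    quiet = energy < thresh * energy.mean()
    # boundaries where the signal first drops below the adaptive threshold
    bounds = [i * n for i in range(1, len(quiet)) if quiet[i] and not quiet[i - 1]]
    return [0] + bounds + [len(acc)]

# toy signal: bursts of movement separated by near-stillness
rng = np.random.default_rng(0)
acc = np.concatenate([a * rng.standard_normal(200)
                      for a in (2.0, 0.1, 3.0, 0.1, 1.5)])
print(segment_gesture(acc))  # boundaries near samples 200 and 600
```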

    Segments and Mapping for Scores and Signal Representations

    We present a general theoretical framework to describe segments and the different possible mappings that can be established between them. Each segment can be related to different music representations: graphical scores, music signals, or gesture signals. This theoretical formalism is general and is compatible with a large number of problems found in sound and gesture computing. We describe examples we have developed in interactive score representation superposed with signal representation, and in the description of synchronization between gesture and sound signals.
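
    Read this way, the formalism lends itself to a small data structure; the sketch below (all names are illustrative, not the paper's own) represents segments as intervals in each representation's coordinate space and mappings as relations queryable in either direction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    """A half-open interval [begin, end) in one representation's own
    coordinate space (graphic x-pixels, audio seconds, MIDI ticks, ...)."""
    representation: str
    begin: float
    end: float

class Mapping:
    """A relation between segments of two representations, queried in
    either direction."""
    def __init__(self):
        self.pairs = []  # list of (Segment, Segment)

    def add(self, a, b):
        self.pairs.append((a, b))

    def image(self, seg):
        """All segments related to `seg`, whichever side it sits on."""
        return [t for s, t in self.pairs if s == seg] + \
               [s for s, t in self.pairs if t == seg]

# relate a graphic score region to the audio span it denotes
score = Segment("score.png", 120, 240)        # pixels
audio = Segment("performance.wav", 3.2, 5.8)  # seconds
m = Mapping()
m.add(score, audio)
print(m.image(score))
```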

    Gesture cutting through textual complexity: Towards a tool for online gestural analysis and control of complex piano notation processing

    This project introduces a recently developed prototype for real-time processing and control of complex piano notation through the pianist's gesture. The tool materializes an embodied-cognition-influenced paradigm of pianists' interaction with complex notation (embodied or corporeal navigation), drawing on the latest developments in the computer-music fields of musical representation (augmented and interactive musical scores via INScore) and multimodal interaction (Gesture Follower). Gestural, video, audio, and MIDI data are appropriately mapped onto the musical score, turning it into a personalized, dynamic, multimodal tablature. This tablature may be used for efficient learning, performance, and archiving, with potential applications in pedagogy, composition, improvisation, and score following. The underlying metaphor for such a tool is that instrumentalists touch or cut through notational complexity using performative gestures, as much as they touch their own keyboards. Their action on the instrument forms an integral part of their understanding, which can be represented as a gestural processing of the notation. Beyond the applications already mentioned, new perspectives in the piano performance of post-1945 complex notation and in musicology (the 'performative turn'), as well as the emerging field of 'embodied and extended cognition', are indispensable to this project.
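
    Since INScore is driven by OSC messages, pushing a gesture-derived annotation onto the score might look like the sketch below; it assumes the python-osc package, INScore's default UDP input port (7000), and its /ITL address space, all of which should be checked against the INScore version in use.

```python
from pythonosc.udp_client import SimpleUDPClient

# assumed: an INScore viewer listening on its default UDP input port
client = SimpleUDPClient("127.0.0.1", 7000)

def annotate(obj, num, den, label):
    """Move a score object to a musical date (a rational, e.g. 3/4 of a
    whole note) and attach a text label derived from gesture analysis."""
    client.send_message(f"/ITL/scene/{obj}", ["date", num, den])
    client.send_message(f"/ITL/scene/{obj}Label", ["set", "txt", label])

# e.g. mark the point reached by the follower as one detected gesture
annotate("cursor", 3, 4, "gesture 7: arpeggiated sweep")
```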

    Capture, modeling and recognition of expert technical gestures in wheel-throwing art of pottery

    This research has been conducted in the context of the ArtiMuse project, which aims at the modeling and renewal of rare gestural knowledge and skills involved in traditional craftsmanship, more precisely in the art of wheel-throwing pottery. This knowledge and these skills constitute Intangible Cultural Heritage: the fruit of diverse expertise founded and propagated over the centuries thanks to the ingeniousness of gesture and the creativity of the human spirit. Nowadays, this expertise is often threatened with disappearance, both because of the difficulty of resisting globalization and because most 'expertise holders' are not easily accessible due to geographical or other constraints. In this paper, a methodological framework for capturing and modeling gestural knowledge and skills in wheel-throwing pottery is proposed. It is based on capturing gestures with wireless inertial sensors and on statistical modeling. In particular, we used a system that allows online alignment of gestures using a modified Hidden Markov Model. The methodology is implemented in a Human-Computer Interface that permits both the modeling and the recognition of expert technical gestures. The system could be used to assist in learning these gestures by giving continuous real-time feedback on the difference between expert and learner gestures. It has been tested and evaluated with several potters whose rare expertise is strongly tied to their local identity.
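
    The abstract does not specify the modification made to the HMM; as a rough illustration of the underlying idea, the sketch below runs a standard left-to-right HMM forward pass whose states are the samples of an expert template, yielding the learner's estimated progress through the gesture (all parameters are illustrative).

```python
import numpy as np

def follow(template, stream, sigma=0.2, p_stay=0.4):
    """Align an incoming 1-D gesture stream against a recorded expert
    template, online, with a left-to-right HMM whose states are the
    template's samples; yields estimated progress in [0, 1]."""
    n = len(template)
    alpha = np.zeros(n)
    alpha[0] = 1.0  # start at the beginning of the gesture
    for x in stream:
        # transition: a state either repeats or advances by one
        moved = np.empty(n)
        moved[0] = alpha[0] * p_stay
        moved[1:] = alpha[1:] * p_stay + alpha[:-1] * (1 - p_stay)
        # Gaussian observation likelihood around each template sample
        alpha = moved * np.exp(-0.5 * ((x - template) / sigma) ** 2)
        alpha /= alpha.sum()
        yield int(np.argmax(alpha)) / (n - 1)

expert = np.sin(np.linspace(0, np.pi, 50))           # recorded expert gesture
learner = np.sin(np.linspace(0, np.pi, 80)) + 0.05   # slower, slightly offset
for progress in follow(expert, learner):
    pass
print(f"final estimated progress: {progress:.2f}")
```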

    Altering body perception and emotion in physically inactive people through movement sonification

    Physical inactivity is an increasing problem. It has been linked to psychological and emotional barriers related to the perception of one's body, such as its physical capabilities. Designing technologies that increase physical activity in inactive people remains a challenge. We propose a sound interactive system in which inputs from movement sensors integrated into shoes are transformed into sounds that evoke body sensations at a metaphorical level. Our user study investigates the effects of various gesture-sound mappings on the perception of one's body and its movement qualities (e.g. being flexible or agile), on the related emotional state, and on movement patterns, as people performed two exercises: walking and a thigh stretch. The results confirm the effect of the 'metaphor' conditions versus the control conditions on feelings of body weight; on feeling less tired and more in control; and on being more comfortable, motivated, and happier. These changes were linked to changes in affective state and body movement. We discuss the results in terms of how acting upon body perception and affective states through sensory feedback may in turn enhance physical activity, and the opportunities our findings open for the design of wearable technologies and interventions for inactive populations. The work was supported by the Ministerio de Economía, Industria y Competitividad of Spain (grants RYC-2014-15421 and PSI2016-79004-R, "MAGIC SHOES", AEI/FEDER, UE, and doctoral training grant BES-2017-080471). FB was supported by the ELEMENT project (ANR-18-CE33-0002).
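
    The actual mappings used in the study are not given in the abstract; purely as an illustration of what a 'metaphorical' gesture-sound mapping could look like, one window of foot-acceleration data might be turned into synthesis parameters as sketched below (parameter names and scalings are invented).

```python
import numpy as np

def sonify_step(acc_mag, light=True):
    """Map one window of foot-acceleration magnitude to synthesis
    parameters; a 'light' metaphor shifts pitch up and brightens the
    timbre, a 'heavy' one does the opposite."""
    energy = float(np.sqrt(np.mean(acc_mag ** 2)))  # RMS of the window
    base = 440.0 if light else 110.0
    return {
        "pitch_hz": base * (1 + energy),  # more vigorous -> higher
        "brightness": min(1.0, energy) if light else max(0.0, 0.3 - energy),
        "gain": min(1.0, 0.2 + energy),
    }

window = np.abs(np.random.default_rng(1).standard_normal(128)) * 0.4
print(sonify_step(window, light=True))
print(sonify_step(window, light=False))
```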

    Beyond Recognition: Using Gesture Variation for Continuous Interaction

    Gesture-based interaction is widespread in touch-screen interfaces. The goal of this paper is to tap the richness of expressive variation in gesture to facilitate continuous interaction. We achieve this through novel techniques for adapting to and estimating gesture characteristics. We describe two experiments. The first aims at understanding whether users can control certain gestural characteristics and whether that control depends on the gesture vocabulary. The second study uses a machine-learning technique based on particle filtering to simultaneously recognize a gesture and measure its variation. With this technology, we create a gestural interface for a playful photo-processing application. From these two studies, we show that 1) multiple characteristics can be varied independently in slower gestures (Study 1), and 2) users find gesture-only interaction less pragmatic but more stimulating than traditional menu-based systems (Study 2).
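
    A particle filter for this task (in the spirit of gesture-variation following) keeps a population of hypotheses over which gesture is being drawn and how it varies; the sketch below is a stripped-down illustration with invented templates and noise levels, and it omits resampling for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

def gvf_step(particles, weights, templates, x, sigma=0.1):
    """One update of a particle filter whose state is (template id,
    phase, speed): it recognizes which template is being drawn while
    estimating how fast (one 'variation') it is performed."""
    ids, phase, speed = particles
    # diffuse speed, then advance phase accordingly
    speed = np.clip(speed + 0.02 * rng.standard_normal(len(speed)), 0.2, 3.0)
    phase = np.clip(phase + 0.02 * speed, 0.0, 1.0)
    # weight by how well each particle's predicted sample matches input x
    pred = np.array([templates[i][min(int(p * len(templates[i])),
                                      len(templates[i]) - 1)]
                     for i, p in zip(ids, phase)])
    weights = weights * np.exp(-0.5 * ((x - pred) / sigma) ** 2)
    weights /= weights.sum()
    return (ids, phase, speed), weights

templates = [np.sin(np.linspace(0, np.pi, 100)),  # gesture A: an arc
             np.linspace(0.0, 1.0, 100)]          # gesture B: a straight stroke
n = 500
particles = (rng.integers(0, 2, n), np.zeros(n), np.ones(n))
weights = np.ones(n) / n
for x in np.sin(np.linspace(0, np.pi, 60)):  # a performer draws A, a bit slowly
    particles, weights = gvf_step(particles, weights, templates, x)
ids = particles[0]
print("recognized:", "A" if weights[ids == 0].sum() > 0.5 else "B",
      "| estimated speed:", round(float(np.average(particles[2], weights=weights)), 2))
```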

    Innovative tools for creating sound sketches combining vocalizations and gestures

    Designers produce different kinds of physical and/or digital representations during the successive phases of a design process. These intermediate representational objects enable and support the embodiment and externalization of the designer's ideas, as well as the mediation between the people involved in the different phases of the design (product designers, engineers, marketing, ...). Sound designers likewise produce intermediate sounds to present to clients, through an iterative process of refining these proposals. These intermediate sounds are thus sound sketches that represent the successive stages of a constantly evolving creative process. We present here a sound-sketching method based on the voice and extended by the use of corpus-based sound synthesis. This tool was developed within the European project SkAT-VG (Sketching Audio Technologies using Vocalizations and Gestures). The use of vocalization is rooted in design practice, stimulating both the generation of sound proposals and the mediation between creative stakeholders.
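
    In corpus-based concatenative synthesis, the vocal sketch steers the selection of pre-recorded grains by descriptor matching; a toy nearest-neighbour selector might look like the following (filenames and descriptor values are invented, and real systems use far richer descriptors).

```python
import numpy as np

# toy corpus: each grain described by (spectral centroid in Hz, loudness)
corpus = {
    "metal_hit.wav": (3200.0, 0.8),
    "wood_tap.wav": (900.0, 0.5),
    "low_rumble.wav": (150.0, 0.9),
}

def select_units(voice_features):
    """For each analysis frame of the designer's vocal sketch, pick the
    corpus grain whose descriptors are nearest (normalized Euclidean)."""
    names = list(corpus)
    feats = np.array([corpus[n] for n in names])
    scale = feats.max(axis=0) - feats.min(axis=0)  # per-descriptor range
    out = []
    for f in voice_features:
        d = np.linalg.norm((feats - f) / scale, axis=1)
        out.append(names[int(np.argmin(d))])
    return out

# a vocalization gliding from dark to bright
sketch = np.array([(200.0, 0.7), (1000.0, 0.6), (3000.0, 0.9)])
print(select_units(sketch))  # -> rumble, tap, metal
```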

    Probabilistic Models for Interaction between Agents

    In a human-machine interaction context, our objective is the elaboration of a generic and plausible probabilistic interaction model capable of driving both an Embodied Conversational Agent (ECA) in an interaction setting and a Creative Musical Agent (CMA) in a musical-improvisation context.
    • 
