
    Dynamic Primitives for Gestural Interaction


    BodySpace: inferring body pose for natural control of a music player

    We describe the BodySpace system, which uses inertial sensing and pattern recognition to allow the gestural control of a music player by placing the device at different parts of the body. We demonstrate a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based techniques can shape gestural interaction.
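    The abstract does not detail BodySpace's segmentation algorithm, but a common baseline for segmenting gestures from inertial data is to threshold the accelerometer magnitude's deviation from rest (about 1 g). The sketch below illustrates that generic heuristic only; the function name, threshold, and sampling rate are illustrative assumptions, not the authors' method.

    ```python
    import numpy as np

    def segment_gestures(accel, fs=100, thresh=1.5, min_len=0.2):
        """Return (start_s, end_s) spans where a 3-axis accelerometer
        stream deviates strongly from rest. A simple energy-threshold
        heuristic for illustration, not the BodySpace algorithm."""
        mag = np.linalg.norm(accel, axis=1)       # overall acceleration magnitude
        active = np.abs(mag - 1.0) > thresh       # samples well away from rest (~1 g)
        segments, start = [], None
        for i, a in enumerate(active):
            if a and start is None:
                start = i                         # gesture candidate begins
            elif not a and start is not None:
                if (i - start) / fs >= min_len:   # discard spurious blips
                    segments.append((start / fs, i / fs))
                start = None
        if start is not None and (len(active) - start) / fs >= min_len:
            segments.append((start / fs, len(active) / fs))
        return segments
    ```

    A recognizer (e.g. a template matcher or HMM) would then classify each extracted span; the paper's contribution lies in how that segmentation and recognition are done for body-relative device placement.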

    Classifying types of gesture and inferring intent

    In order to infer intent from gesture, a rudimentary classification of types of gestures into five main classes is introduced. The classification is intended as a basis for incorporating the understanding of gesture into human-robot interaction (HRI). Some requirements for the operational classification of gesture by a robot interacting with humans are also suggested.

    Sensing and mapping for interactive performance

    This paper describes a trans-domain mapping (TDM) framework for translating meaningful activities from one creative domain onto another. The multi-disciplinary framework is designed to facilitate an intuitive and non-intrusive interactive multimedia performance interface that offers the users or performers real-time control of multimedia events using their physical movements. It is intended to be a highly dynamic real-time performance tool, sensing and tracking activities and changes, in order to provide interactive multimedia performances. From a straightforward definition of the TDM framework, this paper reports several implementations and multi-disciplinary collaborative projects using the proposed framework, including a motion and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker, and discusses different aspects of mapping strategies in their context. Plausible future directions, developments and explorations with the proposed framework, including stage augmentation and virtual and augmented reality, which involve sensing and mapping of physical and non-physical changes onto multimedia control events, are discussed.

    Autonomous learning and reproduction of complex sequences: a multimodal architecture for bootstrapping imitation games

    This paper introduces a control architecture for the learning of complex sequences of gestures, applied to autonomous robots. The architecture is designed to exploit the robot's internal sensory-motor dynamics, generated by visual, proprioceptive, and predictive information, in order to provide intuitive behaviors for the purpose of natural interaction with humans.

    The ixiQuarks: merging code and GUI in one creative space

    This paper reports on ixiQuarks: an environment of instruments and effects that is built on top of the audio programming language SuperCollider. The rationale of these instruments is to explore alternative ways of designing musical interaction in screen-based software, and to investigate how semiotics in interface design affects the musical output. The ixiQuarks are part of external libraries available to SuperCollider through the Quarks system. They are software instruments based on a non-realist design ideology that rejects the simulation of acoustic instruments or music hardware and focuses on experimentation at the level of musical interaction. In this environment we try to merge the graphical with the textual in the same instruments, allowing the user to reprogram and change parts of them at runtime. After a short introduction to SuperCollider and the Quarks system, we describe the ixiQuarks and the philosophical basis of their design. We conclude by looking at how they can be seen as epistemic tools that influence the musician in a complex hermeneutic circle of interpretation and signification.

    Speech perception and production as constructs of action: Implications for models of L2 development

    Speech production involves an intricate set of actions. Its underlying cognitive mechanisms have thus historically been seen as distant from those of speech perception, usually assumed to be a passive process. However, dynamic perspectives on language bring together grammar and language use, draw phonetics and phonology closer, and value the role of speech perception in language development. Recent studies argue that speech production and perception overlap, or at least interact strongly. Some scholars claim that the link between these two processes surpasses the acoustics, as studies have revealed that action also has a role in language comprehension. Phonic gestures are not just mechanisms by means of which one produces speech, but also support perception. In this perspective, models of L2 development face a twofold challenge: to integrate speech perception and production, and to consider that speech transcends the acoustics, since, in a dynamic frame of reference, phonetic-phonological representations are auditory, gestural and general. This paper aims at presenting evidence for a gesture-driven perspective on L2 speech development in which the gesture is a phonological primitive that pervades and connects speech perception and production. By emphasizing a gesture-driven point of view, this work presents congruent and incongruent tenets among some hegemonic models of L2 speech development and an ecological/dynamic account.

    Listening to accented speech in Brazilian Portuguese: on the role of fricative voicing and vowel duration in the identification of /s/ – /z/ minimal pairs produced by speakers of L1 Spanish

    This article reports the results of two experiments investigating the combined role of vowel length and length of fricative voicing in the identification, by Brazilians, of minimal pairs such as casa /z/ – caça /s/ produced by speakers of Spanish (L1). In Experiment 1, stimuli were manipulated so that length of voicing in the fricative was tested at two levels (100% or 0% voicing) and vowel length was tested at four levels (25%, 50%, 75% and 100% of the total vowel length). In Experiment 2, voicing length was tested at three levels (25%, 50% and 75% voicing), combined with the same four levels of vowel length (25%, 50%, 75% and 100% of the total vowel length). Both experiments were run on TP Software (Rauber et al. 2012), and forty Brazilian listeners with no experience with Spanish took part in both tasks. The results show an interaction between the two cues, especially in the stimuli without full voicing in the fricative. These findings provide additional evidence for the gradient status of speech in production and perceptual phenomena (Albano 2001; Albano 2012; Perozzo 2017), besides shedding light on the teaching of Brazilian Portuguese as an Additional Language.
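    The two stimulus grids described in the abstract are fully crossed factorial designs, so their condition sets can be enumerated directly. The level values below come from the abstract; the variable names are illustrative assumptions.

    ```python
    from itertools import product

    # Factor levels as stated in the abstract (percentages).
    voicing_exp1 = [100, 0]            # fricative voicing, Experiment 1
    voicing_exp2 = [25, 50, 75]        # fricative voicing, Experiment 2
    vowel_lengths = [25, 50, 75, 100]  # proportion of the full vowel, both experiments

    # Fully crossed designs: every voicing level paired with every vowel length.
    exp1_conditions = list(product(voicing_exp1, vowel_lengths))
    exp2_conditions = list(product(voicing_exp2, vowel_lengths))

    print(len(exp1_conditions))  # → 8 stimulus conditions (2 x 4)
    print(len(exp2_conditions))  # → 12 stimulus conditions (3 x 4)
    ```

    Enumerating the cells this way makes the reported interaction analysis concrete: each listener response is indexed by one (voicing, vowel length) cell of the relevant grid.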