93 research outputs found

    Multimodal (multisensory) integration, in technology

    No full text
    The concept of modality has two sides, depending on the domain in which it is defined (typically the cognitive sciences and human-computer interaction). The current item, though written in the framework of technology and system design, mainly uses the meaning from the cognitive sciences: a modality is understood as a perceptual modality, and multimodality is understood as multisensory integration.

    Interface, multimodal / multisensory

    No full text
    In general, a multimodal interface is a class of interfaces designed to make the interaction between a human and a computer more similar to human-to-human communication. What is important in a multimodal interface is that such systems strive for meaning, as defined in multimodality from the point of view of HCI.

    Social retrieval of music content in multi-user performance

    Get PDF
    An emerging trend in interactive music performance consists of the audience directly participating in the performance by means of mobile devices. This is a step forward with respect to concepts like active listening and collaborative music making: non-expert members of an audience are enabled to participate directly in a creative activity such as the performance. This requires technologies for capturing and analysing in real time the natural behaviour of the users/performers, with particular reference to non-verbal expressive and social behaviour. This paper presents a prototype of a non-verbal expressive and social search engine and active listening system, enabling two teams of non-expert users to act as performers. The performance consists of real-time sonic manipulation and mixing of music pieces selected according to features characterising the performers' movements captured by mobile devices. The system is described with specific reference to the SIEMPRE Podium Performance, a non-verbal socio-mobile music performance presented at the Art & ICT Exhibition held in Vilnius (Lithuania) in November 2013.
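    The abstract does not detail the feature extraction or retrieval step; below is a minimal sketch, in Python, of the general idea of mapping an accelerometer-derived movement feature to a music selection. Every name here (movement_energy, pick_track, the energy metadata field) is an illustrative assumption, not the SIEMPRE implementation.

```python
def movement_energy(samples, dt=0.02):
    """Integrate squared acceleration magnitude over a window of
    (ax, ay, az) samples to get a rough energy-like movement feature."""
    energy = 0.0
    for ax, ay, az in samples:
        energy += (ax * ax + ay * ay + az * az) * dt
    return energy

def pick_track(tracks, energy):
    """Select the track whose annotated energy level is closest to the
    measured movement energy (hypothetical 'energy' metadata field)."""
    return min(tracks, key=lambda t: abs(t["energy"] - energy))

# Example: two hypothetical tracks and a short burst of accelerometer data.
tracks = [{"title": "calm piece", "energy": 0.5},
          {"title": "energetic piece", "energy": 3.0}]
window = [(0.1, 0.2, 9.8), (1.5, 0.3, 9.1), (2.0, -0.5, 8.7)]
print(pick_track(tracks, movement_energy(window))["title"])
```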

    Embodied cooperation using mobile devices: presenting and evaluating the Sync4All application

    Get PDF
    Embodied cooperation "arises when two co-present individuals in motion coordinate their goal-directed actions". The adoption of the embodied cooperation paradigm for the development of embodied and social multimedia systems opens new perspectives for future User Centric Media. Systems for embodied music listening, which enable users to influence music in real time by movement and gesture, can greatly benefit from the embodied cooperation paradigm. This paper presents the design and evaluation of an application, Sync4All, based on this paradigm, allowing users to experience social embodied music listening. Each user rhythmically and freely moves a mobile phone, trying to synchronise her movements with those of the other users. The level of synchronisation influences the music experience. The evaluation of Sync4All aimed to find out the users' overall attitude towards the application and how the participants perceived embodied cooperation and music embodiment.
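    As a rough illustration of the embodied-cooperation idea, the sketch below estimates how synchronised two users' rhythmic phone movements are and maps that level to a mix parameter. The cross-correlation measure and the mix_gain mapping are assumptions for illustration; the paper does not specify its synchronisation metric.

```python
import numpy as np

def sync_level(acc_a, acc_b):
    """Crude synchronisation index: peak normalised cross-correlation
    between two users' acceleration signals, clipped to [0, 1]."""
    a = (acc_a - acc_a.mean()) / (acc_a.std() + 1e-9)
    b = (acc_b - acc_b.mean()) / (acc_b.std() + 1e-9)
    xcorr = np.correlate(a, b, mode="full") / len(a)
    return float(np.clip(xcorr.max(), 0.0, 1.0))

def mix_gain(sync):
    """Hypothetical mapping: higher synchronisation -> louder shared layer."""
    return 0.2 + 0.8 * sync

# Two users shaking their phones at roughly the same tempo.
t = np.linspace(0, 4, 200)
user_a = np.sin(2 * np.pi * 2.0 * t)
user_b = np.sin(2 * np.pi * 2.0 * t + 0.3)   # small phase offset
print(mix_gain(sync_level(user_a, user_b)))
```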

    Guiding attention in Sequence-to-sequence models for Dialogue Act prediction

    Full text link
    The task of predicting dialogue acts (DAs) from conversational dialogue is a key component in the development of conversational agents. Accurately predicting DAs requires precise modelling of both the conversation and the global tag dependencies. We leverage seq2seq approaches, widely adopted in Neural Machine Translation (NMT), to improve the modelling of tag sequentiality. Seq2seq models are known to learn complex global dependencies, while currently proposed approaches using linear conditional random fields (CRFs) only model local tag dependencies. In this work, we introduce a seq2seq model tailored for DA classification using a hierarchical encoder, a novel guided attention mechanism, and beam search applied to both training and inference. Compared to the state of the art, our model does not require handcrafted features and is trained end-to-end. Furthermore, the proposed approach achieves an unmatched accuracy of 85% on SwDA and a state-of-the-art accuracy of 91.6% on MRDA.
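    A minimal sketch of the hierarchical-encoder idea is given below, assuming PyTorch; the paper's guided-attention seq2seq decoder and beam search are replaced here by a simple per-utterance classification head, and the vocabulary size, hidden sizes, and tag count are placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn

class HierarchicalDAEncoder(nn.Module):
    """Toy hierarchical encoder: a word-level GRU summarises each utterance,
    a conversation-level GRU contextualises the utterance vectors, and a
    linear head scores dialogue-act tags.  All sizes are placeholders."""
    def __init__(self, vocab_size=1000, emb=64, hid=128, n_tags=43):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.word_rnn = nn.GRU(emb, hid, batch_first=True)
        self.conv_rnn = nn.GRU(hid, hid, batch_first=True)
        self.tag_head = nn.Linear(hid, n_tags)

    def forward(self, conv):                    # conv: (n_utts, max_words) word ids
        _, h = self.word_rnn(self.embed(conv))  # h: (1, n_utts, hid), one vector per utterance
        ctx, _ = self.conv_rnn(h)               # utterances as a length-n_utts sequence
        return self.tag_head(ctx.squeeze(0))    # (n_utts, n_tags) tag scores

model = HierarchicalDAEncoder()
conversation = torch.randint(1, 1000, (5, 12))  # 5 utterances of 12 word ids each
print(model(conversation).shape)                # torch.Size([5, 43])
```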

    Go-with-the-Flow: Tracking, Analysis and Sonification of Movement and Breathing to Build Confidence in Activity Despite Chronic Pain

    Get PDF
    Chronic (persistent) pain (CP) affects 1 in 10 adults; clinical resources are insufficient, and anxiety about activity restricts lives. Technological aids monitor activity but lack the necessary psychological support. This article proposes a new sonification framework, Go-with-the-Flow, informed by physiotherapists and people with CP. The framework supports the articulation of user-defined sonified exercise spaces (SESs), tailored to psychological needs and physical capabilities, that enhance body and movement awareness to rebuild confidence in physical activity. A smartphone-based wearable device and a Kinect-based device were designed on the basis of the framework to track movement and breathing and sonify them during physical activity. In controlled studies conducted to evaluate the sonification strategies, people with CP reported increased performance, motivation, awareness of movement, and relaxation with sound feedback. Home studies, a focus group, and a survey of CP patients conducted at the end of a hospital pain management session provided an in-depth understanding of how different aspects of the SESs and their calibration can facilitate self-directed rehabilitation, and how the wearable version of the device can facilitate transfer of gains from exercise to feared or demanding activities in real life. We conclude by discussing the implications of our findings for the design of technology for physical rehabilitation.
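    As a toy illustration of a sonified exercise space, the snippet below maps a tracked joint angle to a pitch that rises toward a user-set comfortable limit and then holds. All parameter names and ranges are hypothetical and do not reproduce the paper's framework or calibration procedure.

```python
def sonify_stretch(angle_deg, comfort_max_deg, full_range_deg=90.0,
                   low_hz=220.0, high_hz=440.0):
    """Map a tracked joint angle onto a pitch within a user-defined
    'exercise space': pitch rises up to the self-set comfortable limit,
    then holds, signalling that the target has been reached."""
    progress = min(angle_deg, comfort_max_deg) / full_range_deg
    return low_hz + (high_hz - low_hz) * progress

# A forward-reach exercise where the user set 60 degrees as their target.
for angle in (0, 30, 60, 80):
    print(angle, round(sonify_stretch(angle, comfort_max_deg=60), 1))
```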

    Predicting extraversion from non-verbal features during a face-to-face human-robot interaction

    Get PDF
    In this paper we present a system for automatic prediction of extraversion during the first thin slices of human-robot interaction (HRI). This work is based on the hypothesis that personality traits and attitude towards the robot appear in the behavioural response of humans during HRI. We propose a set of four non-verbal movement features that characterise human behaviour during interaction. We focus our study on predicting extraversion using these features, extracted from a dataset of 39 healthy adults interacting with the humanoid iCub. Our analysis shows that it is possible to predict, to a good level of accuracy (64%), the extraversion of a human from a thin slice of interaction relying only on non-verbal movement features. Our results are comparable to the state of the art obtained in HHI [23].
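    The concrete prediction pipeline is not given in the abstract; below is a minimal sketch, assuming a simple linear classifier over the four non-verbal movement features. The feature matrix and labels are random placeholders standing in for the 39-participant dataset, so the printed score should hover around chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per participant, four non-verbal
# movement features; the actual features and extraversion labels from the
# 39-participant dataset are not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(39, 4))              # placeholder features
y = rng.integers(0, 2, size=39)           # 1 = high extraversion (placeholder)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5) # accuracy per fold
print(scores.mean())
```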