
    A Mimetic Strategy to Engage Voluntary Physical Activity In Interactive Entertainment

    We describe the design and implementation of a vision-based interactive entertainment system that makes use of both involuntary and voluntary control paradigms. Unintentional input to the system from a potential viewer is used to drive attention-getting output and encourage the transition to voluntary interactive behaviour. The iMime system consists of a character animation engine based on the interaction metaphor of a mime performer, which simulates non-verbal communication strategies, without spoken dialogue, to capture and hold the attention of a viewer. The system was developed in the context of a project studying care of dementia sufferers. Care for a dementia sufferer can place unreasonable demands on the time and attentional resources of their caregivers or family members. Our study contributes to the eventual development of a system aimed at providing relief to dementia caregivers, while at the same time serving as a source of pleasant interactive entertainment for viewers. The work reported here is also aimed at a more general study of the design of interactive entertainment systems involving a mixture of voluntary and involuntary control. Comment: 6 pages, 7 figures, ECAG08 workshop
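    The abstract describes the interaction design only at a high level; as a hedged illustration of the involuntary-to-voluntary transition it mentions, the Python sketch below models the viewer's engagement as a tiny state machine. All state names, thresholds, and inputs are hypothetical, not the iMime implementation.

```python
# Minimal sketch of the involuntary-to-voluntary transition described above.
# All states, thresholds, and the `presence` / `engagement` inputs are
# hypothetical illustrations, not the iMime implementation.

IDLE, ATTRACT, INTERACT = "idle", "attract", "interact"

def next_state(state, presence, engagement):
    """Advance the interaction state from per-frame vision estimates.

    presence   -- confidence (0..1) that a potential viewer is in view
    engagement -- confidence (0..1) that the viewer is deliberately responding
    """
    if state == IDLE and presence > 0.5:
        return ATTRACT          # unintentional input triggers attention-getting output
    if state == ATTRACT and engagement > 0.7:
        return INTERACT         # viewer has transitioned to voluntary interaction
    if presence < 0.2:
        return IDLE             # viewer has left the scene
    return state

# Example: a viewer wanders past, then starts responding to the character.
state = IDLE
for presence, engagement in [(0.6, 0.1), (0.8, 0.3), (0.9, 0.8)]:
    state = next_state(state, presence, engagement)
print(state)  # -> 'interact'
```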

    Bodily sensation maps: Exploring a new direction for detecting emotions from user self-reported data

    The ability to detect emotions is essential in fields such as user experience (UX), affective computing, and psychology. This paper explores the possibility of detecting emotions through user-generated bodily sensation maps (BSMs). The theoretical basis that inspires this work is the proposal by Nummenmaa et al. (2014) of BSMs for 14 emotions. To make it easy for users to create a BSM of how they feel, and convenient for researchers to acquire and classify users’ BSMs, we created a mobile app called EmoPaint. The app includes an interface for BSM creation and an automatic classifier that matches the created BSM with the BSMs for the 14 emotions. We conducted a user study aimed at evaluating both components of EmoPaint. First, the study shows that the app is easy to use and classifies BSMs consistently with the considered theoretical approach. Second, it shows that using EmoPaint increases the accuracy of users’ emotion classification compared with an adaptation of the well-known Affect Grid method, based on the Circumplex Model and focused on the same set of 14 emotions of Nummenmaa et al. Overall, these results indicate that the novel approach of using BSMs in the context of automatic emotion detection is promising, and they encourage further development and study of BSM-based methods.
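    The abstract does not spell out EmoPaint's classifier; the following Python sketch only illustrates one plausible approach, matching a user-drawn map against the 14 template BSMs by Pearson correlation. The body-grid resolution and the random placeholder templates are assumptions.

```python
# Hypothetical sketch of matching a user-drawn bodily sensation map (BSM)
# against the 14 emotion templates of Nummenmaa et al. (2014) by nearest-template
# correlation. This is not EmoPaint's published classifier.
import numpy as np

EMOTIONS = ["anger", "fear", "disgust", "happiness", "sadness", "surprise",
            "neutral", "anxiety", "love", "depression", "contempt",
            "pride", "shame", "envy"]  # labels follow Nummenmaa et al. (2014)

def classify_bsm(user_map, templates):
    """user_map: 2D array of per-pixel activation (+) / deactivation (-).
    templates: dict mapping emotion -> 2D array of the same shape."""
    u = user_map.ravel()
    scores = {}
    for emotion, t in templates.items():
        # Pearson correlation between the user's map and each template.
        scores[emotion] = np.corrcoef(u, t.ravel())[0, 1]
    return max(scores, key=scores.get), scores

# Toy example with random placeholder templates on a 40x20 body grid.
rng = np.random.default_rng(0)
templates = {e: rng.normal(size=(40, 20)) for e in EMOTIONS}
user_map = templates["happiness"] + 0.3 * rng.normal(size=(40, 20))
best, _ = classify_bsm(user_map, templates)
print(best)  # -> 'happiness'
```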

    Affect Conveying Instant Messaging

    Instant messaging applications cannot convey non-verbal communication through text-based messages. This can lead to unpleasant misunderstandings between dyads when a discussion is held on a computer or smartphone. This study aims to determine whether an affect-conveying instant messaging application is of use to people who use instant messaging applications daily, and whether it benefits the test users in the real-variant group compared with the control group. The tests were conducted with an instant messaging prototype application developed specifically for this experiment. To test the affect-conveying prototype, we gathered a test group, which was randomly divided into two groups: those who tested the real variant and the control group. Both groups tested the same application, but with a different affect-conveying module (variant). The real group tested the real variant, and the control group tested a variant that randomly chooses the conveyed affect or emotion. In both variants, the affect is conveyed with emojis. After the tests, testers answered nine interview questions; for three of them, testers also gave a grade indicating how satisfied they were with the particular function. The grades were analyzed with descriptive statistical methods, and the verbal interview answers were analyzed by identifying recurring themes across the answers. The results show that the real variant of the affect-conveying instant messaging prototype performed better overall than the random variant. Test users also found the prototype and its affect-conveying functionality fun. However, they did not identify specific situations in which they would use affect-conveying functionality in an instant messaging application; they thought they would use it with friends and family rather than in professional life. Overall, the way emotions were conveyed in the prototype was well received, and test users saw no significant issues with it, nor with the same functionality being used in applications such as games.
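    To make the two experimental conditions concrete, here is a hedged Python sketch of the real versus random affect-conveying variants described above. The emotion labels, emoji mapping, and keyword-based detector are illustrative assumptions, not the prototype's code.

```python
# Illustrative sketch of the two variants described above: the real variant
# appends an emoji for the affect detected in the message, while the control
# variant picks an emoji at random. The labels, emoji mapping, and the
# keyword-based detect_affect stub are assumptions, not the prototype's code.
import random

EMOJI = {"joy": "😄", "sadness": "😢", "anger": "😠", "neutral": "🙂"}

def detect_affect(text):
    """Placeholder affect detector (the prototype's real analysis is not shown)."""
    lowered = text.lower()
    if any(w in lowered for w in ("great", "thanks", "love")):
        return "joy"
    if any(w in lowered for w in ("sorry", "sad", "miss")):
        return "sadness"
    if any(w in lowered for w in ("angry", "annoyed", "hate")):
        return "anger"
    return "neutral"

def send_message(text, variant="real"):
    """Return the message decorated with an emoji, per experimental condition."""
    if variant == "real":
        affect = detect_affect(text)          # real variant: analyzed affect
    else:
        affect = random.choice(list(EMOJI))   # control variant: random affect
    return f"{text} {EMOJI[affect]}"

print(send_message("Thanks, that was great!", variant="real"))
print(send_message("Thanks, that was great!", variant="random"))
```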

    Spectators’ aesthetic experiences of sound and movement in dance performance

    In this paper we present a study of spectators’ aesthetic experiences of sound and movement in live dance performance. A multidisciplinary team comprising a choreographer, neuroscientists and qualitative researchers investigated the effects of different sound scores on dance spectators. What would be the impact of auditory stimulation on kinesthetic experience and/or aesthetic appreciation of the dance? What would be the effect of removing music altogether, so that spectators watched dance while hearing only the performers’ breathing and footfalls? We investigated audience experience through qualitative research, using post-performance focus groups, while a separately conducted functional brain imaging (fMRI) study measured the synchrony in brain activity across spectators when they watched dance accompanied by music or by the performers’ breathing only. When audiences watched dance accompanied by music the fMRI data revealed evidence of greater intersubject synchronisation in a brain region consistent with complex auditory processing. The audience research found that some spectators derived pleasure from finding convergences between two complex stimuli (dance and music). The removal of music and the resulting audibility of the performers’ breathing had a significant impact on spectators’ aesthetic experience. The fMRI analysis showed increased synchronisation among observers, suggesting greater influence of the body when interpreting the dance stimuli. The audience research found evidence of similar corporeally focused experience. The paper discusses possible connections between the findings of our different approaches, and considers the implications of this study for interdisciplinary research collaborations between arts and sciences.
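    The synchrony measure referred to above is typically computed as intersubject correlation (ISC); the short Python sketch below shows a standard leave-one-out formulation on placeholder time series, not the study's actual fMRI analysis pipeline.

```python
# Hedged sketch of intersubject correlation (ISC), the kind of synchrony
# measure referred to above: each spectator's regional fMRI time series is
# correlated with the average of all other spectators. Data are placeholders.
import numpy as np

def isc(timeseries):
    """timeseries: array of shape (n_subjects, n_timepoints) for one brain region.
    Returns the mean leave-one-out intersubject correlation."""
    n = timeseries.shape[0]
    rs = []
    for i in range(n):
        others = np.delete(timeseries, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(timeseries[i], others)[0, 1])
    return float(np.mean(rs))

# Toy data: 12 spectators sharing a common stimulus-driven signal plus noise.
rng = np.random.default_rng(1)
shared = rng.normal(size=300)
data = shared + 0.8 * rng.normal(size=(12, 300))
print(round(isc(data), 2))  # higher values indicate stronger synchronisation
```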

    Chronic-Pain Protective Behavior Detection with Deep Learning

    In chronic pain rehabilitation, physiotherapists adapt physical activity to patients' performance based on their expression of protective behavior, gradually exposing them to feared but harmless and essential everyday activities. As rehabilitation moves outside the clinic, technology should automatically detect such behavior to provide similar support. Previous works have shown the feasibility of automatic protective behavior detection (PBD) within a specific activity. In this paper, we investigate the use of deep learning for PBD across activity types, using wearable motion capture and surface electromyography data collected from healthy participants and people with chronic pain. We approach the problem by continuously detecting protective behavior within an activity rather than estimating its overall presence. The best performance reaches a mean F1 score of 0.82 with leave-one-subject-out cross-validation. When protective behavior is modelled per activity type, the mean F1 score is 0.77 for bend-down, 0.81 for one-leg-stand, 0.72 for sit-to-stand, 0.83 for stand-to-sit, and 0.67 for reach-forward. This performance reaches an excellent level of agreement with the average experts' rating performance, suggesting potential for personalized chronic pain management at home. We analyze various parameters characterizing our approach to understand how the results could generalize to other PBD datasets and different levels of ground truth granularity. Comment: 24 pages, 12 figures, 7 tables. Accepted by ACM Transactions on Computing for Healthcare
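    The evaluation protocol named above, leave-one-subject-out cross-validation summarized by a mean F1 score, can be sketched as follows; the classifier, features, and data are placeholders rather than the paper's deep learning architecture or dataset.

```python
# Hedged sketch of the evaluation protocol named above: leave-one-subject-out
# (LOSO) cross-validation with a mean F1 score for per-frame protective
# behaviour detection. The classifier, features, and data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_frames, n_features, n_subjects = 600, 30, 10
X = rng.normal(size=(n_frames, n_features))       # e.g. motion capture + sEMG features
y = rng.integers(0, 2, size=n_frames)             # 1 = protective behaviour present
subjects = rng.integers(0, n_subjects, size=n_frames)

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

print("mean F1:", round(float(np.mean(scores)), 2))
```

    In the paper's setting, the placeholder logistic regression would be replaced by a deep network operating on continuous motion capture and sEMG frames; the LOSO loop and mean F1 summary stay the same.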

    Neurolaw: Brain-Computer Interfaces


    Multisensory Perception and Learning: Linking Pedagogy, Psychophysics, and Human–Computer Interaction

    In this review, we discuss how specific sensory channels can mediate learning of properties of the environment. In recent years, schools have increasingly been using multisensory technology for teaching; however, this use is not yet sufficiently grounded in neuroscientific and pedagogical evidence. Researchers have recently renewed their understanding of the role of communication between sensory modalities during development. In the current review, we outline four principles, based on theoretical models of multisensory development and embodiment, that will aid technological development to foster in-depth perceptual and conceptual learning of mathematics. We also discuss how a multidisciplinary approach offers a unique contribution to the development of new practical solutions for learning in school. Scientists, engineers, and pedagogical experts offer their interdisciplinary points of view on this topic. At the end of the review, we present our results, showing that multiple sensory inputs and sensorimotor associations in multisensory technology can improve the discrimination of angles and can possibly also serve educational purposes. Finally, we present an application, ‘RobotAngle’, developed for primary (i.e., elementary) school children, which uses sounds and body movements to teach about angles.
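    As a hedged illustration of how an application of this kind might sonify angles (the abstract does not specify RobotAngle's mapping), the snippet below maps an angle in degrees onto a tone frequency; the linear 220–880 Hz mapping is an arbitrary assumption.

```python
# Hypothetical sketch of sonifying an angle, in the spirit of the multisensory
# 'RobotAngle' application mentioned above. The linear mapping of 0-180 degrees
# onto a 220-880 Hz tone is an arbitrary illustrative choice, not the
# application's actual design.
def angle_to_frequency(angle_deg, f_min=220.0, f_max=880.0):
    """Map an angle in [0, 180] degrees linearly onto a tone frequency in Hz."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    return f_min + (f_max - f_min) * angle_deg / 180.0

for angle in (30, 90, 150):
    print(angle, "deg ->", round(angle_to_frequency(angle), 1), "Hz")
```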

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters organized into four sections. Since one of the most important means of human communication is facial expression, the first section (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section (Chapters 17 to 22) presents applications related to affective computing.