937 research outputs found

    SelVReflect: A Guided VR Experience Fostering Reflection on Personal Challenges

    Reflecting on personal challenges can be difficult. Without encouragement, the reflection process often remains superficial, inhibiting deeper understanding and learning from past experiences. To allow people to immerse themselves in and deeply reflect on past challenges, we developed SelVReflect, a VR experience that offers active voice-based guidance and a space to freely express oneself. SelVReflect was developed in an iterative design process (N=5) and evaluated in a user study with N=20 participants. We found that SelVReflect enabled participants to approach their challenge and its (emotional) components from different perspectives and to discover new relationships between these components. By making use of the spatial possibilities of VR, participants developed a better understanding of the situation and of themselves. We contribute empirical evidence of how a guided VR experience can support reflection, and we discuss opportunities and design requirements for guided VR experiences that aim to foster deeper reflection.

    Designing Affective Loop Experiences

    There is a lack of attention to the emotional and physical aspects of communication in how we have so far approached communication between people in the field of Human Computer Interaction (HCI). As designers of digital communication tools, we need to consider altering the underlying model of communication that has prevailed in HCI: the information transfer model. Communication is about so much more than transferring information. It is about getting to know yourself, who you are and what part you play in the communication as it unfolds. It is also about the experience of a communication process: what it feels like, how that feeling changes, when it changes, why, and perhaps by whom the process is initiated, altered, or disrupted. The idea of Affective Loop experiences in design aims to create new expressive and experiential media for whole users, embodied in the social and physical world they live in, where communication is not only about getting the message across but also about living the experience of communication and feeling it. An Affective Loop experience is an emerging, in-the-moment emotional experience in which the inner emotional experience, the situation at hand, and the social and physical context act together to create one complete embodied experience. The loop perspective comes from how this experience takes place in communication, and from the rhythmic pattern in communication whereby those involved take turns expressing themselves and standing back to interpret the moment. To allow for Affective Loop experiences with or through a computer system, the user needs to be able to express herself in rich personal ways involving our many ways of expressing and sensing emotions: muscle tensions, facial expressions and more. For the user to become further engaged in the interaction, the computer system needs the capability to return relevant feedback, whether diminishing, enforcing or disruptive, to the emotions expressed by the user, so that she wants to continue expressing herself by strengthening, changing or keeping her expression. We describe how we used the idea of Affective Loop experiences as a conceptual tool to navigate a design space of gestural input combined with rich instant feedback. In our design journey, we created two systems, eMoto and FriendSense.

    From the Inside Out: A Literature Review on Possibilities of Mobile Emotion Measurement and Recognition

    Information systems are becoming increasingly intelligent, and emotion artificial intelligence is an important component of their future. The measurement and recognition of emotions is therefore crucial. This paper presents the state of the art in the research field of mobile emotion measurement and recognition. The aim of this structured literature analysis, which follows the PRISMA statement, is to collect and classify the relevant literature and to provide an overview of the current status of mobile emotion recording and its future trends. A total of 59 articles were identified in the relevant literature databases; these can be divided into four main categories of emotion measurement. Publications increased over the years in all four categories, with a particularly strong increase in optical and vital-data-based recording. Over time, both the speed and the accuracy of measurement have improved considerably in all four categories.

    Using Textual Emotion Extraction in Context-Aware Computing

    In 2016, the number of global smartphone users will surpass 2 billion. The average owner uses about 27 apps monthly, and users of SwiftKey, an alternative Android software keyboard, type approximately 1800 characters a day. Still, the user-generated data of these apps remains, for the most part, unused by the owners themselves. To change this, we conducted research in Context-Aware Computing, Natural Language Processing and Affective Computing. The goal was to create an environment for recording this unused contextual data without losing its historical context, and to create an algorithm that can extract emotions from text. We therefore introduce Emotext, a textual emotion extraction algorithm that uses ConceptNet5's real-world knowledge for word interpretation, as well as Cofra, a framework for recording contextual data with time-based versioning.
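    The core idea of textual emotion extraction can be sketched in a few lines. The following is a minimal, illustrative lexicon-based approach, not the actual Emotext implementation: the word-to-emotion table here is hand-written and hypothetical, whereas Emotext derives such associations from ConceptNet5.

    ```python
    # Illustrative lexicon-based emotion extraction (hypothetical sketch,
    # not the actual Emotext algorithm). Each known word maps to weighted
    # emotion scores; a text's emotion profile is the sum over its words.
    from collections import Counter

    # Hypothetical mini-lexicon; a real system would derive these
    # word-emotion associations from a knowledge base such as ConceptNet5.
    EMOTION_LEXICON = {
        "happy": {"joy": 1.0},
        "great": {"joy": 0.5},
        "sad": {"sadness": 1.0},
        "angry": {"anger": 1.0},
        "afraid": {"fear": 1.0},
    }

    def extract_emotions(text: str) -> Counter:
        """Aggregate per-word emotion scores over a whitespace-tokenised text."""
        scores = Counter()
        for token in text.lower().split():
            word = token.strip(".,!?")  # crude punctuation stripping
            for emotion, weight in EMOTION_LEXICON.get(word, {}).items():
                scores[emotion] += weight
        return scores

    profile = extract_emotions("I was so happy, the trip was great!")
    # profile["joy"] == 1.5 (1.0 for "happy" + 0.5 for "great")
    ```

    A production system would add negation handling, tokenisation beyond whitespace splitting, and graph-based lookups instead of a flat dictionary; this sketch only shows the aggregation principle.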

    Teaching robot’s proactive behavior using human assistance

    The final publication is available at link.springer.com

    In recent years, there has been growing interest in enabling autonomous social robots to interact with people. However, many questions remain unresolved regarding the social capabilities robots should have in order to perform this interaction in an ever more natural manner. In this paper, we tackle this problem through a comprehensive study of various topics involved in the interaction between a mobile robot and untrained human volunteers across a variety of tasks. In particular, this work presents a framework that enables the robot to proactively approach people and establish friendly interaction. To this end, we provided the robot with several perception and action skills, such as detecting people, planning an approach, and communicating the intention to initiate a conversation while expressing an emotional status. We also introduce an interactive learning system that uses the person's volunteered assistance to incrementally improve the robot's perception skills. As a proof of concept, we focus on the particular task of online face learning and recognition. We conducted real-life experiments with our Tibi robot to validate the framework during the interaction process. Within this study, several surveys and user studies were carried out to reveal the social acceptability of the robot within the context of different tasks.

    Peer Reviewed. Postprint (author's final draft)
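    The online face learning step described above could be sketched, under heavy assumptions, as an incremental nearest-centroid classifier over face embeddings. Everything here is hypothetical: the upstream detector/encoder that produces the embedding vectors, the class names, and the distance threshold are illustrative choices, not the paper's actual method.

    ```python
    # Minimal sketch of online identity learning, assuming face images have
    # already been reduced to fixed-length embedding vectors by a separate
    # (hypothetical) detector/encoder. New labelled examples, e.g. volunteered
    # by a person interacting with the robot, refine a per-identity centroid.
    import math

    class OnlineFaceLearner:
        """Incremental nearest-centroid classifier over face embeddings."""

        def __init__(self, threshold: float = 0.5):
            self.centroids = {}   # name -> running-mean embedding
            self.counts = {}      # name -> number of examples seen
            self.threshold = threshold  # max distance to accept a match

        def learn(self, name: str, embedding: list) -> None:
            """Incrementally update the running mean embedding for `name`."""
            if name not in self.centroids:
                self.centroids[name] = list(embedding)
                self.counts[name] = 1
                return
            self.counts[name] += 1
            n = self.counts[name]
            centroid = self.centroids[name]
            for i, x in enumerate(embedding):
                centroid[i] += (x - centroid[i]) / n  # running-mean update

        def recognise(self, embedding: list):
            """Return (name, distance) of the closest known identity,
            or (None, distance) if no centroid is within the threshold."""
            best, best_dist = None, math.inf
            for name, centroid in self.centroids.items():
                dist = math.dist(centroid, embedding)
                if dist < best_dist:
                    best, best_dist = name, dist
            if best_dist > self.threshold:
                return None, best_dist
            return best, best_dist
    ```

    Usage: after `learn("alice", e1)` and `learn("alice", e2)`, the stored centroid is the mean of the two embeddings, so `recognise` matches nearby vectors to "alice" while distant vectors fall back to unknown, which is the point where the robot could ask the person for assistance.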

    Automatic Emotion Recognition from Mandarin Speech
