15 research outputs found

    Towards Advanced Learner Modeling: Discussions on Quasi Real-time Adaptation with Physiological Data


    Analysing user physiological responses for affective video summarisation

    This is the post-print version of the final paper published in Displays; the published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting and other quality-control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. Copyright © 2009 Elsevier B.V.
    Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features, which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to video content in a variety of genres, including horror, comedy, drama, sci-fi and action. We present an analysis framework, based on percent rank value normalisation, for processing user responses to specific sub-segments within a video stream. Applying the framework reveals that users respond significantly to the most entertaining video sub-segments across a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses; comedy content elicits comparatively lower levels of EDR but does seem to elicit significant RA, RR, BVP and HR responses; drama content seems to elicit less significant physiological responses in general; and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
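The percent rank value normalisation referred to in the abstract above can be sketched in a few lines: each sub-segment's response is scored by the fraction of the other sub-segments it exceeds, which puts disparate physiological measures on a common 0..1 scale. The segment values below are made up for illustration; the Excel-style strictly-below counting is an assumption, not necessarily the paper's exact formulation.

```python
from typing import List

def percent_rank(values: List[float]) -> List[float]:
    """Map each value to its percent rank (0..1) within the list."""
    n = len(values)
    if n < 2:
        return [0.0] * n
    sorted_vals = sorted(values)
    ranks = []
    for v in values:
        # count of values strictly below v, normalised by n - 1
        below = sum(1 for x in sorted_vals if x < v)
        ranks.append(below / (n - 1))
    return ranks

# Hypothetical per-subsegment mean EDR for one viewer
edr_per_segment = [0.12, 0.45, 0.30, 0.90, 0.51]
scores = percent_rank(edr_per_segment)
# The highest-scoring sub-segment is a candidate summary segment
best = max(range(len(scores)), key=scores.__getitem__)
```

Because the output is a rank rather than a raw amplitude, the same thresholds can be applied across EDR, RA, RR, BVP and HR despite their very different units.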

    Using physiological measures for emotional assessment: A computer-aided tool for cognitive and behavioral therapy

    In the context of cognitive and behavioural therapies, the use of immersion technologies to replace classical exposure often improves the therapeutic process. As it is necessary to validate the efficiency of such a technique, both therapists and virtual reality (VR) specialists need tools to monitor the impact of VR exposure on patients. The present study investigates two possible solutions for assessing affective states from physiological measurements: automatic evaluation of the arousal and valence components of affective reactions, and classification into classes of emotions. Results show that these dimensional reductions of physiological data could not, statistically speaking, identify affective states at a fine grain, but the correlations we found could be used in a biofeedback loop with the virtual environment or in combination with other cognitive and behavioural assessment tools.
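The correlation-based use of physiological signals mentioned above can be sketched as a plain Pearson correlation between a physiological feature and a self-reported arousal rating; a sufficiently strong correlation is what would justify driving a biofeedback loop from that feature. The skin-conductance and rating values below are invented for illustration.

```python
import math
from typing import Sequence

def pearson(x: Sequence[float], y: Sequence[float]) -> float:
    """Pearson correlation between a physiological feature and a rating."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-trial data: mean skin-conductance level vs. self-reported arousal (1-5)
scl = [2.1, 3.4, 2.8, 4.0, 3.1]
arousal = [1, 4, 2, 5, 3]
r = pearson(scl, arousal)  # a strong positive r would support a biofeedback loop
```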

    ELVIS: Entertainment-led video summaries

    © ACM, 2010. This is the author's version of the work, posted here by permission of ACM for your personal use; not for redistribution. The definitive version was published in ACM Transactions on Multimedia Computing, Communications, and Applications, 6(3): Article no. 17 (2010), http://doi.acm.org/10.1145/1823746.1823751
    Video summaries present the user with a condensed and succinct representation of the content of a video stream. Usually this is achieved by attaching degrees of importance to low-level image, audio and text features. However, video content elicits strong and measurable physiological responses in the user, which are potentially rich indicators of what video content is memorable to or emotionally engaging for an individual user. This article proposes a technique that exploits such physiological responses to a given video stream by a given user to produce Entertainment-Led VIdeo Summaries (ELVIS). ELVIS comprises five analysis phases corresponding to five physiological response measures: electro-dermal response (EDR), heart rate (HR), blood volume pulse (BVP), respiration rate (RR) and respiration amplitude (RA). Through these analyses, the temporal locations of the most entertaining video subsegments, as they occur within the video stream as a whole, are automatically identified. The effectiveness of the ELVIS technique is verified through a statistical analysis of data collected during a set of user trials. Our results show that ELVIS is more consistent than RANDOM, EDR, HR, BVP, RR and RA selections in identifying the most entertaining video subsegments for content in the comedy, horror/comedy and horror genres. Subjective user reports also reveal that ELVIS video summaries are comparatively easy to understand, enjoyable and informative.
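The fusion step implied by the abstract above — combining five per-measure scores into one entertainment ranking — can be sketched as follows. The equal-weight average and the score values are illustrative assumptions only; the published ELVIS phases are more involved than this.

```python
from typing import Dict, List

def elvis_rank(measure_scores: Dict[str, List[float]], top_k: int = 2) -> List[int]:
    """Fuse per-measure, per-subsegment scores (already normalised to 0..1)
    into one ranking and return the indices of the top-k subsegments.
    Equal weighting across measures is an assumption for illustration."""
    n = len(next(iter(measure_scores.values())))
    fused = [
        sum(scores[i] for scores in measure_scores.values()) / len(measure_scores)
        for i in range(n)
    ]
    return sorted(range(n), key=fused.__getitem__, reverse=True)[:top_k]

# Hypothetical normalised scores for 4 subsegments across the five measures
scores = {
    "EDR": [0.1, 0.9, 0.4, 0.6],
    "HR":  [0.2, 0.8, 0.3, 0.7],
    "BVP": [0.0, 1.0, 0.5, 0.5],
    "RR":  [0.3, 0.7, 0.2, 0.8],
    "RA":  [0.1, 0.9, 0.6, 0.4],
}
summary_segments = elvis_rank(scores)  # subsegments to stitch into the summary
```

Fusing ranks rather than raw signals is what lets a multi-measure selection beat any single measure (EDR, HR, BVP, RR or RA) in consistency, as the trial results report.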

    Identifying Emotions Expressed by Mobile Users through 2D Surface and 3D Motion Gestures

    Session: Feelings and emotions. Only intrusive and expensive ways of precisely expressing emotions have been proposed so far, and these are not likely to appear soon in everyday Ubicomp environments. In this paper, we study to what extent we can identify the emotion a user is explicitly expressing through 2D and 3D gestures. Indeed, users already often manipulate mobile devices with touch screens and accelerometers. We conducted a field study in which we asked participants to explicitly express their emotions through gestures and to report their affective states. We contribute by (1) showing a high number of significant correlations between 3D motion descriptors of gestures and the arousal dimension; (2) defining a space of affective gestures; (3) identifying groups of descriptors that structure the space and are related to arousal; (4) providing a preliminary model of arousal; and (5) identifying interesting patterns in particular classes of gestures. Such results are useful for Ubicomp application designers envisioning the use of gestures as a cheap and non-intrusive affective modality.
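The 3D motion descriptors in the study above are computed from raw accelerometer streams. A minimal sketch of descriptors of that kind — mean magnitude, energy and a simple jerkiness measure — is shown below; the descriptor names and formulas are illustrative assumptions, not the paper's exact feature set.

```python
import math
from typing import List, Tuple

Sample = Tuple[float, float, float]  # accelerometer (x, y, z), e.g. in m/s^2

def motion_descriptors(samples: List[Sample]) -> dict:
    """A few illustrative 3D motion descriptors of the kind that can be
    correlated with self-reported arousal."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    n = len(mags)
    mean_mag = sum(mags) / n
    energy = sum(m * m for m in mags) / n
    # Mean absolute difference between consecutive magnitudes ("jerkiness")
    jerk = sum(abs(b - a) for a, b in zip(mags, mags[1:])) / (n - 1)
    return {"mean_magnitude": mean_mag, "energy": energy, "jerkiness": jerk}
```

High-arousal gestures would be expected to score high on energy and jerkiness, which is consistent with the arousal correlations the paper reports for motion descriptors.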

    Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emoji in Computer-mediated Communication

    Recent trends in computer-mediated communication (CMC) have not only expanded instant messaging through the use of images and videos, but have also enriched traditional text messaging with so-called visual communication markers (VCMs) such as emoticons, emojis and stickers. VCMs can prevent the loss of subtle emotional content in CMC that is otherwise delivered by nonverbal cues conveying affective and emotional information. However, as the number of VCMs in the selection set grows, the problem of VCM entry needs to be addressed. Additionally, conventional ways of accessing VCMs continue to rely on input methods that are not directly and intimately tied to expressive nonverbal cues. One well-studied form of expressive nonverbal cue is the hand gesture. In this work, I propose a user-defined hand gesture set that is highly representative of VCMs, together with a two-stage hand gesture recognition system (trajectory-based, shape-based) that distinguishes the user-defined hand gestures. While the trajectory-based recognizer distinguishes gestures based on the movements of the hands, the shape-based recognizer classifies gestures based on the shapes of the hands. The goal of this research is to allow users to be more immersed, natural and quick in generating VCMs through gestures. The idea is for users to retain the lower-bandwidth online communication of text messaging, largely preserving its convenient and discreet properties, while also incorporating the advantages of the higher-bandwidth online communication of video messaging by naturally gesturing emotions that are then closely mapped to VCMs. Results show that user-dependent accuracy is approximately 86% and user-independent accuracy is about 82%.
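The trajectory-based first stage described above can be sketched as nearest-template matching on resampled 2D paths, in the spirit of $1-style recognizers. Everything below — the resampling scheme, the template names and the sample path — is an illustrative assumption, not the thesis's implementation; the shape-based second stage (classifying static hand shape) is only noted in comments.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def resample(path: List[Point], n: int = 16) -> List[Point]:
    """Resample a 2D trajectory to n points by linear index interpolation
    (a simplification of the arc-length resampling used by $1-style recognizers)."""
    out = []
    for i in range(n):
        t = i * (len(path) - 1) / (n - 1)
        j = int(t)
        f = t - j
        if j + 1 < len(path):
            out.append((path[j][0] * (1 - f) + path[j + 1][0] * f,
                        path[j][1] * (1 - f) + path[j + 1][1] * f))
        else:
            out.append(path[-1])
    return out

def path_distance(a: List[Point], b: List[Point]) -> float:
    """Mean point-to-point distance between two equal-length point lists."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify_trajectory(path: List[Point], templates: Dict[str, List[Point]]) -> str:
    """Stage 1: nearest-template matching on the resampled trajectory.
    A full two-stage system would fall back to a shape-based classifier
    (hand contour) when the hand is nearly static."""
    rp = resample(path)
    return min(templates, key=lambda name: path_distance(rp, resample(templates[name])))

# Hypothetical gesture templates and an input stroke
templates = {
    "swipe_right": [(0, 0), (1, 0)],
    "swipe_up":    [(0, 0), (0, 1)],
}
label = classify_trajectory([(0, 0), (0.5, 0.05), (1, 0.1)], templates)
```

Splitting recognition into a movement stage and a shape stage keeps each matcher simple, which is the design rationale the abstract gives for the two-stage architecture.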