18 research outputs found

    Designing real-time, continuous emotion annotation techniques for 360° VR videos

    With the increasing availability of head-mounted displays (HMDs) that show immersive 360° VR content, it is important to understand to what extent these immersive experiences can evoke emotions. Typically, to collect emotion ground truth labels, users rate videos through post-experience self-reports that are discrete in nature. However, post-stimuli self-reports are temporally imprecise, especially after watching 360° videos. In this work, we design six continuous emotion annotation techniques for the Oculus Rift HMD aimed at minimizing workload and distraction. Based on a co-design session with six experts, we contribute HaloLight and DotSize, two continuous annotation methods deemed unobtrusive and easy to understand. We discuss the next challenges for evaluating the usability of these techniques and the reliability of continuous annotations.

    RCEA-360VR: Real-time, continuous emotion annotation in 360° VR videos for collecting precise viewport-dependent ground truth labels

    Precise emotion ground truth labels for 360° virtual reality (VR) video watching are essential for fine-grained predictions under varying viewing behavior. However, current annotation techniques either rely on post-stimulus discrete self-reports, or real-time, continuous emotion annotations (RCEA) but only for desktop/mobile settings. We present RCEA for 360° VR videos (RCEA-360VR), where we evaluate in a controlled study (N=32) the usability of two peripheral visualization techniques: HaloLight and DotSize. We furthermore develop a method that considers head movements when fusing labels. Using physiological, behavioral, and subjective measures, we show that (1) both techniques do not increase users' workload or sickness, nor break presence; (2) our continuous valence and arousal annotations are consistent with discrete within-VR and original stimuli ratings; (3) users exhibit high similarity in viewing behavior, where fused ratings perfectly align with intended labels. Our work contributes usable and effective techniques for collecting fine-grained viewport-dependent emotion labels in 360° VR.
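    The abstract mentions fusing per-user continuous ratings into viewport-dependent labels while accounting for head movements, but does not spell out the formula here. The sketch below is only one illustrative way such a fusion could work, assuming each user's valence/arousal sample is weighted by the angular proximity of their head yaw to a viewport region; the Gaussian weighting scheme and all names are assumptions, not the authors' published method.

    ```python
    import math

    def gaussian_weight(user_yaw_deg, region_yaw_deg, sigma_deg=30.0):
        """Weight a user's annotation by the angular distance between
        their viewing direction and a viewport region (hypothetical)."""
        # Wrap the angular difference into [-180, 180] degrees.
        d = (user_yaw_deg - region_yaw_deg + 180.0) % 360.0 - 180.0
        return math.exp(-(d ** 2) / (2.0 * sigma_deg ** 2))

    def fuse_labels(samples, region_yaw_deg):
        """Fuse per-user (yaw_deg, valence, arousal) samples for one
        time step into a single label for one viewport region."""
        num_v = num_a = den = 0.0
        for yaw, valence, arousal in samples:
            w = gaussian_weight(yaw, region_yaw_deg)
            num_v += w * valence
            num_a += w * arousal
            den += w
        if den == 0.0:
            return None  # no user was looking anywhere near this region
        return num_v / den, num_a / den
    ```

    A user looking directly at the region dominates the fused label, while a user looking in the opposite direction contributes almost nothing.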

    Linking Categorical and Dimensional Approaches to Assess Food-Related Emotions

    Reflecting the two main prevailing and opposing views on the nature of emotions, emotional responses to food and beverages are typically measured using either (a) a categorical (lexicon-based) approach where users select or rate the terms that best express their food-related feelings or (b) a dimensional approach where they rate perceived food items along the dimensions of valence and arousal. Relating these two approaches is problematic since a response in terms of valence and arousal is not easily expressed in terms of emotions (like happy or disgusted). In this study, we linked the dimensional approach to a categorical approach by establishing a mapping between a set of 25 emotion terms (EsSense25) and the valence–arousal space (via the EmojiGrid graphical response tool), using a set of 20 food images. In two 'matching' tasks, the participants first imagined how the food shown in a given image would make them feel and then reported either the emotional terms or the combination of valence and arousal that best described their feelings. In two labeling tasks, the participants first imagined experiencing a given emotion term and then selected either the foods (images) that appeared capable of eliciting that feeling or reported the combination of valence and arousal that best reflected that feeling. By combining (1) the mapping between the emotion terms and the food images with (2) the mapping of the food images to the valence–arousal space, we established (3) an indirect (via the images) mapping of the emotion terms to the valence–arousal space. The results show that the mapping between terms and images was reliable and that the linkages have straightforward and meaningful interpretations. The valence and arousal values that were assigned to the emotion terms through indirect mapping to the valence–arousal space were typically less extreme than those that were assigned through direct mapping.
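    The indirect mapping described in the abstract composes two relations: terms→images and images→valence/arousal. One simple way to realize step (3) is to average the valence–arousal coordinates of the images linked to each term. This is only a sketch of that idea; the data-structure names and the use of a plain mean are illustrative assumptions, not the study's exact procedure.

    ```python
    def indirect_term_mapping(term_to_images, image_to_va):
        """Map each emotion term to the valence-arousal space indirectly,
        by averaging the (valence, arousal) coordinates of the food
        images that participants linked to that term."""
        term_va = {}
        for term, images in term_to_images.items():
            points = [image_to_va[img] for img in images if img in image_to_va]
            if not points:
                continue  # term with no mapped images gets no coordinate
            v = sum(p[0] for p in points) / len(points)
            a = sum(p[1] for p in points) / len(points)
            term_va[term] = (v, a)
        return term_va
    ```

    Averaging over several images naturally pulls the indirect coordinates toward the center of the space, which is consistent with the abstract's observation that indirectly mapped values were less extreme than directly reported ones.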

    Collaborative interaction in immersive 360° experiences

    Video playback systems have become increasingly common and widely used. Consequently, extensions of this technology have been created to allow multi-user collaboration, so that people can watch remotely and synchronously. Well-known examples are Watch2gether, Sync Video, and Netflix Party, which let us watch videos synchronously and remotely with friends. These co-viewing applications, although well developed, are limited to the typical video format and do not extend to 360° videos. The main objective of this project is therefore to expand research in this area by developing a collaborative system for 360° videos. Several efforts have already been directed at 360° video, one of them being the AV360 project, an application that lets the user view and edit this type of video with annotations and guides. The system to be integrated is a follow-up to AV360, reusing part of the technologies already implemented. To compartmentalize and facilitate the research, the following topics are considered individually: the visualization of 360° videos, collaborative systems in general, collaboration in virtual environments, and collaborative video systems. It is important to understand the advantages and disadvantages of watching a 360° video, to capture and preserve the essence of these videos, while also integrating the inclusion of other users. When choosing which collaborative activities to apply, it is essential to analyze the current state of collaborative systems and then narrow the research down to collaboration in virtual environments and in videos. Of all the methods analyzed, only those adaptable to immersive environments and to videos are chosen and developed in this project. Based on in-depth research on the subject, a 360° video collaboration system is created.
The software allows users to watch a video simultaneously while communicating both verbally and non-verbally to express themselves and share the experience of the moment. This work keeps in mind that part of the implemented ideas may be reusable in other immersive-experience projects.

    Novel Techniques to Measure the Sensory, Emotional, and Physiological (Biometric) Responses of Consumers toward Foods and Packaging

    This book, reprinted from articles published in the Special Issue "Novel Techniques to Measure the Sensory, Emotional, and Physiological (Biometric) Responses of Consumers toward Foods and Packaging" of the journal Foods, aims to provide a deeper understanding of novel techniques to measure the different sensory, emotional, and physiological responses toward foods. The editor hopes that the findings from this Special Issue can help the broader scientific community to understand the use of novel sensory science techniques in the evaluation of products.

    Methods to assess food-evoked emotion across cultures
