
    EmoStim: A Database of Emotional Film Clips with Discrete and Componential Assessment

    Emotion elicitation using emotional film clips is one of the most common and ecologically valid methods in Affective Computing. However, selecting and validating materials that evoke a range of emotions is challenging. Here we present EmoStim: A Database of Emotional Film Clips, a film library with rich and varied content. EmoStim is designed for researchers studying emotions in relation to either discrete or componential models of emotion. To create the database, 139 film clips were selected from the literature and annotated by 638 participants through the CrowdFlower platform. We selected 99 film clips based on the distribution of subjective ratings that effectively distinguished between emotions defined by the discrete model. We show that the selected film clips reliably induce a range of specific emotions according to the discrete model. Further, we describe relationships between emotions, the organization of emotions in the componential space, and the underlying dimensions representing emotional experience. The EmoStim database and participant annotations are freely available for research purposes. The database can further enrich our understanding of emotions and serve as a guide for selecting or creating additional materials. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
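
    The clip-selection step described above (keeping the 99 clips whose rating distributions best discriminate discrete emotions) could be prototyped along the following lines. This is a minimal sketch: it assumes annotations stored as one row per (clip, participant, chosen emotion), and the column names and the 60% agreement threshold are illustrative, not the authors' published criterion.

```python
# Sketch: pick clips whose ratings clearly discriminate one target emotion.
# Assumes a CSV with columns clip_id, participant_id, emotion (names are
# illustrative; the paper's actual selection criterion may differ).
from collections import Counter, defaultdict
import csv

def select_clips(path, min_agreement=0.6):
    votes = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            votes[row["clip_id"]][row["emotion"]] += 1
    selected = {}
    for clip, counter in votes.items():
        emotion, count = counter.most_common(1)[0]
        if count / sum(counter.values()) >= min_agreement:
            selected[clip] = emotion  # clip reliably evokes this emotion
    return selected
```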

    Aesthetic Highlight Detection in Movies Based on Synchronization of Spectators’ Reactions.

    Detection of aesthetic highlights is a challenge for understanding the affective processes taking place during movie watching. In this paper we study spectators' responses to movie aesthetic stimuli in a social context and seek to uncover the emotional component of aesthetic highlights in movies. Our assumption is that synchronized physiological and behavioral reactions occur among spectators during these highlights because: (i) the aesthetic choices of filmmakers are made to elicit specific emotional reactions (e.g., special effects, empathy and compassion toward a character), and (ii) watching a movie together causes spectators' affective reactions to be synchronized through emotional contagion. We compare different approaches to estimating synchronization among multiple spectators' signals, such as pairwise, group, and overall synchronization measures, to detect aesthetic highlights in movies. The results show that an unsupervised architecture relying on synchronization measures is able to capture different properties of spectators' synchronization and to detect aesthetic highlights from both electrodermal and acceleration signals. We find that pairwise synchronization measures perform most accurately, independently of highlight category and movie genre. Moreover, we observe that electrodermal signals have more discriminative power than acceleration signals for highlight detection.
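
    As an illustration of the pairwise synchronization idea, the sketch below computes a windowed mean pairwise Pearson correlation across spectators' electrodermal traces; the window length, step, and the thresholding hint are assumptions for illustration, not the paper's exact measures.

```python
# Sketch of a pairwise synchronization measure: mean windowed Pearson
# correlation over all spectator pairs of an EDA (electrodermal) recording.
import numpy as np
from itertools import combinations

def pairwise_sync(signals, win=128, step=64):
    """signals: array of shape (n_spectators, n_samples)."""
    n, t = signals.shape
    scores = []
    for start in range(0, t - win + 1, step):
        w = signals[:, start:start + win]
        corrs = [np.corrcoef(w[i], w[j])[0, 1]
                 for i, j in combinations(range(n), 2)]
        scores.append(np.nanmean(corrs))
    return np.array(scores)  # high values suggest synchronized reactions

# Candidate highlights could be windows whose score exceeds, say, mean + 1 std.
```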

    Finding emotional-laden resources on the World Wide Web

    Some content in multimedia resources can depict or evoke certain emotions in users. The aim of Emotional Information Retrieval (EmIR), and of our research, is to identify knowledge about emotionally laden documents and to use these findings in a new kind of World Wide Web information service that allows users to search and browse by emotion. Our prototype, called Media EMOtion SEarch (MEMOSE), is largely based on the results of research regarding emotive music pieces, images, and videos. In order to index both evoked and depicted emotions in these three media types and to make them searchable, we work with a controlled vocabulary, slide controls to adjust the emotions' intensities, and broad folksonomies to identify and separate the correct resource-specific emotions. This separation of so-called power tags is based on a tag distribution that follows either an inverse power law (only one emotion was recognized) or an inverse-logistic shape (two or three emotions were recognized); both distributions are well known in information science. MEMOSE consists of a tool for tagging basic emotions with the help of slide controls, a processing device to separate power tags, and a retrieval component consisting of a search interface (for any topic in combination with one or more emotions) and a results screen. The latter shows, for each media type, two separately ranked lists of items (depicted and felt emotions), displaying thumbnails of resources ranked by the mean values of intensity. In the evaluation of the MEMOSE prototype, study participants described our EmIR system as an enjoyable Web 2.0 service.
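
    A minimal sketch of how power tags might be separated from an emotion tag distribution is given below; the simple drop-ratio heuristic stands in for the power-law versus inverse-logistic shape analysis described above, and the function and parameter names are illustrative assumptions.

```python
# Sketch: separate "power tags" from an emotion tag distribution.
# The drop-ratio cut-off is an illustrative stand-in for the paper's
# power-law / inverse-logistic curve analysis.
def power_tags(tag_counts, max_emotions=3, drop_ratio=0.5):
    """tag_counts: dict emotion -> number of users who tagged it."""
    ranked = sorted(tag_counts.items(), key=lambda kv: kv[1], reverse=True)
    power = [ranked[0]]
    for prev, cur in zip(ranked, ranked[1:]):
        # keep following emotions only while the distribution stays "flat"
        # (inverse-logistic-like); a steep drop signals a power-law tail.
        if cur[1] >= drop_ratio * prev[1] and len(power) < max_emotions:
            power.append(cur)
        else:
            break
    return [emotion for emotion, _ in power]

# power_tags({"joy": 40, "surprise": 35, "fear": 4})  ->  ["joy", "surprise"]
```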

    Affective Ranking of Movie Scenes Using Physiological Signals and Content Analysis

    In this paper, we propose an approach for affective ranking of movie scenes based on the emotions that are actually felt by spectators. Such a ranking can be used to characterize the affective, or emotional, content of video clips. The ranking can, for instance, help determine which video clip from a database elicits the most joy for a given user. This in turn permits video indexing and retrieval based on affective criteria corresponding to a personalized user affective profile. A dataset of 64 different scenes from 8 movies was shown to eight participants. While they watched, five peripheral physiological signals were recorded: GSR (galvanic skin resistance), EMG (electromyograms), blood pressure, respiration pattern, and skin temperature. After watching each scene, the participants were asked to self-assess the arousal and valence they felt for that scene. In addition, movie scenes were analyzed in order to characterize each with various audio- and video-based features capturing the key elements of the events occurring within that scene. Arousal and valence levels were estimated by a linear combination of features from physiological signals, as well as by a linear combination of content-based audio and video features. We show that a correlation exists between arousal- and valence-based rankings provided by the spectators' self-assessments and rankings obtained automatically from either physiological signals or audio-video features. This demonstrates that participants' physiological responses can be used to characterize video scenes and to rank them according to their emotional content. It further shows that audio-visual features, either individually or combined, can fairly reliably be used to predict the spectator's felt emotion for a given scene. The results also confirm that participants exhibit different affective responses to movie scenes, which emphasizes the need for emotional profiles to be user-dependent.
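
    The estimation step (a linear combination of physiological features) and the comparison of automatic and self-assessed rankings could look roughly like the sketch below; the placeholder data, feature set, and use of scikit-learn/scipy are assumptions for illustration only.

```python
# Sketch: estimate felt arousal from a linear combination of physiological
# features and compare the induced scene ranking with self-assessments.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import spearmanr

# X: one row per scene with features such as mean GSR, EMG energy,
# respiration rate, skin-temperature slope; y: self-assessed arousal.
X = np.random.rand(64, 5)   # placeholder for 64 scenes x 5 features
y = np.random.rand(64)      # placeholder self-assessments

model = LinearRegression().fit(X, y)
predicted = model.predict(X)

rho, _ = spearmanr(predicted, y)  # rank correlation between the two orderings
print(f"Spearman correlation of rankings: {rho:.2f}")
```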

    Role of emotion in information retrieval

    The main objective of Information Retrieval (IR) systems is to satisfy searchers’ needs. A great deal of research has been conducted in the past to achieve a better insight into searchers’ needs and the factors that can potentially influence the success of an Information Retrieval and Seeking (IR&S) process. One of the factors that has been considered is searchers’ emotion. Previous research has shown that emotion plays an important role in the success of an IR&S process, whose purpose is to satisfy an information need. However, these previous studies do not give a sufficiently prominent position to emotion in IR, since they limit the role of emotion to a secondary factor, assuming that a lack of knowledge (the need for information) is the primary factor (the motivation of the search). In this thesis, we propose to treat emotion as the principal factor in the system of needs of a searcher, and therefore one that ought to be considered by retrieval algorithms. We present a more realistic view of searchers’ needs by considering theories not only from information retrieval and information science, but also from psychology, philosophy, and sociology. We extensively report on the role of emotion in every aspect of human behaviour, at both an individual and a social level. This serves not only to modify current IR views of emotion, but more importantly to uncover social situations where emotion is the primary factor (i.e., the source of motivation) in an IR&S process. We also show that the emotion aspect of documents plays an important part in satisfying the searcher’s need, in particular when emotion is indeed a primary factor. Given the above, we define three concepts, called emotion need, emotion object and emotion relevance, and present a conceptual map that utilises these concepts in IR tasks and scenarios. In order to investigate practical concepts such as emotion object and emotion relevance in a real-life application, we first study the possibility of extracting emotion from text, since this is the first pragmatic challenge to be solved before any IR task can be tackled. For this purpose, we developed a text-based emotion extraction system and demonstrated that it outperforms other available emotion extraction approaches. Using the developed emotion extraction system, the usefulness of the practical concepts mentioned above is studied in two scenarios: movie recommendation and news diversification. In the movie recommendation scenario, two collaborative filtering (CF) models were proposed. CF systems aim to recommend items to a user based on information gathered from other users who have similar interests. CF techniques do not handle data sparsity well, especially in the case of the cold-start problem, where there is no past rating for an item. In order to predict the rating of an item for a given user, the first and second models rely on an extension of state-of-the-art memory-based and model-based CF systems, respectively. The features used by the models are two emotion spaces extracted from the movie plot summary and from the reviews made by users, and three semantic spaces, namely actor, director, and genre. Experiments with two MovieLens datasets show that the inclusion of emotion information significantly improves the accuracy of prediction when compared with state-of-the-art CF techniques, and also tackles data sparsity issues.
In the news retrieval scenario, a novel way of diversifying results, i.e., diversifying based on the emotion aspect of documents, is proposed. For this purpose, two approaches are introduced to consider emotion features for diversification, and they are empirically tested on the TREC 678 Interactive Track collection. The results show that emotion features are capable of enhancing retrieval effectiveness. Overall, this thesis shows that emotion plays a key role in IR and that its importance needs to be considered. At a more detailed level, it illustrates the crucial part that emotion can play in:
    • searchers, both as a primary (emotion need) and a secondary factor (influential role) in an IR&S process;
    • enhancing the representation of a document using emotion features (emotion object); and finally,
    • improving the effectiveness of IR systems at satisfying searchers’ needs (emotion relevance).
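
    The emotion-based diversification described above could be approximated with a greedy MMR-style re-ranking over per-document emotion vectors, as in the hedged sketch below; the lambda weight and the cosine redundancy term are illustrative choices, not necessarily the thesis' exact formulation.

```python
# Sketch of emotion-based diversification: greedy re-ranking that trades off
# the original relevance score against similarity to the emotion vectors of
# documents already selected.
import numpy as np

def diversify(scores, emotion_vecs, k=10, lam=0.7):
    """scores: relevance per doc; emotion_vecs: (n_docs, n_emotions) array."""
    remaining = list(range(len(scores)))
    selected = []
    norms = np.linalg.norm(emotion_vecs, axis=1) + 1e-9
    unit = emotion_vecs / norms[:, None]
    while remaining and len(selected) < k:
        def mmr(i):
            redundancy = max((unit[i] @ unit[j] for j in selected), default=0.0)
            return lam * scores[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected  # indices of the diversified top-k
```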

    Affect-based information retrieval

    One of the main challenges Information Retrieval (IR) systems face nowadays originates from the semantic gap problem: the semantic difference between a user’s query representation and the internal representation of an information item in a collection. The gap is further widened when the user is driven by an ill-defined information need, often the result of an anomaly in his/her current state of knowledge. The formulated search queries, which are submitted to the retrieval system to locate relevant items, then produce poor results that do not address the user’s information needs. To deal with information-need uncertainty, IR systems have in the past employed a range of feedback techniques, which vary from explicit to implicit. The first category necessitates the communication of explicit relevance judgements in return for better query reformulations and recommendations of relevant results; however, this happens at the expense of users’ cognitive resources and introduces an additional layer of complexity to the search process. Implicit feedback techniques, on the other hand, make inferences about what is relevant based on observations of user search behaviour, and thereby relieve users of the cognitive burden of document rating and relevance assessment. However, both categories of relevance feedback (RF) techniques determine topical relevance with respect to the cognitive and situational levels of interaction, failing to acknowledge the importance of emotions in cognition and decision making. In this thesis I investigate the role of emotions in the information seeking process and develop affective feedback techniques for interactive IR. This novel feedback framework aims to aid the search process and facilitate a more natural and meaningful interaction. I develop affective models that determine topical relevance based on information gathered from various sensory channels, and enhance their performance using personalisation techniques. Furthermore, I present an operational video retrieval system that employs affective feedback to enrich user profiles and offers meaningful recommendations of unseen videos. The use of affective feedback as a surrogate for the information need is formalised as the Affective Model of Browsing, a cognitive model that motivates the use of evidence extracted from the psycho-somatic mobilisation that occurs during cognitive appraisal. Finally, I address some of the ethical and privacy issues that arise from the social-emotional interaction between users and computer systems. This study involves questionnaire data gathered over three user studies, from 74 participants of different educational backgrounds, ethnicities, and search experience. The results show that affective feedback is a promising area of research that can improve many aspects of the information seeking process, such as indexing, ranking, and recommendation. Eventually, relevance inferences obtained from affective models may provide a more robust and personalised form of feedback, allowing us to deal more effectively with issues such as the semantic gap.
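
    One simple way to realise affective implicit feedback is to train a classifier that maps sensor-derived features to relevance, as sketched below; the feature set and the logistic-regression choice are assumptions for illustration, and the thesis' actual affective models are richer and personalised.

```python
# Sketch: infer topical relevance from affective signals as implicit feedback.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per viewed item: e.g. mean facial-expression valence, EDA peaks,
# dwell time. Labels: explicit relevance judgements from a calibration phase.
X_train = np.random.rand(200, 3)                   # placeholder features
y_train = (np.random.rand(200) > 0.5).astype(int)  # placeholder judgements

model = LogisticRegression().fit(X_train, y_train)

def implicit_relevance(features):
    """Probability that the item was relevant, used to update the user profile."""
    return model.predict_proba(np.atleast_2d(features))[0, 1]
```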

    Multimedia interaction and access based on emotions: automating video elicited emotions recognition and visualization

    Doctoral thesis in Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2013. Films are an excellent form of art that exploits our affective, perceptual and intellectual abilities. Technological developments and the trend towards media convergence are turning video into a dominant and pervasive medium, and online video is becoming a growing entertainment activity on the web. Alongside this, physiological measures are making it possible to study additional ways to identify and use emotions in human-machine interaction, multimedia retrieval and information visualization. The work described in this thesis has two main objectives: to develop an Emotions Recognition and Classification mechanism for video-induced emotions, and to enable Emotional Movie Access and Exploration. Regarding the first objective, we explore recognition and classification mechanisms in order to allow video classification based on emotions and to identify each user’s emotional states, providing different access mechanisms. We aim to provide video classification and indexing based on the emotions felt by users while watching movies. Concerning the second objective, we focus on emotional movie access and exploration mechanisms, to find ways to access and visualize videos based on their emotional properties and on users’ emotions and profiles. In this context, we designed a set of methods to access and watch the movies, both at the level of the whole movie collection and at the level of individual movies. The automatic recognition mechanism developed in this work allows for the detection of physiological patterns, providing valid individual information about users’ emotions while they watch a specific movie; in addition, the user interface representations and exploration mechanisms proposed and evaluated in this thesis show that more perceptive, satisfactory and useful visual representations positively influence the exploration of emotional information in movies. Funding: Fundação para a Ciência e a Tecnologia (FCT), PROTEC SFRH/BD/49475/2009, LASIGE Multiannual Funding, and the VIRUS project (PTDC/EIAEIA/101012/2008).
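
    A possible shape for the movie-level emotional profile used for access and visualization is sketched below; the per-segment majority-vote aggregation and the data layout are assumptions made purely for illustration.

```python
# Sketch: turn per-segment emotion classifications (one label per user and
# movie segment) into a movie-level emotional profile that a visualization
# could render as a timeline.
from collections import Counter

def emotional_profile(segment_labels):
    """segment_labels: list of lists, one inner list of user labels per segment."""
    profile = []
    for labels in segment_labels:
        counts = Counter(labels)
        dominant, n = counts.most_common(1)[0]
        profile.append({"emotion": dominant, "agreement": n / len(labels)})
    return profile

# emotional_profile([["fear", "fear", "surprise"], ["joy", "joy", "joy"]])
# -> dominant emotion and agreement per segment: fear (2/3), joy (3/3)
```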

    Analysis of advertising effectiveness in esports events broadcast via streaming, using neuromarketing techniques

    Esports (electronic sports) events are regarded as a marketing tool by sponsoring companies seeking to attract new consumers. Advertising effectiveness is therefore of vital importance, with the objective of reaching all potential consumers; however, the growing saturation of advertising space in audiovisual media makes it increasingly difficult to find a place or a format in which brand exposure achieves the reach an advertiser desires. This situation forces advertising and marketing professionals to make a continuous effort to discover new ways of achieving their objectives. Analysing consumers’ cognitive and emotional processes with neuromarketing techniques makes it possible to describe how advertising impacts are perceived in the actual context of consumption. Objective: the study aims to analyse and describe the effectiveness of advertising in esports events broadcast via streaming, as a function of brand characteristics, with the help of neuromarketing tools, and to generate a base of scientific knowledge about the non-conscious behaviour of esports spectators in relation to esports advertising. Hypothesis: extrinsic brand variables influence advertising effectiveness during esports events broadcast via streaming. Sample: the sample comprised 48 subjects, all male, with a mean age of 23.4 ± 17 years, who watched streams for an average of 9.42 ± 45 hours per week and played some esport for 16.4 ± 37 hours per week. Inclusion criteria: 1) being between 18 and 35 years old, and 2) being a regular esports consumer, either playing or watching streams. Independent variables: size, location, colour, complexity and exposure time of the advertised brands. Dependent variables: visual behaviour, emotional impact, valence and recall. Materials and instruments: an electroencephalogram, a galvanic skin response sensor, an eye tracker and a brand recall test (Top of Mind) were used, together with a 10-minute-32-second video of the picks-and-bans phase of the 2018 SuperLiga Orange (League of Legends) final in Spain. Conclusion: the general hypothesis is not fully confirmed, since colour does not affect spectators’ visual behaviour or recall; furthermore, the variables size, complexity, colour and exposure time do not affect the valence experienced by the study subjects.
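
    An analysis relating one brand attribute (size) to visual behaviour (eye-tracker fixation counts) could be run roughly as sketched below; the column names, placeholder data, and the choice of a Mann-Whitney test are assumptions for illustration, not the study's reported procedure.

```python
# Sketch: does brand size shift visual attention (fixation counts)?
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.DataFrame({
    "brand_size": ["large", "small", "large", "small"],  # placeholder rows
    "fixations":  [14, 5, 11, 7],
})

large = df.loc[df["brand_size"] == "large", "fixations"]
small = df.loc[df["brand_size"] == "small", "fixations"]
stat, p = mannwhitneyu(large, small, alternative="two-sided")
print(f"U={stat:.1f}, p={p:.3f}")  # compare fixation counts across brand sizes
```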