19 research outputs found

    Annotating, Understanding, and Predicting Long-term Video Memorability

    Memorability can be regarded as a useful metric of video importance to help choose between competing videos. Research on computational understanding of video memorability is, however, in its early stages. No dataset is available for modelling purposes, and the few previous attempts provided protocols for collecting video memorability data that would be difficult to generalize. Furthermore, the computational features needed to build a robust memorability predictor remain largely undiscovered. In this article, we propose a new protocol to collect long-term video memorability annotations. We measure the memory performance of 104 participants from weeks to years after memorization to build a dataset of 660 videos for video memorability prediction. This dataset is made available to the research community. We then analyze the collected data in order to better understand video memorability, in particular the effects of response time, duration of memory retention and repetition of visualization on video memorability. We finally investigate the use of various types of audio and visual features and build a computational model for video memorability prediction. We conclude that high-level visual semantics help better predict the memorability of videos.

    MediaEval 2018: Predicting Media Memorability Task

    In this paper, we present the Predicting Media Memorability task, which is proposed as part of the MediaEval 2018 Benchmarking Initiative for Multimedia Evaluation. Participants are expected to design systems that automatically predict memorability scores for videos, reflecting the probability of a video being remembered. In contrast to previous work on image memorability prediction, where memorability was measured a few minutes after memorization, the proposed dataset comes with both short-term and long-term memorability annotations. All task characteristics are described, namely: the task's challenges and intended breakthrough, the released dataset and ground truth, the required participant runs, and the evaluation metrics.
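    The abstract does not spell out the evaluation metrics here; benchmarks of this kind are commonly scored by the rank correlation between predicted and ground-truth memorability scores. As an illustrative, stdlib-only sketch (not necessarily the task's official metric), Spearman's ρ with tie-aware average ranks:

```python
def _ranks(values):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of equal values starting at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

    A predictor that preserves the ground-truth ordering of videos scores ρ = 1 regardless of the absolute score values, which is why rank correlation suits this kind of task.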

    Social Interactions vs Revisions, What is important for Promotion in Wikipedia?

    In epistemic communities, people are said to be selected on their knowledge contributions to the project (articles, code, etc.). However, the socialization process is an important factor for inclusion, sustainability as a contributor, and promotion. So what matters for promotion: being a good contributor, being a good animator, or knowing the boss? We explore this question by looking at the process of election for administrator in the English Wikipedia community. We modelled the candidates according to their revisions and/or social attributes. These attributes are used to construct a predictive model of promotion success, based on the candidates' past behavior and computed with a random forest algorithm. Our model, combining knowledge-contribution variables and social-networking variables, successfully explains 78% of the results, which is better than the former models. It also helps refine the criteria for election. While the number of knowledge contributions is the most important element, social interactions come a close second in explaining the election. Moreover, being connected with future peers (the admins) can make the difference between success and failure, making this epistemic community a very social community too.
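    The modelling approach described above (a random forest over contribution and social attributes) can be sketched as follows. This is a hedged illustration on synthetic data: the feature names (`edits`, `talk`, `admin_ties`) are hypothetical stand-ins, not the paper's actual variables, and the outcome is simulated rather than drawn from Wikipedia.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400
# Hypothetical candidate features (stand-ins for the paper's variables):
edits = rng.poisson(2000, n)           # knowledge contributions (revisions)
talk = rng.poisson(300, n)             # social interactions (talk-page posts)
admin_ties = rng.integers(0, 20, n)    # connections to sitting admins
X = np.column_stack([edits, talk, admin_ties])

# Simulated outcome: promotion more likely with many edits and admin ties.
score = 0.001 * edits + 0.002 * talk + 0.1 * admin_ties
y = (score + rng.normal(0, 0.5, n) > score.mean()).astype(int)

# Fit a random forest on the first 300 candidates, evaluate on the rest.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
importances = clf.feature_importances_  # relative weight of each feature
```

    The `feature_importances_` attribute is what lets this kind of model rank contribution variables against social variables, as the study does.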

    Computational understanding of image memorability : towards the integration of emotional and extrinsic information

    The study of image memorability is a recent topic in computer science. First attempts relied on learning algorithms to infer the extent to which a picture is memorable from a set of low-level visual features. In this dissertation, we first revisit the theoretical foundations of image memorability, focusing on the emotions images convey, which are closely related to their memorability. In this light, we propose to widen the scope of image memorability prediction to incorporate not only intrinsic but also extrinsic image information, related to the context of presentation and to the observers. Accordingly, we build a new database for the study of image memorability; it will be useful for testing existing models, trained on the only ground truth available so far. We then introduce deep learning for image memorability prediction: our model obtains the best prediction performance to date. To improve its prediction accuracy, we seek to model contextual and individual influences on image memorability. In the final part, we evaluate the performance of computational models of visual attention, which attract growing interest for memorability prediction, on images that vary in their degree of memorability and in the emotion they convey. Finally, we present the "emotional" interactive movie, which enables us to study the links between emotion and visual attention in videos.

    Role of memory re-evocation: Evolution of the what-where-when memory during long-term consolidation

    Models of long-term consolidation suggest that memories, richly contextualized and fit for revival in their original form, become more generic over time, losing the particular occurrence of the event. However, this transition from an episodic to a semantic nature (i.e., semantization) spares certain memories, which can be relived decades later as vividly as on the first day. The multi-trace model (Nadel & Moscovitch, 1997) assumes that the re-evocation of a memory preserves it from semantization, contrary to the standard model (Alvarez & Squire, 1994), which postulates that re-evocation leads to the semantization of memories.

    Emotional movie: A new art form designed at the heart of human-technology interaction

    Innovative art forms emerge along with the development of new materials for displaying multimedia content and the new possibilities for interaction with the spectator they offer. The interactive movie is one of them; it consists in providing personalized content by interacting with the emotional state of the spectator. In this study, we define this new form of artwork and describe how it can be implemented by providing functional criteria that determine how different parts of the movie are connected to form a particular scenario. We measured the neurophysiological and ocular activities of 60 subjects watching an experimental short movie made for the "emotional movie" project, composed of 12 different scenarios. We combined an electroencephalography headset, which directly provides emotional data, with an eye-tracker to investigate the simultaneous positions of gaze. From our results, we propose that the emotional proximity between the scenario and the spectator could be a relevant selection criterion for customizing the movie.

    Deep Learning for Image Memorability Prediction : the Emotional Bias

    Image memorability prediction is a recent topic in computer science. First attempts have shown that it is possible to computationally infer, from the intrinsic properties of an image, the extent to which it is memorable. In this paper, we introduce a fine-tuned deep-learning-based computational model for image memorability prediction. This model significantly outperforms previous work, obtaining a 32.78% relative increase over the best-performing state-of-the-art model on the same dataset. We also investigate how our model generalizes on a new dataset of 150 images, for which memorability and affective scores were collected from 50 participants. The prediction performance is weaker on this new dataset, which highlights the issue of the representativeness of datasets. In particular, the model obtains higher predictive performance for arousing negative pictures than for neutral or arousing positive ones, recalling how important it is for a memorability dataset to consist of images appropriately distributed within the emotional space.

    Using individual data to characterize emotional user experience and its memorability: focus on gender factor

    Delivering the same digital image to several users does not necessarily provide them with the same experience. In this study, we focused on how different affective experiences impact the memorability of an image. Forty-nine participants took part in an experiment in which they saw a stream of images conveying various emotions. One day later, they had to recognize the images displayed the day before and rate them according to the positivity/negativity of the emotional experience the images induced. To better appreciate the underlying idiosyncratic factors affecting the experience under test, prior to the test session we collected not only personal information but also the results of psychological tests, in order to characterize individuals according to their dominant personality in terms of masculinity-femininity (Bem Sex Role Inventory) and to measure their emotional state. The results show that the way an emotional experience is rated depends on personality rather than biological sex, suggesting that personality could be a mediator in the well-established differences in how males and females experience emotional material. From the collected data, we derive a model including the individual factors relevant to characterizing the memorability of the images, in particular through the emotional experience they induced.
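    A model of this kind, predicting recognition from emotional ratings plus individual covariates, could take the shape of a logistic regression. The sketch below uses synthetic data, and the predictor names (`valence`, `arousal`, `femininity`, `sex`) are illustrative stand-ins, not the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Hypothetical predictors, one row per image viewing:
valence = rng.uniform(-1, 1, n)      # rated positivity/negativity
arousal = rng.uniform(0, 1, n)       # rated emotional intensity
femininity = rng.uniform(0, 1, n)    # hypothetical BSRI-derived score
sex = rng.integers(0, 2, n)          # biological sex (0/1)

# Simulated outcome: recognition driven by arousal and by valence
# extremity modulated by personality, not by sex.
logit = 2 * arousal + 1.5 * np.abs(valence) * femininity - 1
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([valence, arousal, femininity, sex])
model = LogisticRegression(max_iter=1000).fit(X, y)
```

    With real data, comparing the fitted coefficients for the personality score against the sex indicator is one simple way to probe the mediation effect the abstract describes.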

    Emotional interactive movie: adjusting the scenario according to the emotional response of the viewer

    The emotional interactive movie is a kind of film that unfolds in different ways according to the emotion the viewer experiences. The movie is made of several sequences; their combination determines the particular scenario experienced. In this paper, we describe the system and its implementation by providing combination selection criteria. We measured the neurophysiological and ocular activities of 60 individuals viewing an experimental interactive short movie composed of 12 different scenarios. For this purpose, we combined an electroencephalography headset, which directly provides emotional data, with an eye-tracker, in order to simultaneously track the position of the viewer's gaze. From the analysis of the collected data, we propose a functional version of the emotional interactive movie, which was used in a so-called "emotional cinema" during public exhibitions.
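    Combining an EEG headset with an eye-tracker requires aligning two streams sampled at different rates. The abstract does not detail the synchronization method; a common approach is nearest-timestamp alignment, sketched below with hypothetical tuple layouts (`(t, x, y)` gaze samples, `(t, label)` EEG emotion samples).

```python
import bisect

def align(gaze, eeg):
    """Attach to each gaze sample (t, x, y) the EEG emotion label whose
    timestamp is nearest. Both lists must be sorted by time."""
    times = [t for t, _ in eeg]
    merged = []
    for t, x, y in gaze:
        i = bisect.bisect_left(times, t)
        # The nearest EEG sample is either just before or just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda k: abs(times[k] - t))
        merged.append((t, x, y, eeg[j][1]))
    return merged
```

    The merged stream then lets each gaze position be analyzed jointly with the emotional state measured closest in time, which is what selecting the next sequence of the movie requires.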