How to Make an Image More Memorable? A Deep Style Transfer Approach
Recent works have shown that it is possible to automatically predict
intrinsic image properties such as memorability. In this paper, we take a step
further, addressing the question: "Can we make an image more memorable?".
Methods for automatically increasing image memorability would have an impact on
many application fields, such as education, gaming, or advertising. Our work is
inspired by the popular editing-by-applying-filters paradigm adopted in photo
editing applications, like Instagram and Prisma. In this context, the problem
of increasing image memorability maps to that of retrieving "memorabilizing"
filters or style "seeds". Still, users generally have to go through most of the
available filters before finding the desired solution, turning the editing
process into a resource- and time-consuming task. In this work, we show that it
is possible to automatically retrieve the best style seeds for a given image,
thus remarkably reducing the number of human attempts needed to find a good
match. Our approach leverages recent advances in the field of image
synthesis and adopts a deep architecture for generating a memorable picture
from a given input image and a style seed. Importantly, to automatically select
the best style, we propose a novel learning-based solution that also relies on
deep models. Our experimental evaluation, conducted on publicly available
benchmarks, demonstrates the effectiveness of the proposed approach for
generating memorable images through automatic style seed selection.
Comment: Accepted at ACM ICMR 201
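The retrieval idea described above can be sketched in a few lines: stylize the input with each candidate seed and keep the seed whose output scores highest under a memorability predictor. This is a hedged toy sketch, not the paper's code; `stylize`, `memorability`, and the seed format are placeholder assumptions standing in for the deep style-transfer network and the learned predictor.

```python
# Toy sketch of automatic style-seed retrieval (placeholders, not the
# authors' deep models).

def stylize(image, seed):
    """Placeholder for deep style transfer: blend pixel values toward
    the seed's tone, weighted by the seed's strength."""
    return [(1 - seed["strength"]) * p + seed["strength"] * seed["tone"]
            for p in image]

def memorability(image):
    """Placeholder memorability predictor: rewards a wider value range."""
    return max(image) - min(image)

def best_seed(image, seeds):
    """Retrieve the seed whose stylized output scores highest."""
    return max(seeds, key=lambda s: memorability(stylize(image, s)))

image = [0.2, 0.5, 0.4, 0.3]
seeds = [
    {"name": "warm", "tone": 0.9, "strength": 0.5},
    {"name": "flat", "tone": 0.4, "strength": 0.9},
]
print(best_seed(image, seeds)["name"])
```

With real models, the expensive part is the stylization forward pass per seed; the retrieval step itself stays this simple.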
What Makes Natural Scene Memorable?
Recent studies on image memorability have shed light on the visual features
that make generic images, object images or face photographs memorable. However,
a clear understanding and reliable estimation of natural scene memorability
remain elusive. In this paper, we attempt to answer the question: "What exactly
makes a natural scene memorable?". Specifically, we first build LNSIM, a
large-scale natural scene image memorability database (containing 2,632 images
and memorability annotations). Then, we mine our database to investigate how
low-, middle- and high-level handcrafted features affect the memorability of
natural scenes. In particular, we find that the high-level feature of scene
category correlates strongly with natural scene memorability. Thus, we propose a deep
neural network based natural scene memorability (DeepNSM) predictor, which
takes advantage of scene category. Finally, the experimental results validate
the effectiveness of DeepNSM.
Comment: Accepted to ACM MM Workshop
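The abstract's key point is that a scene-category signal helps the predictor. One minimal way to "take advantage of scene category" is to append a category encoding to the deep features before the regression head; the sketch below assumes that design, with toy features, categories, and weights (the paper's actual architecture and values are not given here).

```python
# Hedged sketch: a memorability regressor over concatenated
# [deep features | scene-category one-hot]. All values are toy placeholders.

CATEGORIES = ["forest", "beach", "mountain"]

def category_onehot(category):
    """Encode the scene category as a one-hot vector."""
    return [1.0 if c == category else 0.0 for c in CATEGORIES]

def predict_memorability(deep_features, category, weights, bias=0.0):
    """Linear head over the concatenated feature vector."""
    x = deep_features + category_onehot(category)
    return bias + sum(w * v for w, v in zip(weights, x))

feats = [0.3, 0.7]                        # stand-in for CNN features
weights = [0.2, 0.1, 0.15, -0.05, 0.10]   # last three weight the categories
print(predict_memorability(feats, "forest", weights))
```

The category term lets two images with identical appearance features receive different scores, which is exactly the effect the correlation finding motivates.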
Enhancing Perceptual Attributes with Bayesian Style Generation
Deep learning has brought unprecedented progress in computer vision and
significant advances have been made in predicting subjective properties
inherent to visual data (e.g., memorability, aesthetic quality, evoked
emotions, etc.). Recently, some works have even proposed deep learning
approaches that modify images so as to alter these properties.
Following this research line, this paper introduces a novel deep learning
framework for synthesizing images in order to enhance a predefined perceptual
attribute. Our approach takes as input a natural image and exploits recent
models for deep style transfer and generative adversarial networks to change
its style in order to modify a specific high-level attribute. Unlike previous
works, which focus on enhancing a specific property of visual content,
we propose a general framework and demonstrate its effectiveness in two use
cases, i.e. increasing image memorability and generating scary pictures. We
evaluate the proposed approach on publicly available benchmarks, demonstrating
its advantages over state-of-the-art methods.
Comment: ACCV-201
Understanding and Predicting Image Memorability at a Large Scale
Progress in estimating visual memorability has been limited by the small scale and lack of variety of benchmark data. Here, we introduce a novel experimental procedure to objectively measure human memory, allowing us to build LaMem, the largest annotated image memorability dataset to date (containing 60,000 images from diverse sources). Using Convolutional Neural Networks (CNNs), we show that fine-tuned deep features outperform all other features by a large margin, reaching a rank correlation of 0.64, near human consistency (0.68). Analysis of the responses of the high-level CNN layers shows which objects and regions are positively, and negatively, correlated with memorability, allowing us to create memorability maps for each image and provide a concrete method to perform image memorability manipulation. This work demonstrates that one can now robustly estimate the memorability of images from many different classes, positioning memorability and deep memorability features as prime candidates to estimate the utility of information for cognitive systems. Our model and data are available at: http://memorability.csail.mit.edu.
Funding: National Science Foundation (U.S.) (Grant 1532591); McGovern Institute for Brain Research at MIT, Neurotechnology (MINT) Program; Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, MIT Big Data Initiative; Google (Firm); Xerox Corporation
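The "rank correlation of 0.64" reported above is the standard evaluation for memorability predictors: Spearman's rho between predicted scores and human-measured scores. A minimal self-contained sketch (toy scores, no ties assumed; not the paper's evaluation code):

```python
# Spearman rank correlation between predicted and measured memorability,
# using the closed form rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).

def ranks(xs):
    """Rank each value from 1 (smallest) to n (largest), assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(pred, truth):
    n = len(pred)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(pred), ranks(truth)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

predicted = [0.81, 0.60, 0.74, 0.55, 0.90]  # toy model outputs
measured  = [0.78, 0.58, 0.70, 0.62, 0.88]  # toy human memorability scores
print(spearman(predicted, measured))
```

In practice one would use a library routine that also handles tied ranks; the point here is only that the metric compares orderings, not raw score values, which is why it pairs naturally with noisy human memory measurements.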