What Makes Natural Scene Memorable?
Recent studies on image memorability have shed light on the visual features that make generic images, object images, or face photographs memorable. However, a clear understanding and reliable estimation of natural scene memorability remain elusive. In this paper, we attempt to answer the question: what exactly makes a natural scene memorable? Specifically, we first build LNSIM, a large-scale natural scene image memorability database containing 2,632 images with memorability annotations. We then mine our database to investigate how low-, middle-, and high-level handcrafted features affect the memorability of natural scenes. In particular, we find that the high-level feature of scene category is strongly correlated with natural scene memorability. We therefore propose DeepNSM, a deep neural network based natural scene memorability predictor that takes advantage of scene category. Finally, the experimental results validate the effectiveness of DeepNSM.
Comment: Accepted to ACM MM Workshop
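The abstract does not detail DeepNSM's architecture; as a rough sketch of the idea of exploiting scene category, one could concatenate a scene-category probability vector with a deep image embedding and regress a memorability score. All names and parameters below are hypothetical illustrations, not the paper's actual design.

```python
import numpy as np

def fuse_for_memorability(image_feat, scene_probs, w, b):
    """Fuse deep image features with scene-category information.

    A minimal sketch of the idea behind DeepNSM: the predictor
    "takes advantage of scene category", so here a scene-category
    probability vector is concatenated with the image embedding and
    fed to a linear regressor (w, b are hypothetical learned
    parameters standing in for the network's final layers).
    """
    x = np.concatenate([image_feat, scene_probs])
    return float(w @ x + b)  # scalar memorability score

score = fuse_for_memorability(
    np.array([1.0, 2.0]),    # toy image embedding
    np.array([0.5, 0.5]),    # toy scene-category posterior
    np.ones(4), 0.0)         # toy regressor weights and bias
print(score)  # → 4.0
```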
Viraliency: Pooling Local Virality
In our overly connected world, the automatic recognition of virality, the quality of an image or video that causes it to spread rapidly and widely through social networks, is of crucial importance and has recently awakened the interest of the computer vision community. Concurrently, recent progress in deep learning architectures has shown that global pooling strategies allow the extraction of activation maps, which highlight the parts of an image most likely to contain instances of a certain class. We extend this concept by introducing a pooling layer that learns the size of the support area to be averaged: the learned top-N average (LENA) pooling. We hypothesize that the latent concepts (feature maps) describing virality may require such a rich pooling strategy. We assess the effectiveness of the LENA layer by appending it on top of a convolutional siamese architecture and evaluate its performance on the task of predicting and localizing virality. We report experiments on two publicly available datasets annotated for virality and show that our method outperforms state-of-the-art approaches.
Comment: Accepted at IEEE CVPR 201
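The core operation of the LENA layer, averaging the N largest activations of a feature map, can be sketched as follows. In the paper the support size is learned; here it is a fixed argument for illustration.

```python
import numpy as np

def top_n_average_pool(feature_map, n):
    """Average the n largest activations of a single feature map.

    In the LENA layer the support size n is learned during
    training; here it is a fixed hyperparameter for illustration.
    Global average pooling is the special case n = H * W, and
    global max pooling is the special case n = 1.
    """
    flat = np.sort(feature_map.ravel())[::-1]  # activations, descending
    return flat[:n].mean()

fmap = np.array([[0.1, 0.9],
                 [0.4, 0.6]])
print(top_n_average_pool(fmap, 2))  # mean of the two largest: (0.9 + 0.6) / 2 = 0.75
```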
Dublin's participation in the predicting media memorability task at MediaEval 2018
This paper outlines 6 approaches taken to computing video memorability, for the MediaEval media memorability task. The approaches are based on video features, an end-to-end approach, saliency, aesthetics, neural feedback, and an ensemble of all approaches
AMNet: Memorability Estimation with Attention
In this paper we present the design and evaluation of an end-to-end trainable deep neural network with a visual attention mechanism for memorability estimation in still images. We analyze the suitability of transfer learning of deep models from image classification to the memorability task. Further, we study the impact of the attention mechanism on memorability estimation and evaluate our network on the SUN Memorability and LaMem datasets. Our network outperforms the existing state-of-the-art models on both datasets in terms of both Spearman's rank correlation and mean squared error, closely matching human consistency.
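The abstract does not specify the form of the attention mechanism; a common choice, soft spatial attention that softmax-weights locations before pooling, can be sketched as below. The attention logits would come from a learned head; here they are supplied directly for illustration.

```python
import numpy as np

def attention_pool(feature_maps, attention_logits):
    """Soft spatial attention pooling over convolutional features.

    feature_maps: (C, H, W) activations.
    attention_logits: (H, W) scores from a (hypothetical) learned
    attention head. Locations are softmax-weighted and the feature
    vectors averaged accordingly, yielding a (C,) descriptor.
    """
    a = np.exp(attention_logits - attention_logits.max())
    a /= a.sum()                                      # softmax over H*W locations
    return (feature_maps * a[None]).sum(axis=(1, 2))  # weighted spatial average

fmaps = np.array([[[1.0, 3.0]]])      # one channel, 1x2 spatial grid
logits = np.zeros((1, 2))             # uniform attention
print(attention_pool(fmaps, logits))  # → [2.]
```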
Quality Aware Network for Set to Set Recognition
This paper targets the problem of set-to-set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they can lead to higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples of poor quality hurt the metric. In this paper, the quality aware network (QAN) is proposed to address this problem: the quality of each sample is learned automatically, even though such information is not explicitly provided during training. The network has two branches, where the first branch extracts an appearance feature embedding for each sample and the other branch predicts a quality score for each sample. The features and quality scores of all samples in a set are then aggregated to generate the final feature embedding. We show that the two branches can be trained in an end-to-end manner given only set-level identity annotations. Analysis of how gradients propagate through this mechanism indicates that the quality learned by the network benefits set-to-set recognition and simplifies the distribution that the network needs to fit. Experiments on both face verification and person re-identification show the advantages of the proposed QAN. The source code and network structure can be downloaded at https://github.com/sciencefans/Quality-Aware-Network.
Comment: Accepted at CVPR 201
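The aggregation step, combining per-sample features and quality scores into one set embedding, can be sketched as a quality-weighted average. This is a minimal illustration; in QAN both branches are learned jointly end-to-end.

```python
import numpy as np

def aggregate_set(features, quality_scores):
    """Quality-weighted aggregation of per-sample embeddings.

    features: (num_samples, dim) embeddings from the appearance branch.
    quality_scores: (num_samples,) non-negative scores from the
    quality branch. The weighted sum below is a minimal sketch of
    the final set-level embedding; low-quality samples contribute
    less to the aggregate.
    """
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()  # normalize weights over the set
    return (w[:, None] * np.asarray(features, dtype=float)).sum(axis=0)

feats = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
q = np.array([3.0, 1.0])        # first sample judged higher quality
print(aggregate_set(feats, q))  # → [0.75 0.25]
```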
Understanding and Predicting Image Memorability at a Large Scale
Progress in estimating visual memorability has been limited by the small scale and lack of variety of benchmark data. Here, we introduce a novel experimental procedure to objectively measure human memory, allowing us to build LaMem, the largest annotated image memorability dataset to date (containing 60,000 images from diverse sources). Using Convolutional Neural Networks (CNNs), we show that fine-tuned deep features outperform all other features by a large margin, reaching a rank correlation of 0.64, near human consistency (0.68). Analysis of the responses of the high-level CNN layers shows which objects and regions are positively, and negatively, correlated with memorability, allowing us to create memorability maps for each image and providing a concrete method to perform image memorability manipulation. This work demonstrates that one can now robustly estimate the memorability of images from many different classes, positioning memorability and deep memorability features as prime candidates to estimate the utility of information for cognitive systems. Our model and data are available at: http://memorability.csail.mit.edu.
Funding: National Science Foundation (U.S.) (Grant 1532591); McGovern Institute for Brain Research at MIT, Neurotechnology (MINT) Program; MIT Computer Science and Artificial Intelligence Laboratory, MIT Big Data Initiative; Google (Firm); Xerox Corporation
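The reported metric, Spearman's rank correlation between predicted and measured memorability, can be computed as the Pearson correlation of the ranks, assuming no ties (typical for continuous memorability scores):

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation between two score vectors.

    Computed as the Pearson correlation of the ranks. Assumes no
    tied values; with ties, averaged ranks would be needed
    (scipy.stats.spearmanr handles that case).
    """
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

pred = np.array([0.2, 0.5, 0.9, 0.4])
truth = np.array([0.1, 0.6, 0.8, 0.3])
print(spearman(pred, truth))  # same ordering in both vectors → 1.0
```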
Data analytics for image visual complexity and Kinect-based videos of rehabilitation exercises
With recent advances in computer vision and pattern recognition, methods from these fields have been successfully applied to problems in various domains, including health care and the social sciences. In this thesis, two such problems, from different domains, are discussed. First, an application of computer vision and broader pattern recognition in physical therapy is presented. Home-based physical therapy is an essential part of the recovery process, in which the patient is prescribed specific exercises to improve symptoms and the daily functioning of the body. However, poor adherence to the prescribed exercises is a common problem. In our work, we explore methods for improving the home-based physical therapy experience. We begin by proposing DyAd, a dynamic difficulty adjustment system that captures the trajectory of the hand movement, evaluates the user's performance quantitatively, and adjusts the difficulty level for the next trial of the exercise based on the performance measurements. Next, we introduce ExerciseCheck, a remote monitoring and evaluation platform for home-based physical therapy. ExerciseCheck is capable of capturing exercise information, evaluating performance, providing therapeutic feedback to the patient and the therapist, checking the progress of the user over the course of the physical therapy, and supporting the patient throughout this period. In our experiments, Parkinson's patients tested our system at a clinic and in their homes during their physical therapy period. Our results suggest that ExerciseCheck is a user-friendly application that can assist patients by providing motivation and guidance to ensure correct execution of the required exercises.
For the second application, within the computer vision paradigm, we focus on visual complexity, an image attribute that humans can subjectively evaluate based on the level of detail in the image. Visual complexity has been studied in psychophysics, cognitive science and, more recently, computer vision, for the purposes of product design, web design, advertising, etc. We first introduce a diverse visual complexity dataset that comprises seven image categories. We collect the ground-truth scores by comparing the pairwise relationships of images and then convert the pairwise scores to absolute scores using mathematical methods. Furthermore, we propose a method to measure visual complexity that uses unsupervised information extraction from intermediate convolutional layers of deep neural networks. We derive an activation energy metric that combines convolutional layer activations to quantify visual complexity. The high correlations between ground-truth labels and computed energy scores in our experiments show the superiority of our method compared to previous work. Finally, as an example of the relationship between visual complexity and other image attributes, we demonstrate that, within the context of a category, visually more complex images are more memorable to human observers.
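The thesis does not name the method used to convert pairwise comparison outcomes into absolute scores; one minimal illustrative choice is the mean win rate across all opponents (Bradley-Terry-style models are another common option):

```python
import numpy as np

def pairwise_to_scores(wins):
    """Convert pairwise comparison counts to absolute scores.

    wins[i, j] = number of times image i was judged more complex
    than image j. Each image's score is its mean win rate against
    all other images. This is an illustrative stand-in for the
    unnamed "mathematical methods" mentioned in the abstract.
    """
    wins = np.asarray(wins, dtype=float)
    total = wins + wins.T                 # comparisons per pair
    with np.errstate(invalid="ignore"):
        rates = np.where(total > 0, wins / total, 0.5)
    np.fill_diagonal(rates, 0.0)          # no self-comparisons
    n = len(wins)
    return rates.sum(axis=1) / (n - 1)    # mean win rate per image

w = np.array([[0, 8, 9],
              [2, 0, 7],
              [1, 3, 0]])
print(pairwise_to_scores(w))  # → [0.85 0.45 0.2 ]
```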