3,005 research outputs found
Deep Convolutional Neural Networks as strong gravitational lens detectors
Future large-scale surveys with high resolution imaging will provide us with
a few new strong galaxy-scale lenses. These strong lensing systems, however,
will be buried in large amounts of data that are beyond the capacity
of human experts to visually classify in an unbiased way. We present a new
strong gravitational lens finder based on convolutional neural networks (CNNs).
The method was applied to the Strong Lensing challenge organised by the Bologna
Lens Factory. It achieved first and third place respectively on the space-based
data-set and the ground-based data-set. The goal was to find a fully automated
lens finder for ground-based and space-based surveys which minimizes human
inspection. We compare the results of our CNN architecture and three new
variations ("invariant", "views", and "residual") on the simulated data of the
challenge. Each method was trained separately 5 times on 17 000 simulated
images, cross-validated using 3 000 images, and then applied to a 100 000 image
test set. We used two different metrics for evaluation: the area under the
receiver operating characteristic curve (AUC) score and the recall with no
false positives.
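Both metrics can be computed directly from ground-truth labels and predicted scores; a minimal sketch in Python follows (the scikit-learn AUC call is standard, while the toy labels/scores arrays are stand-ins, not the challenge data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def zero_fp_recall(labels, scores):
    """Recall at the strictest threshold that still yields zero false positives.

    The threshold sits just above the best-scoring negative example, so
    everything flagged as a lens really is one.
    """
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    threshold = scores[~labels].max()            # highest score of any non-lens
    return np.mean(scores[labels] > threshold)   # fraction of lenses above it

# Toy usage with stand-in scores (not the challenge data).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = np.clip(0.6 * labels + rng.normal(0.3, 0.2, size=1000), 0.0, 1.0)
print("AUC:", roc_auc_score(labels, scores))
print("recall at 0 FP:", zero_fp_recall(labels, scores))
```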
For ground-based data our best method achieved an AUC score of … and a
zero-false-positive recall of …. For space-based data our best method achieved
an AUC score of … and a zero-false-positive recall of …. On space-based data,
adding dihedral invariance to the CNN architecture diminished the overall score
but achieved a higher recall at zero contamination.
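The "invariant" variation presumably builds this symmetry into the network itself; the simplest way to approximate dihedral invariance with an ordinary CNN is to average predictions over the eight symmetries of the square (the dihedral group D4), as in this sketch, where `model` stands in for any trained classifier:

```python
import numpy as np

def d4_variants(image):
    """Yield the 8 transforms of the dihedral group D4: 4 rotations,
    each with and without a horizontal flip."""
    for k in range(4):
        rotated = np.rot90(image, k)
        yield rotated
        yield np.fliplr(rotated)

def d4_averaged_score(model, image):
    """Average a model's lens score over all dihedral transforms of the
    input, making the combined prediction invariant to rotations and flips."""
    return float(np.mean([model(v) for v in d4_variants(image)]))
```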
We found that using committees of 5 CNNs produces the best recall at zero
contamination and consistently scores a better AUC than a single CNN.
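A committee here is simply an ensemble: each CNN is trained independently (e.g. from a different random initialization) and their scores are averaged, a sketch under those assumptions:

```python
import numpy as np

def committee_score(models, image):
    """Average the lens probability over a committee of independently
    trained CNNs; the mean is less noisy than any single member's score."""
    return float(np.mean([model(image) for model in models]))
```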
We found that for every variation of our CNN lens finder, we achieve AUC scores
close to … within ….

Comment: 9 pages, accepted to A&A
Audio-Visual Sentiment Analysis for Learning Emotional Arcs in Movies
Stories can have tremendous power -- not only are they useful for
entertainment, they can also activate our interests and mobilize our actions.
The degree to which a story resonates with its audience may be partly reflected
in the emotional journey it takes the audience on. In this paper, we use
machine learning methods to construct emotional arcs in movies, calculate
families of arcs, and demonstrate the ability of certain arcs to predict
audience engagement. The system is applied to Hollywood films and high-quality
shorts found on the web.
We begin by using deep convolutional neural networks for audio and visual
sentiment analysis. These models are trained on both new and existing
large-scale datasets, after which they can be used to compute separate audio
and visual emotional arcs.
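An emotional arc in this sense is essentially a smoothed time series of per-segment sentiment scores; a minimal sketch (the per-second scores and the smoothing window are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def emotional_arc(sentiment_scores, window=31):
    """Smooth a sequence of per-segment sentiment scores (e.g. one visual
    sentiment prediction per second) into an arc with a moving average."""
    kernel = np.ones(window) / window
    return np.convolve(sentiment_scores, kernel, mode="same")

# Stand-in scores for a two-hour film sampled once per second.
scores = np.random.default_rng(1).standard_normal(7200)
arc = emotional_arc(scores)
```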
We then crowdsource annotations for 30-second video clips extracted from highs
and lows in the arcs in order to assess the micro-level precision of the
system, with precision measured in terms of agreement in polarity between the
system's predictions and annotators' ratings. These annotations are also used
to combine the audio and visual predictions.
Next, we look at macro-level characterizations of movies by investigating
whether there exist 'universal shapes' of emotional arcs. In particular, we
develop a clustering approach to discover distinct classes of emotional arcs.
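One way to realize such a clustering, offered as an illustration rather than the authors' exact method, is to resample every arc to a common length, z-normalize, and run k-means; the cluster count here is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def resample_arc(arc, length=100):
    """Resample an arc to a fixed length so films of different
    durations become comparable."""
    x_old = np.linspace(0.0, 1.0, len(arc))
    x_new = np.linspace(0.0, 1.0, length)
    return np.interp(x_new, x_old, arc)

def cluster_arcs(arcs, n_clusters=6):
    """Z-normalize resampled arcs and group them into candidate
    'universal shapes' with k-means."""
    X = np.array([resample_arc(a) for a in arcs])
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
```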
Finally, we show on a sample corpus of short web videos that certain emotional
arcs are statistically significant predictors of the number of comments a video
receives. These results suggest that the emotional arcs learned by our approach
successfully represent macroscopic aspects of a video story that drive audience
engagement. Such machine understanding could be used to predict audience
reactions to video stories, ultimately improving our ability as storytellers to
communicate with each other.

Comment: Data Mining (ICDM), 2017 IEEE 17th International Conference on