Quicksilver: Fast Predictive Image Registration - a Deep Learning Approach
This paper introduces Quicksilver, a fast deformable image registration
method. Quicksilver registration for image-pairs works by patch-wise prediction
of a deformation model based directly on image appearance. A deep
encoder-decoder network is used as the prediction model. While the prediction
strategy is general, we focus on predictions for the Large Deformation
Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the
momentum-parameterization of LDDMM, which facilitates a patch-wise prediction
strategy while maintaining the theoretical properties of LDDMM, such as
guaranteed diffeomorphic mappings for sufficiently strong regularization. We
also provide a probabilistic version of our prediction network that can be
sampled at test time to quantify uncertainty in the predicted deformations.
Finally, we introduce a new correction network that greatly increases the
prediction accuracy of an existing prediction network. We
show experimental results for uni-modal atlas-to-image as well as uni- / multi-
modal image-to-image registrations. These experiments demonstrate that our
method accurately predicts registrations obtained by numerical optimization, is
very fast, achieves state-of-the-art registration results on four standard
validation datasets, and can jointly learn an image similarity measure.
Quicksilver is freely available as open-source software.
Comment: Add new discussion
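The probabilistic variant described in the abstract is sampled repeatedly at test time to estimate uncertainty. A minimal sketch of that idea, using a toy NumPy stand-in (the two-layer linear map, dropout rate, and patch sizes here are illustrative assumptions, not the paper's encoder-decoder architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the probabilistic prediction network: a small
# nonlinear map from an image patch to a momentum patch, with dropout
# kept active at test time. Shapes and layer choices are illustrative.
W_hidden = rng.standard_normal((32, 27))   # 3x3x3 patch -> 32 hidden units
W_out = rng.standard_normal((27, 32))      # hidden units -> momentum patch

def predict_momentum(patch, drop_p=0.3):
    """One stochastic forward pass: dropout stays on during testing."""
    h = np.maximum(W_hidden @ patch, 0.0)      # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_p       # test-time dropout mask
    h = h * mask / (1.0 - drop_p)              # inverted-dropout scaling
    return W_out @ h

patch = rng.standard_normal(27)
samples = np.stack([predict_momentum(patch) for _ in range(100)])

mean_momentum = samples.mean(axis=0)   # point prediction
uncertainty = samples.std(axis=0)      # per-voxel predictive spread

print(mean_momentum.shape, uncertainty.shape)
```

Averaging many stochastic passes gives the point prediction, and their spread serves as the uncertainty estimate for the predicted deformation.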
Sequential Prediction of Social Media Popularity with Deep Temporal Context Networks
Prediction of popularity has a profound impact on social media, since it
offers opportunities to reveal individual preferences and public attention in
evolving social systems. Previous research, although it achieves promising
results, neglects one distinctive characteristic of social data, namely its
sequentiality. For example, the popularity of online content is generated over
time with sequential post streams of social media. To investigate the
sequential prediction of popularity, we propose a novel prediction framework
called Deep Temporal Context Networks (DTCN) that takes both temporal context
and temporal attention into account. Our DTCN contains three main components:
embedding, learning, and predicting. With a joint embedding
network, we obtain a unified deep representation of multi-modal user-post data
in a common embedding space. Then, based on the embedded data sequence over
time, temporal context learning attempts to recurrently learn two adaptive
temporal contexts for sequential popularity. Finally, a novel temporal
attention is designed to predict new popularity (the popularity of a new
user-post pair) with temporal coherence across multiple time-scales.
Experiments on our released image dataset with about 600K Flickr photos
demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms,
with an average of 21.51% relative performance improvement in the popularity
prediction (Spearman's Rank Correlation).
Comment: accepted in IJCAI-1
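The temporal attention step scores past contexts against the embedding of a new user-post pair and combines them into one summary used for prediction. A minimal sketch of that mechanism with NumPy (the dot-product scoring, vector dimensions, and variable names are illustrative assumptions, not DTCN's learned attention):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(query, contexts):
    """Score each past context against the query and return the
    attention-weighted summary plus the weights themselves."""
    scores = contexts @ query          # one score per time step
    weights = softmax(scores)          # normalized attention weights
    return weights @ contexts, weights

rng = np.random.default_rng(1)
d = 8
contexts = rng.standard_normal((5, d))  # stand-in for learned temporal contexts
query = rng.standard_normal(d)          # stand-in embedding of a new user-post pair

summary, weights = temporal_attention(query, contexts)
print(round(float(weights.sum()), 6))  # -> 1.0
```

The summary vector would then feed a regressor that outputs the popularity score; running attention at several time-scales and merging the results is one way to realize the multi-scale coherence the abstract describes.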
Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets
Imitation learning has traditionally been applied to learning a single task
from demonstrations of that task. The requirement of structured and isolated
demonstrations limits the scalability of imitation learning approaches as they
are difficult to apply to real-world scenarios, where robots have to be able to
execute a multitude of tasks. In this paper, we propose a multi-modal imitation
learning framework that is able to segment and imitate skills from unlabelled
and unstructured demonstrations by learning skill segmentation and imitation
learning jointly. Extensive simulation results indicate that our method can
efficiently separate the demonstrations into individual skills and learn to
imitate them using a single multi-modal policy. A video of our experiments is
available at http://sites.google.com/view/nips17intentiongan
Comment: Paper accepted to NIPS 201
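A single multi-modal policy can be conditioned on a latent intention, and segmentation amounts to inferring which intention best explains each demonstration step. A toy sketch of that structure (the per-intention linear controllers and nearest-action inference are illustrative assumptions; the paper learns the policy and segmentation jointly with a generative adversarial objective):

```python
import numpy as np

rng = np.random.default_rng(2)

N_SKILLS, STATE_DIM, ACTION_DIM = 3, 4, 2

# Toy multi-modal policy: one linear controller per latent intention.
controllers = rng.standard_normal((N_SKILLS, ACTION_DIM, STATE_DIM))

def policy(state, intention):
    """Action for a state under a given latent intention."""
    return controllers[intention] @ state

def infer_intention(state, action):
    """Segment a demonstration step: pick the intention whose
    controller best reproduces the observed action."""
    errors = [np.linalg.norm(policy(state, k) - action)
              for k in range(N_SKILLS)]
    return int(np.argmin(errors))

state = rng.standard_normal(STATE_DIM)
action = policy(state, 1)              # step generated by skill 1
print(infer_intention(state, action))  # -> 1
```

Here the demonstration step generated by skill 1 is correctly attributed back to intention 1; in the learned setting, the same conditioning lets one network carry several skills and switch among them via the latent variable.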