Learned Multi-Patch Similarity
Estimating a depth map from multiple views of a scene is a fundamental task
in computer vision. As soon as more than two viewpoints are available, one
faces the basic question of how to measure similarity across more than two
image patches. Surprisingly, no direct solution exists; instead, it is common
to fall back on more or less robust averaging of two-view similarities.
Encouraged by the success of machine learning, and in particular convolutional
neural networks, we propose to learn a matching function that directly maps
multiple image patches to a scalar similarity score. Experiments on several
multi-view datasets demonstrate that this approach has advantages over methods
based on pairwise patch similarity.

Comment: 10 pages, 7 figures, Accepted at ICCV 201
Learning Correspondence Structures for Person Re-identification
This paper addresses the problem of handling spatial misalignments due to
camera-view changes or human-pose variations in person re-identification. We
first introduce a boosting-based approach to learn a correspondence structure
which indicates the patch-wise matching probabilities between images from a
target camera pair. The learned correspondence structure can not only capture
the spatial correspondence pattern between cameras but also handle the
viewpoint or human-pose variation in individual images. We further introduce a
global constraint-based matching process. It integrates a global matching
constraint over the learned correspondence structure to exclude cross-view
misalignments during the image patch matching process, hence achieving a more
reliable matching score between images. Finally, we also extend our approach by
introducing a multi-structure scheme, which learns a set of local
correspondence structures to capture the spatial correspondence sub-patterns
between a camera pair, so as to handle the spatial misalignments between
individual images in a more precise way. Experimental results on various
datasets demonstrate the effectiveness of our approach.

Comment: IEEE Trans. Image Processing, vol. 26, no. 5, pp. 2438-2453, 2017.
The project page for this paper is available at
http://min.sjtu.edu.cn/lwydemo/personReID.htm arXiv admin note: text overlap
with arXiv:1504.0624
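As a rough illustration of the structure-weighted matching idea, the sketch below scores an image pair by weighting patch-wise cosine similarities with a matching-probability matrix. The feature dimensions and the normalization by the total probability mass are assumptions for illustration:

```python
import numpy as np

def structure_weighted_score(feats_a, feats_b, P):
    """Score two images given per-patch features and a correspondence
    structure P of patch-wise matching probabilities.

    feats_a: (m, d) patch features from the first camera view
    feats_b: (n, d) patch features from the second camera view
    P:       (m, n) matching probabilities between patch positions
    """
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                     # cosine similarity per patch pair
    return float((P * sim).sum() / P.sum())
```

Patch pairs that the learned structure deems implausible (small P entries) contribute little to the final score, which is how cross-view misalignments are suppressed.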
Deformable Registration through Learning of Context-Specific Metric Aggregation
We propose a novel weakly supervised discriminative algorithm for learning
context-specific registration metrics as a linear combination of conventional
similarity measures. Conventional metrics have been used extensively over the
past two decades, and both their strengths and limitations are therefore well
known. The challenge is to find the optimal relative weighting (or parameters)
of the different metrics forming the similarity measure of the registration
algorithm. Hand-tuning these parameters yields sub-optimal solutions and
quickly becomes infeasible as the number of metrics increases. Furthermore,
such a hand-crafted combination can only be applied at a global scale (the
entire volume) and therefore cannot account for differing tissue properties.
We propose a learning algorithm that estimates these parameters locally,
conditioned on the semantic class of the data. The objective function of our
formulation is a difference-of-convex function, a special case of non-convex
function, which we optimize using the concave-convex procedure. As a proof of
concept, we show the impact of our approach on three challenging datasets for
different anatomical structures and modalities.

Comment: Accepted for publication in the 8th International Workshop on Machine
Learning in Medical Imaging (MLMI 2017), in conjunction with MICCAI 201
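The linear metric aggregation can be sketched with two toy stand-in metrics. The metric choices and weights below are illustrative assumptions; in the paper the weights are learned per semantic class rather than fixed by hand:

```python
import numpy as np

# Toy stand-in similarity metrics (illustrative, not the paper's exact set)
def neg_ssd(a, b):
    # negated mean of squared differences (higher is more similar)
    return -float(np.mean((a - b) ** 2))

def ncc(a, b):
    # normalized cross-correlation
    a0, b0 = a - a.mean(), b - b.mean()
    return float((a0 * b0).sum() /
                 (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12))

def aggregated_metric(a, b, weights):
    """Linear combination of conventional metrics; in the paper the
    weights are conditioned locally on the region's semantic class."""
    return float(weights @ np.array([neg_ssd(a, b), ncc(a, b)]))
```

Learning one weight vector per tissue class lets the registration energy favor, say, intensity-based terms in one region and correlation-based terms in another.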
Self-Tuned Deep Super Resolution
Deep learning has been successfully applied to image super-resolution (SR).
In this paper, we propose a deep joint super-resolution (DJSR) model to exploit
both external and self-similarities for SR. A Stacked Denoising Convolutional
Auto-Encoder (SDCAE) is first pre-trained on external examples with proper data
augmentation. It is then fine-tuned with multi-scale self-examples from each
input, where the reliability of self-examples is explicitly taken into account.
We also enhance model performance through sub-model training and selection. The
DJSR model is extensively evaluated and compared with state-of-the-art methods,
and shows noticeable performance improvements, both quantitatively and
perceptually, on a wide range of images.
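A minimal sketch of extracting multi-scale self-examples from a single input; nearest-neighbour downscaling stands in for whatever resampling the model actually uses, and the scale factors are arbitrary:

```python
import numpy as np

def multiscale_self_examples(img, scales=(0.5, 0.75)):
    """Build downscaled copies of the input to serve as self-examples
    for fine-tuning; nearest-neighbour keeps this dependency-free."""
    examples = []
    h, w = img.shape
    for s in scales:
        rows = (np.arange(int(h * s)) / s).astype(int)
        cols = (np.arange(int(w * s)) / s).astype(int)
        examples.append(img[np.ix_(rows, cols)])
    return examples
```

Each downscaled copy, paired with the original at the corresponding locations, gives low-/high-resolution training pairs drawn from the input image itself.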
Fast Predictive Multimodal Image Registration
We introduce a deep encoder-decoder architecture for image deformation
prediction from multimodal images. Specifically, we design an image-patch-based
deep network that jointly learns (i) an image similarity measure and (ii) the
relationship between image patches and deformation parameters. While our method
can be applied to general image registration formulations, we focus on the
Large Deformation Diffeomorphic Metric Mapping (LDDMM) registration model. By
predicting the initial momentum of the shooting formulation of LDDMM, we
preserve its mathematical properties and drastically reduce the computation
time, compared to optimization-based approaches. Furthermore, we create a
Bayesian probabilistic version of the network that allows evaluation of
registration uncertainty via sampling of the network at test time. We evaluate
our method on a 3D brain MRI dataset using both T1- and T2-weighted images. Our
experiments show that our method generates accurate predictions and that
learning the similarity measure leads to more consistent registrations than
relying on generic multimodal image similarity measures, such as mutual
information. Our approach is an order of magnitude faster than
optimization-based LDDMM.

Comment: Accepted as a conference paper for ISBI 201
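The test-time sampling for registration uncertainty can be sketched with a stand-in "network" whose dropout stays active at inference. The linear map, dropout rate, and 3-vector momentum output are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_momentum(patch, drop_p=0.2):
    """Stand-in predictor: a fixed linear map with dropout kept active
    at test time, mimicking sampling from the Bayesian network."""
    W = np.ones((patch.size, 3)) / patch.size   # hypothetical weights
    keep = rng.random(patch.size) > drop_p      # dropout mask
    return (patch.ravel() * keep / (1.0 - drop_p)) @ W

def registration_uncertainty(patch, n_samples=100):
    """Sample the network repeatedly; the spread of the predicted
    initial momentum estimates the registration uncertainty."""
    samples = np.stack([predict_momentum(patch) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

Because each forward pass draws a fresh dropout mask, repeated prediction on the same patch yields a distribution over momenta rather than a point estimate, which is the mechanism the abstract describes for uncertainty evaluation.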