Cross-Domain Image Retrieval with Attention Modeling
With the proliferation of e-commerce websites and the ubiquity of
smartphones, cross-domain image retrieval, using images taken with
smartphones as queries to search for products on e-commerce websites, is
emerging as a popular
application. One challenge of this task is to locate the attention of both the
query and database images. In particular, database images, e.g. of fashion
products, on e-commerce websites are typically displayed with other
accessories, and the images taken by users contain noisy backgrounds and large
variations in orientation and lighting. Consequently, their attention is
difficult to locate. In this paper, we exploit the rich tag information
available on the e-commerce websites to locate the attention of database
images. For query images, we use each candidate image in the database as the
context to locate the query attention. Novel deep convolutional neural network
architectures, namely TagYNet and CtxYNet, are proposed to learn the attention
weights and then extract effective representations of the images. Experimental
results on public datasets confirm that our approaches yield significant
improvements over existing methods in both retrieval accuracy and efficiency.
Comment: 8 pages with an extra reference page.
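As a rough illustration of the tag-guided attention described above, the
following PyTorch sketch pools a convolutional feature map with attention
weights derived from a tag embedding. All names and shapes (feat_map,
tag_emb, the shared embedding dimension) are illustrative assumptions, not
the TagYNet/CtxYNet implementation.

    import torch
    import torch.nn.functional as F

    def tag_guided_attention(feat_map, tag_emb):
        """Attention-weighted pooling of a conv feature map (sketch).

        feat_map: (B, C, H, W) convolutional features of a database image.
        tag_emb:  (B, C) tag embedding, assumed projected to the same
                  dimensionality as the feature channels.
        """
        B, C, H, W = feat_map.shape
        locs = feat_map.view(B, C, H * W)                   # (B, C, HW)
        # Score each spatial location against the tag embedding.
        scores = torch.einsum('bc,bcl->bl', tag_emb, locs)  # (B, HW)
        alpha = F.softmax(scores, dim=1)                    # attention weights
        # Weighted pooling yields one descriptor per image.
        desc = torch.einsum('bl,bcl->bc', alpha, locs)      # (B, C)
        return F.normalize(desc, dim=1)

For a query image, the same pooling could be driven by a candidate database
image's descriptor in place of tag_emb, mirroring the context-driven
attention the paper uses for queries.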
Learning and Matching Multi-View Descriptors for Registration of Point Clouds
Critical to the registration of point clouds is the establishment of a set of
accurate correspondences between points in 3D space. The correspondence problem
is generally addressed by the design of discriminative 3D local descriptors on
the one hand, and the development of robust matching strategies on the other
hand. In this work, we first propose a multi-view local descriptor, which is
learned from the images of multiple views, for the description of 3D keypoints.
Then, we develop a robust matching approach that rejects outlier matches
via efficient belief-propagation inference on a graphical model. We
demonstrate the boost our approaches bring to registration on public
scanning and multi-view stereo datasets; the superior performance is
verified by extensive comparisons against a variety of descriptors and
matching methods.
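To make the matching step concrete, here is a simplified NumPy sketch of
belief-propagation-based outlier rejection on a fully connected pairwise
model over putative matches. The binary inlier labels, the
distance-preservation potential, and every parameter name are assumptions
for illustration; the paper's graphical model differs in its details.

    import numpy as np

    def bp_inlier_filter(src, dst, n_iters=10, sigma=0.05, thresh=0.5):
        """src, dst: (n, 3) arrays; putative match i pairs src[i], dst[i]."""
        n = len(src)
        # A rigid motion preserves pairwise distances, so two true inlier
        # matches should satisfy |d_src(i, j) - d_dst(i, j)| ~ 0.
        d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
        d_dst = np.linalg.norm(dst[:, None] - dst[None, :], axis=-1)
        compat = np.exp(-((d_src - d_dst) ** 2) / (2 * sigma ** 2))

        # Pairwise potential psi[i, j, xi, xj] over labels 0 = outlier,
        # 1 = inlier: only joint inliers are scored by compatibility.
        psi = np.ones((n, n, 2, 2))
        psi[:, :, 1, 1] = compat

        msgs = np.full((n, n, 2), 0.5)      # msgs[i, j]: message i -> j
        for _ in range(n_iters):
            belief = np.prod(msgs, axis=0)  # (n, 2), unnormalized
            # Cavity: belief at i with the message from j divided out.
            cavity = belief[:, None, :] / msgs.transpose(1, 0, 2)
            msgs = np.einsum('ijab,ija->ijb', psi, cavity)
            msgs /= msgs.sum(axis=2, keepdims=True)

        belief = np.prod(msgs, axis=0)
        belief /= belief.sum(axis=1, keepdims=True)
        return belief[:, 1] > thresh        # keep matches believed inliers

Matches whose distances to many other matches are not preserved between the
two point clouds see their inlier belief driven down over the iterations.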
A General Framework for Robust G-Invariance in G-Equivariant Networks
We introduce a general method for achieving robust group-invariance in
group-equivariant convolutional neural networks (G-CNNs), which we call the
G-triple-correlation (G-TC) layer. The approach leverages the theory of the
triple correlation on groups, which is the unique lowest-degree polynomial
invariant map that is also complete. Many commonly used invariant maps, such
as the max, are incomplete: they remove both group and signal structure. A
complete invariant, by contrast, removes only the variation due to the
actions of the group, while preserving all information about the structure
of the signal. The completeness of the triple correlation endows the G-TC
layer with strong robustness, which can be observed in its resistance to
invariance-based adversarial attacks. In addition, we observe that it yields
measurable improvements in classification accuracy over standard max
G-pooling in G-CNN architectures. We provide a general and efficient
implementation of the method for any discretized group, which requires only
a table defining the group's product structure. We demonstrate the benefits
of this method for G-CNNs defined on both commutative and non-commutative
groups, namely SO(2), O(2), SO(3), and O(3) (discretized as the cyclic C8,
dihedral D16, chiral octahedral O, and full octahedral O_h groups), acting
on R^2 and R^3, on both G-MNIST and G-ModelNet10 datasets.
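The claim that the layer needs only a table of the group's product
structure is easy to make concrete. Below is a minimal NumPy sketch of the
triple correlation of a signal on a finite group, computed from a Cayley
(multiplication) table; it follows the standard definition
T(g1, g2) = sum_g f(g) f(g g1) f(g g2) and is illustrative rather than the
paper's implementation.

    import numpy as np

    def triple_correlation(f, cayley):
        """f:      (|G|,) signal on the group, f[i] = f(g_i).
        cayley: (|G|, |G|) ints, cayley[i, j] = index of g_i * g_j.
        Returns T[a, b] = sum_i f(g_i) f(g_i g_a) f(g_i g_b), which is
        invariant to translating f by any group element."""
        fg = f[cayley]  # fg[i, a] = f(g_i * g_a), via the product table
        return np.einsum('i,ia,ib->ab', f, fg, fg)

    # Example: the cyclic group Z/8. Translating (cyclically shifting)
    # the signal leaves its triple correlation unchanged.
    n = 8
    cayley = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n
    f = np.random.randn(n)
    assert np.allclose(triple_correlation(f, cayley),
                       triple_correlation(np.roll(f, 3), cayley))

Completeness is what a max over the group discards: the max keeps a single
number per feature, while T retains the structure of the signal up to the
group action.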
Convolutional neural network architecture for geometric matching
We address the problem of determining correspondences between two images in
agreement with a geometric model such as an affine or thin-plate spline
transformation, and estimating its parameters. The contributions of this work
are three-fold. First, we propose a convolutional neural network architecture
for geometric matching. The architecture is based on three main components that
mimic the standard steps of feature extraction, matching and simultaneous
inlier detection and model parameter estimation, while being trainable
end-to-end. Second, we demonstrate that the network parameters can be trained
from synthetically generated imagery without the need for manual annotation and
that our matching layer significantly increases generalization to previously
unseen images. Finally, we show that the same model can perform both
instance-level and category-level matching, giving state-of-the-art results on
the challenging Proposal Flow dataset.
Comment: In 2017 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR 2017).
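For intuition, here is a compact PyTorch sketch of the two trainable pieces
downstream of feature extraction that the abstract describes: a
correlation-based matching layer and a regressor from the correlation map to
the parameters of a global geometric model (six parameters for an affine
transformation). Layer sizes and names are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def matching_layer(feat_a, feat_b):
        """Dense correlation of two L2-normalized feature maps: one
        similarity score for every pair of spatial locations."""
        b, c, h, w = feat_a.shape
        fa = F.normalize(feat_a.view(b, c, h * w), dim=1)
        fb = F.normalize(feat_b.view(b, c, h * w), dim=1)
        corr = torch.bmm(fa.transpose(1, 2), fb)     # (b, hw_a, hw_b)
        # Locations of image A become channels over image B's grid;
        # ReLU plus renormalization down-weights ambiguous matches.
        corr = F.relu(corr).view(b, h * w, h, w)
        return F.normalize(corr, dim=1)

    class TransformRegressor(nn.Module):
        """Correlation map -> global transformation parameters."""
        def __init__(self, hw=15 * 15, n_params=6):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(hw, 128, 7), nn.ReLU(),
                nn.Conv2d(128, 64, 5), nn.ReLU(),
                nn.Flatten(), nn.LazyLinear(n_params))

        def forward(self, corr):
            return self.net(corr)

Because the regressor sees only the correlation map and never the raw
features, training pairs can be generated synthetically by warping single
images with random transformations, which is how the abstract avoids manual
annotation.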
Neighbourhood Consensus Networks
We address the problem of finding reliable dense correspondences between a
pair of images. This is a challenging task due to strong appearance differences
between the corresponding scene elements and ambiguities generated by
repetitive patterns. The contributions of this work are threefold. First,
inspired by the classic idea of disambiguating feature matches using semi-local
constraints, we develop an end-to-end trainable convolutional neural network
architecture that identifies sets of spatially consistent matches by analyzing
neighbourhood consensus patterns in the 4D space of all possible
correspondences between a pair of images without the need for a global
geometric model. Second, we demonstrate that the model can be trained
effectively from weak supervision in the form of matching and non-matching
image pairs, without the need for costly manual annotation of point-to-point
correspondences. Third, we show that the proposed neighbourhood consensus
network can be applied to a range of matching tasks, including both category-
and instance-level matching, obtaining state-of-the-art results on the PF
Pascal dataset and the InLoc indoor visual localization benchmark.
Comment: In Proceedings of the 32nd Conference on Neural Information
Processing Systems (NeurIPS 2018).
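Two of the ingredients described above are easy to sketch in PyTorch: the
4D correlation tensor over all pairs of positions, and a soft mutual
nearest-neighbour filter that suppresses one-sided matches. The learned
4D-convolutional consensus filter itself is omitted for brevity, and
function names and shapes are illustrative.

    import torch
    import torch.nn.functional as F

    def corr4d(feat_a, feat_b):
        """All-pairs similarity between two feature maps: the 4D space
        in which neighbourhood consensus patterns are analyzed."""
        b, c, h, w = feat_a.shape
        fa = F.normalize(feat_a.view(b, c, -1), dim=1)
        fb = F.normalize(feat_b.view(b, c, -1), dim=1)
        c4 = torch.bmm(fa.transpose(1, 2), fb)       # (b, hw_a, hw_b)
        return c4.view(b, h, w, h, w)                # (b, ha, wa, hb, wb)

    def mutual_nn_filter(c4, eps=1e-8):
        """Rescale each score by its ratio to the best score in both
        directions, so only mutually strong matches survive."""
        b, ha, wa, hb, wb = c4.shape
        c = c4.reshape(b, ha * wa, hb * wb)
        best_a = c.max(dim=1, keepdim=True).values   # best source per target
        best_b = c.max(dim=2, keepdim=True).values   # best target per source
        filtered = c * (c / (best_a + eps)) * (c / (best_b + eps))
        return filtered.reshape(b, ha, wa, hb, wb)

Weak supervision then needs only image-level labels: a loss can reward high
filtered scores on matching pairs and penalize them on non-matching pairs,
with no point-to-point annotation.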