JigsawNet: Shredded Image Reassembly using Convolutional Neural Network and Loop-based Composition
This paper proposes a novel algorithm to reassemble an arbitrarily shredded
image to its original state. Existing reassembly pipelines commonly consist of
a local matching stage and a global composition stage. In the local stage, a
key challenge in fragment reassembly is to reliably compute and identify
correct pairwise matches; most existing algorithms rely on handcrafted
features and hence cannot reliably handle complicated puzzles. We build a
deep convolutional neural network to detect the compatibility of a pairwise
stitching, and use it to prune computed pairwise matches. To improve the
network's efficiency and accuracy, we focus the CNN's computation on the
stitching region and apply a boosted training strategy. In the global composition
stage, we replace the commonly adopted greedy edge-selection strategies with
two new loop-closure-based search algorithms. Extensive experiments show that
our algorithm significantly outperforms existing methods at solving various
puzzles, especially challenging ones with many fragments.
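As a concrete sketch of the local-stage idea, the snippet below (PyTorch assumed) classifies a crop around the stitching region of a candidate pairwise alignment as match or non-match. The layer sizes, the 64x64 crop, and the name CompatibilityNet are illustrative placeholders rather than the paper's architecture, and the boosted-training detail is omitted.

```python
# Minimal sketch of a pairwise-compatibility classifier (PyTorch assumed).
# Layer widths and the 64x64 stitching-region crop are illustrative, not
# the architecture from the paper.
import torch
import torch.nn as nn

class CompatibilityNet(nn.Module):
    """Scores whether two fragments stitched together form a plausible match."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 2),                    # match / non-match logits
        )

    def forward(self, x):                         # x: (B, 3, 64, 64) stitched crop
        return self.classifier(self.features(x))

# Usage: render the two fragments under a candidate alignment, crop the
# region around their shared boundary, and classify the crop.
net = CompatibilityNet()
crop = torch.randn(1, 3, 64, 64)                  # placeholder stitching-region crop
is_match = net(crop).argmax(dim=1)
```

A classifier of this shape can then act as the pruning step: candidate pairwise matches whose stitching-region crop scores as non-match are discarded before global composition.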
Sketch-based 3D Shape Retrieval using Convolutional Neural Networks
Retrieving 3D models from 2D human sketches has received considerable
attention in the areas of graphics, image retrieval, and computer vision.
State-of-the-art approaches almost always compute a large number of "best
views" for each 3D model, in the hope that the query sketch matches one of
these 2D projections under predefined features.
We argue that this two-stage approach (view selection -- matching) is
pragmatic but also problematic: the "best views" are subjective and
ambiguous, which leaves the matching inputs ill-defined. This imprecision
further makes it challenging to choose features manually. Instead of
relying on the elusive notion of "best views" and on hand-crafted features,
we propose a minimalist approach to defining views and learn features
for both sketches and views. Specifically, we drastically reduce the number of
views to only two predefined directions for the whole dataset. Then, we learn
two Siamese Convolutional Neural Networks (CNNs), one for the views and one for
the sketches. The loss function is defined on the within-domain as well as the
cross-domain similarities. Our experiments on three benchmark datasets
demonstrate that our method significantly outperforms state-of-the-art
approaches on all conventional metrics.
Comment: CVPR 201
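To make the two-branch idea concrete, here is a minimal sketch assuming PyTorch. The embedding architecture, the contrastive margin, the grayscale 32x32 inputs, and the names embed_net, sketch_net, and view_net are illustrative assumptions, not the paper's configuration; the loss combines two within-domain terms with one cross-domain term, mirroring the description above.

```python
# Sketch of the two-branch training idea (PyTorch assumed): one CNN embeds
# sketches, another embeds the fixed-view renderings, and a contrastive loss
# pulls same-class pairs together within and across the two domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

def embed_net(dim=64):
    # Tiny illustrative embedding network for grayscale inputs.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(64, dim),
    )

sketch_net, view_net = embed_net(), embed_net()   # one CNN per domain

def contrastive(a, b, same, margin=1.0):
    # Classic contrastive loss: pull same-class pairs together, push
    # different-class pairs at least `margin` apart.
    d = F.pairwise_distance(a, b)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

def total_loss(sk_a, sk_b, vw_a, vw_b, y):
    # y[i] = 1.0 if examples a[i] and b[i] share a class, else 0.0.
    sa, sb = sketch_net(sk_a), sketch_net(sk_b)
    va, vb = view_net(vw_a), view_net(vw_b)
    return (contrastive(sa, sb, y)        # within the sketch domain
            + contrastive(va, vb, y)      # within the view domain
            + contrastive(sa, vb, y))     # across the two domains

# Toy usage with random tensors in place of real sketches and renderings.
sk_a, sk_b = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
vw_a, vw_b = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
y = torch.tensor([1., 0., 1., 0.])
print(total_loss(sk_a, sk_b, vw_a, vw_b, y))
```

The cross-domain term is what ties the two embedding spaces together, so that at retrieval time a sketch embedding can be compared directly against precomputed view embeddings.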
A Quasi-Bayesian Perspective to Online Clustering
When faced with high-frequency data streams, clustering raises theoretical
and algorithmic challenges. We introduce a new, adaptive online clustering
algorithm relying on a quasi-Bayesian approach, with a dynamic (i.e.,
time-dependent) estimation of the (unknown and changing) number of clusters. We
prove that our approach is supported by minimax regret bounds. We also provide
an RJMCMC-flavored implementation (called PACBO, see
https://cran.r-project.org/web/packages/PACBO/index.html) for which we give a
convergence guarantee. Finally, numerical experiments illustrate the potential
of our procedure.
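As a rough illustration of the adaptive flavor (not of PACBO itself, which implements the RJMCMC-flavored procedure in R), the toy sketch below assigns each arriving point to its nearest center and opens a new cluster whenever no center is within a chosen radius, so the number of clusters changes with the stream; the fixed radius threshold is a hypothetical stand-in for a data-driven mechanism.

```python
# Toy online clustering with a data-driven number of clusters. This is NOT
# the PACBO algorithm, only a simple nearest-center rule showing how the
# cluster count can grow as the stream arrives.
import numpy as np

def online_cluster(stream, radius=1.0):
    centers, counts, labels = [], [], []
    for x in stream:
        if centers:
            d = [np.linalg.norm(x - c) for c in centers]
            k = int(np.argmin(d))
        if not centers or d[k] > radius:
            centers.append(np.array(x, dtype=float))   # open a new cluster
            counts.append(1)
            labels.append(len(centers) - 1)
        else:
            counts[k] += 1                              # running-mean update
            centers[k] += (x - centers[k]) / counts[k]
            labels.append(k)
    return centers, labels

# Two well-separated Gaussian blobs should yield two clusters.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 0.2, (50, 2)),
                         rng.normal(3, 0.2, (50, 2))])
centers, labels = online_cluster(stream)
print(len(centers), "clusters found")
```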