4,292 research outputs found
Fast multi-image matching via density-based clustering
We consider the problem of finding consistent matches
across multiple images. Previous state-of-the-art solutions
use constraints on cycles of matches together with convex
optimization, leading to computationally intensive iterative
algorithms. In this paper, we propose a clustering-based
formulation. We first rigorously show its equivalence with
the previous one, and then propose QuickMatch, a novel
algorithm that identifies multi-image matches from a density
function in feature space. We use the density to order the
points in a tree, and then extract the matches by breaking this
tree using feature distances and measures of distinctiveness.
Our algorithm outperforms previous state-of-the-art methods
(such as MatchALS) in accuracy, and it is significantly faster
(up to 62 times faster on some benchmarks) and scales to
large datasets (with more than twenty thousand features).
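The density-then-tree procedure described above can be sketched as follows. This is a hedged toy reconstruction, not the published QuickMatch implementation: it uses a Gaussian kernel density, links each feature to its nearest higher-density neighbor, and breaks edges longer than a multiple of the median edge length; the function name and the `break_ratio` and `bandwidth` parameters are illustrative assumptions.

```python
import numpy as np

def quickmatch_sketch(feats, break_ratio=2.0, bandwidth=1.0):
    """Toy sketch of density-based multi-image matching: estimate a
    density in feature space, link each feature to its nearest
    higher-density neighbor (forming a forest), then break long edges
    so each remaining tree is one multi-image match cluster."""
    n = len(feats)
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    density = np.exp(-((d / bandwidth) ** 2)).sum(axis=1)  # kernel density estimate
    parent = np.arange(n)
    edge = np.full(n, np.inf)
    for i in range(n):
        higher = np.where(density > density[i])[0]  # strictly denser candidates
        if len(higher):
            j = higher[np.argmin(d[i, higher])]     # nearest higher-density point
            parent[i], edge[i] = j, d[i, j]
    finite = edge[np.isfinite(edge)]
    if len(finite):
        cutoff = break_ratio * np.median(finite)    # distinctiveness threshold
        parent[edge > cutoff] = np.arange(n)[edge > cutoff]  # break long edges
    def root(i):                                    # follow parents to the cluster root
        while parent[i] != i:
            i = parent[i]
        return i
    return np.array([root(i) for i in range(n)])
```

On two tight feature clusters this returns one cluster label per group of mutually matching features.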
Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures
Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies
In motion analysis and understanding it is important to be able to fit a
suitable model or structure to the temporal series of observed data, in order
to describe motion patterns in a compact way, and to discriminate between them.
In an unsupervised context, i.e., no prior model of the moving object(s) is
available, such a structure has to be learned from the data in a bottom-up
fashion. In recent times, volumetric approaches in which the motion is captured
from a number of cameras and a voxel-set representation of the body is built
from the camera views, have gained ground due to attractive features such as
inherent view-invariance and robustness to occlusions. Automatic, unsupervised
segmentation of moving bodies along entire sequences, in a temporally-coherent
and robust way, has the potential to provide a means of constructing a
bottom-up model of the moving body, and track motion cues that may be later
exploited for motion classification. Spectral methods such as locally linear
embedding (LLE) can be useful in this context, as they preserve "protrusions",
i.e., high-curvature regions of the 3D volume, of articulated shapes, while
improving their separation in a lower-dimensional space and thus making them
easier to cluster. In this paper we therefore propose a spectral approach
to unsupervised and temporally-coherent body-protrusion segmentation along time
sequences. Volumetric shapes are clustered in an embedding space, clusters are
propagated in time to ensure coherence, and merged or split to accommodate
changes in the body's topology. Experiments on both synthetic and real
sequences of dense voxel-set data are shown, supporting the ability of the
proposed method to cluster body parts consistently over time in a totally
unsupervised fashion, its robustness to sampling density and shape quality, and
its potential for bottom-up model construction.
Comment: 31 pages, 26 figures
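As a concrete (and heavily simplified) illustration of the spectral idea: the sketch below embeds a point set with low-frequency graph-Laplacian eigenvectors, a close relative of the LLE embedding used in the paper, and clusters the embedding with plain k-means. The function name, the Gaussian affinity graph, and all parameters are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def spectral_parts(points, k_parts=2, sigma=1.0, iters=50, seed=0):
    """Embed points with the low-frequency eigenvectors of a graph
    Laplacian, then cluster the embedding with k-means; well-separated
    protrusions become well-separated clusters in the embedding."""
    n = len(points)
    d2 = ((points[:, None] - points[None, :]) ** 2).sum(-1)
    W = np.exp(-d2 / sigma ** 2)            # Gaussian affinity graph
    L = np.diag(W.sum(1)) - W               # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, 1:k_parts]                # skip the constant eigenvector
    rng = np.random.default_rng(seed)
    centers = emb[rng.choice(n, k_parts, replace=False)]
    for _ in range(iters):                  # plain Lloyd (k-means) iterations
        labels = ((emb[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([emb[labels == c].mean(0) if (labels == c).any()
                            else centers[c] for c in range(k_parts)])
    return labels
```

On two well-separated 3D blobs the Fiedler eigenvector is nearly piecewise constant, so the clustering recovers the two parts.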
Multi-Image Semantic Matching by Mining Consistent Features
This work proposes a multi-image matching method to estimate semantic
correspondences across multiple images. In contrast to the previous methods
that optimize all pairwise correspondences, the proposed method identifies and
matches only a sparse set of reliable features in the image collection. In this
way, the proposed method is able to prune non-repeatable features and is also
highly scalable, handling thousands of images. We additionally propose a
low-rank constraint to ensure the geometric consistency of feature
correspondences over the whole image collection. Besides the competitive
performance on multi-graph matching and semantic flow benchmarks, we also
demonstrate the applicability of the proposed method for reconstructing
object-class models and discovering object-class landmarks from images without
using any annotation.
Comment: CVPR 201
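The low-rank/cycle-consistency idea admits a compact spectral sketch: stack all pairwise match matrices P_ij into one block matrix and truncate its spectrum, so the denoised blocks factor as U_i U_j^T. This is a generic synchronization-style illustration under that low-rank assumption, not the paper's optimization; the function name is made up.

```python
import numpy as np

def consistent_matches(S, rank):
    """Project the stacked pairwise-match matrix S onto its top-`rank`
    spectral components; the result factors as U @ U.T, which enforces
    cycle consistency (P_ij @ P_jk ~= P_ik) across the whole collection."""
    vals, vecs = np.linalg.eigh(S)
    U = vecs[:, -rank:] * np.sqrt(np.maximum(vals[-rank:], 0.0))
    return U @ U.T   # denoised, cycle-consistent match scores
```

With a universe of 2 features over 3 images, the blocks of S are the pairwise permutations, and the rank-2 projection reproduces them exactly.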
SCNet: Learning Semantic Correspondence
This paper addresses the problem of establishing semantic correspondences
between images depicting different instances of the same object or scene
category. Previous approaches focus on either combining a spatial regularizer
with hand-crafted features, or learning a correspondence model for appearance
only. We propose instead a convolutional neural network architecture, called
SCNet, for learning a geometrically plausible model for semantic
correspondence. SCNet uses region proposals as matching primitives, and
explicitly incorporates geometric consistency in its loss function. It is
trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and
a comparative evaluation on several standard benchmarks demonstrates that the
proposed approach substantially outperforms both recent deep learning
architectures and previous methods based on hand-crafted features.
Comment: ICCV 201
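To make the geometric-consistency ingredient concrete, here is a hedged toy scoring function (not SCNet's actual loss or architecture): appearance similarity between proposal features, downweighted when a pair's translation offset disagrees with the similarity-weighted consensus offset. All names and parameters are illustrative assumptions.

```python
import numpy as np

def geometry_aware_scores(feat_a, feat_b, ctr_a, ctr_b, sigma=0.5):
    """Score proposal pairs by appearance, reweighted by geometric
    agreement with the consensus translation between the two images.
    Assumes non-negative appearance similarities."""
    app = feat_a @ feat_b.T                      # appearance similarity
    off = ctr_a[:, None] - ctr_b[None]           # per-pair proposal-center offsets
    mean_off = (app[..., None] * off).sum((0, 1)) / app.sum()  # consensus offset
    geo = np.exp(-((off - mean_off) ** 2).sum(-1) / sigma ** 2)
    return app * geo                             # geometry-aware match scores
```

A pair whose offset breaks the dominant translation is penalized even when its appearance match is perfect.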
A path following algorithm for the graph matching problem
We propose a convex-concave programming approach for the labeled weighted
graph matching problem. The convex-concave programming formulation is obtained
by rewriting the weighted graph matching problem as a least-square problem on
the set of permutation matrices and relaxing it to two different optimization
problems: a quadratic convex and a quadratic concave optimization problem on
the set of doubly stochastic matrices. The concave relaxation has the same
global minimum as the initial graph matching problem, but the search for its
global minimum is also a hard combinatorial problem. We therefore construct an
approximation of the concave problem solution by following a solution path of a
convex-concave problem obtained by linear interpolation of the convex and
concave formulations, starting from the convex relaxation. This method makes it
easy to integrate the information on graph label similarities into the
optimization problem, and therefore to perform labeled weighted graph matching.
The algorithm is compared with some of the best performing graph matching
methods on four datasets: simulated graphs, QAPLib, retina vessel images and
handwritten Chinese characters. In all cases, the results are competitive with
the state-of-the-art.
Comment: 23 pages, 13 figures, typo correction, new results in sections 4, 5
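The path-following scheme can be sketched with a simplified surrogate (this is not the paper's exact convex and concave relaxations): interpolate between the convex objective ||AP - PB||^2 and a concavized copy obtained by subtracting a large multiple of ||P||^2, and minimize each interpolant over doubly stochastic matrices with Frank-Wolfe steps, whose linear subproblem is an assignment problem (solved here by brute force to stay dependency-free). The function names and the concavity weight `mu` are illustrative.

```python
import numpy as np
from itertools import permutations

def _lap_min(C):
    """Brute-force linear assignment (fine for the tiny graphs here)."""
    best, best_c = np.inf, None
    for perm in permutations(range(len(C))):
        s = sum(C[i, j] for i, j in enumerate(perm))
        if s < best:
            best, best_c = s, perm
    return np.array(best_c)

def path_following_match(A, B, steps=15, inner=80):
    """Minimize F_lam(P) = ||AP - PB||^2 - lam*mu*||P||^2 over doubly
    stochastic P with Frank-Wolfe, sliding lam from 0 (convex) to 1
    (concave-dominated, which pushes P toward a permutation vertex)."""
    n = len(A)
    P = np.full((n, n), 1.0 / n)                    # barycenter of the Birkhoff polytope
    mu = 4.0 * (np.abs(A).sum() + np.abs(B).sum())  # crude concavity weight
    for lam in np.linspace(0.0, 1.0, steps):
        for t in range(inner):
            R = A @ P - P @ B
            G = 2.0 * (A.T @ R - R @ B.T) - 2.0 * lam * mu * P  # gradient of F_lam
            c = _lap_min(G)                         # Frank-Wolfe direction: best vertex
            S = np.zeros((n, n))
            S[np.arange(n), c] = 1.0
            P += (2.0 / (t + 2.0)) * (S - P)        # diminishing step size
    return _lap_min(-P)                             # round to the dominant permutation
```

On a small weighted graph and a permuted copy of it, the returned assignment recovers the isomorphism.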
Domain Adaptation on Graphs by Learning Graph Topologies: Theoretical Analysis and an Algorithm
Traditional machine learning algorithms assume that the training and test
data have the same distribution, while this assumption does not necessarily
hold in real applications. Domain adaptation methods take into account the
deviations in the data distribution. In this work, we study the problem of
domain adaptation on graphs. We consider a source graph and a target graph
constructed with samples drawn from data manifolds. We study the problem of
estimating the unknown class labels on the target graph using the label
information on the source graph and the similarity between the two graphs. We
particularly focus on a setting where the target label function is learnt such
that its spectrum is similar to that of the source label function. We first
propose a theoretical analysis of domain adaptation on graphs and present
performance bounds that characterize the target classification error in terms
of the properties of the graphs and the data manifolds. We show that the
classification performance improves as the topologies of the graphs get more
balanced, i.e., as the numbers of neighbors of different graph nodes become
more proportionate, and weak edges with small weights are avoided. Our results
also suggest that graph edges between too distant data samples should be
avoided for good generalization performance. We then propose a graph domain
adaptation algorithm inspired by our theoretical findings, which estimates the
label functions while learning the source and target graph topologies at the
same time. The joint graph learning and label estimation problem is formulated
through an objective function relying on our performance bounds, which is
minimized with an alternating optimization scheme. Experiments on synthetic and
real data sets suggest that the proposed method outperforms baseline
approaches.
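The "similar spectrum" idea can be illustrated with a minimal sketch (illustrative only, not the paper's algorithm, which additionally learns the graph topologies): write the source labels in the source Laplacian eigenbasis and rebuild a target label function from the same low-frequency coefficients. This ignores the eigenvector sign and ordering ambiguities a real implementation must handle; the function name is an assumption.

```python
import numpy as np

def transfer_labels(W_src, y_src, W_tgt, k=2):
    """Carry the first k Laplacian-eigenbasis ("Fourier") coefficients
    of the source label function over to the target graph."""
    L_src = np.diag(W_src.sum(1)) - W_src     # source graph Laplacian
    L_tgt = np.diag(W_tgt.sum(1)) - W_tgt     # target graph Laplacian
    _, U_src = np.linalg.eigh(L_src)
    _, U_tgt = np.linalg.eigh(L_tgt)
    alpha = U_src[:, :k].T @ y_src            # low-frequency source spectrum
    return U_tgt[:, :k] @ alpha               # target label estimate
```

When source and target graphs coincide and the labels are low-frequency (piecewise constant on clusters), the transfer reproduces them exactly.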