Learning shape correspondence with anisotropic convolutional neural networks
Establishing correspondence between shapes is a fundamental problem in
geometry processing, arising in a wide variety of applications. The problem is
especially difficult in the setting of non-isometric deformations, as well as
in the presence of topological noise and missing parts, mainly due to the
limited capability to model such deformations axiomatically. Several recent
works showed that invariance to complex shape transformations can be learned
from examples. In this paper, we introduce an intrinsic convolutional neural
network architecture based on anisotropic diffusion kernels, which we term
Anisotropic Convolutional Neural Network (ACNN). In our construction, we
generalize convolutions to non-Euclidean domains by constructing a set of
oriented anisotropic diffusion kernels, creating in this way a local intrinsic
polar representation of the data (a 'patch'), which is then correlated with a
filter. Several cascades of such filters, linear, and non-linear operators are
stacked to form a deep neural network whose parameters are learned by
minimizing a task-specific cost. We use ACNNs to effectively learn intrinsic
dense correspondences between deformable shapes in very challenging settings,
achieving state-of-the-art results on some of the most difficult recent
correspondence benchmarks.
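The core construction, correlating a local patch with a bank of oriented anisotropic kernels, can be illustrated with a simplified Euclidean sketch. This is not the paper's intrinsic (manifold) operator; all function names, kernel parameters, and the flat-grid setting are assumptions for illustration only:

```python
import numpy as np

def anisotropic_gaussian(size, sigma_major, sigma_minor, theta):
    """Oriented anisotropic Gaussian kernel: a flat-grid stand-in for the
    intrinsic anisotropic diffusion kernels described in the abstract."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # rotate coordinates by theta to orient the kernel
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sigma_major) ** 2 + (yr / sigma_minor) ** 2))
    return k / k.sum()  # normalise so each kernel integrates to 1

def patch_response(signal_patch, n_orient=8, size=9):
    """Correlate a local patch with kernels at several orientations,
    producing a local polar-like representation (one value per angle)
    that a learned filter would then consume."""
    thetas = np.linspace(0.0, np.pi, n_orient, endpoint=False)
    bank = np.stack([anisotropic_gaussian(size, 3.0, 1.0, t) for t in thetas])
    return np.einsum('oij,ij->o', bank, signal_patch)

patch = np.random.default_rng(0).standard_normal((9, 9))
resp = patch_response(patch)
print(resp.shape)  # one response per orientation
```

In the actual ACNN the kernels live on the surface (anisotropy follows principal curvature directions), and several cascades of such correlations with linear and non-linear layers form the deep network.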
A Fully Convolutional Network for Semantic Labeling of 3D Point Clouds
When classifying point clouds, a large amount of time is devoted to the
process of engineering a reliable set of features which are then passed to a
classifier of choice. Generally, such features - usually derived from the
3D-covariance matrix - are computed using the surrounding neighborhood of
points. While these features capture local information, the process is usually
time-consuming, and requires the application at multiple scales combined with
contextual methods in order to adequately describe the diversity of objects
within a scene. In this paper we present a 1D-fully convolutional network that
consumes terrain-normalized points directly with the corresponding spectral
data, if available, to generate point-wise labels while implicitly learning
contextual features in an end-to-end fashion. Our method uses only the
3D-coordinates and three corresponding spectral features for each point.
Spectral features may either be extracted from 2D-georeferenced images, as
shown here for Light Detection and Ranging (LiDAR) point clouds, or extracted
directly for passively derived point clouds, i.e. from multiple-view imagery. We
train our network by splitting the data into square regions, and use a pooling
layer that respects the permutation-invariance of the input points. Evaluated
using the ISPRS 3D Semantic Labeling Contest, our method scored second place
with an overall accuracy of 81.6%. We ranked third place with a mean F1-score
of 63.32%, surpassing the F1-score of the method with highest accuracy by
1.69%. In addition to labeling 3D-point clouds, we also show that our method
can be easily extended to 2D-semantic segmentation tasks, with promising
initial results.
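The two ingredients the abstract names, a 1D (kernel-size-1) convolution applied identically to every point and a pooling layer that respects permutation invariance, can be sketched as follows. The shapes and feature counts here are assumptions matching the abstract's six inputs per point (3D coordinates plus three spectral features), not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def shared_mlp(points, W, b):
    """Apply the same weights to every point: equivalent to a 1D convolution
    with kernel size 1, so the output never depends on point order."""
    return np.maximum(points @ W + b, 0.0)  # linear map + ReLU

# N points, each with 3 coordinates and 3 spectral features, as in the abstract
N, C_in, C_hid = 128, 6, 16
points = rng.standard_normal((N, C_in))
W = rng.standard_normal((C_in, C_hid)) * 0.1
b = np.zeros(C_hid)

feats = shared_mlp(points, W, b)   # (N, 16) per-point features
context = feats.max(axis=0)        # symmetric (permutation-invariant) pooling

# any reordering of the input points yields the same pooled context vector
perm = rng.permutation(N)
assert np.allclose(shared_mlp(points[perm], W, b).max(axis=0), context)
print(context.shape)
```

A max (or similar symmetric) reduction is what makes the pooled context independent of the arbitrary ordering of points within each square region.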
Object Tracking in Hyperspectral Videos with Convolutional Features and Kernelized Correlation Filter
Target tracking in hyperspectral videos is a new research topic. In this
paper, a novel method based on convolutional network and Kernelized Correlation
Filter (KCF) framework is presented for tracking objects of interest in
hyperspectral videos. We extract a set of normalized three-dimensional cubes
from the target region as fixed convolution filters which contain spectral
information surrounding a target. The feature maps generated by convolutional
operations are combined to form a three-dimensional representation of an
object, thereby providing effective encoding of local spectral-spatial
information. We show that a simple two-layer convolutional network is
sufficient to learn robust representations without the need of offline training
with a large dataset. In the tracking step, KCF is adopted to distinguish
targets from neighboring environment. Experimental results demonstrate that the
proposed method performs well on sample hyperspectral videos, and outperforms
several state-of-the-art methods tested on grayscale and color videos in the
same scene.
Comment: Accepted by ICSM 201
