Shape Classification using Spectral Graph Wavelets
Spectral shape descriptors have been used extensively in a broad spectrum of geometry processing applications ranging from shape retrieval and segmentation to classification. In this paper, we propose a spectral graph wavelet approach for 3D shape classification using the bag-of-features paradigm. In an effort to capture both the local and global geometry of a 3D shape, we present a three-step feature description framework. First, local descriptors are extracted via the spectral graph wavelet transform having the Mexican hat wavelet as a generating kernel. Second, mid-level features are obtained by embedding local descriptors into the visual vocabulary space using the soft-assignment coding step of the bag-of-features model. Third, a global descriptor is constructed by aggregating mid-level features weighted by a geodesic exponential kernel, resulting in a matrix representation that describes the frequency of appearance of nearby codewords in the vocabulary. Experimental results on two standard 3D shape benchmarks demonstrate the effectiveness of the proposed classification approach in comparison with state-of-the-art methods.
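The first two steps of the pipeline (spectral graph wavelet descriptors with a Mexican hat kernel, then soft-assignment coding against a visual vocabulary) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the mesh Laplacian is assumed precomputed, the vocabulary would in practice come from clustering (e.g. k-means), and all function names are hypothetical.

```python
import numpy as np

def mexican_hat_sgw_descriptor(L, scales):
    """Per-vertex local descriptors from the spectral graph wavelet transform
    with a Mexican hat generating kernel g(s*lam) = s*lam * exp(-s*lam).
    L: (n, n) graph Laplacian of the shape mesh; returns (n, len(scales))."""
    lam, phi = np.linalg.eigh(L)                 # eigensystem of the Laplacian
    descs = []
    for s in scales:
        g = s * lam * np.exp(-s * lam)           # Mexican hat kernel on the spectrum
        # wavelet response at each vertex: diagonal of phi @ diag(g) @ phi.T
        descs.append(np.einsum('ik,k,ik->i', phi, g, phi))
    return np.stack(descs, axis=1)

def soft_assign(descs, codewords, sigma=1.0):
    """Soft-assignment coding: each local descriptor is encoded by normalized
    Gaussian affinities to the visual vocabulary (mid-level features)."""
    d2 = ((descs[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)
```

The global descriptor would then aggregate these mid-level features with a geodesic exponential kernel, which requires pairwise geodesic distances on the mesh and is omitted here.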
3D Shape Classification Using Collaborative Representation based Projections
A novel 3D shape classification scheme, based on collaborative representation learning, is investigated in this work. A data-driven feature-extraction procedure, taking the form of a simple projection operator, is at the core of our methodology. Given a shape database, a graph encapsulating the structural relationships among all the available shapes is first constructed and then employed in defining low-dimensional sparse projections. The recently introduced method of collaborative representation based projections (CRPs), which is based on the L2-graph, is the first variant included towards this end. A second algorithm, which particularizes the CRPs to shape descriptors that are inherently nonnegative, is also introduced as a potential alternative. In both cases, the weights in the graph reflecting the database structure are calculated so as to approximate each shape as a sparse linear combination of the remaining dataset objects. By solving a generalized eigenanalysis problem, a linear matrix operator is designed that acts as the feature extractor. Two popular, inherently high-dimensional descriptors, namely ShapeDNA and the Global Point Signature (GPS), are employed in our experiments with the SHREC10, SHREC11 and SHREC15 datasets, where shape recognition is cast as a multi-class classification problem tackled by means of an SVM (support vector machine) acting within the reduced-dimensional space of the crafted projections. The results are very promising and outperform state-of-the-art methods, providing evidence of the highly discriminative nature of the introduced 3D shape representations.
Comment: 16 pages, 6 figures, 3 tables. A statement that an updated version of this manuscript is under consideration at Pattern Recognition Letters is added.
Augmented Semantic Signatures of Airborne LiDAR Point Clouds for Comparison
LiDAR point clouds provide rich geometric information, which is particularly
useful for the analysis of complex scenes of urban regions. Finding structural
and semantic differences between two different three-dimensional point clouds,
say, of the same region but acquired at different time instances is an
important problem. A comparison of point clouds involves computationally
expensive registration and segmentation. We are interested in capturing the
relative differences in the geometric uncertainty and semantic content of the
point cloud without the registration process. Hence, we propose an
orientation-invariant geometric signature of the point cloud, which integrates
its probabilistic geometric and semantic classifications. We study different
properties of the geometric signature, which are an image-based encoding of
geometric uncertainty and semantic content. We explore different metrics to
determine differences between these signatures, which in turn compare point
clouds without performing point-to-point registration. Our results show that the differences in the signatures corroborate the geometric and semantic differences of the point clouds.
Comment: 18 pages, 6 figures, 1 table
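The idea of comparing registration-free, image-based signatures can be illustrated with a toy sketch. The two per-point quantities and the chi-square metric below are illustrative assumptions; the paper builds its signature from probabilistic geometric and semantic classifications, and may use other metrics.

```python
import numpy as np

def signature_image(features, bins=16):
    """Orientation-invariant signature: a normalized 2D histogram over two
    rotation-invariant per-point quantities (illustratively, values in [0, 1]
    such as a classification probability and a local planarity score).
    features: (n, 2) array."""
    H, _, _ = np.histogram2d(features[:, 0], features[:, 1],
                             bins=bins, range=[[0, 1], [0, 1]])
    return H / H.sum()

def chi_square_distance(S1, S2, eps=1e-12):
    """One possible metric for comparing two signatures without any
    point-to-point registration of the underlying clouds."""
    return 0.5 * ((S1 - S2) ** 2 / (S1 + S2 + eps)).sum()
```

Because the signature is a distribution over rotation-invariant quantities, two scans of the same region can be compared directly, regardless of their relative pose.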
Global spectral graph wavelet signature for surface analysis of carpal bones
In this paper, we present a spectral graph wavelet approach for shape analysis of the carpal bones of the human wrist. We apply a metric called the global spectral graph wavelet (GSGW) signature for representation of the cortical surface of the carpal bone, based on the eigensystem of the Laplace-Beltrami operator. Furthermore, we propose a heuristic and efficient way of aggregating local descriptors of a carpal bone surface into a global descriptor. The resulting global descriptor is not only isometry invariant, but also much more efficient and requires less memory storage. We perform experiments on the shapes of the carpal bones of ten women and ten men from a publicly available database. Experimental results show the superiority of the proposed GSGW compared to the recently proposed GPS embedding approach for comparing shapes of the carpal bones across populations.
Comment: arXiv admin note: substantial text overlap with arXiv:1705.0625
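The aggregation step, pooling per-vertex local descriptors into a compact global descriptor, might look like the following histogram-based stand-in. The paper's actual heuristic may differ; this only illustrates why such a descriptor is isometry invariant (it depends on descriptor values, not on vertex positions or ordering) and memory-light.

```python
import numpy as np

def aggregate_global(local_descs, n_bins=10):
    """Heuristic aggregation of per-vertex local descriptors into a compact
    global descriptor: a per-dimension density histogram over the surface,
    concatenated. local_descs: (n_vertices, n_dims)."""
    hists = []
    for j in range(local_descs.shape[1]):
        h, _ = np.histogram(local_descs[:, j], bins=n_bins, density=True)
        hists.append(h)
    return np.concatenate(hists)
```

Two bone surfaces can then be compared by any vector distance between their global descriptors, regardless of mesh resolution.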
Local Geometry Inclusive Global Shape Representation
Knowledge of shape geometry plays a pivotal role in many shape analysis
applications. In this paper we introduce a local geometry-inclusive global
representation of 3D shapes based on computation of the shortest quasi-geodesic
paths between all possible pairs of points on the 3D shape manifold. In the
proposed representation, the normal curvature along the quasi-geodesic paths
between any two points on the shape surface is preserved. We employ the
eigenspectrum of the proposed global representation to address the problems of
determination of region-based correspondence between isometric shapes and
characterization of self-symmetry in the absence of prior knowledge in the form
of user-defined correspondence maps. We further utilize the commutative
property of the resulting shape descriptor to extract stable regions between isometric shapes that differ from one another by a high degree of isometric deformation. We also propose various shape characterization metrics, in terms of the eigenvector decomposition of the shape descriptor spectrum, to quantify the correspondence and self-symmetry of 3D shapes. The performance of the proposed 3D shape descriptor is experimentally compared with that of other relevant state-of-the-art 3D shape descriptors.
Comment: 11 pages, 5 figures
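The core construction, a global matrix of shortest quasi-geodesic path lengths between all point pairs whose eigenspectrum is then analyzed, can be approximated on a mesh edge graph. This sketch omits the normal-curvature preservation along paths that the paper describes, and the function names are illustrative.

```python
import numpy as np

def geodesic_matrix(adj):
    """All-pairs shortest path lengths on the mesh edge graph
    (Floyd-Warshall), a discrete stand-in for quasi-geodesic paths.
    adj: (n, n) edge lengths, np.inf where no edge, 0 on the diagonal."""
    D = adj.copy()
    n = D.shape[0]
    for k in range(n):
        D = np.minimum(D, D[:, k:k+1] + D[k:k+1, :])
    return D

def shape_spectrum(D, k=3):
    """Eigenspectrum of the symmetric global representation; the leading
    eigenvectors can be clustered to obtain region-based correspondences."""
    vals, vecs = np.linalg.eigh(D)
    order = np.argsort(-np.abs(vals))
    return vals[order[:k]], vecs[:, order[:k]]
```

Because geodesic path lengths are preserved under isometric deformation, the spectrum of this matrix is a natural candidate for matching isometric shapes.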
A Fusion of Labeled-Grid Shape Descriptors with Weighted Ranking Algorithm for Shapes Recognition
Retrieving similar images from a large dataset based on the image content has
been a very active research area and is a very challenging task. Studies have
shown that retrieving similar images based on their shape is a very effective
method. For this purpose, a large number of methods exist in the literature. The combination of more than one feature has also been investigated for this purpose and has shown promising results. In this paper, a fusion-based shape recognition method is proposed. A set of local boundary-based and region-based features are derived from the labeled-grid representation of the shape and are combined with a few global shape features to produce a composite shape descriptor. This composite shape descriptor is then used in a weighted ranking algorithm to find similarities among shapes from a large dataset. The experimental analysis shows that the proposed method is powerful enough to discriminate geometrically similar shapes from non-similar ones.
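A weighted ranking over a composite descriptor can be sketched as below. The per-feature L1 distance and the max-normalization are assumed choices for illustration, not the paper's exact weighting scheme.

```python
import numpy as np

def weighted_rank(query_feats, db_feats, weights):
    """Weighted ranking over a composite descriptor: per-feature L1 distances
    to the query are normalized, combined with fixed weights, and the
    database is sorted by the combined score (most similar first).
    query_feats: dict name -> (d,) vector; db_feats: dict name -> (n, d)."""
    n = next(iter(db_feats.values())).shape[0]
    score = np.zeros(n)
    for name, w in weights.items():
        d = np.abs(db_feats[name] - query_feats[name]).sum(axis=1)
        score += w * d / (d.max() + 1e-12)   # normalize each feature's range
    return np.argsort(score)
```

The weights let boundary-based, region-based, and global features contribute unequally, which is the point of the fusion.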
Geodesic convolutional neural networks on Riemannian manifolds
Feature descriptors play a crucial role in a wide range of geometry analysis
and processing applications, including shape correspondence, retrieval, and
segmentation. In this paper, we introduce Geodesic Convolutional Neural
Networks (GCNN), a generalization of the convolutional networks (CNN) paradigm
to non-Euclidean manifolds. Our construction is based on a local geodesic
system of polar coordinates to extract "patches", which are then passed through
a cascade of filters and linear and non-linear operators. The coefficients of
the filters and linear combination weights are optimization variables that are
learned to minimize a task-specific cost function. We use GCNN to learn invariant shape features, allowing us to achieve state-of-the-art performance in problems such as shape description, retrieval, and correspondence.
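The patch operator underlying GCNN, binning a neighborhood into radial-angular cells of a local polar system and averaging a function over each cell, can be illustrated with a Euclidean 2D stand-in. True GCNN uses geodesic polar coordinates on the manifold; everything here is a simplified assumption.

```python
import numpy as np

def polar_patch(points, values, center_idx, n_rho=3, n_theta=6, r_max=1.0):
    """Patch-operator sketch: bin neighboring points into radial x angular
    cells around a center point and average a scalar function over each cell
    (a Euclidean stand-in for geodesic polar coordinates).
    points: (n, 2); values: (n,); returns (n_rho, n_theta)."""
    rel = points - points[center_idx]
    rho = np.hypot(rel[:, 0], rel[:, 1])
    theta = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)
    patch = np.zeros((n_rho, n_theta))
    for i in range(n_rho):
        for j in range(n_theta):
            m = ((rho > i * r_max / n_rho) & (rho <= (i + 1) * r_max / n_rho)
                 & (theta >= j * 2 * np.pi / n_theta)
                 & (theta < (j + 1) * 2 * np.pi / n_theta))
            if m.any():
                patch[i, j] = values[m].mean()
    return patch
```

A geodesic convolution then correlates such patches with learned filters, typically taking a maximum over angular rotations of the patch to resolve the ambiguity in the angular origin.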
Multi-feature Distance Metric Learning for Non-rigid 3D Shape Retrieval
Over the past decades, feature-learning-based 3D shape retrieval approaches have received widespread attention in the computer graphics community. These approaches usually explore a hand-crafted distance metric or conventional distance metric learning methods to compute the similarity of a single feature. A single feature contains only one kind of geometric information, which cannot characterize 3D shapes well. Therefore, multiple features should be used for the retrieval task to overcome the limitations of a single feature and further improve the performance. However, most conventional distance metric learning methods fail to integrate the complementary information from multiple features when constructing the distance metric. To address this issue, a novel multi-feature distance metric learning method for non-rigid 3D shape retrieval is presented in this study, which makes full use of the complementary geometric information from multiple shape features by utilizing KL-divergences. Minimizing the KL-divergence between each feature's metric and a common metric imposes a consistency constraint, which leads to a consistent shared latent feature space for the multiple features. We apply the proposed method to 3D model retrieval and test it on well-known benchmark databases. The results show that our method substantially outperforms state-of-the-art non-rigid 3D shape retrieval methods.
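The KL-based consistency term can be made concrete with one common parameterization: treat each Mahalanobis metric as the precision matrix of a zero-mean Gaussian and compare the resulting distributions. The paper's exact formulation may differ; this is only a sketch of the idea.

```python
import numpy as np

def kl_between_metrics(M1, M2):
    """KL divergence between zero-mean Gaussians whose precision matrices are
    the Mahalanobis metrics M1, M2 (one common way to compare learned metrics):
    KL(N(0, M1^-1) || N(0, M2^-1))
      = 0.5 * (tr(M2 M1^-1) - d + ln det M1 - ln det M2)."""
    d = M1.shape[0]
    return 0.5 * (np.trace(M2 @ np.linalg.inv(M1)) - d
                  + np.linalg.slogdet(M1)[1] - np.linalg.slogdet(M2)[1])

def consistency_objective(metrics, common):
    """Sum of KL divergences between each feature's metric and a common
    metric -- the consistency constraint described in the abstract."""
    return sum(kl_between_metrics(M, common) for M in metrics)
```

Minimizing this sum over the common metric pulls the per-feature metrics toward a shared latent metric, which is the mechanism the abstract describes.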
PointHop: An Explainable Machine Learning Method for Point Cloud Classification
An explainable machine learning method for point cloud classification, called
the PointHop method, is proposed in this work. The PointHop method consists of
two stages: 1) local-to-global attribute building through iterative one-hop
information exchange, and 2) classification and ensembles. In the attribute
building stage, we address the problem of unordered point cloud data using a
space partitioning procedure and developing a robust descriptor that
characterizes the relationship between a point and its one-hop neighbor in a
PointHop unit. When we put multiple PointHop units in cascade, the attributes
of a point will grow by taking its relationship with one-hop neighbor points
into account iteratively. Furthermore, to control the rapid dimension growth of
the attribute vector associated with a point, we use the Saab transform to
reduce the attribute dimension in each PointHop unit. In the classification and
ensemble stage, we feed the feature vector obtained from multiple PointHop
units to a classifier. We explore ensemble methods to further improve the classification performance. It is shown by experimental results that the PointHop method offers classification performance comparable with state-of-the-art methods while demanding much lower training complexity.
Comment: 13 pages with 9 figures
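One PointHop-style unit, pooling one-hop neighbor attributes by spatial partition and then reducing the dimension, can be sketched as follows. Here an octant partition over k-nearest neighbors stands in for the paper's space partitioning, and plain PCA stands in for the Saab transform (which additionally uses a bias term); function names are illustrative.

```python
import numpy as np

def pointhop_unit(points, attrs, k=8):
    """One PointHop-style unit: for each point, partition its k nearest
    neighbors into the 8 octants of a local frame and average their
    attributes, concatenating the 8 pooled vectors (8x attribute growth).
    points: (n, 3); attrs: (n, d); returns (n, 8*d)."""
    n, d = attrs.shape
    out = np.zeros((n, 8 * d))
    for i in range(n):
        nn = np.argsort(((points - points[i]) ** 2).sum(1))[1:k+1]
        rel = points[nn] - points[i]
        octant = (rel[:, 0] > 0) * 4 + (rel[:, 1] > 0) * 2 + (rel[:, 2] > 0)
        for o in range(8):
            m = octant == o
            if m.any():
                out[i, o*d:(o+1)*d] = attrs[nn][m].mean(axis=0)
    return out

def pca_reduce(X, k):
    """PCA in place of the Saab transform, to control the rapid growth of
    the attribute dimension between cascaded units."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

Cascading several such units grows each point's receptive field one hop at a time, which is the local-to-global attribute building the abstract describes.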
Diffusion framework for geometric and photometric data fusion in non-rigid shape analysis
In this paper, we explore the use of the diffusion geometry framework for the
fusion of geometric and photometric information in local and global shape
descriptors. Our construction is based on the definition of a diffusion process
on the shape manifold embedded into a high-dimensional space where the
embedding coordinates represent the photometric information. Experimental
results show that such data fusion is useful in coping with different challenges of shape analysis where pure geometric and pure photometric methods fail.
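The fusion idea, building the diffusion operator on a joint geometric-photometric embedding, can be sketched for a point set. The kernel bandwidths, the trade-off parameter beta, and the heat-kernel-signature readout are illustrative choices rather than the paper's exact construction.

```python
import numpy as np

def fused_heat_kernel_signature(coords, colors, t_values, beta=1.0, sigma=0.5):
    """Diffusion descriptor on a point set embedded in a joint
    geometry+photometric space: the color channels (scaled by beta) are
    appended to the 3D coordinates before the diffusion operator is built.
    coords: (n, 3); colors: (n, c); returns (n, len(t_values))."""
    X = np.hstack([coords, beta * colors])          # joint embedding
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))              # Gaussian affinities
    L = np.diag(W.sum(1)) - W                       # graph Laplacian
    lam, phi = np.linalg.eigh(L)
    # heat kernel signature at each point: sum_k exp(-t lam_k) phi_k(x)^2
    return np.stack([(np.exp(-t * lam) * phi ** 2).sum(1) for t in t_values], 1)
```

With beta = 0 this reduces to a purely geometric diffusion descriptor; increasing beta lets photometric differences steer the diffusion, which is the fusion mechanism described above.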