Point Cloud Segmentation based on Hypergraph Spectral Clustering
Hypergraph spectral analysis has emerged as an effective tool for processing
complex data structures in data analysis. The surface of a three-dimensional
(3D) point cloud and the multilateral relationships among its points can be
naturally captured by the high-dimensional hyperedges. This work investigates
the power of hypergraph spectral analysis in unsupervised segmentation of 3D
point clouds. We estimate and order the hypergraph spectrum from observed point
cloud coordinates. By trimming the redundancy from the estimated hypergraph
spectral space based on spectral component strengths, we develop a
clustering-based segmentation method. We apply the proposed method to various
point clouds, and analyze their respective spectral properties. Our
experimental results demonstrate the effectiveness and efficiency of the
proposed segmentation method.
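The pipeline the abstract describes — building hyperedges from point coordinates, estimating the hypergraph spectrum, and clustering in the low-frequency spectral space — can be sketched as follows. This is a minimal illustration using k-nearest-neighbor hyperedges, the standard normalized hypergraph Laplacian, and a tiny k-means, not the authors' exact estimation or trimming procedure; all function and variable names are my own.

```python
import numpy as np

def hypergraph_spectral_clustering(points, k_nn=5, n_clusters=2):
    """Cluster points via the normalized hypergraph Laplacian (a sketch)."""
    n = len(points)
    # One hyperedge per point: the point together with its k nearest neighbors.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    H = np.zeros((n, n))                       # incidence: n vertices x n hyperedges
    for j in range(n):
        H[np.argsort(d2[j])[:k_nn + 1], j] = 1.0
    w = np.ones(n)                             # uniform hyperedge weights
    Dv = H @ w                                 # vertex degrees
    De = H.sum(axis=0)                         # hyperedge degrees
    # Smoothing operator S = Dv^-1/2 H W De^-1 H^T Dv^-1/2; Laplacian L = I - S.
    Hn = H / np.sqrt(Dv)[:, None]
    S = Hn * (w / De) @ Hn.T
    L = np.eye(n) - S
    _, evecs = np.linalg.eigh(L)
    U = evecs[:, :n_clusters]                  # low-frequency spectral embedding
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize before k-means
    # Tiny k-means with deterministic farthest-point initialization.
    cent = U[[0]]
    for _ in range(n_clusters - 1):
        dmin = ((U[:, None, :] - cent[None, :, :]) ** 2).sum(-1).min(1)
        cent = np.vstack([cent, U[np.argmax(dmin)]])
    for _ in range(20):
        lab = ((U[:, None, :] - cent[None, :, :]) ** 2).sum(-1).argmin(1)
        cent = np.vstack([U[lab == c].mean(0) for c in range(n_clusters)])
    return lab

# Two well-separated synthetic blobs as a stand-in for two surface segments.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.5, (50, 3)), rng.normal(10.0, 0.5, (50, 3))])
labels = hypergraph_spectral_clustering(pts)
```

On separated data the two smallest eigenvectors are piecewise constant over the connected hypergraph components, so k-means on the normalized embedding recovers the segments.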
Hypergraph Neural Networks
In this paper, we present a hypergraph neural network (HGNN) framework for
data representation learning, which can encode high-order data correlations in
a hypergraph structure. Confronting the challenge of learning representations
for complex data in practice, we propose to encode such data in a hypergraph,
which offers more flexible modeling, especially for complex data. In this
method, a hyperedge convolution operation is
designed to handle data correlations during representation learning. In this
way, the traditional hypergraph learning procedure can be conducted
efficiently using hyperedge convolution operations. HGNN learns hidden-layer
representations that account for high-order data structure, providing a
general framework for modeling complex data correlations. We have conducted
experiments on citation network classification and visual object recognition
tasks and compared HGNN with graph convolutional networks and other traditional
methods. Experimental results demonstrate that the proposed HGNN method
outperforms recent state-of-the-art methods. The results also reveal that the
proposed HGNN is superior to existing methods when dealing with multi-modal
data. Comment: Accepted in AAAI'2019
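The hyperedge convolution at the core of HGNN applies the normalized hypergraph smoothing operator Dv^-1/2 H W De^-1 H^T Dv^-1/2 to the vertex features, followed by a learnable linear map and a nonlinearity. A minimal NumPy sketch of one such layer, assuming uniform hyperedge weights and a ReLU activation (the function name and toy data are mine, not from the paper):

```python
import numpy as np

def hgnn_conv(X, H, Theta, edge_w=None):
    """One hyperedge-convolution layer:
    X' = ReLU(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta)."""
    n, m = H.shape
    w = np.ones(m) if edge_w is None else edge_w
    Dv = H @ w                         # vertex degrees
    De = H.sum(axis=0)                 # hyperedge degrees
    Hn = H / np.sqrt(Dv)[:, None]
    S = Hn * (w / De) @ Hn.T           # hypergraph smoothing operator
    return np.maximum(S @ X @ Theta, 0.0)

# Toy hypergraph: 4 vertices, 2 hyperedges ({0,1,2} and {2,3}).
H = np.array([[1., 0.],
              [1., 0.],
              [1., 1.],
              [0., 1.]])
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))            # vertex features
Theta = rng.normal(size=(3, 2))        # learnable weight matrix
out = hgnn_conv(X, H, Theta)
```

Stacking such layers (with trained Theta) yields the hidden representations the abstract refers to; in practice this runs as sparse matrix products in a deep-learning framework.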
Hypergraph Convolutional Network based Weakly Supervised Point Cloud Semantic Segmentation with Scene-Level Annotations
Point cloud segmentation with scene-level annotations is a promising but
challenging task. Currently, the most popular way is to employ the class
activation map (CAM) to locate discriminative regions and then generate
point-level pseudo labels from scene-level annotations. However, these methods
always suffer from the point imbalance among categories, as well as the sparse
and incomplete supervision from CAM. In this paper, we propose a novel weighted
hypergraph convolutional network-based method, called WHCN, to confront the
challenges of learning point-wise labels from scene-level annotations. Firstly,
in order to simultaneously overcome the point imbalance among different
categories and reduce the model complexity, superpoints of a training point
cloud are generated by exploiting a geometrically homogeneous partition.
Then, a hypergraph is constructed based on the high-confidence superpoint-level
seeds which are converted from scene-level annotations. Secondly, the WHCN
takes the hypergraph as input and learns to predict high-precision point-level
pseudo labels by label propagation. Besides the backbone network consisting of
spectral hypergraph convolution blocks, a hyperedge attention module is learned
to adjust the weights of hyperedges in the WHCN. Finally, a segmentation
network is trained by these pseudo point cloud labels. We comprehensively
conduct experiments on the ScanNet and S3DIS segmentation datasets.
Experimental results demonstrate that the proposed WHCN is effective in
predicting point labels from scene-level annotations, and yields
state-of-the-art results. The source code is available at
http://zhiyongsu.github.io/Project/WHCN.html
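The label-propagation step WHCN learns builds on classical hypergraph transduction: iterate F ← αSF + (1−α)Y, where S is the normalized hypergraph operator and Y holds the high-confidence seeds. The sketch below shows that classical iteration over a toy superpoint hypergraph — it is not the learned, attention-weighted network itself, and the helper name is hypothetical.

```python
import numpy as np

def propagate_labels(H, seeds, n_classes, alpha=0.9, n_iter=100):
    """Spread seed labels over a hypergraph via F <- alpha*S*F + (1-alpha)*Y."""
    n, m = H.shape
    w = np.ones(m)                     # uniform weights (WHCN learns these)
    Dv = H @ w
    De = H.sum(axis=0)
    Hn = H / np.sqrt(Dv)[:, None]
    S = Hn * (w / De) @ Hn.T           # normalized hypergraph operator
    Y = np.zeros((n, n_classes))       # seed indicator matrix
    for node, c in seeds.items():
        Y[node, c] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)            # pseudo labels

# 6 superpoints, two hyperedges grouping {0,1,2} and {3,4,5};
# scene-level annotations give seeds on superpoints 0 and 3.
H = np.zeros((6, 2))
H[:3, 0] = 1.0
H[3:, 1] = 1.0
pseudo = propagate_labels(H, {0: 0, 3: 1}, n_classes=2)
```

Unlabeled superpoints inherit the label of the seed they share hyperedges with, which is exactly the role of the pseudo labels that then supervise the segmentation network.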
Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies
In motion analysis and understanding it is important to be able to fit a
suitable model or structure to the temporal series of observed data, in order
to describe motion patterns in a compact way, and to discriminate between them.
In an unsupervised context, i.e., no prior model of the moving object(s) is
available, such a structure has to be learned from the data in a bottom-up
fashion. In recent times, volumetric approaches in which the motion is captured
from a number of cameras and a voxel-set representation of the body is built
from the camera views, have gained ground due to attractive features such as
inherent view-invariance and robustness to occlusions. Automatic, unsupervised
segmentation of moving bodies along entire sequences, in a temporally-coherent
and robust way, has the potential to provide a means of constructing a
bottom-up model of the moving body, and track motion cues that may be later
exploited for motion classification. Spectral methods such as locally linear
embedding (LLE) can be useful in this context, as they preserve "protrusions",
i.e., high-curvature regions of the 3D volume, of articulated shapes, while
improving their separation in a lower-dimensional space and thereby making
them easier to cluster. In this paper we therefore propose a spectral approach
to unsupervised and temporally-coherent body-protrusion segmentation along time
sequences. Volumetric shapes are clustered in an embedding space, clusters are
propagated in time to ensure coherence, and merged or split to accommodate
changes in the body's topology. Experiments on both synthetic and real
sequences of dense voxel-set data are shown. This supports the ability of the
proposed method to cluster body-parts consistently over time in a totally
unsupervised fashion, its robustness to sampling density and shape quality, and
its potential for bottom-up model construction. Comment: 31 pages, 26 figures
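The LLE embedding this approach clusters in can be sketched compactly: reconstruct each point from its k nearest neighbors, then find low-dimensional coordinates that preserve those reconstruction weights. This is bare-bones LLE on a random point set, without the temporal propagation and topology handling described above; names and parameters are illustrative.

```python
import numpy as np

def lle_embed(X, k=8, d=2, reg=1e-3):
    """Locally linear embedding: local reconstruction weights, then the
    eigenvectors of M = (I-W)^T (I-W) give the embedding coordinates."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]       # k neighbors, excluding self
        Z = X[nbrs] - X[i]                      # centred neighborhood
        G = Z @ Z.T
        G += np.eye(k) * reg * np.trace(G)      # regularize the local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                # reconstruction weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, evecs = np.linalg.eigh(M)
    return evecs[:, 1:d + 1]                    # skip the constant eigenvector

rng = np.random.default_rng(0)
pts = rng.normal(size=(60, 3))                  # stand-in for voxel centres
emb = lle_embed(pts)
```

In the segmentation pipeline, the voxel-set of each frame would be embedded this way and clustered, with clusters then propagated, merged, or split across frames.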
A Survey on Graph Neural Networks in Intelligent Transportation Systems
Intelligent Transportation Systems (ITS) are vital for alleviating traffic
congestion, reducing traffic accidents, optimizing urban planning, and more.
However, due to the complexity of the traffic network, traditional machine
learning and statistical methods have been relegated to the background. With the
advent of the artificial intelligence era, many deep learning frameworks have
made remarkable progress and are now considered effective methods in many
areas. As a deep learning method, Graph Neural Networks (GNNs)
have emerged as a highly competitive method in the ITS field since 2019 due to
their strong ability to model graph-related problems. As a result, more and
more scholars pay attention to the applications of GNNs in transportation
domains, which have shown excellent performance. However, most of the research
in this area is still concentrated on traffic forecasting, while other ITS
domains, such as autonomous vehicles and urban planning, still require more
attention. This paper aims to review the applications of GNNs in six
representative and emerging ITS domains: traffic forecasting, autonomous
vehicles, traffic signal control, transportation safety, demand prediction, and
parking management. We have reviewed extensive graph-related studies from 2018
to 2023, summarized their methods, features, and contributions, and presented
them in informative tables or lists. Finally, we have identified the challenges
of applying GNNs to ITS and suggested potential future directions.