A self-adaptive segmentation method for a point cloud
The segmentation of a point cloud is one of the key technologies for three-dimensional reconstruction, and segmentation from three-dimensional views can facilitate reverse engineering. In this paper, we propose a self-adaptive segmentation algorithm, which can address challenges related to the region-growing algorithm, such as inconsistent or excessive segmentation. Our algorithm consists of two main steps: automatic selection of seed points according to extracted features and segmentation of the points using an improved region-growing algorithm. The benefits of our approach are the ability to select seed points without user intervention and the reduction of the influence of noise. We demonstrate the robustness and effectiveness of our algorithm on different point cloud models, and the results show that the segmentation accuracy reaches 96%.
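The two steps described above (automatic seed selection, then region growing) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes per-point normals and curvature are already estimated, picks the lowest-curvature points as seeds (a common heuristic for flat regions), and grows regions by normal similarity.

```python
import numpy as np

def region_grow(points, normals, curvature, angle_thresh=np.deg2rad(15), radius=0.2):
    """Minimal region growing with automatic seed selection: seeds are the
    lowest-curvature unlabeled points; a neighbor joins a region if its
    normal deviates little from the current point's normal."""
    n = len(points)
    labels = np.full(n, -1)
    region = 0
    order = np.argsort(curvature)          # flattest points become seeds first
    for seed in order:
        if labels[seed] != -1:
            continue
        queue = [seed]
        labels[seed] = region
        while queue:
            i = queue.pop()
            # neighbors within `radius` (brute force; a k-d tree scales better)
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d < radius) & (labels == -1))[0]:
                if np.abs(normals[i] @ normals[j]) > np.cos(angle_thresh):
                    labels[j] = region
                    queue.append(j)
        region += 1
    return labels
```

For example, two well-separated planar patches with different normals come out as two distinct regions, with no user-supplied seeds.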
JSMNet: Improving Indoor Point Cloud Semantic and Instance Segmentation through Self-Attention and Multiscale
The semantic understanding of indoor 3D point cloud data is crucial for a
range of subsequent applications, including indoor service robots, navigation
systems, and digital twin engineering. Global features are crucial for
achieving high-quality semantic and instance segmentation of indoor point
clouds, as they provide essential long-range context information. To this end,
we propose JSMNet, which combines a multi-layer network with a global feature
self-attention module to jointly segment three-dimensional point cloud
semantics and instances. To better express the characteristics of indoor
targets, we have designed a multi-resolution feature adaptive fusion module
that takes into account the differences in point cloud density caused by
varying scanner distances from the target. Additionally, we propose a framework
for joint semantic and instance segmentation by integrating semantic and
instance features to achieve superior results. We conduct experiments on S3DIS,
which is a large three-dimensional indoor point cloud dataset. Our proposed
method is compared against other methods, and the results show that it
outperforms existing methods in semantic and instance segmentation and provides
better results in target local area segmentation. Specifically, our proposed
method outperforms PointNet (Qi et al., 2017a) by 16.0% and 26.3% in terms of
semantic segmentation mIoU in S3DIS (Area 5) and instance segmentation mPre,
respectively. Additionally, it surpasses ASIS (Wang et al., 2019) by 6.0% and
4.6%, respectively, as well as JSPNet (Chen et al., 2022) by a margin of 3.3%
for semantic segmentation mIoU and a slight improvement of 0.3% for instance
segmentation mPre.
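The long-range context that the abstract attributes to the global feature self-attention module can be illustrated with plain scaled dot-product self-attention over all points. This is a generic sketch, not JSMNet's actual module: the random projection matrices stand in for learned weights.

```python
import numpy as np

def global_self_attention(feats):
    """Scaled dot-product self-attention over an entire point set.
    feats: (N, C) per-point features; returns (N, C) features in which
    every point has aggregated context from every other point."""
    rng = np.random.default_rng(0)
    C = feats.shape[1]
    # stand-ins for learned query/key/value projections
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    scores = Q @ K.T / np.sqrt(C)                # (N, N): all-pairs affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over neighbors
    return attn @ V                              # long-range context aggregation
```

The (N, N) attention map is what makes the context global: unlike local neighborhood aggregation, each output feature is a weighted mixture over the whole cloud.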
Self-supervised Point Cloud Representation Learning via Separating Mixed Shapes
The manual annotation for large-scale point clouds costs a lot of time and is
usually unavailable in harsh real-world scenarios. Inspired by the great
success of the pre-training and fine-tuning paradigm in both vision and
language tasks, we argue that pre-training is one potential solution for
obtaining a scalable model to 3D point cloud downstream tasks as well. In this
paper, we, therefore, explore a new self-supervised learning method, called
Mixing and Disentangling (MD), for 3D point cloud representation learning. As
the name implies, we mix two input shapes and require the model to learn to
separate the inputs from the mixed shape. We leverage this reconstruction task
as the pretext optimization objective for self-supervised learning. There are
two primary advantages: 1) Compared to prevailing image datasets, e.g., ImageNet,
point cloud datasets are de facto small. The mixing process can provide a much
larger online training sample pool. 2) On the other hand, the disentangling
process motivates the model to mine geometric prior knowledge, e.g., key
points. To verify the effectiveness of the proposed pretext task, we build one
baseline network, which is composed of one encoder and one decoder. During
pre-training, we mix two original shapes and obtain the geometry-aware
embedding from the encoder, then an instance-adaptive decoder is applied to
recover the original shapes from the embedding. Albeit simple, the pre-trained
encoder can capture the key points of an unseen point cloud and surpasses the
encoder trained from scratch on downstream tasks. The proposed method has
improved the empirical performance on both ModelNet-40 and ShapeNet-Part
datasets in terms of point cloud classification and segmentation tasks. We
further conduct ablation studies to explore the effect of each component and
verify the generalization of our proposed strategy by harnessing different
backbones.
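The mixing half of the pretext task can be sketched as below. This is one plausible mixing scheme, not necessarily the paper's exact sampling strategy: subsample half of each shape, concatenate, and keep a per-point source label that the disentangling network is then trained to recover.

```python
import numpy as np

def mix_shapes(a, b, n_out=1024, seed=0):
    """Mix two point clouds (arrays of shape (N, 3)) by randomly subsampling
    half of each and concatenating. Returns the shuffled mixed cloud and the
    per-point source labels the pretext task asks the model to separate."""
    rng = np.random.default_rng(seed)
    ia = rng.choice(len(a), n_out // 2, replace=False)
    ib = rng.choice(len(b), n_out // 2, replace=False)
    mixed = np.vstack([a[ia], b[ib]])
    labels = np.concatenate([np.zeros(n_out // 2, int), np.ones(n_out // 2, int)])
    perm = rng.permutation(n_out)          # shuffle so ordering carries no hint
    return mixed[perm], labels[perm]
```

Because every pair of training shapes yields a new mixture, the online sample pool grows combinatorially, which is the first advantage the abstract claims.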
Self-organizing nonlinear output (SONO): A neural network suitable for cloud patch-based rainfall estimation at small scales
Accurate measurement of rainfall distribution at various spatial and temporal scales is crucial for hydrological modeling and water resources management. In the literature of satellite rainfall estimation, many efforts have been made to calibrate a statistical relationship (including threshold, linear, or nonlinear) between cloud infrared (IR) brightness temperatures and surface rain rates (RR). In this study, an automated neural network for cloud patch-based rainfall estimation, named the self-organizing nonlinear output (SONO) model, is developed to account for the high variability of cloud-rainfall processes at geostationary scales (i.e., 4 km and every 30 min). Instead of calibrating only one IR-RR function for all clouds, SONO classifies varied cloud patches into different clusters and then searches for a nonlinear IR-RR mapping function for each cluster. This design enables SONO to generate various rain rates at a given brightness temperature and variable rain/no-rain IR thresholds for different cloud types, which overcomes the one-to-one mapping limitation of a single statistical IR-RR function for the full spectrum of cloud-rainfall conditions. In addition, the computational and modeling strengths of neural networks enable SONO to cope with the nonlinearity of cloud-rainfall relationships by fusing multisource data sets. Evaluated at various temporal and spatial scales, SONO shows improved estimation accuracy, both in rain intensity and in detection of rain/no-rain pixels. Further examination of the SONO adaptability demonstrates its potential as an operational satellite rainfall estimation system that uses passive microwave rainfall observations from low-orbiting satellites to adjust the IR-based rainfall estimates at the resolution of geostationary satellites. Copyright 2005 by the American Geophysical Union.
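The cluster-then-calibrate idea above can be sketched as follows. This is an illustrative stand-in, not SONO itself: plain k-means replaces the self-organizing map, and a cubic polynomial replaces whatever nonlinear IR-RR function each cluster is actually given.

```python
import numpy as np

def fit_sono_like(patch_feats, ir, rr, k=3, iters=20, seed=0):
    """Group cloud patches into k clusters, then fit a separate nonlinear
    IR->RR curve per cluster, so one brightness temperature can map to
    different rain rates depending on cloud type."""
    rng = np.random.default_rng(seed)
    centers = patch_feats[rng.choice(len(patch_feats), k, replace=False)]
    for _ in range(iters):                 # standard k-means updates
        d = np.linalg.norm(patch_feats[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = patch_feats[assign == c].mean(axis=0)
    # one IR-RR mapping per cluster instead of a single global function
    curves = {c: np.polyfit(ir[assign == c], rr[assign == c], 3)
              for c in range(k) if (assign == c).sum() > 3}
    return centers, curves
```

At estimation time, a new patch is assigned to its nearest cluster and its rain rate is read off that cluster's curve, which is what lets the rain/no-rain IR threshold vary by cloud type.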