SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation
We introduce Similarity Group Proposal Network (SGPN), a simple and intuitive
deep learning framework for 3D object instance segmentation on point clouds.
SGPN uses a single network to predict point grouping proposals and a
corresponding semantic class for each proposal, from which we can directly
extract instance segmentation results. Important to the effectiveness of SGPN
is its novel representation of 3D instance segmentation results in the form of
a similarity matrix that indicates the similarity between each pair of points
in embedded feature space, thus producing an accurate grouping proposal for
each point. To the best of our knowledge, SGPN is the first framework to learn
3D instance-aware semantic segmentation on point clouds. Experimental results
on various 3D scenes show the effectiveness of our method on 3D instance
segmentation, and we also evaluate the capability of SGPN to improve 3D object
detection and semantic segmentation results. We also demonstrate its
flexibility by seamlessly incorporating 2D CNN features into the framework to
boost performance.
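The similarity-matrix idea at the core of SGPN can be illustrated with a toy sketch (not the paper's implementation): entry (i, j) holds the distance between the embedded features of points i and j, and each row then serves as a grouping proposal. The feature values and threshold below are illustrative.

```python
import numpy as np

def similarity_groups(features, threshold=1.0):
    """Toy sketch of similarity-matrix grouping.

    `features` is an (N, D) array of per-point embeddings. Entry (i, j)
    of `dist` is the feature-space distance between points i and j;
    row i of `proposals` is the grouping proposal seeded at point i.
    """
    # Pairwise Euclidean distances in embedded feature space.
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # One boolean grouping proposal per point.
    proposals = dist < threshold
    return dist, proposals

# Two nearby points and one far-away point (illustrative embeddings).
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
dist, proposals = similarity_groups(feats, threshold=1.0)
# Points 0 and 1 fall in each other's proposals; point 2 stands alone.
```

In the full method, the embedding is learned so that points of the same instance are pulled together, which is what makes each row an accurate instance proposal.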
BEVTrack: A Simple and Strong Baseline for 3D Single Object Tracking in Bird's-Eye View
3D Single Object Tracking (SOT) is a fundamental task of computer vision,
proving essential for applications like autonomous driving. It remains
challenging to localize the target from surroundings due to appearance
variations, distractors, and the high sparsity of point clouds. Spatial
information indicating objects' adjacency across consecutive frames is
crucial for effective object tracking. However, existing trackers typically
employ point-wise representation with irregular formats, leading to
insufficient use of this important spatial knowledge. As a result, these
trackers usually require elaborate designs and solving multiple subtasks. In
this paper, we propose BEVTrack, a simple yet effective baseline that performs
tracking in Bird's-Eye View (BEV). This representation greatly retains spatial
information owing to its ordered structure and inherently encodes the implicit
motion relations of the target as well as distractors. To achieve accurate
regression for targets with diverse attributes (e.g., sizes and motion
patterns), BEVTrack constructs the likelihood function with the learned
underlying distributions adapted to different targets, rather than making a
fixed Laplace or Gaussian assumption as in previous works. This provides
valuable priors for tracking and thus further boosts performance. While only
using a single regression loss with a plain convolutional architecture,
BEVTrack achieves state-of-the-art performance on three large-scale datasets,
KITTI, NuScenes, and Waymo Open Dataset while maintaining a high inference
speed of about 200 FPS. The code will be released at
https://github.com/xmm-prio/BEVTrack.
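The contrast with a fixed Laplace or Gaussian assumption can be sketched with a simplified example (an assumption for illustration, not BEVTrack's actual head): let the network predict a per-target scale alongside the regressed value, and train with the resulting negative log-likelihood. A plain L1 loss is the special case of a fixed unit scale.

```python
import numpy as np

def laplace_nll(target, pred_mean, pred_log_scale):
    """Negative log-likelihood of a Laplace with a learned scale.

    Simplified sketch of adapting the likelihood to each target: the
    head predicts `pred_log_scale` alongside `pred_mean`, so confident
    targets get a sharp distribution and hard ones a diffuse one.
    With pred_log_scale = 0 this reduces to L1 plus a constant.
    """
    scale = np.exp(pred_log_scale)
    return np.log(2.0 * scale) + np.abs(target - pred_mean) / scale

# A correct prediction scores better when the model also commits to
# a sharp (small) scale than under the fixed unit-scale assumption.
sharp = laplace_nll(0.0, 0.0, np.log(0.1))
fixed = laplace_nll(0.0, 0.0, 0.0)
```

The benefit claimed in the abstract follows this pattern: letting the distribution adapt to the target's attributes gives the regression loss a useful prior instead of a one-size-fits-all noise model.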
Small Object Tracking in LiDAR Point Cloud: Learning the Target-awareness Prototype and Fine-grained Search Region
Single Object Tracking in LiDAR point cloud is one of the most essential
parts of environmental perception, in which small objects are inevitable in
real-world scenarios and will bring a significant barrier to the accurate
location. However, existing methods concentrate on exploring universal
architectures for common categories and overlook the challenges posed by small
objects, which have long been thorny due to the relative deficiency of
foreground points and a low tolerance for disturbances. To this end, we propose a Siamese
network-based method for small object tracking in the LiDAR point cloud, which
is composed of the target-awareness prototype mining (TAPM) module and the
regional grid subdivision (RGS) module. The TAPM module adopts the
reconstruction mechanism of the masked decoder to learn the prototype in the
feature space, aiming to highlight the presence of foreground points that will
facilitate the subsequent location of small objects. Although the above
prototype is capable of accentuating the small object of interest, the
positioning deviation in feature maps still leads to high tracking errors. To
alleviate this issue, the RGS module is proposed to recover the fine-grained
features of the search region based on ViT and pixel shuffle layers. In
addition, apart from the normal settings, we elaborately design a scaling
experiment to evaluate the robustness of the different trackers on small
objects. Extensive experiments on KITTI and nuScenes demonstrate that our
method can effectively improve the tracking performance of small targets
without affecting normal-sized objects.
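The pixel-shuffle operation mentioned for the RGS module can be sketched in a few lines (a generic sketch of the standard operation, not the paper's module): channels are traded for spatial resolution without interpolation, which is why it suits recovering fine-grained detail for small objects.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) feature maps into (C, H*r, W*r).

    Generic sketch of pixel shuffle: each group of r*r channels is
    interleaved into an r-by-r spatial block, upsampling the feature
    map by r without any interpolation or learned weights.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    # Split channels into (c, r, r), then interleave into the spatial grid.
    out = x.reshape(c, r, r, h, w).transpose(0, 3, 1, 4, 2)
    return out.reshape(c, h * r, w * r)
```

Because every output pixel comes directly from a channel of the low-resolution map, small-object detail encoded in the channels survives the upsampling, unlike with bilinear interpolation.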
Object Re-Identification from Point Clouds
Object re-identification (ReID) from images plays a critical role in
application domains of image retrieval (surveillance, retail analytics, etc.)
and multi-object tracking (autonomous driving, robotics, etc.). However,
systems that additionally or exclusively perceive the world from depth sensors
are becoming more commonplace without any corresponding methods for object
ReID. In this work, we fill the gap by providing the first large-scale study of
object ReID from point clouds and establishing its performance relative to
image ReID. To enable such a study, we create two large-scale ReID datasets
with paired image and LiDAR observations and propose a lightweight matching
head that can be concatenated to any set or sequence processing backbone (e.g.,
PointNet or ViT), creating a family of comparable object ReID networks for both
modalities. Run in Siamese style, our proposed point cloud ReID networks can
make thousands of pairwise comparisons in real-time ( Hz). Our findings
demonstrate that their performance increases with higher sensor resolution and
approaches that of image ReID when observations are sufficiently dense. Our
strongest network trained at the largest scale achieves ReID accuracy exceeding
for rigid objects and for deformable objects (without any
explicit skeleton normalization). To our knowledge, we are the first to study
object re-identification from real point cloud observations.
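The Siamese-style comparison described above can be sketched as follows (embeddings and shapes are stand-ins, not the paper's backbone): a shared encoder maps each object's point cloud to a fixed-size embedding, and re-identification reduces to cheap pairwise similarity scores, which is what makes thousands of real-time comparisons feasible.

```python
import numpy as np

def match_scores(query_emb, gallery_embs):
    """Cosine-similarity scores between one query and a gallery.

    Toy sketch of Siamese matching: both inputs are embeddings
    produced by a shared backbone (e.g., PointNet or ViT). After
    L2 normalization, each comparison is a single dot product.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    return g @ q

# Illustrative embeddings: the first gallery entry matches the query.
query = np.array([1.0, 0.0])
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
scores = match_scores(query, gallery)
```

Since the backbone runs once per object and only the dot products are repeated per pair, the cost of each additional comparison is negligible.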