Learning multiview 3D point cloud registration
We present a novel, end-to-end learnable, multiview 3D point cloud
registration algorithm. Registration of multiple scans typically follows a
two-stage pipeline: the initial pairwise alignment and the globally consistent
refinement. The former is often ambiguous due to the low overlap of neighboring
point clouds, symmetries and repetitive scene parts. Therefore, the latter
global refinement aims at establishing the cyclic consistency across multiple
scans and helps in resolving the ambiguous cases. In this paper we propose, to
the best of our knowledge, the first end-to-end algorithm for joint learning of
both parts of this two-stage problem. Experimental evaluation on well-accepted
benchmark datasets shows that our approach outperforms the state-of-the-art by
a significant margin, while being end-to-end trainable and computationally less
costly. Moreover, we present detailed analysis and an ablation study that
validate the novel components of our approach. The source code and pretrained
models are publicly available under
https://github.com/zgojcic/3D_multiview_reg. (CVPR 2020, camera-ready)
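The two-stage pipeline described above, pairwise rigid alignment followed by a global consistency check across scans, can be illustrated with a minimal numpy sketch. The `kabsch` and `cycle_error` helpers below are illustrative stand-ins, not the paper's learned components: the paper learns correspondences and refinement end-to-end, whereas this sketch assumes known correspondences and uses the classical closed-form solution.

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form least-squares rigid transform (R, t) with dst ~ R @ src + t.
    src, dst: (N, 3) arrays of corresponding points (correspondences assumed known)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def cycle_error(rotations):
    """Rotation residual of a closed loop of pairwise estimates
    (e.g. a->b->c->a). A consistent loop composes to the identity;
    a large Frobenius distance to I flags ambiguous pairwise alignments
    that global refinement must resolve."""
    acc = np.eye(3)
    for R in rotations:
        acc = R @ acc
    return np.linalg.norm(acc - np.eye(3))
```

In the full method, low-overlap pairs make the pairwise stage unreliable, so loops with a large `cycle_error` are exactly the cases where the global refinement stage redistributes the error across all scans.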
Point Completion Networks and Segmentation of 3D Mesh
Deep learning has made many advancements in fields such as computer vision, natural language processing and speech processing. In autonomous driving, deep learning has brought great improvements to lane detection, steering estimation, throttle control, depth estimation, 2D and 3D object detection, object segmentation and object tracking. Understanding the 3D world is necessary for safe end-to-end self-driving. 3D point clouds provide rich 3D information, but processing them is difficult because point clouds are irregular and unordered. Neural point processing methods such as GraphCNN and PointNet operate on individual points to produce accurate classification and segmentation results. Occlusion of these 3D point clouds remains a major problem for autonomous driving. To process occluded point clouds, this research explores deep learning models that fill in missing points from partial point clouds. Specifically, we introduce improvements to deep multistage point completion networks, proposing novel encoder and decoder architectures that efficiently take partial point clouds as input and output complete point clouds. Results are demonstrated on the ShapeNet dataset.
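The encoder-decoder idea mentioned above can be sketched minimally: a shared per-point transform, a symmetric pooling step (which makes the network invariant to the unordered nature of point clouds), and a decoder that maps the pooled global feature to a fixed-size completed point set. The class below is a hypothetical toy with random weights, intended only to show the PointNet-style data flow; it is not the architecture proposed in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

class PointCompletionSketch:
    """Toy PointNet-style completion sketch (hypothetical, untrained):
    shared per-point MLP -> symmetric max-pool -> decode global feature
    into a fixed set of n_out output points."""
    def __init__(self, feat=32, n_out=64):
        # Random weights: this demonstrates shapes and invariance, not learning.
        self.W1 = rng.normal(0.0, 0.1, (3, feat))
        self.W2 = rng.normal(0.0, 0.1, (feat, n_out * 3))
        self.n_out = n_out

    def forward(self, pts):
        h = np.maximum(pts @ self.W1, 0.0)            # shared per-point MLP (ReLU)
        g = h.max(axis=0)                             # max-pool: order-invariant
        return (g @ self.W2).reshape(self.n_out, 3)   # decode to completed points
```

Because the only aggregation across points is a max-pool, permuting the input points leaves the output unchanged, which is the key property that lets such networks consume raw, unordered point clouds.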
Deep learning has made significant advancements in the field of robotics. For a robot gripper such as a suction cup to hold an object firmly, the robot needs to determine which portions of the object, specifically which surfaces, should be used to mount the suction cup. Since 3D objects can be represented in many forms for computational purposes, a proper representation of 3D objects is necessary to tackle this problem. Formulating this problem as a deep learning task poses dataset challenges. In this work we show that representing 3D objects as 3D meshes is effective for the robot gripper problem, and we investigate appropriate approaches to dataset creation and performance evaluation.