Automated Evaluation of 3D Reconstruction Results for Benchmarking View Planning
To obtain complete 3D object reconstructions using optical measurements, several views of the object are necessary. The task of determining good sensor positions that achieve a 3D reconstruction with low error, high completeness, and few required views is called the Next Best View (NBV) problem. Solving the NBV problem is an important task for automated 3D reconstruction. However, comparing different planning methods has been difficult, since few dedicated test methods exist. We present an extension to our NBV benchmark framework that allows for faster, automated evaluation of large result data sets. We show that the method introduces insignificant error while considerably reducing evaluation runtime and increasing robustness.
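The NBV problem stated above is often approached greedily: score each candidate sensor pose by how much uncertainty it is expected to resolve, then move to the best one. A minimal sketch under that (assumed) formulation — the function and variable names are illustrative, not from the benchmark framework:

```python
import numpy as np

def expected_information_gain(view, occupancy_probs, visible_fn):
    # Sum the binary entropy of every voxel the candidate view would observe;
    # highly uncertain voxels (p near 0.5) contribute the most.
    p = np.clip(occupancy_probs[visible_fn(view)], 1e-6, 1 - 1e-6)
    return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)).sum())

def next_best_view(candidates, occupancy_probs, visible_fn):
    # Greedy NBV: pick the candidate pose with the highest expected gain.
    gains = [expected_information_gain(v, occupancy_probs, visible_fn)
             for v in candidates]
    return candidates[int(np.argmax(gains))]
```

A view that covers still-uncertain voxels (occupancy near 0.5) outscores one that re-observes already-decided space, which is the behavior a benchmark of NBV planners implicitly measures.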
Pred-NBV: Prediction-guided Next-Best-View for 3D Object Reconstruction
Prediction-based active perception has shown the potential to improve a
robot's navigation efficiency and safety by anticipating the uncertainty
in the unknown environment. Existing works on 3D shape prediction make
implicit assumptions about the partial observations and therefore cannot be
used for real-world planning, and they do not consider the control effort for
next-best-view planning. We present Pred-NBV, a realistic object shape
reconstruction method consisting of PoinTr-C, an enhanced 3D prediction model
trained on the ShapeNet dataset, and an information- and control-effort-based
next-best-view method that addresses these issues. Pred-NBV shows an improvement
of 25.46% in object coverage over traditional methods in the AirSim simulator,
and performs better shape completion than PoinTr, the state-of-the-art shape
completion model, even on real data obtained from a Velodyne 3D LiDAR mounted
on a DJI M600 Pro.
Comment: 6 pages, 4 figures, 2 tables. Accepted to IROS 202
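The abstract above describes a next-best-view objective that jointly weighs information gain against control effort. A minimal sketch of such a trade-off, with hypothetical names and a hypothetical weighting — not the objective from the paper:

```python
def view_utility(info_gain, control_effort, tau=0.5):
    # Hypothetical trade-off: reward expected new information, penalize the
    # control effort (e.g. travel distance) needed to reach the view; tau is
    # an illustrative weight, not a parameter from the paper.
    return info_gain - tau * control_effort

def best_view(scored_views):
    # scored_views: list of (info_gain, control_effort) per candidate view.
    return max(range(len(scored_views)),
               key=lambda i: view_utility(*scored_views[i]))
```

Under such an objective, a slightly less informative view that is much cheaper to reach can win, which is the point of folding control effort into the planner.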
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction
We propose MAP-NBV, a prediction-guided active algorithm for 3D
reconstruction with multi-agent systems. Prediction-based approaches have shown
great improvements in active perception tasks by learning cues about
structures in the environment from data. However, these methods primarily focus
on single-agent systems. We design a next-best-view approach that utilizes
geometric measures over the predictions and jointly optimizes the information
gain and control effort for efficient collaborative 3D reconstruction of the
object. Our method achieves 22.75% improvement over the prediction-based
single-agent approach and 15.63% improvement over the non-predictive
multi-agent approach. We make our code publicly available through our project
website: http://raaslab.org/projects/MAPNBV/
Comment: 7 pages, 7 figures, 2 tables. Submitted to MRS 202
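One common way to extend single-agent NBV to the multi-agent, jointly optimized setting described above is greedy set-cover-style selection: each agent takes the view adding the most voxels that the previously chosen views do not already cover. This is an illustrative sketch of that general idea, not MAP-NBV's actual algorithm:

```python
def joint_best_views(view_coverage, n_agents):
    # view_coverage[i] is the set of voxel ids view i would observe.
    # Each agent greedily takes the view with the largest marginal coverage,
    # so agents avoid redundantly observing the same part of the object.
    covered, chosen = set(), []
    for _ in range(n_agents):
        best = max(range(len(view_coverage)),
                   key=lambda i: len(view_coverage[i] - covered))
        chosen.append(best)
        covered |= view_coverage[best]
    return chosen
```

Because coverage is submodular, this greedy joint selection carries a classic (1 - 1/e) approximation guarantee relative to the best view set.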
3D ShapeNets: A Deep Representation for Volumetric Shapes
3D shape is a crucial but heavily underutilized cue in today's computer
vision systems, mostly due to the lack of a good generic shape representation.
With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft
Kinect), it is becoming increasingly important to have a powerful 3D shape
representation in the loop. Apart from category recognition, recovering full 3D
shapes from view-based 2.5D depth maps is also a critical part of visual
understanding. To this end, we propose to represent a geometric 3D shape as a
probability distribution of binary variables on a 3D voxel grid, using a
Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the
distribution of complex 3D shapes across different object categories and
arbitrary poses from raw CAD data, and discovers hierarchical compositional
part representations automatically. It naturally supports joint object
recognition and shape completion from 2.5D depth maps, and it enables active
object recognition through view planning. To train our 3D deep learning model,
we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive
experiments show that our 3D deep representation enables significant
performance improvement over the state of the art in a variety of tasks.
Comment: to appear in CVPR 201
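3D ShapeNets consumes shapes as binary variables on a 3D voxel grid. A minimal sketch of turning a normalized point cloud into such a grid — the grid size and the [0, 1) normalization are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def voxelize(points, resolution=30):
    # Map an (N, 3) point cloud, assumed normalized into [0, 1)^3, onto a
    # binary occupancy grid of the kind volumetric models consume;
    # resolution=30 mirrors commonly used 30^3 grids but is an assumption.
    grid = np.zeros((resolution,) * 3, dtype=bool)
    idx = np.clip((points * resolution).astype(int), 0, resolution - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

A 2.5D depth map can be back-projected to a point cloud and passed through the same routine, which is how view-based observations and full CAD shapes end up in a shared volumetric representation.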