
    Predicting the Next Best View for 3D Mesh Refinement

    3D reconstruction is a core task in many applications such as robot navigation or site inspection. Finding the best poses from which to capture part of the scene is one of the most challenging problems, known as Next Best View. Recently, many volumetric methods have been proposed; they choose the Next Best View by reasoning over a 3D voxelized space and finding the pose that minimizes the uncertainty encoded in the voxels. Such methods are effective, but they do not scale well since the underlying representation requires a huge amount of memory. In this paper we propose a novel mesh-based approach which focuses on the worst reconstructed regions of the environment mesh. We define a photo-consistency index to evaluate the accuracy of the 3D mesh, and an energy function over the worst regions of the mesh which takes into account the mutual parallax with respect to the previous cameras, the angle of incidence of the viewing ray on the surface, and the visibility of the region. We test our approach on a well-known dataset and achieve state-of-the-art results.
    Comment: 13 pages, 5 figures, to be published in IAS-1
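    The abstract names three terms in its view-scoring energy. The following is a minimal sketch of how a candidate camera might be scored against one poorly reconstructed mesh face; the sine-based parallax score, the unit weights, and all function names are illustrative assumptions, not the paper's actual formulation.

        import numpy as np

        def view_energy(cam_pos, face_center, face_normal, prev_cams,
                        w_parallax=1.0, w_incidence=1.0, visible=True):
            """Score one candidate camera against one poorly reconstructed face."""
            if not visible:
                return 0.0  # occluded regions contribute nothing
            view_dir = face_center - cam_pos
            view_dir = view_dir / np.linalg.norm(view_dir)
            # Incidence term: reward viewing rays that hit the surface head-on.
            incidence = max(0.0, float(-view_dir @ face_normal))
            # Parallax term: reward a wide triangulation baseline against the
            # best previous camera (sin peaks at 90 degrees of parallax).
            parallax = 0.0
            for prev in prev_cams:
                prev_dir = face_center - prev
                prev_dir = prev_dir / np.linalg.norm(prev_dir)
                angle = np.arccos(np.clip(float(view_dir @ prev_dir), -1.0, 1.0))
                parallax = max(parallax, float(np.sin(angle)))
            return w_parallax * parallax + w_incidence * incidence

    The candidate pose maximizing the summed energy over the worst-scoring faces would then be selected as the next view.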

    Pred-NBV: Prediction-guided Next-Best-View for 3D Object Reconstruction

    Prediction-based active perception has shown the potential to improve a robot's navigation efficiency and safety by anticipating uncertainty in the unknown environment. Existing works on 3D shape prediction make implicit assumptions about the partial observations, and therefore cannot be used for real-world planning; they also do not consider the control effort required for next-best-view planning. We present Pred-NBV, a realistic object shape reconstruction method consisting of PoinTr-C, an enhanced 3D prediction model trained on the ShapeNet dataset, and an information- and control-effort-based next-best-view method that addresses these issues. Pred-NBV achieves a 25.46% improvement in object coverage over traditional methods in the AirSim simulator, and performs better shape completion than PoinTr, the state-of-the-art shape completion model, even on real data obtained from a Velodyne 3D LiDAR mounted on a DJI M600 Pro.
    Comment: 6 pages, 4 figures, 2 tables. Accepted to IROS 202
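    The core idea is to score candidate views on the predicted (completed) shape rather than only on what has been seen. A minimal sketch of that trade-off follows; the gain/effort ratio, the cone-shaped field-of-view test, and all names are assumptions, not the paper's objective.

        import math

        def in_fov(view, point, max_range=5.0, half_fov=math.radians(45)):
            """Crude visibility test: point within range and inside the camera cone."""
            x, y, z, yaw = view
            dx, dy, dz = point[0] - x, point[1] - y, point[2] - z
            if math.sqrt(dx * dx + dy * dy + dz * dz) > max_range:
                return False
            bearing = math.atan2(dy, dx)
            return abs((bearing - yaw + math.pi) % (2 * math.pi) - math.pi) <= half_fov

        def next_best_view(candidates, current_view, predicted_pts, seen_pts):
            """Balance new coverage of the predicted shape against control effort,
            approximated here by straight-line travel distance."""
            def gain(view):  # predicted points the view would newly observe
                return sum(1 for p in predicted_pts
                           if p not in seen_pts and in_fov(view, p))
            def effort(view):
                return math.dist(view[:3], current_view[:3]) + 1e-6
            return max(candidates, key=lambda v: gain(v) / effort(v))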

    3D Multi-Robot Exploration with a Two-Level Coordination Strategy and Prioritization

    This work presents a 3D multi-robot exploration framework for a team of UGVs moving on uneven terrain. The framework was designed by casting the two-level coordination strategy presented in [1] into the context of multi-robot exploration. The resulting distributed exploration technique minimizes and explicitly manages the occurrence of conflicts and interference in the robot team. Each robot selects where to scan next by using a receding-horizon next-best-view approach [2]: a sampling-based tree is expanded directly on the segmented traversable regions of the terrain 3D map to generate the candidate next viewpoints. During the exploration, users can assign higher priorities to locations on demand to steer the exploration toward areas of interest. The proposed framework can also be used to perform coverage tasks when a map of the environment is provided a priori as input. An open-source implementation is available online.
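    For reference, the receding-horizon pattern of [2] reduces to a few lines: grow short branches of candidate viewpoints, score each branch by discounted cumulative gain, execute only the first edge, then replan. This sketch assumes expand(node) samples a traversable neighbour and gain(node) scores a viewpoint; user-assigned priorities could simply scale gain. Everything here is illustrative, not the authors' implementation.

        def plan_step(root, expand, gain, horizon=3, samples=50, discount=0.9):
            """Receding-horizon NBV: return the first viewpoint of the best of
            `samples` randomly grown branches; re-run after each move."""
            best_branch, best_score = None, float('-inf')
            for _ in range(samples):
                branch, score = [root], 0.0
                for depth in range(horizon):
                    node = expand(branch[-1])  # sample a reachable next viewpoint
                    score += (discount ** depth) * gain(node)
                    branch.append(node)
                if score > best_score:
                    best_branch, best_score = branch, score
            return best_branch[1]  # execute one edge, then replan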

    NeU-NBV: Next Best View Planning Using Uncertainty Estimation in Image-Based Neural Rendering

    Autonomous robotic tasks require actively perceiving the environment to achieve application-specific goals. In this paper, we address the problem of positioning an RGB camera to collect the most informative images to represent an unknown scene, given a limited measurement budget. We propose a novel mapless planning framework to iteratively plan the next best camera view based on collected image measurements. A key aspect of our approach is a new technique for uncertainty estimation in image-based neural rendering, which guides measurement acquisition to the most uncertain of the candidate views, thus maximising the information value during data collection. By incrementally adding new measurements into our image collection, our approach efficiently explores an unknown scene in a mapless manner. We show that our uncertainty estimation is generalisable and valuable for view planning in unknown scenes. Our planning experiments using synthetic and real-world data verify that our uncertainty-guided approach finds informative images leading to more accurate scene representations when compared against baselines.
    Comment: Accepted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 202
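    The selection rule itself is simple once the renderer returns an uncertainty map alongside each synthesized image. A minimal sketch, assuming a render(pose) -> (rgb, uncertainty) callable and mean per-pixel uncertainty as the acquisition score (both assumptions):

        import numpy as np

        def most_uncertain_view(candidate_poses, render):
            """Acquire the candidate view the rendering model is least sure about."""
            def score(pose):
                _rgb, sigma = render(pose)  # sigma: H x W per-pixel uncertainty map
                return float(np.mean(sigma))
            return max(candidate_poses, key=score)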

    MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction

    We propose MAP-NBV, a prediction-guided active algorithm for 3D reconstruction with multi-agent systems. Prediction-based approaches have shown great improvement in active perception tasks by learning cues about structures in the environment from data, but these methods primarily focus on single-agent systems. We design a next-best-view approach that utilizes geometric measures over the predictions and jointly optimizes the information gain and control effort for efficient collaborative 3D reconstruction of the object. Our method achieves a 22.75% improvement over the prediction-based single-agent approach and a 15.63% improvement over the non-predictive multi-agent approach. We make our code publicly available through our project website: http://raaslab.org/projects/MAPNBV/
    Comment: 7 pages, 7 figures, 2 tables. Submitted to MRS 202
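    To make the multi-agent aspect concrete, here is a greedy stand-in for the joint optimization the abstract describes: each agent in turn claims the view maximizing gain minus travel effort, with gain discounted by overlap against views already claimed by teammates. The greedy ordering, the overlap penalty, and the effort weight alpha are all assumptions; the paper's actual objective may differ.

        import math

        def assign_views(agent_poses, candidates, gain, overlap, alpha=0.5):
            """Greedily assign one candidate view per agent, trading information
            gain against travel effort and redundancy with teammates' views."""
            assigned = {}
            for agent, pose in agent_poses.items():
                def utility(view):
                    g = gain(view) - sum(overlap(view, v) for v in assigned.values())
                    # pose and view are assumed same-length position tuples
                    return g - alpha * math.dist(pose, view)
                best = max(candidates, key=utility)
                assigned[agent] = best
                candidates = [c for c in candidates if c != best]
            return assigned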

    Sampling-Based Exploration Strategies for Mobile Robot Autonomy

    A novel sampling-based exploration strategy is introduced for Unmanned Ground Vehicles (UGVs) to efficiently map large, GPS-deprived underground environments. It performs on a level similar to state-of-the-art approaches while, unlike them, not being designed for a specific robot or sensor configuration. The introduced exploration strategy, called Random-Sampling-Based Next-Best View Exploration (RNE), uses a Rapidly-exploring Random Graph (RRG) to find possible viewpoints in an area around the robot. They are compared using computation-efficient Sparse Ray Polling (SRP) in a voxel grid to find the next-best view for the exploration. Each node in the exploration graph built with the RRG is evaluated for the UGV's ability to traverse it, which is derived from an occupancy grid map. The occupancy grid map is also used to create a topology-based graph whose nodes are placed centrally, to reduce the risk of collisions and increase the amount of observable space. Nodes that fall outside the local exploration area are stored in a global graph and are connected with a Traveling Salesman Problem solver so they can be explored later.
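    Sparse ray polling amounts to casting only a handful of rays per candidate viewpoint and counting the unknown voxels they would reveal. A minimal sketch under assumed parameters (ray count, range, voxel size) and an assumed dictionary-based grid where absent keys mean "unknown":

        import math

        def sparse_ray_gain(view, occupancy, n_rays=16, max_steps=25, step=0.2):
            """Approximate the information gain of a viewpoint by polling a few
            evenly spaced horizontal rays through a sparse voxel grid."""
            x, y, z = view
            gain = 0
            for i in range(n_rays):
                theta = 2 * math.pi * i / n_rays
                for s in range(1, max_steps + 1):
                    key = (round((x + s * step * math.cos(theta)) / step),
                           round((y + s * step * math.sin(theta)) / step),
                           round(z / step))
                    state = occupancy.get(key)  # 'free', 'occupied', or None
                    if state == 'occupied':
                        break        # ray blocked by a known surface
                    if state is None:
                        gain += 1    # unknown voxel: would be observed from here
            return gain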

    The Surface Edge Explorer (SEE): A measurement-direct approach to next best view planning

    High-quality observations of the real world are crucial for a variety of applications, including producing 3D-printed replicas of small-scale scenes and conducting inspections of large-scale infrastructure. These 3D observations are commonly obtained by combining multiple sensor measurements from different views. Guiding the selection of suitable views is known as the next-best-view (NBV) planning problem. Most NBV approaches reason about measurements using rigid data structures (e.g., surface meshes or voxel grids). This simplifies next-best-view selection but can be computationally expensive, reduces real-world fidelity, and couples the selection of a next best view to the final data processing. This paper presents the Surface Edge Explorer (SEE), an NBV approach that selects new observations directly from previous sensor measurements without requiring rigid data structures. SEE uses measurement density to propose next best views that increase coverage of insufficiently observed surfaces while avoiding potential occlusions. Statistical results from simulated experiments show that SEE can attain similar or better surface coverage with less observation time and travel distance than the evaluated volumetric approaches on both small- and large-scale scenes. Real-world experiments demonstrate SEE autonomously observing a deer statue using a 3D sensor affixed to a robotic arm.
    Comment: Under review for the International Journal of Robotics Research (IJRR), Manuscript #IJR-22-4541. 25 pages, 17 figures, 6 tables. Videos available at https://www.youtube.com/watch?v=dqppqRlaGEA and https://www.youtube.com/playlist?list=PLbaQBz4TuPcyNh4COoaCtC1ZGhpbEkFE
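    Because SEE reasons directly over point measurements, its core loop can be approximated as a density test over the raw point cloud. A sketch under stated assumptions: points and normals are (N, 3) arrays, local density is estimated by counting neighbours within a fixed radius, and the proposed camera sits at a fixed standoff along the point's estimated surface normal; none of these choices is taken from the paper.

        import numpy as np

        def propose_view(points, normals, target_density, radius=0.05, standoff=1.0):
            """Propose observing the most under-sampled measurement from along
            its surface normal; return None if coverage is already sufficient."""
            best_i, best_deficit = None, 0.0
            for i, p in enumerate(points):
                neighbours = int(np.sum(np.linalg.norm(points - p, axis=1) < radius))
                density = neighbours / (np.pi * radius ** 2)  # points per unit area
                deficit = target_density - density
                if deficit > best_deficit:
                    best_i, best_deficit = i, deficit
            if best_i is None:
                return None
            camera_pos = points[best_i] + standoff * normals[best_i]
            return camera_pos, points[best_i]  # camera position and look-at target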