    Predicting the Next Best View for 3D Mesh Refinement

    3D reconstruction is a core task in many applications such as robot navigation or site inspection. Finding the best poses from which to capture part of the scene is one of the most challenging problems in this field, and it goes under the name of Next Best View. Recently, many volumetric methods have been proposed; they choose the Next Best View by reasoning over a 3D voxelized space and by finding the pose that minimizes the uncertainty encoded in the voxels. Such methods are effective, but they do not scale well, since the underlying representation requires a huge amount of memory. In this paper we propose a novel mesh-based approach which focuses on the worst reconstructed regions of the environment mesh. We define a photo-consistent index to evaluate the accuracy of the 3D mesh, and an energy function over the worst regions of the mesh which takes into account the mutual parallax with respect to the previous cameras, the angle of incidence of the viewing ray on the surface, and the visibility of the region. We test our approach on a well-known dataset and achieve state-of-the-art results.
    Comment: 13 pages, 5 figures, to be published in IAS-1
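
    The abstract only names the terms that enter the energy function; the Python sketch below is one hypothetical reading of such a score, not the paper's formulation. The helper name view_energy, the weight vector w, and the treatment of visibility as a precomputed scalar in [0, 1] are all assumptions for illustration.

    import numpy as np

    def view_energy(candidate_pos, region_center, region_normal,
                    prev_cam_positions, visibility, w=(1.0, 1.0, 1.0)):
        """Score a candidate camera position for re-observing a poorly
        reconstructed mesh region (positions/normal are 3-vectors; higher
        scores are better). Weights `w` are illustrative, not the paper's."""
        ray = region_center - candidate_pos
        ray = ray / np.linalg.norm(ray)

        # Mutual parallax: prefer views that triangulate well against the
        # cameras already used (largest angle between viewing rays).
        parallax = 0.0
        for p in prev_cam_positions:
            prev_ray = region_center - p
            prev_ray = prev_ray / np.linalg.norm(prev_ray)
            parallax = max(parallax, np.arccos(np.clip(ray @ prev_ray, -1.0, 1.0)))

        # Angle of incidence: prefer viewing rays close to the surface normal.
        incidence = abs(float(ray @ region_normal))

        # `visibility` in [0, 1]: fraction of the region unoccluded from here.
        return w[0] * parallax + w[1] * incidence + w[2] * visibility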

    Active SLAM for autonomous underwater exploration

    Exploration of a complex underwater environment without an a priori map is beyond the state of the art for autonomous underwater vehicles (AUVs). Despite several efforts regarding simultaneous localization and mapping (SLAM) and view planning, there is no exploration framework, tailored to underwater vehicles, that addresses exploration by combining mapping, active localization, and view planning in a unified way. We propose an exploration framework, based on an active SLAM strategy, that combines three main elements: a view planner, an iterative closest point (ICP)-based pose-graph SLAM algorithm, and an action selection mechanism that makes use of the joint map and state entropy reduction. To demonstrate the benefits of the active SLAM strategy, several tests were conducted with the Girona 500 AUV, both in simulation and in the real world. The article shows how the proposed framework makes it possible to plan exploratory trajectories that keep the vehicle's uncertainty bounded, thus creating more consistent maps.
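
    As a rough illustration of entropy-based action selection of the kind this abstract describes, the Python sketch below scores candidate views by the predicted joint entropy of map and pose after taking them. The entropy models and the predict callback are stand-ins, assumed for illustration rather than taken from the paper.

    import numpy as np

    def map_entropy(occupancy_probs):
        """Shannon entropy (nats) of an occupancy grid, summed over cells."""
        p = np.clip(occupancy_probs, 1e-6, 1 - 1e-6)
        return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)).sum())

    def pose_entropy(covariance):
        """Differential entropy (nats) of a Gaussian pose estimate."""
        k = covariance.shape[0]
        return 0.5 * np.log(((2 * np.pi * np.e) ** k) * np.linalg.det(covariance))

    def select_action(candidates, predict):
        """Pick the candidate view whose predicted joint entropy is lowest.

        `predict(c)` is assumed to simulate taking view `c` and return the
        predicted (occupancy_probs, pose_covariance) after the update.
        """
        def joint(c):
            occ, cov = predict(c)
            return map_entropy(occ) + pose_entropy(cov)
        return min(candidates, key=joint)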

    Implementation of an automated eye-in-hand scanning system using Best-Path planning

    In this thesis we implemented an automated scanning system for 3D object reconstruction. The system is composed of a KUKA LWR 4+ arm with Microsoft Kinect cameras mounted at its end and thus in an eye-in-hand configuration. We implemented the system in ROS using Kinect Fusion software with extra features added by R. Monica's previous work [16], and the MoveIt! ROS libraries [29] to control the robot movement with motion planning. To connect these nodes, we coded a suite in ROS and MATLAB to operate them easily, as well as to include new features such as an original view planner that outperforms the commonly used Next-Best-View planner. This suite incorporates a Graphical User Interface that allows new users to perform the reconstruction tasks easily. The new view planner developed in this work, called the Best-Path planner, offers a new approach using a modified Dijkstra algorithm. Among its benefits, the Best-Path planner offers an optimized way to scan objects while preventing the camera from re-crossing areas that have already been scanned. Moreover, viewpoint location and orientation have been studied in depth in order to obtain the most natural movements and the best results. For this reason, the new planner makes the scanning procedure more robust, as it ensures trajectories through these optimized viewpoints, so the camera always looks towards the object while maintaining the optimal sensing distances. As this project is focused on its later use in the Intelligent Robotics Laboratory, we uploaded all the source code to the Aalto GitLab repositories [37], with installation instructions and user guides showing the different features the suite offers.
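
    A minimal sketch of the core idea behind such a planner, assuming the candidate viewpoints form a weighted graph: a Dijkstra variant that inflates the cost of edges leading back into already-scanned areas. The graph format and the penalty factor below are assumptions for illustration, not taken from the thesis.

    import heapq

    def best_path(graph, start, goal, scanned, penalty=10.0):
        """graph: {node: [(neighbor, cost), ...]}; `scanned`: set of
        viewpoints whose surroundings have already been captured."""
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, cost in graph.get(u, []):
                # Inflate the cost of edges that re-cross scanned areas.
                nd = d + cost * (penalty if v in scanned else 1.0)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        if goal != start and goal not in prev:
            return None  # goal unreachable
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]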

    3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation

    Global registration of heterogeneous ground and aerial mapping data is a challenging task. This is especially difficult in disaster response scenarios, where we have no prior information on the environment and cannot assume the regular order of man-made environments or meaningful semantic cues. In this work we extensively evaluate different approaches to globally register UGV-generated 3D point-cloud data from LiDAR sensors with UAV-generated point-cloud maps from vision sensors. The approaches are realizations of different selections for: a) local features: key-points or segments; b) descriptors: FPFH, SHOT, or ESF; and c) transformation estimation: RANSAC or FGR. Additionally, we compare the results against standard approaches such as applying ICP after a good prior transformation has been given. The evaluation criteria include the distance a UGV needs to travel to successfully localize, the registration error, and the computational cost. In this context, we report our findings on effectively performing the task on two new Search and Rescue datasets. Our results have the potential to help the community make informed decisions when registering point-cloud maps from ground robots to those from aerial robots.
    Comment: Awarded Best Paper at the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017)
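
    This is not the paper's evaluation code, but one of the pipeline configurations it compares (FPFH descriptors with RANSAC estimation, refined by ICP) can be sketched with Open3D, assuming a recent release (0.12+). The file names, voxel size, and all thresholds below are placeholders.

    import open3d as o3d

    voxel = 0.5  # downsampling resolution in metres (assumed scale)

    def preprocess(path):
        pcd = o3d.io.read_point_cloud(path)
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    ugv, ugv_fpfh = preprocess("ugv_lidar_map.pcd")   # placeholder file name
    uav, uav_fpfh = preprocess("uav_vision_map.pcd")  # placeholder file name

    # Global registration: RANSAC over FPFH feature correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        ugv, uav, ugv_fpfh, uav_fpfh, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 4,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Local refinement: point-to-plane ICP seeded with the coarse estimate,
    # i.e. the "ICP after a good prior transformation" baseline.
    fine = o3d.pipelines.registration.registration_icp(
        ugv, uav, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    print(fine.transformation)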