Active Image-based Modeling with a Toy Drone
Image-based modeling techniques can now generate photo-realistic 3D models
from images. But it is up to users to provide high-quality images with good
coverage and view overlap, which makes data capture tedious and
time-consuming. We seek to automate data capturing for image-based modeling.
The core of our system is an iterative linear method to solve the multi-view
stereo (MVS) problem quickly and plan the Next-Best-View (NBV) effectively. Our
fast MVS algorithm enables online model reconstruction and quality assessment
to determine the NBVs on the fly. We test our system with a toy unmanned aerial
vehicle (UAV) in simulated, indoor and outdoor experiments. Results show that
our system improves the efficiency of data acquisition and ensures the
completeness of the final model.
Comment: To be published at the International Conference on Robotics and
Automation 2018, Brisbane, Australia. Project Page:
https://huangrui815.github.io/active-image-based-modeling/ The author's
personal page: http://www.sfu.ca/~rha55
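The abstract's core loop (reconstruct online, assess quality, pick the Next-Best-View) can be illustrated with a toy sketch. This is not the authors' iterative linear MVS method; the scoring rule, candidate format, and visibility test here are simplified assumptions for illustration only:

```python
# Toy NBV selection sketch: score each candidate view by how much
# poorly reconstructed surface it could re-observe, then fly to the best.
import math

def view_score(view, low_quality_points):
    """Score a candidate view (x, y, z, max_range) by the summed
    quality deficit of reconstructed points it can see.
    Visibility is a crude range check, ignoring occlusion and FOV."""
    vx, vy, vz, max_range = view
    score = 0.0
    for (px, py, pz, quality) in low_quality_points:
        if math.dist((vx, vy, vz), (px, py, pz)) <= max_range:
            score += 1.0 - quality  # worse points contribute more
    return score

def next_best_view(candidates, low_quality_points):
    """Greedy NBV: the candidate covering the most quality deficit."""
    return max(candidates, key=lambda v: view_score(v, low_quality_points))
```

In an active capture loop, `low_quality_points` would be refreshed after each new image is folded into the online reconstruction, and the loop would terminate once no candidate's score exceeds a completeness threshold.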
Surface Edge Explorer (SEE): Planning Next Best Views Directly from 3D Observations
Surveying 3D scenes is a common task in robotics. Systems can do so
autonomously by iteratively obtaining measurements. This process of planning
observations to improve the model of a scene is called Next Best View (NBV)
planning.
NBV planning approaches often use either volumetric (e.g., voxel grids) or
surface (e.g., triangulated meshes) representations. Volumetric approaches
generalise well between scenes as they do not depend on surface geometry but do
not scale to high-resolution models of large scenes. Surface representations
can obtain high-resolution models at any scale but often require tuning of
unintuitive parameters or multiple survey stages.
This paper presents a scene-model-free NBV planning approach with a density
representation. The Surface Edge Explorer (SEE) uses the density of current
measurements to detect and explore observed surface boundaries. This approach
is shown experimentally to provide better surface coverage in lower computation
time than the evaluated state-of-the-art volumetric approaches while moving
equivalent distances.
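SEE's key idea, per the abstract, is using measurement density to find observed surface boundaries. A minimal sketch of that idea (assumed, not SEE's actual algorithm: the radius test, neighbour threshold, and brute-force search are illustrative simplifications):

```python
# Toy density-based boundary detection: a measurement with few
# neighbours inside a fixed radius likely sits at the edge of the
# observed surface and is a candidate for the next observation.
import math

def boundary_points(points, radius, min_neighbours):
    """Return points whose neighbour count within `radius` falls
    below `min_neighbours`. O(n^2) brute force; a real system would
    use a spatial index (e.g. a k-d tree)."""
    edges = []
    for p in points:
        n = sum(1 for q in points
                if q is not p and math.dist(p, q) <= radius)
        if n < min_neighbours:
            edges.append(p)
    return edges
```

On a line of evenly spaced measurements, only the two endpoints fall below the neighbour threshold, so they are the ones flagged for further observation.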
GATSBI: An Online GTSP-Based Algorithm for Targeted Surface Bridge Inspection
We study the problem of visually inspecting the surface of a bridge for
defects using an Unmanned Aerial Vehicle (UAV). We do not assume that a
geometric model of the bridge is known. The UAV is equipped with LiDAR and RGB
sensors that are used to build a 3D semantic map of the environment. Our planner, termed
GATSBI, plans in an online fashion a path that is targeted towards inspecting
all points on the surface of the bridge. The input to GATSBI consists of a 3D
occupancy grid map of the part of the environment seen by the UAV so far. We
use semantic segmentation to segment the voxels into those that are part of the
bridge and the surroundings. Inspecting a bridge voxel requires the UAV to take
images from a desired viewing angle and distance. We then create a Generalized
Traveling Salesperson Problem (GTSP) instance to cluster candidate viewpoints
for inspecting the bridge voxels and use an off-the-shelf GTSP solver to find
the optimal path for the given instance. As more parts of the environment are
seen, we replan the path. We evaluate the performance of our algorithm through
high-fidelity simulations conducted in Gazebo. We compare the performance of
this algorithm with a frontier exploration algorithm. Our evaluation reveals
that targeting the inspection to only the segmented bridge voxels and planning
carefully using a GTSP solver leads to more efficient inspection than the
baseline algorithms.Comment: 8 pages, 16 figure
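The GTSP structure described above (clusters of candidate viewpoints, one viewpoint per bridge voxel must be visited) can be sketched with a brute-force solver. This is illustrative only; GATSBI uses an off-the-shelf GTSP solver, and the distance metric and input format here are assumptions:

```python
# Minimal brute-force GTSP sketch: choose one viewpoint from each
# cluster and order the clusters so total flight distance from the
# start pose is minimised. Exponential time -- toy instances only.
import itertools
import math

def gtsp_tour(start, clusters):
    """clusters: list of viewpoint lists (one list per bridge voxel).
    Returns (cost, tour) visiting exactly one viewpoint per cluster."""
    best_cost, best_tour = math.inf, None
    for order in itertools.permutations(range(len(clusters))):
        for choice in itertools.product(*(clusters[i] for i in order)):
            cost, prev = 0.0, start
            for v in choice:
                cost += math.dist(prev, v)
                prev = v
            if cost < best_cost:
                best_cost, best_tour = cost, list(choice)
    return best_cost, best_tour
```

In the online setting the abstract describes, this would be re-solved each time semantic segmentation reveals new bridge voxels, with each new voxel contributing a fresh cluster of admissible viewpoints.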