Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities
Visual-Inertial Odometry (VIO) algorithms typically rely on a point cloud
representation of the scene that does not model the topology of the
environment. A 3D mesh instead offers a richer, yet lightweight, model.
Nevertheless, building a 3D mesh out of the sparse and noisy 3D landmarks
triangulated by a VIO algorithm often results in a mesh that does not fit the
real scene. In order to regularize the mesh, previous approaches decouple state
estimation from the 3D mesh regularization step, and either limit the 3D mesh
to the current frame or let the mesh grow indefinitely. We propose instead to
tightly couple mesh regularization and state estimation by detecting and
enforcing structural regularities in a novel factor-graph formulation. We also
propose to incrementally build the mesh by restricting its extent to the
time-horizon of the VIO optimization; the resulting 3D mesh covers a larger
portion of the scene than a per-frame approach while its memory usage and
computational complexity remain bounded. We show that our approach successfully
regularizes the mesh and improves localization accuracy when structural
regularities are present, and that it remains operational in scenes without
regularities.
Comment: 7 pages, 5 figures, accepted at ICRA
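As a loose illustration of the structural-regularity idea (not the authors' implementation, and using plain NumPy rather than a factor-graph library), the sketch below evaluates the point-to-plane residual that a coplanarity factor would penalize during the joint optimization; the plane and landmark values are hypothetical:

```python
import numpy as np

def plane_residual(landmark, n, d):
    """Signed distance of a 3D landmark from the plane n.x = d.

    A structural-regularity factor penalizes this residual, pulling
    coplanar landmarks (and, through the factor graph, the poses that
    observe them) toward the detected plane.
    """
    n = n / np.linalg.norm(n)  # keep the normal unit-length
    return float(n @ landmark - d)

# Hypothetical example: three noisy landmarks near the plane z = 1.
plane_n, plane_d = np.array([0.0, 0.0, 1.0]), 1.0
landmarks = np.array([[0.2, 0.1, 1.03],
                      [1.0, -0.4, 0.98],
                      [-0.5, 0.7, 1.01]])
residuals = [plane_residual(p, plane_n, plane_d) for p in landmarks]
```

In a real VIO back-end these residuals would enter the nonlinear least-squares problem alongside the reprojection and inertial terms, rather than being evaluated in isolation.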
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map) and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
PVI-DSO: Leveraging Planar Regularities for Direct Sparse Visual-Inertial Odometry
The monocular Visual-Inertial Odometry (VIO) based on the direct method can
leverage all the available pixels in the image to estimate the camera motion
and reconstruct the environment. The denser map reconstruction provides more
information about the environment, making it easier to extract structure and
planar regularities. In this paper, we propose PVI-DSO, a monocular direct
sparse visual-inertial odometry system that exploits these planar regularities. Our
system detects coplanar information from 3D meshes generated from 3D point
clouds and uses coplanar parameters to introduce coplanar constraints. In order
to reduce computation and improve compactness, the plane-distance cost is
directly used as the prior information of plane parameters. We conduct ablation
experiments on public datasets and compare our system with other
state-of-the-art algorithms. The experimental results verify that leveraging
plane information improves the accuracy of a VIO system based on the direct
method.
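The coplanarity-detection step described above (recovering plane parameters from near-planar mesh patches) can be sketched with a standard least-squares plane fit; this is an illustrative sketch under that assumption, not the PVI-DSO code, and the sample points are made up:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit (n, d) with n.x = d, via SVD.

    Given the vertices of a near-planar mesh patch, the right singular
    vector associated with the smallest singular value of the centered
    points is the plane normal; d follows from the centroid. The fitted
    parameters could then enter the optimization as a plane-distance prior.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                      # unit normal
    return n, float(n @ centroid)

# Hypothetical patch: samples of the plane x + z = 2.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))
pts = np.column_stack([xy[:, 0], xy[:, 1], 2.0 - xy[:, 0]])
n, d = fit_plane(pts)
```

With noisy vertices the same fit minimizes the sum of squared point-to-plane distances, which is exactly the quantity the plane-distance cost penalizes.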
Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.
This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of: a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
A LiDAR-Inertial SLAM Tightly-Coupled with Dropout-Tolerant GNSS Fusion for Autonomous Mine Service Vehicles
Multi-modal sensor integration has become a crucial prerequisite for
real-world navigation systems. Recent studies have reported successful
deployments of such systems in many fields. However, it is still challenging for
navigation tasks in mine scenes due to satellite signal dropouts, degraded
perception, and observation degeneracy. To solve this problem, we propose a
LiDAR-inertial odometry method in this paper, utilizing both Kalman filter and
graph optimization. The front-end consists of multiple parallel running
LiDAR-inertial odometries, where the laser points, IMU, and wheel odometer
information are tightly fused in an error-state Kalman filter. Instead of the
commonly used feature points, we employ surface elements for registration. The
back-end constructs a pose graph and jointly optimizes the pose estimation
results from the inertial and LiDAR odometry and the global navigation satellite
system (GNSS). Since the vehicle operates inside the tunnel for long periods,
the large accumulated drift may not be fully corrected by the GNSS measurements.
We therefore leverage a loop-closure-based re-initialization process to achieve
full alignment. In addition, the system robustness is improved by handling data
loss, stream consistency, and estimation error. The experimental results show
that our system tolerates long-period degeneracy well through the cooperation of
different LiDARs and surfel registration, achieving meter-level accuracy even
during GNSS dropouts lasting tens of minutes.
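The back-end's joint optimization of odometry and intermittent GNSS can be illustrated with a minimal one-dimensional pose graph; this is a toy sketch with made-up numbers, not the paper's system, and it solves the linear least-squares problem directly instead of using an incremental solver:

```python
import numpy as np

# Toy 1D pose graph: odometry gives relative motion between consecutive
# poses; GNSS gives absolute fixes, but only where signal is available
# (a dropout at poses 1 and 2 mimics a tunnel section). Stacking both
# factor types as linear residuals and solving in the least-squares
# sense mirrors the back-end's joint pose-graph optimization.
odom = [1.0, 1.0, 1.0]        # measured x[i+1] - x[i]
gnss = {0: 0.0, 3: 3.3}       # absolute fixes; drift shows as 3.3 vs 3.0

n = len(odom) + 1
rows, rhs = [], []
for i, u in enumerate(odom):  # odometry factors: x[i+1] - x[i] = u
    row = np.zeros(n)
    row[i + 1], row[i] = 1.0, -1.0
    rows.append(row); rhs.append(u)
for i, z in gnss.items():     # GNSS factors: x[i] = z
    row = np.zeros(n)
    row[i] = 1.0
    rows.append(row); rhs.append(z)

A, b = np.array(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)  # jointly optimized poses
```

The optimizer spreads the 0.3 m of accumulated drift smoothly across the dropout section instead of absorbing it in a single jump, which is the qualitative behavior a pose-graph back-end provides over filtering alone.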