High-Performance and Tunable Stereo Reconstruction
Traditional stereo algorithms have focused their efforts on reconstruction quality and have largely avoided optimizing for run-time performance. Robots, on the other hand, require quick maneuverability and efficient computation to observe their immediate environment and perform tasks within it. In this work, we propose a high-performance and tunable stereo disparity estimation method, with a peak frame rate of 120 Hz (VGA resolution, on a single CPU thread), that can potentially enable robots to quickly reconstruct their immediate surroundings and maneuver at high speeds. Our key contribution is a disparity estimation algorithm that iteratively approximates the scene depth via a piece-wise planar mesh from stereo imagery, with a fast depth validation step for semi-dense reconstruction. The mesh is initially seeded with sparsely matched keypoints, and is recursively tessellated and refined as needed (via a resampling stage) to provide the desired stereo disparity accuracy. The inherent simplicity and speed of our approach, together with the ability to tune it to a desired reconstruction quality and runtime performance, make it a compelling solution for applications in high-speed vehicles.

Comment: Accepted to International Conference on Robotics and Automation (ICRA) 2016; 8 pages, 5 figures
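A minimal sketch of the refinement loop described above, not the authors' implementation: seed a Delaunay mesh with sparse keypoint disparities, interpolate disparity piece-wise linearly over each triangle, and resample triangles where the interpolation disagrees with a validation check. The true_disparity() oracle is a placeholder assumption standing in for the paper's fast stereo matching/validation step.

# Sketch: recursive mesh tessellation for tunable disparity accuracy.
import numpy as np
from scipy.spatial import Delaunay

def true_disparity(pts):
    """Placeholder for a per-pixel stereo matching/validation check."""
    return 20.0 + 10.0 * np.sin(pts[:, 0] / 80.0) * np.cos(pts[:, 1] / 60.0)

def refine_mesh(pts, disp, tol=0.5, rounds=4):
    """Insert a triangle's centroid wherever the piece-wise planar
    interpolation disagrees with the validation oracle by more than tol."""
    for _ in range(rounds):
        tri = Delaunay(pts)
        centroids = pts[tri.simplices].mean(axis=1)
        # The planar interpolant evaluated at a centroid is the mean of
        # the three vertex disparities.
        interp = disp[tri.simplices].mean(axis=1)
        bad = np.abs(interp - true_disparity(centroids)) > tol
        if not bad.any():
            break
        pts = np.vstack([pts, centroids[bad]])
        disp = np.concatenate([disp, true_disparity(centroids[bad])])
    return pts, disp

rng = np.random.default_rng(0)
seeds = rng.uniform([0, 0], [640, 480], size=(50, 2))  # sparse keypoints (VGA)
pts, disp = refine_mesh(seeds, true_disparity(seeds))
print(f"mesh grew from 50 to {len(pts)} vertices")

Lowering tol (or raising rounds) trades runtime for reconstruction quality, which is the tunability the abstract refers to.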
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
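For reference, the de-facto standard formulation mentioned above is maximum-a-posteriori (MAP) estimation over a factor graph, which under Gaussian noise reduces to nonlinear least squares. A sketch in LaTeX, with notation assumed rather than quoted (X the poses and landmarks, z_k the measurements, h_k the measurement models, Omega_k the information matrices):

% MAP estimation of the variables X from the measurements Z:
\mathcal{X}^{\ast} = \operatorname*{argmax}_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
                   = \operatorname*{argmax}_{\mathcal{X}} \; p(\mathcal{X}) \prod_{k} p(z_k \mid \mathcal{X}_k)
% With Gaussian factors this becomes nonlinear least squares:
\mathcal{X}^{\ast} = \operatorname*{argmin}_{\mathcal{X}} \sum_{k} \lVert h_k(\mathcal{X}_k) - z_k \rVert^{2}_{\Omega_k}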
Learning a Bias Correction for Lidar-only Motion Estimation
This paper presents a novel technique to correct for bias in a classical
estimator using a learning approach. We apply a learned bias correction to a
lidar-only motion estimation pipeline. Our technique trains a Gaussian process
(GP) regression model using data with ground truth. The inputs to the model are
high-level features derived from the geometry of the point-clouds, and the
outputs are the predicted biases between poses computed by the estimator and
the ground truth. The predicted biases are applied as a correction to the poses
computed by the estimator.
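A minimal sketch of this train-and-correct loop, assuming scikit-learn's GaussianProcessRegressor and synthetic placeholder data; the actual geometric features and pose parameterization are the paper's and are not reproduced here.

# Sketch: learn a GP from point-cloud features to estimator bias, then
# subtract the predicted bias from new pose estimates. One pose
# component is shown; in general one GP per pose dimension is trained.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
features_train = rng.normal(size=(200, 5))        # geometric features (synthetic)
bias_train = 0.05 * features_train[:, 0] \
    + 0.01 * rng.normal(size=200)                 # estimator minus ground truth

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(features_train, bias_train)

# At run time: predict the bias for a new scan pair and correct the pose.
features_new = rng.normal(size=(1, 5))
pose_estimate = np.array([10.0])                  # e.g. translation along x
corrected = pose_estimate - gp.predict(features_new)
print(corrected)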
Our technique is evaluated on over 50 km of lidar data, which includes the KITTI odometry benchmark and lidar datasets collected around the University of Toronto campus. After applying the learned bias correction, we obtained significant improvements to lidar odometry on all datasets tested. We achieved around a 10% reduction in errors on all datasets from an already accurate lidar odometry algorithm, at the expense of less than a 1% increase in computational cost at run time.

Comment: 15th Conference on Computer and Robot Vision (CRV 2018)
Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.
This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar.

PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
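A minimal sketch of one ingredient above: fitting a planar feature to a sparse point-cloud segment by least squares (SVD). The segmentation and factor-graph machinery of the thesis are not shown, and the synthetic data and names here are illustrative assumptions.

# Sketch: fit a plane (unit normal n with n . x = d) to a sparse 3D
# point-cloud segment, the kind of planar feature the thesis arranges
# in a factor graph. Illustrative only; not the thesis implementation.
import numpy as np

def fit_plane(points):
    """Least-squares plane via SVD: the normal is the singular vector
    of the centered points with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return normal, normal @ centroid     # plane: n . x = d

rng = np.random.default_rng(2)
# Synthetic sparse returns near the plane z = 0.1x + 0.2y + 3.
xy = rng.uniform(-5, 5, size=(30, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 3 + 0.01 * rng.normal(size=30)
n, d = fit_plane(np.column_stack([xy, z]))
s = np.sign(n[2])                        # fix the sign convention
print(n * s, d * s)                      # ~ proportional to (-0.1, -0.2, 1)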
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.

Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
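To make the "familiar objects" category concrete, here is a minimal sketch of similarity-based grasp retrieval: match a shape descriptor of a new object against a database of previously grasped objects and reuse their grasps. Descriptors, database, and grasps are all synthetic placeholders, not from any surveyed system.

# Sketch: nearest-neighbour grasp retrieval for familiar objects.
import numpy as np

rng = np.random.default_rng(3)
db_descriptors = rng.normal(size=(100, 16))   # one descriptor per known object
db_grasps = rng.uniform(size=(100, 6))        # one stored 6-DoF grasp each

def retrieve_grasps(descriptor, k=3):
    """Return candidate grasps from the k most similar database objects."""
    dists = np.linalg.norm(db_descriptors - descriptor, axis=1)
    nearest = np.argsort(dists)[:k]
    return db_grasps[nearest]                 # candidates, to be re-ranked

candidates = retrieve_grasps(rng.normal(size=16))
print(candidates.shape)                       # (3, 6)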
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes, such as visual navigation and obstacle detection. A surround multi-camera system can cover the full 360-degree field of view around the car, avoiding the blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treating each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project, which seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
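As a small illustration of why fisheye images need dedicated handling, here is the common equidistant fisheye projection model. This is an assumption for illustration with hypothetical intrinsics; the V-Charge pipeline calibrates its own camera model.

# Sketch: project 3D camera-frame points with an equidistant fisheye
# model, r = f * theta, instead of the pinhole r = f * tan(theta),
# which diverges as rays approach 90 degrees from the optical axis.
import numpy as np

def project_equidistant(points, f=300.0, cx=320.0, cy=240.0):
    """Map Nx3 camera-frame points to fisheye pixel coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)     # angle from optical axis
    phi = np.arctan2(y, x)                    # azimuth in the image plane
    r = f * theta                             # equidistant: linear in theta
    return np.column_stack([cx + r * np.cos(phi), cy + r * np.sin(phi)])

pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 2.0, 1.0]])
print(project_equidistant(pts))               # last point is ~63 deg off-axis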
A robust and fast method for 6DoF motion estimation from generalized 3D data
Nowadays, an increasing number of robotic applications need to act in real three-dimensional (3D) scenarios. In this paper we present a new mobile-robotics-oriented 3D registration method that improves on previous Iterative Closest Point (ICP) based solutions in both speed and accuracy. As an initial step, we apply a computationally cheap method to obtain descriptions of the planar surfaces in a 3D scene. Then, from these descriptions, we apply a force system in order to compute a six-degrees-of-freedom egomotion accurately and efficiently. We describe the basis of our approach and demonstrate its validity with several experiments using different kinds of 3D sensors and different real 3D environments.

This work has been supported by project DPI2009-07144 from Ministerio de Educación y Ciencia (Spain) and GRE10-35 from Universidad de Alicante (Spain).
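For context, the classical closed-form step that ICP-style registration builds on is SVD-based rigid alignment of corresponding 3D points (Kabsch/Horn). The sketch below shows that standard baseline, not this paper's force-system formulation, and uses synthetic correspondences.

# Sketch: closed-form 6-DoF rigid alignment (rotation R, translation t)
# between corresponding point sets via SVD -- the classical building
# block of ICP-style registration.
import numpy as np

def rigid_align(src, dst):
    """Find R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ S @ U.T
    return R, dst_c - R @ src_c

rng = np.random.default_rng(4)
src = rng.normal(size=(100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))  # True [ 1. -2.  0.5]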
MonoSLAM: Real-time single camera SLAM
Published version