Visual SLAM for flying vehicles
The ability to learn a map of the environment is important for numerous types of robotic vehicles. In this paper, we address the problem of learning a visual map of the ground using flying vehicles. We assume that the vehicles are equipped with one or two low-cost down-looking cameras in combination with an attitude sensor. Our approach is able to construct a visual map that can later be used for navigation. Key advantages of our approach are that it is comparatively easy to implement, can robustly deal with noisy camera images, and can operate either with a monocular camera or a stereo camera system. Our technique uses visual features and estimates the correspondences between features using a variant of the progressive sample consensus (PROSAC) algorithm. This allows our approach to extract spatial constraints between camera poses that can then be used to address the simultaneous localization and mapping (SLAM) problem by applying graph methods. Furthermore, we address the problem of efficiently identifying loop closures. We performed several experiments with flying vehicles that demonstrate that our method is able to construct maps of large outdoor and indoor environments. © 2008 IEEE
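The PROSAC variant mentioned in the abstract differs from plain RANSAC in that samples are drawn from a progressively growing pool of the best-ranked correspondences rather than uniformly from all of them. The following is a minimal sketch of that sampling idea for a toy 2D translation model; the function name, the translation model, and all parameters are illustrative, not the paper's implementation.

```python
import random

def prosac_translation(matches, thresh=1.0, iters=300, seed=0):
    """PROSAC-style estimation of a 2D translation between two feature sets.

    matches: list of (p, q, score) with p, q 2D points and score = descriptor
    distance (lower is better). Unlike plain RANSAC, minimal samples are
    drawn from a pool of the best-ranked matches that grows toward the
    full set as iterations proceed.
    """
    rng = random.Random(seed)
    ranked = sorted(matches, key=lambda m: m[2])  # best matches first
    best_t, best_inliers = None, []
    for i in range(iters):
        pool = min(len(ranked), 2 + i * len(ranked) // iters)  # growing pool
        a, b = rng.sample(ranked[:max(pool, 2)], 2)
        # hypothesis: average offset of the two sampled correspondences
        tx = ((a[1][0] - a[0][0]) + (b[1][0] - b[0][0])) / 2
        ty = ((a[1][1] - a[0][1]) + (b[1][1] - b[0][1])) / 2
        inliers = [m for m in ranked
                   if abs(m[1][0] - m[0][0] - tx) < thresh
                   and abs(m[1][1] - m[0][1] - ty) < thresh]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers
```

Because good descriptor scores correlate with correct matches, the early, small pools tend to produce an all-inlier sample much sooner than uniform sampling would.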
Hybrid One-Shot 3D Hand Pose Estimation by Exploiting Uncertainties
Model-based approaches to 3D hand tracking have been shown to perform well in
a wide range of scenarios. However, they require initialisation and cannot
recover easily from tracking failures that occur due to fast hand motions.
Data-driven approaches, on the other hand, can quickly deliver a solution, but
the results often suffer from lower accuracy or missing anatomical validity
compared to those obtained from model-based approaches. In this work we propose
a hybrid approach for hand pose estimation from a single depth image. First, a
learned regressor is employed to deliver multiple initial hypotheses for the 3D
position of each hand joint. Subsequently, the kinematic parameters of a 3D
hand model are found by deliberately exploiting the inherent uncertainty of the
inferred joint proposals. This way, the method provides anatomically valid and
accurate solutions without requiring manual initialisation or suffering from
track losses. Quantitative results on several standard datasets demonstrate
that the proposed method outperforms state-of-the-art representatives of the
model-based, data-driven and hybrid paradigms.Comment: BMVC 2015 (oral); see also
http://lrs.icg.tugraz.at/research/hybridhape
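The core idea of exploiting per-joint uncertainty can be illustrated on a toy scale: given several candidate positions per joint, pick the combination that trades proposal confidence against a kinematic constraint (here a fixed bone length). All names and the cost function are a hypothetical sketch, not the paper's model.

```python
import itertools
import math

def select_hypotheses(proposals, bone_len, lam=1.0):
    """Pick one proposal per joint for a two-joint chain.

    proposals: one list per joint of (position, confidence) candidates.
    The chosen pair minimizes a cost that penalizes deviation from the
    anatomical bone length while rewarding confident proposals.
    """
    best, best_cost = None, float("inf")
    for (p0, c0), (p1, c1) in itertools.product(*proposals):
        d = math.dist(p0, p1)
        cost = lam * (d - bone_len) ** 2 - (c0 + c1)
        if cost < best_cost:
            best, best_cost = (p0, p1), cost
    return best
```

In the full method the model fitting is continuous over kinematic parameters rather than a discrete pick, but the principle is the same: anatomical validity disambiguates among uncertain joint proposals.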
Efficient Online Surface Correction for Real-time Large-Scale 3D Reconstruction
State-of-the-art methods for large-scale 3D reconstruction from RGB-D sensors
usually reduce drift in camera tracking by globally optimizing the estimated
camera poses in real-time without simultaneously updating the reconstructed
surface on pose changes. We propose an efficient on-the-fly surface correction
method for globally consistent dense 3D reconstruction of large-scale scenes.
Our approach uses a dense Visual RGB-D SLAM system that estimates the camera
motion in real-time on a CPU and refines it in a global pose graph
optimization. Consecutive RGB-D frames are locally fused into keyframes, which
are incorporated into a sparse voxel hashed Signed Distance Field (SDF) on the
GPU. On pose graph updates, the SDF volume is corrected on-the-fly using a
novel keyframe re-integration strategy with reduced GPU-host streaming. We
demonstrate in an extensive quantitative evaluation that our method is up to
93% more runtime efficient compared to the state-of-the-art and requires
significantly less memory, with only negligible loss of surface quality.
Overall, our system requires only a single GPU and allows for real-time surface
correction of large environments.
Comment: British Machine Vision Conference (BMVC), London, September 201
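The keyframe re-integration strategy rests on the fact that a weighted-average TSDF update is exactly invertible: a keyframe's contribution can be subtracted with its old pose and re-added with the corrected one. A minimal dictionary-based sketch of that idea, with illustrative names and a simplified voxel representation:

```python
def integrate(vol, samples, sign=+1):
    """Weighted-average TSDF update; sign=-1 de-integrates (exact inverse).

    vol: dict mapping voxel index -> (sdf, weight).
    samples: iterable of (voxel, sdf, weight) contributions from one keyframe.
    """
    for v, sdf, w in samples:
        d, wt = vol.get(v, (0.0, 0.0))
        new_w = wt + sign * w
        if new_w <= 0:
            vol.pop(v, None)  # contribution fully removed
        else:
            vol[v] = ((d * wt + sign * w * sdf) / new_w, new_w)

def reintegrate_keyframe(vol, samples_old, samples_new):
    """On a pose-graph update: remove the keyframe's old contribution,
    then fuse it again at the corrected pose (a sketch of the idea,
    not the paper's GPU implementation)."""
    integrate(vol, samples_old, sign=-1)
    integrate(vol, samples_new, sign=+1)
```

Fusing frames into keyframes first, as the paper does, keeps the number of such re-integrations small when poses change.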
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
Cameras are a crucial exteroceptive sensor for self-driving cars as they are
low-cost and small, provide appearance information about the environment, and
work in various weather conditions. They can be used for multiple purposes such
as visual navigation and obstacle detection. We can use a surround multi-camera
system to cover the full 360-degree field-of-view around the car. In this way,
we avoid blind spots which can otherwise lead to accidents. To minimize the
number of cameras needed for surround perception, we utilize fisheye cameras.
Consequently, standard vision pipelines for 3D mapping, visual localization,
obstacle detection, etc. need to be adapted to take full advantage of the
availability of multiple cameras rather than treat each camera individually. In
addition, processing of fisheye images has to be supported. In this paper, we
describe the camera calibration and subsequent processing pipeline for
multi-fisheye-camera systems developed as part of the V-Charge project. This
project seeks to enable automated valet parking for self-driving cars. Our
pipeline is able to precisely calibrate multi-camera systems, build sparse 3D
maps for visual navigation, visually localize the car with respect to these
maps, generate accurate dense maps, as well as detect obstacles based on
real-time depth map extraction.
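Fisheye cameras cannot be handled by the standard pinhole model, which is why the pipeline above needs adapted calibration. One common choice for such lenses is the unified omnidirectional projection model; the sketch below shows its forward projection, with all parameter values illustrative (the V-Charge pipeline's actual model may differ).

```python
import math

def project_unified(p, xi, fx, fy, cx, cy):
    """Unified omnidirectional projection of a 3D point p = (x, y, z).

    The point is first projected onto a unit sphere, then through a
    virtual pinhole displaced by the mirror parameter xi; xi = 0
    reduces to the standard pinhole model.
    """
    x, y, z = p
    rho = math.sqrt(x * x + y * y + z * z)
    denom = z + xi * rho          # displaced projection center
    return (fx * x / denom + cx, fy * y / denom + cy)
```

With xi = 0 the function behaves as a pinhole camera, which makes it easy to sanity-check a calibration implementation before fitting the fisheye parameters.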
Simultaneous Hand Pose and Skeleton Bone-Lengths Estimation from a Single Depth Image
Articulated hand pose estimation is a challenging task for human-computer
interaction. The state-of-the-art hand pose estimation algorithms work only
with one or a few subjects for which they have been calibrated or trained.
Particularly, the hybrid methods based on learning followed by model fitting or
model based deep learning do not explicitly consider varying hand shapes and
sizes. In this work, we introduce a novel hybrid algorithm for estimating the
3D hand pose as well as bone-lengths of the hand skeleton at the same time,
from a single depth image. The proposed CNN architecture learns hand pose
parameters and scale parameters associated with the bone-lengths
simultaneously. Subsequently, a new hybrid forward kinematics layer employs
both parameters to estimate 3D joint positions of the hand. For end-to-end
training, we combine three public datasets NYU, ICVL and MSRA-2015 in one
unified format to achieve large variation in hand shapes and sizes. Among
hybrid methods, our method shows improved accuracy over the state-of-the-art on
the combined dataset and the ICVL dataset that contain multiple subjects. Also,
our algorithm is demonstrated to work well with unseen images.
Comment: This paper was accepted and presented at the 3DV-2017 conference
held in Qingdao, China. http://irc.cs.sdu.edu.cn/3dv
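A forward kinematics layer of the kind described maps joint angles and bone lengths to joint positions in a differentiable way. A minimal planar (2D) sketch of the computation, with illustrative names; the paper's layer is the 3D analogue embedded in the CNN, with the bone lengths supplied by the scale-parameter branch:

```python
import math

def forward_kinematics(angles, bone_lengths, root=(0.0, 0.0)):
    """Planar kinematic chain: relative joint angles plus bone lengths
    yield absolute joint positions, starting from the root joint.
    Every operation is smooth, so gradients can flow back to both the
    angle and the bone-length parameters during end-to-end training."""
    joints = [root]
    x, y = root
    theta = 0.0
    for a, l in zip(angles, bone_lengths):
        theta += a                  # accumulate relative rotations
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        joints.append((x, y))
    return joints
```

Estimating the bone lengths jointly with the pose, as the abstract describes, lets the same kinematic layer adapt to hands of different sizes instead of assuming one calibrated skeleton.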