CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction
Given the recent advances in depth prediction from Convolutional Neural
Networks (CNNs), this paper investigates how predicted depth maps from a deep
neural network can be deployed for accurate and dense monocular reconstruction.
We propose a method where CNN-predicted dense depth maps are naturally fused
together with depth measurements obtained from direct monocular SLAM. Our
fusion scheme privileges depth prediction in image locations where monocular
SLAM approaches tend to fail, e.g. along low-textured regions, and vice-versa.
We demonstrate the use of depth prediction for estimating the absolute scale of
the reconstruction, hence overcoming one of the major limitations of monocular
SLAM. Finally, we propose a framework to efficiently fuse semantic labels,
obtained from a single frame, with dense SLAM, yielding semantically coherent
scene reconstruction from a single view. Evaluation results on two benchmark
datasets show the robustness and accuracy of our approach.Comment: 10 pages, 6 figures, IEEE Computer Society Conference on Computer
Vision and Pattern Recognition (CVPR), Hawaii, USA, June, 2017. The first two
authors contribute equally to this pape
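To make the fusion scheme above concrete, here is a minimal NumPy sketch of gradient-weighted depth blending, using local image gradient magnitude as a proxy for texture. It only illustrates the idea, not the paper's actual per-keyframe scheme; the function name and the grad_thresh parameter are hypothetical.

    import numpy as np

    def fuse_depths(cnn_depth, slam_depth, gray, grad_thresh=0.05):
        # Illustrative blend of CNN-predicted and SLAM-estimated depth:
        # low-texture pixels (small gradients) trust the CNN prediction,
        # well-textured pixels trust the SLAM measurement.
        gy, gx = np.gradient(gray.astype(np.float32))
        w = np.clip(np.hypot(gx, gy) / grad_thresh, 0.0, 1.0)
        # SLAM depth is semi-dense; fall back to the CNN where it is missing.
        valid = np.isfinite(slam_depth) & (slam_depth > 0)
        return np.where(valid, w * slam_depth + (1.0 - w) * cnn_depth, cnn_depth)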
SkiMap: An Efficient Mapping Framework for Robot Navigation
We present a novel mapping framework for robot navigation featuring a multi-level querying system capable of rapidly producing representations as diverse as a 3D voxel grid, a 2.5D height map and a 2D occupancy grid. These are inherently embedded into a memory- and time-efficient core data structure organized as a Tree of SkipLists. Compared to the well-known Octree representation, our approach exhibits better time efficiency, thanks to its simple and highly parallelizable computational structure, and a similar memory footprint when mapping large workspaces. Notably for mapping aimed at robot navigation, our framework supports real-time erosion and re-integration of measurements upon reception of optimized poses from the sensor tracker, so as to continuously improve the accuracy of the map.
Comment: Accepted by International Conference on Robotics and Automation (ICRA) 2017. This is the submitted version. The final published version may be slightly different.
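As a rough picture of the "Tree of SkipLists" idea, the toy class below nests three ordered indices (x, then y, then z) whose leaves hold per-voxel hit counters. Plain dictionaries stand in for SkipLists, and the class name, resolution parameter, and hit/miss counters are illustrative assumptions, not the paper's implementation (real SkipLists keep keys ordered so 2D/2.5D slice queries stay cheap).

    from collections import defaultdict

    class SkiMapSketch:
        # Toy stand-in for a Tree of SkipLists: x -> y -> z -> voxel data.
        def __init__(self, resolution=0.05):
            self.res = resolution
            self.root = defaultdict(lambda: defaultdict(dict))

        def _key(self, coord):
            return int(round(coord / self.res))

        def integrate(self, x, y, z, hit=True):
            kx, ky, kz = self._key(x), self._key(y), self._key(z)
            voxel = self.root[kx][ky].setdefault(kz, {"hits": 0, "misses": 0})
            voxel["hits" if hit else "misses"] += 1

        def occupancy_grid_2d(self, min_hits=1):
            # Collapse the z level: a cell is occupied if any voxel in its
            # column has enough hits.
            return {(kx, ky)
                    for kx, ys in self.root.items()
                    for ky, zs in ys.items()
                    if any(v["hits"] >= min_hits for v in zs.values())}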
Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots
Reliable, real-time 3D reconstruction and localization is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots, an emerging minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications, which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors range from 1.58 to 2.17 cm.
Comment: submitted to IROS 2017
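The abstract does not detail how the magnetic and visual estimates are combined; as a generic illustration of one standard option, the sketch below fuses two position estimates by their covariances (an information-filter step). The function name and the assumption of Gaussian, independent estimates are hypothetical, not the paper's pipeline.

    import numpy as np

    def fuse_positions(p_mag, cov_mag, p_vis, cov_vis):
        # Covariance-weighted fusion of a magnetic position estimate with a
        # visual one: each input is a 3-vector with a 3x3 covariance, and the
        # result is their information-filter combination.
        info_mag = np.linalg.inv(cov_mag)
        info_vis = np.linalg.inv(cov_vis)
        cov_fused = np.linalg.inv(info_mag + info_vis)
        p_fused = cov_fused @ (info_mag @ p_mag + info_vis @ p_vis)
        return p_fused, cov_fused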
C-blox: A Scalable and Consistent TSDF-based Dense Mapping Approach
In many applications, maintaining a consistent dense map of the environment
is key to enabling robotic platforms to perform higher level decision making.
Several works have addressed the challenge of creating precise dense 3D maps
from visual sensors providing depth information. However, during operation over
longer missions, reconstructions can easily become inconsistent due to
accumulated camera tracking error and delayed loop closure. Without explicitly
addressing the problem of map consistency, recovery from such distortions tends
to be difficult. We present a novel system for dense 3D mapping which addresses
the challenge of building consistent maps while dealing with scalability.
Central to our approach is the representation of the environment as a
collection of overlapping TSDF subvolumes. These subvolumes are localized
through feature-based camera tracking and bundle adjustment. Our main contribution is a pipeline for identifying stable regions in the map and fusing the contributing subvolumes. This approach allows us to reduce map growth while still maintaining consistency. We demonstrate the proposed system on a publicly available dataset and simulation engine, showing the efficacy of the proposed approach for building consistent and scalable maps. Finally, we demonstrate our approach running in real time on board a lightweight MAV.
Comment: 8 pages, 5 figures, conference
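A minimal sketch of the subvolume idea, assuming fixed-size TSDF blocks with per-voxel weights posed by a 4x4 transform; the weighted-average merge mirrors what collapsing stable overlapping subvolumes could look like. The class layout, sizes, and the assumption that alignment is already resolved are all illustrative.

    import numpy as np

    class TsdfSubvolume:
        # Toy fixed-size TSDF block posed in the global frame; field names
        # and sizes are illustrative stand-ins for C-blox subvolumes.
        def __init__(self, pose, side=32, voxel=0.05, trunc=0.15):
            self.pose = pose                 # 4x4 world-from-subvolume
            self.d = np.zeros((side,) * 3)   # truncated signed distances
            self.w = np.zeros((side,) * 3)   # integration weights
            self.voxel, self.trunc = voxel, trunc

    def fuse_aligned(dst, src):
        # Weighted-average fusion of two already-aligned subvolumes, as when
        # stable overlapping regions are collapsed into one block.
        w_sum = dst.w + src.w
        nz = w_sum > 0
        dst.d[nz] = (dst.w[nz] * dst.d[nz] + src.w[nz] * src.d[nz]) / w_sum[nz]
        dst.w = w_sum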
OctNetFusion: Learning Depth Fusion from Data
In this paper, we present a learning based approach to depth fusion, i.e.,
dense 3D reconstruction from multiple depth images. The most common approach to
depth fusion is based on averaging truncated signed distance functions, which
was originally proposed by Curless and Levoy in 1996. While this method is
simple and provides great results, it is not able to reconstruct (partially)
occluded surfaces and requires a large number of frames to filter out sensor noise
and outliers. Motivated by the availability of large 3D model repositories and
recent advances in deep learning, we present a novel 3D CNN architecture that
learns to predict an implicit surface representation from the input depth maps.
Our learning-based method significantly outperforms the traditional volumetric
fusion approach in terms of noise reduction and outlier suppression. By
learning the structure of real world 3D objects and scenes, our approach is
further able to reconstruct occluded regions and to fill in gaps in the
reconstruction. We demonstrate that our learning-based approach outperforms both vanilla TSDF fusion and TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
Comment: 3DV 2017, https://github.com/griegler/octnetfusion
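For reference, the "vanilla TSDF fusion" baseline mentioned above is the Curless-Levoy running weighted average; a minimal per-voxel version is sketched below. The grid shapes and the weight cap are assumptions, but the update rule itself is the standard one.

    import numpy as np

    def tsdf_update(D, W, d_obs, w_obs, w_max=100.0):
        # One Curless-Levoy style update: a per-voxel running weighted
        # average of truncated signed distances. D, W are voxel grids;
        # d_obs holds the truncated signed distance observed for each voxel
        # in the current frame, with validity encoded by w_obs > 0.
        w_sum = W + w_obs
        valid = w_sum > 0
        denom = np.where(valid, w_sum, 1.0)  # avoid division by zero
        D = np.where(valid, (W * D + w_obs * d_obs) / denom, D)
        W = np.minimum(w_sum, w_max)         # cap weights to stay adaptive
        return D, W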
Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging
A variety of techniques such as light field, structured illumination, and
time-of-flight (TOF) are commonly used for depth acquisition in consumer
imaging, robotics and many other applications. Unfortunately, each technique
suffers from its individual limitations preventing robust depth sensing. In
this paper, we explore the strengths and weaknesses of combining light field
and time-of-flight imaging, particularly the feasibility of an on-chip
implementation as a single hybrid depth sensor. We refer to this combination as
depth field imaging. Depth fields combine light field advantages such as
synthetic aperture refocusing with TOF imaging advantages such as high depth
resolution and coded signal processing to resolve multipath interference. We
show applications including synthesizing virtual apertures for TOF imaging,
improved depth mapping through partial and scattering occluders, and single
frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding,
depth fields can improve depth sensing in the wild and generate new insights
into the dimensions of light's plenoptic function.
Comment: 9 pages, 8 figures, Accepted to 3DV 2015
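As an illustration of the synthetic aperture refocusing that depth fields inherit from light fields, the sketch below shift-and-adds a stack of sub-aperture images toward a chosen focal plane; the same operation would apply to TOF amplitude or depth channels. The function name, the slope parameterization, and the integer-shift simplification are assumptions.

    import numpy as np

    def refocus(subaperture_stack, uv_coords, slope):
        # Shift-and-add refocusing: each (u, v) view is shifted in
        # proportion to its angular offset, then all views are averaged,
        # bringing the plane selected by `slope` into focus.
        acc = np.zeros_like(subaperture_stack[0], dtype=np.float64)
        for img, (u, v) in zip(subaperture_stack, uv_coords):
            dy, dx = int(round(slope * v)), int(round(slope * u))
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return acc / len(subaperture_stack)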
Cross-calibration of Time-of-flight and Colour Cameras
Time-of-flight cameras provide depth information, which is complementary to
the photometric appearance of the scene in ordinary images. It is desirable to
merge the depth and colour information, in order to obtain a coherent scene
representation. However, the individual cameras will have different viewpoints,
resolutions and fields of view, which means that they must be mutually
calibrated. This paper presents a geometric framework for this multi-view and
multi-modal calibration problem. It is shown that three-dimensional projective
transformations can be used to align depth and parallax-based representations
of the scene, with or without Euclidean reconstruction. A new evaluation
procedure is also developed; this allows the reprojection error to be
decomposed into calibration and sensor-dependent components. The complete
approach is demonstrated on a network of three time-of-flight and six colour
cameras. The applications of such a system, to a range of automatic
scene-interpretation problems, are discussed.
Comment: 18 pages, 12 figures, 3 tables
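A minimal sketch of the two ingredients described above: applying a 4x4 three-dimensional projective transformation to align one reconstruction with another, and measuring reprojection error in a colour camera. Estimating H itself and decomposing the error into calibration and sensor-dependent parts are beyond this sketch; the function names are hypothetical.

    import numpy as np

    def apply_projective_3d(H, points):
        # Map Nx3 points through a 4x4 3D projective transformation H,
        # e.g. to align a depth-based with a parallax-based reconstruction.
        pts_h = np.hstack([points, np.ones((len(points), 1))])
        mapped = pts_h @ H.T
        return mapped[:, :3] / mapped[:, 3:4]

    def reprojection_error(K, R, t, points_3d, pixels):
        # Mean reprojection error of aligned 3D points in a colour camera
        # with intrinsics K and pose (R, t).
        cam = points_3d @ R.T + t
        proj = cam @ K.T
        proj = proj[:, :2] / proj[:, 2:3]
        return float(np.linalg.norm(proj - pixels, axis=1).mean())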