SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion
Active depth cameras suffer from several limitations that cause incomplete and
noisy depth maps, which may consequently degrade the performance of RGB-D
odometry. To address this issue, this paper presents a visual odometry method
based on point and line features that leverages both measurements from a depth
sensor and depth estimates from camera motion. Depth estimates are generated
continuously by a probabilistic depth estimation framework for both types of
features to compensate for the lack of depth measurements and inaccurate
feature-depth associations. The framework explicitly models the uncertainty of
triangulating depth from both point and line observations to validate and
obtain precise estimates. Furthermore, depth measurements are exploited by
propagating them through a depth map registration module and using a
frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D
reprojection errors independently. Results on RGB-D sequences captured in
large indoor and outdoor scenes, where depth sensor limitations are critical,
show that the combination of depth measurements and estimates through our
approach is able to overcome the absence and inaccuracy of depth measurements.
Comment: IROS 201
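The distinction between the two reprojection-error directions can be sketched with a pinhole camera model; the intrinsics, motion, and point below are illustrative values, not taken from the paper:

```python
import numpy as np

def project(K, X):
    """Project a 3D camera-frame point X onto the image plane (pinhole model)."""
    x = K @ X
    return x[:2] / x[2]

def backproject(K, u, depth):
    """Lift a pixel u with a depth measurement back to a 3D camera-frame point."""
    return depth * (np.linalg.inv(K) @ np.array([u[0], u[1], 1.0]))

# Hypothetical intrinsics and a point seen from two frames.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
X = np.array([0.5, -0.2, 2.0])            # 3D point in frame A
R, t = np.eye(3), np.array([0.1, 0, 0])   # assumed motion A -> B
X_b = R @ X + t                           # the same point in frame B
u_b = project(K, X_b)                     # its observed pixel in frame B

# 3D-to-2D error: project A's 3D point through the motion, compare in pixels.
err_3d_to_2d = np.linalg.norm(project(K, R @ X + t) - u_b)
# 2D-to-3D error: back-project B's pixel with its measured depth, compare in 3D.
err_2d_to_3d = np.linalg.norm(backproject(K, u_b, X_b[2]) - X_b)
print(err_3d_to_2d, err_2d_to_3d)  # both ~0 for a noise-free point and true motion
```

Treating the two directions separately lets features with reliable depth constrain the motion in 3D while features lacking depth still contribute in the image plane.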
Exploiting Points and Lines in Regression Forests for RGB-D Camera Relocalization
Camera relocalization plays a vital role in many robotics and computer vision
tasks, such as global localization, recovery from tracking failure and loop
closure detection. Recent random-forest-based methods exploit randomly sampled
pixel-comparison features to predict 3D world locations for 2D image locations
to guide camera pose optimization. However, these image features are only
sampled randomly in the images, without considering spatial structure or
geometric information, leading to large errors or outright failures in poorly
textured areas or under motion blur. Line segment features are
more robust in these environments. In this work, we propose to jointly exploit
points and lines within the framework of uncertainty driven regression forests.
The proposed approach is thoroughly evaluated on three publicly available
datasets against several strong state-of-the-art baselines in terms of several
different error metrics. Experimental results demonstrate the efficacy of our
method, with performance superior or on par with the state of the art.
Comment: published as a conference paper at the 2018 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS)
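A minimal sketch of the kind of random pixel-comparison feature such forests split on, using a depth-difference test at offset pixels; the offsets, threshold, and synthetic depth image here are illustrative, not the paper's actual feature design:

```python
import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 4.0, size=(480, 640))  # synthetic depth image (meters)

def pixel_comparison_feature(depth, p, offset1, offset2):
    """Depth difference between two pixels offset from p -- a common split
    feature in relocalization forests (offsets clamped to the image)."""
    h, w = depth.shape
    q1 = np.clip(np.add(p, offset1), (0, 0), (h - 1, w - 1))
    q2 = np.clip(np.add(p, offset2), (0, 0), (h - 1, w - 1))
    return depth[tuple(q1)] - depth[tuple(q2)]

f = pixel_comparison_feature(depth, (240, 320), (5, -3), (-7, 10))
goes_left = f < 0.1  # each internal tree node thresholds the feature to route samples
```

Because the offsets are drawn at random, such features ignore scene geometry entirely, which is exactly the weakness the abstract's point-and-line extension targets.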
Keyframe-based monocular SLAM: design, survey, and future directions
Extensive research in the field of monocular SLAM over the past fifteen years
has yielded workable systems that have found their way into various
applications in robotics and augmented reality. Although filter-based monocular
SLAM systems were once common, the more efficient keyframe-based solutions are
becoming the de facto methodology for building a monocular SLAM system. The
objective of this paper is threefold: first, the paper serves as a guideline
for people seeking to design their own monocular SLAM according to specific
environmental constraints. Second, it presents a survey that covers the various
keyframe-based monocular SLAM systems in the literature, detailing the
components of their implementation and critically assessing the specific
strategies adopted in each proposed solution. Third, the paper provides insight
into the direction of future research in this field, to address the major
limitations still facing monocular SLAM, namely illumination changes,
initialization, highly dynamic motion, poorly textured scenes, repetitive
textures, map maintenance, and failure recovery.
Configurable Input Devices for 3D Interaction using Optical Tracking
Three-dimensional interaction with virtual objects is one of the aspects that needs to be addressed
in order to increase the usability and usefulness of virtual reality. Human
beings have difficulty understanding 3D spatial relationships and manipulating
3D user interfaces, which require the control of multiple degrees of freedom
simultaneously. Conventional interaction paradigms known from the desktop
computer, such as interaction devices like the mouse and keyboard, may be
insufficient or even inappropriate for 3D spatial interaction tasks.
The aim of the research in this thesis is to develop the technology required to improve 3D
user interaction. This can be accomplished by allowing interaction devices to be constructed
such that their use is apparent from their structure, and by enabling efficient development of
new input devices for 3D interaction.
The driving vision in this thesis is that for effective and natural direct 3D interaction the
structure of an interaction device should be specifically tuned to the interaction task. Two
aspects play an important role in this vision. First, interaction devices should be structured
such that interaction techniques are as direct and transparent as possible. Interaction techniques
define the mapping between interaction task parameters and the degrees of freedom of
interaction devices. Second, the underlying technology should enable developers to rapidly
construct and evaluate new interaction devices.
The thesis is organized as follows. In Chapter 2, a review of the optical tracking field is
given. The tracking pipeline is discussed, existing methods are reviewed, and improvement
opportunities are identified.
In Chapters 3 and 4 the focus is on the development of optical tracking techniques of rigid
objects. The goal of the tracking method presented in Chapter 3 is to reduce the occlusion
problem. The method exploits projection invariant properties of line pencil markers, and the
fact that line features only need to be partially visible.
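One classic projection-invariant property that such line markers can exploit is the cross ratio of four collinear points (equivalently, of four lines in a pencil), which any projective transformation preserves. Whether this is the exact invariant used in the chapter is an assumption; the points and homography below are arbitrary illustrations:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given by 1D parameters;
    invariant under any projective transformation of the line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective_map(x, m):
    """A 1D homography x -> (m00*x + m01) / (m10*x + m11)."""
    return (m[0, 0] * x + m[0, 1]) / (m[1, 0] * x + m[1, 1])

pts = np.array([0.0, 1.0, 2.0, 4.0])
m = np.array([[2.0, 1.0], [0.5, 3.0]])  # arbitrary invertible map
mapped = projective_map(pts, m)

cr_before = cross_ratio(*pts)
cr_after = cross_ratio(*mapped)
print(cr_before, cr_after)  # equal up to floating point
```

Because the invariant survives the camera projection, a marker can be identified from its cross ratio even when other marker points are occluded, which is what makes partial visibility of line features workable.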
In Chapter 4, the aim is to develop a tracking system that supports devices of arbitrary
shapes, and allows for rapid development of new interaction devices. The method is based on
subgraph isomorphism to identify point clouds. To support the development of new devices
in the virtual environment an automatic model estimation method is used.
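The idea of recognizing a device's marker constellation inside a larger tracked point cloud can be sketched with a brute-force pairwise-distance match, a toy stand-in for the subgraph-isomorphism search; the model, clutter points, and tolerance below are made up:

```python
import itertools
import numpy as np

def distance_signature(points):
    """Sorted pairwise distances: a rotation/translation-invariant signature."""
    d = [np.linalg.norm(p - q) for p, q in itertools.combinations(points, 2)]
    return np.sort(d)

def find_model(cloud, model, tol=1e-6):
    """Brute-force search for a subset of the cloud whose pairwise-distance
    signature matches the model's (a toy stand-in for subgraph isomorphism)."""
    sig = distance_signature(model)
    for idx in itertools.combinations(range(len(cloud)), len(model)):
        if np.allclose(distance_signature(cloud[list(idx)]), sig, atol=tol):
            return idx
    return None

model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 2, 0]])
# Rigidly move the model and hide it among clutter points.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
cloud = np.vstack([np.array([[5.0, 5, 5], [-3, 2, 1]]),   # clutter
                   model @ R.T + np.array([1.0, 2, 3])])  # moved model
print(find_model(cloud, model))  # indices of the model points -> (2, 3, 4)
```

Real trackers replace the exhaustive subset search with graph matching over an edge set of plausible distances, which is what keeps identification tractable for larger clouds.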
Chapter 5 provides an analysis of three optical tracking systems based on different principles.
The first system is based on an optimization procedure that matches the 3D device
model points to the 2D data points that are detected in the camera images. The other systems
are the tracking methods as discussed in Chapters 3 and 4.
In Chapter 6 an analysis of various filtering and prediction methods is given. These
techniques can be used to make the tracking system more robust against noise, and to reduce
the latency problem.
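A standard example of such a filter is a constant-velocity Kalman filter that smooths noisy tracker samples and then predicts the state forward by the system latency; the noise levels, latency, and 1D target motion below are illustrative, not values from the chapter:

```python
import numpy as np

dt, latency = 0.01, 0.03
F = np.array([[1, dt], [0, 1]])  # state transition (position, velocity)
H = np.array([[1.0, 0.0]])       # only position is measured
Q = 1e-4 * np.eye(2)             # process noise covariance
R = np.array([[1e-2]])           # measurement noise covariance

x = np.zeros(2)                  # state estimate
P = np.eye(2)                    # estimate covariance

rng = np.random.default_rng(1)
for k in range(200):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a noisy position measurement of a target moving at 1 unit/s.
    z = k * dt * 1.0 + rng.normal(0, 0.1)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

# Predict the state ahead by the latency to compensate display lag.
x_ahead = np.array([[1, latency], [0, 1]]) @ x
print(x_ahead)
```

Predicting ahead trades noise robustness for responsiveness: the further ahead the filter extrapolates, the more any velocity error is amplified, which is why latency and noise have to be analyzed together.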
Chapter 7 focuses on optical tracking of composite input devices, i.e., input devices
that consist of multiple rigid parts that can have combinations of rotational and translational
degrees of freedom with respect to each other. Techniques are developed to automatically
generate a 3D model of a segmented input device from motion data, and to use this model to
track the device.
In Chapter 8, the presented techniques are combined to create a configurable input device,
which supports direct and natural co-located interaction. In this chapter, the goal of the thesis
is realized. The device can be configured such that its structure reflects the parameters of the
interaction task.
In Chapter 9, the configurable interaction device is used to study the influence of spatial
device structure with respect to the interaction task at hand. The driving vision of this thesis,
that the spatial structure of an interaction device should match that of the task, is analyzed
and evaluated by performing a user study.
The concepts and techniques developed in this thesis allow researchers to rapidly construct
and apply new interaction devices for 3D interaction in virtual environments. Devices
can be constructed such that their spatial structure reflects the 3D parameters of the interaction
task at hand. The interaction technique then becomes a transparent one-to-one mapping
that directly mediates the functions of the device to the task. The developed configurable interaction
devices can be used to construct intuitive spatial interfaces, and allow researchers to
rapidly evaluate new device configurations and to efficiently perform studies on the relation
between the spatial structure of devices and the interaction task.
Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously
navigate between target locations quickly and reliably while avoiding obstacles
in its path, with little to no a priori knowledge of the operating
environment. This challenge is addressed in the present paper. We describe the
system design and software architecture of our proposed solution, and showcase
how all the distinct components can be integrated to enable smooth robot
operation. We provide critical insight on hardware and software component
selection and development, and present results from extensive experimental
testing in real-world warehouse environments. Experimental testing reveals that
our proposed solution can deliver fast and robust aerial robot autonomous
navigation in cluttered, GPS-denied environments.
Comment: pre-peer-reviewed version of the article accepted in the Journal of
Field Robotics
Initial steps towards automatic segmentation of the wire frame of stent grafts in CT data
For the purpose of obtaining a geometrical model of the wire frame of stent
grafts, we propose three tracking methods to segment the stent's wire and
compare them in an experiment. A 2D test image was created by projecting a 3D
volume containing a stent. The image was modified to connect the parts of the
stent's frame, creating a single path. Ten versions of this image were obtained
by adding different noise realizations. Each algorithm was started at the
beginning of each of the ten images, after which the traveled paths were
compared to the known correct path to determine performance. Additionally, the
algorithms were applied to 3D clinical data and visually inspected. The method
based on the minimum cost path algorithm performed excellently in the
experiment and showed good results on the 3D data. Future research will focus
on establishing a geometrical model by determining the corner points and
crossings from the results of this method.
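The minimum-cost-path idea can be sketched with Dijkstra's algorithm on a small synthetic cost image where the bright wire maps to low cost; the image and cost values are illustrative, not the paper's actual cost function:

```python
import heapq
import numpy as np

def min_cost_path(cost, start, goal):
    """Dijkstra's algorithm on a 2D cost image (4-connected); returns the
    accumulated cost of the cheapest pixel path from start to goal."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist[r, c]:
            continue  # stale queue entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    return np.inf

# Bright wire on a dark background: low cost on the wire, high elsewhere.
img = np.full((5, 5), 10.0)
img[2, :] = 1.0  # horizontal "wire"
total = min_cost_path(img, (2, 0), (2, 4))
print(total)  # 5.0: five wire pixels at cost 1 each
```

Because the path cost is globally minimized, the tracker cannot be lured off the wire by a single noisy pixel, which is consistent with the method's robust showing in the noise experiment.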
Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty
This work proposes a robust visual odometry method for structured
environments that combines point features with line and plane segments
extracted from an RGB-D camera. Noisy depth maps are processed by a
probabilistic depth fusion framework based on Mixtures of Gaussians to denoise
and derive the depth uncertainty, which is then propagated throughout the
visual odometry pipeline. Probabilistic 3D plane and line fitting solutions are
used to model the uncertainties of the feature parameters and pose is estimated
by combining the three types of primitives based on their uncertainties.
Performance evaluation on RGB-D sequences collected in this work and on two
public RGB-D datasets (TUM and ICL-NUIM) shows the benefit of using the
proposed depth fusion framework and of combining the three feature types,
particularly in scenes with low-textured surfaces, dynamic objects, and missing
depth measurements.
Comment: major update: more results, depth filter released as open source, 34
pages
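The flavor of uncertainty-weighted depth fusion can be sketched with a single-Gaussian, inverse-variance simplification; the paper itself uses Mixtures of Gaussians, and the readings below are illustrative:

```python
import numpy as np

def fuse_depth(measurements, variances):
    """Inverse-variance-weighted fusion of several depth hypotheses for one
    pixel -- a single-Gaussian simplification of MoG depth fusion."""
    w = 1.0 / np.asarray(variances)
    fused = np.sum(w * np.asarray(measurements)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)  # fused estimate is more certain than any input
    return fused, fused_var

# Three noisy depth readings of the same surface point (values illustrative).
z = np.array([2.00, 2.10, 1.95])
var = np.array([0.01, 0.04, 0.02])
d, v = fuse_depth(z, var)
print(d, v)  # fused depth pulled toward the most certain reading; variance shrinks
```

The fused variance is what gets propagated downstream: a pose solver can then weight each point, line, or plane primitive by how trustworthy its depth actually is.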