Exploiting Points and Lines in Regression Forests for RGB-D Camera Relocalization
Camera relocalization plays a vital role in many robotics and computer vision
tasks, such as global localization, recovery from tracking failure and loop
closure detection. Recent random forests based methods exploit randomly sampled
pixel comparison features to predict 3D world locations for 2D image locations
to guide the camera pose optimization. However, these features are sampled
randomly across the image, without regard to spatial structure or geometric
information, leading to large errors or outright failures in poorly textured
areas or under motion blur. Line segment features are
more robust in these environments. In this work, we propose to jointly exploit
points and lines within the framework of uncertainty driven regression forests.
The proposed approach is thoroughly evaluated on three publicly available
datasets against several strong state-of-the-art baselines in terms of several
different error metrics. Experimental results demonstrate the efficacy of our
method, showing performance superior or on par with the state of the art.
Comment: published as a conference paper at the 2018 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS)
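The scene-coordinate regression step described above can be sketched as follows. This is a minimal illustration on synthetic data, using scikit-learn's RandomForestRegressor as a stand-in for the paper's uncertainty-driven forest; the line-segment features and the downstream pose optimization are not modeled.

```python
# Sketch of scene-coordinate regression: a random forest maps per-pixel
# appearance features to 3D world coordinates, whose predictions would
# then drive a RANSAC/PnP camera pose optimization. All data here is
# synthetic; feature design follows the pixel-comparison idea only loosely.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training set: image features -> 3D world points.
n_train, n_feat = 500, 16
X_train = rng.normal(size=(n_train, n_feat))
W = rng.normal(size=(n_feat, 3))
y_train = X_train @ W + 0.01 * rng.normal(size=(n_train, 3))

forest = RandomForestRegressor(n_estimators=20, random_state=0)
forest.fit(X_train, y_train)

# Predict 3D world coordinates for new image locations; the spread
# across trees gives a crude per-prediction uncertainty.
X_test = rng.normal(size=(10, n_feat))
per_tree = np.stack([t.predict(X_test) for t in forest.estimators_])
pred = per_tree.mean(axis=0)          # (10, 3) predicted world points
uncert = per_tree.std(axis=0).mean()  # scalar summary of tree disagreement
```

The per-tree spread is one simple proxy for the predictive uncertainty that the paper's framework exploits when weighting correspondences.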
Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy
Objective: Surgical data science is evolving into a research field that aims
to observe everything occurring within and around the treatment process to
provide situation-aware data-driven assistance. In the context of endoscopic
video analysis, the accurate classification of organs in the field of view of
the camera presents a technical challenge. Herein, we propose a new approach to
anatomical structure classification and image tagging that features an
intrinsic measure of confidence to estimate its own performance with high
reliability and which can be applied to both RGB and multispectral imaging (MI)
data. Methods: Organ recognition is performed using a superpixel classification
strategy based on textural and reflectance information. Classification
confidence is estimated by analyzing the dispersion of class probabilities.
Assessment of the proposed technology is performed through a comprehensive in
vivo study with seven pigs. Results: When applied to image tagging, mean
accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB)
and 96% (MI) with the confidence measure. Conclusion: Results showed that the
confidence measure had a significant influence on the classification accuracy,
and MI data are better suited for anatomical structure labeling than RGB data.
Significance: This work significantly enhances the state of the art in
automatic labeling of endoscopic videos by introducing the use of the
confidence metric, and by being the first study to use MI data for in vivo
laparoscopic tissue classification. The data of our experiments will be
released as the first in vivo MI dataset upon publication of this paper.
Comment: 7 pages, 6 images, 2 tables
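The dispersion-based confidence measure described above can be sketched with a normalized-entropy score: a superpixel whose class-probability vector concentrates mass on one organ class is treated as confident, while a near-uniform vector is rejected. The entropy formulation and any threshold are illustrative assumptions, not the paper's exact definition.

```python
# Sketch of a confidence measure based on the dispersion of class
# probabilities: low entropy (peaked distribution) -> high confidence,
# high entropy (flat distribution) -> low confidence.
import numpy as np

def confidence(probs):
    """Return 1 minus the normalized Shannon entropy of a probability vector."""
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]                       # skip zero entries (0 * log 0 = 0)
    entropy = -(nz * np.log(nz)).sum()
    return 1.0 - entropy / np.log(len(p))

peaked = confidence([0.9, 0.05, 0.05])  # mass concentrated on one class
flat = confidence([1/3, 1/3, 1/3])      # uniform: maximally uncertain
```

Tagging decisions can then keep only predictions whose confidence exceeds a chosen threshold, which is how a measure like this raises effective accuracy at the cost of coverage.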
Cross-calibration of Time-of-flight and Colour Cameras
Time-of-flight cameras provide depth information, which is complementary to
the photometric appearance of the scene in ordinary images. It is desirable to
merge the depth and colour information, in order to obtain a coherent scene
representation. However, the individual cameras will have different viewpoints,
resolutions and fields of view, which means that they must be mutually
calibrated. This paper presents a geometric framework for this multi-view and
multi-modal calibration problem. It is shown that three-dimensional projective
transformations can be used to align depth and parallax-based representations
of the scene, with or without Euclidean reconstruction. A new evaluation
procedure is also developed; this allows the reprojection error to be
decomposed into calibration and sensor-dependent components. The complete
approach is demonstrated on a network of three time-of-flight and six colour
cameras. The applications of such a system, to a range of automatic
scene-interpretation problems, are discussed. Comment: 18 pages, 12 figures,
3 tables
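The core alignment step can be sketched as applying a 4x4 projective transformation to homogeneous 3D points: lift, transform, de-homogenize. The transform below is a synthetic Euclidean motion (a special case of a projective transform), not a calibrated one from the paper's framework.

```python
# Sketch of aligning two 3D representations with a 4x4 projective
# transformation, as in depth/parallax alignment: points are lifted to
# homogeneous coordinates, transformed, and divided by the last component.
import numpy as np

def apply_projective(H, pts):
    """Apply a 4x4 projective transform H to an (N, 3) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous lift
    out = pts_h @ H.T
    return out[:, :3] / out[:, 3:4]                   # de-homogenize

# Example transform: a rotation about z plus a translation, embedded in
# a 4x4 matrix (a Euclidean special case of the general projective form).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.5, -0.2, 1.0])
H = np.eye(4)
H[:3, :3], H[:3, 3] = R, t

pts = np.array([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]])
aligned = apply_projective(H, pts)
```

When the bottom row of H is not (0, 0, 0, 1), the same code handles the genuinely projective case the paper uses to relate depth and parallax-based representations without requiring a Euclidean reconstruction.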