Multi-Spectral Visual Odometry without Explicit Stereo Matching
Multi-spectral sensors consisting of a standard (visible-light) camera and a
long-wave infrared camera can simultaneously provide both visible and thermal
images. Since thermal images are independent of environmental illumination,
they can help overcome certain limitations of standard cameras under
complicated illumination conditions. However, because the two types of cameras
draw on different information sources, their images usually share very low
texture similarity. Hence, traditional texture-based feature matching methods
cannot be directly applied to obtain stereo correspondences. To tackle
this problem, a multi-spectral visual odometry method without explicit stereo
matching is proposed in this paper. Bundle adjustment of multi-view stereo is
performed on the visible and the thermal images using direct image alignment.
Scale drift can be avoided by additional temporal observations of map points
with the fixed-baseline stereo. Experimental results indicate that the proposed
method can provide accurate visual odometry results with recovered metric
scale. Moreover, the proposed method can also provide a metric 3D
reconstruction in semi-dense density with multi-spectral information, which is
not available from existing multi-spectral methods.
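The first abstract's key idea, direct image alignment, can be illustrated with a toy photometric-error evaluation: rather than matching features across the visible and thermal images, each camera's residual is computed within its own modality by warping pixels under a candidate relative pose and comparing intensities. The sketch below, with illustrative names not taken from the paper, shows the basic warp-and-compare loop for one image pair.

```python
import numpy as np

def photometric_error(I_ref, I_tgt, depth, K, R, t):
    """Mean squared intensity difference between a reference image and a
    target image, for reference pixels warped by the relative pose (R, t).
    Direct alignment minimizes this error over the pose parameters."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    h, w = I_ref.shape
    err, count = 0.0, 0
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:  # skip pixels without a depth estimate
                continue
            # back-project to 3D, transform into the target frame, re-project
            p = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
            q = R @ p + t
            if q[2] <= 0:
                continue
            u2 = fx * q[0] / q[2] + cx
            v2 = fy * q[1] / q[2] + cy
            if 0 <= u2 < w - 1 and 0 <= v2 < h - 1:
                diff = I_ref[v, u] - I_tgt[int(v2), int(u2)]
                err += diff * diff
                count += 1
    return err / max(count, 1)
```

With the correct pose the warped reference pixels land on matching target pixels and the error is near zero; a wrong pose raises it, which is what a bundle-adjustment optimizer exploits.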
A Dataset for Evaluating Multi-spectral Motion Estimation Methods
Visible images have been widely used for indoor motion estimation. Thermal
images, in contrast, are more challenging to use for motion estimation, since
they typically have lower resolution, less texture, and more noise. In this
paper, a novel dataset for evaluating the performance of multi-spectral motion
estimation systems is presented. The dataset includes both multi-spectral and
dense depth images with accurate ground-truth camera poses provided by a motion
capture system. All the sequences are recorded from a handheld multi-spectral
device, which consists of a standard visible-light camera, a long-wave infrared
camera, and a depth camera. The multi-spectral images, including both color and
thermal images at full sensor resolution (640 × 480), are obtained from the
hardware-synchronized standard and long-wave infrared cameras at 32 Hz. The
depth images are captured by a Microsoft Kinect2 and can benefit learning-based
cross-modality stereo matching. In addition to the sequences with
bright illumination, the dataset also contains scenes with dim or varying
illumination. The full dataset, including both raw data and calibration data
with detailed specifications of the data format, is publicly available.
TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint
To achieve robust motion estimation in visually degraded environments,
thermal odometry has attracted growing interest in the robotics community.
However, most thermal odometry methods rely purely on classical feature
extractors, which struggle to establish robust correspondences in successive
frames due to sudden photometric changes and large thermal noise. To solve
this problem,
we propose ThermalPoint, a lightweight feature detection network specifically
tailored for producing keypoints on thermal images, providing notable
anti-noise improvements compared with other state-of-the-art methods. After
that, we combine ThermalPoint with a novel radiometric feature tracking method,
which directly makes use of full radiometric data and establishes reliable
correspondences between sequential frames. Finally, taking advantage of an
optimization-based visual-inertial framework, a deep feature-based
thermal-inertial odometry (TP-TIO) framework is proposed and evaluated
thoroughly in various visually degraded environments. Experiments show that our
method outperforms state-of-the-art visual and laser odometry methods in
smoke-filled environments and achieves competitive accuracy in normal
environments.
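The radiometric tracking idea in the last abstract, operating on full raw radiometric values rather than the rescaled 8-bit display image, so tracking survives sudden re-scaling of the display range, can be sketched as a plain patch search. This is a generic sum-of-squared-differences tracker under that assumption, not the paper's actual method; all names are illustrative.

```python
import numpy as np

def track_patch(frame_a, frame_b, center, patch=5, search=4):
    """Find the integer (dy, dx) displacement of a patch from frame_a to
    frame_b by minimizing the sum of squared radiometric differences over a
    small search window. Frames are raw radiometric arrays (e.g. 14-bit)."""
    r = patch // 2
    cy, cx = center
    ref = frame_a[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(np.float64)
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            cand = frame_b[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
            if cand.shape != ref.shape:
                continue  # candidate patch would leave the image
            cost = np.sum((cand - ref) ** 2)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

Because the comparison is done on the raw values, a global rescaling of the 8-bit visualization (a common event with thermal cameras' automatic gain) does not change the cost surface.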