SiLVR: scalable Lidar-visual reconstruction with neural radiance fields for robotic inspection
We present a neural-field-based large-scale reconstruction system that fuses lidar and vision data to generate high-quality reconstructions that are geometrically accurate and capture photo-realistic textures. This system adapts the state-of-the-art neural radiance field (NeRF) representation to also incorporate lidar data, which adds strong geometric constraints on depth and surface normals. We exploit the trajectory from a real-time lidar SLAM system to bootstrap a Structure-from-Motion (SfM) procedure, both to significantly reduce the computation time and to provide metric scale, which is crucial for the lidar depth loss. We use submapping to scale the system to large environments captured over long trajectories. We demonstrate the reconstruction system with data from a multi-camera lidar sensor suite carried by a legged robot, hand-held while scanning building scenes for 600 metres, and onboard an aerial robot surveying a multi-storey mock disaster site building. Website: https://ori-drs.github.io/projects/silvr
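The combined objective described above, a photometric NeRF term augmented with lidar-derived depth and surface-normal constraints, can be sketched as a weighted sum; the function name, weights, and exact loss forms below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def silvr_style_loss(rgb_pred, rgb_gt, depth_pred, depth_lidar,
                     normal_pred, normal_lidar,
                     w_depth=0.1, w_normal=0.05):
    """Illustrative NeRF training loss with lidar geometric constraints.

    Per-ray photometric MSE plus an L1 depth term and a cosine normal
    term; the weights w_depth / w_normal are hypothetical.
    """
    l_photo = np.mean((rgb_pred - rgb_gt) ** 2)          # standard NeRF term
    l_depth = np.mean(np.abs(depth_pred - depth_lidar))  # lidar depth prior
    # 1 - cosine similarity between predicted and lidar-estimated normals
    cos = np.sum(normal_pred * normal_lidar, axis=-1)
    l_normal = np.mean(1.0 - cos)
    return l_photo + w_depth * l_depth + w_normal * l_normal
```

The depth and normal terms are what distinguish a lidar-constrained NeRF from a purely photometric one: rays with lidar returns are pulled toward metrically correct geometry even where texture is weak.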
Tightly Coupled 3D Lidar Inertial Odometry and Mapping
Ego-motion estimation is a fundamental requirement for most mobile robotic
applications. By sensor fusion, we can compensate the deficiencies of
stand-alone sensors and provide more reliable estimations. We introduce a
tightly coupled lidar-IMU fusion method in this paper. By jointly minimizing
the cost derived from lidar and IMU measurements, the lidar-IMU odometry (LIO)
performs well with acceptable drift over long-term experiments, even in
challenging cases where the lidar measurements are degraded. In addition, to
obtain more reliable estimates of the lidar poses, a rotation-constrained
refinement algorithm (LIO-mapping) is proposed to further align the lidar poses
with the global map. The experimental results demonstrate that the proposed
method can estimate the poses of the sensor pair at the IMU update rate with
high precision, even under fast motion or with insufficient features.
Comment: Accepted by ICRA 201
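The idea of jointly minimizing lidar and IMU costs can be illustrated in one dimension, where the joint objective has a closed-form minimizer; the function and noise parameters are assumptions for the sketch, not the paper's estimator.

```python
def fuse_lidar_imu(z_lidar, var_lidar, z_imu, var_imu):
    """Minimize J(x) = (x - z_lidar)^2 / var_lidar + (x - z_imu)^2 / var_imu.

    Setting dJ/dx = 0 gives the information-weighted average, the 1-D
    analogue of a tightly coupled lidar-IMU cost minimized at each update.
    """
    w_l, w_i = 1.0 / var_lidar, 1.0 / var_imu
    x = (w_l * z_lidar + w_i * z_imu) / (w_l + w_i)
    fused_var = 1.0 / (w_l + w_i)   # the fused estimate is more confident
    return x, fused_var
```

This is also why the method degrades gracefully: when lidar measurements become unreliable (large `var_lidar`), the minimizer falls back toward the IMU-propagated prediction instead of failing outright.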
Continuous-Time Fixed-Lag Smoothing for LiDAR-Inertial-Camera SLAM
Localization and mapping with heterogeneous multi-sensor fusion have been
prevalent in recent years. To adequately fuse multi-modal sensor measurements
received at different time instants and different frequencies, we estimate the
continuous-time trajectory by fixed-lag smoothing within a factor-graph
optimization framework. With the continuous-time formulation, we can query
poses at any time instants corresponding to the sensor measurements. To bound
the computation complexity of the continuous-time fixed-lag smoother, we
maintain temporal and keyframe sliding windows with constant size, and
probabilistically marginalize out control points of the trajectory and other
states, which allows preserving prior information for future sliding-window
optimization. Based on continuous-time fixed-lag smoothing, we design
tightly-coupled multi-modal SLAM algorithms with a variety of sensor
combinations, like the LiDAR-inertial and LiDAR-inertial-camera SLAM systems,
in which online time-offset calibration is also naturally supported. More
importantly, benefiting from the marginalization and our derived analytical
Jacobians for optimization, the proposed continuous-time SLAM systems can
achieve real-time performance despite the high complexity of the
continuous-time formulation. The proposed multi-modal SLAM systems have been
extensively evaluated on three public datasets and self-collected datasets. The
results demonstrate that the proposed continuous-time SLAM systems achieve
high-accuracy pose estimates and outperform existing state-of-the-art
methods. To benefit the research community, we will open-source our code at
https://github.com/APRIL-ZJU/clic
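Querying poses at arbitrary measurement timestamps, as described above, is commonly done by interpolating trajectory control points, e.g. with a uniform cubic B-spline; the 1-D sketch below (uniform knot spacing, scalar "poses") is an assumption about the continuous-time formulation, not the authors' code.

```python
import math

def query_spline(t, t0, dt, ctrl):
    """Evaluate a uniform cubic B-spline over control points `ctrl` at time t.

    The trajectory on segment i depends only on control points i-1..i+2,
    so any sensor timestamp inside the sliding window can be queried.
    """
    s = (t - t0) / dt
    i = int(math.floor(s))
    u = s - i
    if i - 1 < 0 or i + 2 >= len(ctrl):
        raise ValueError("timestamp outside the supported spline range")
    # cubic B-spline basis functions (they sum to 1 for every u)
    b0 = (1 - u) ** 3 / 6.0
    b1 = (3 * u**3 - 6 * u**2 + 4) / 6.0
    b2 = (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0
    b3 = u**3 / 6.0
    return b0 * ctrl[i - 1] + b1 * ctrl[i] + b2 * ctrl[i + 1] + b3 * ctrl[i + 2]
```

Because only four local control points influence each query, marginalizing old control points out of the sliding window leaves recent queries unaffected, which is what keeps the fixed-lag smoother's cost bounded.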
A Radio-Inertial Localization and Tracking System with BLE Beacons Prior Maps
© 2018 IEEE. In this paper, we develop a system for low-cost indoor localization and tracking using radio signal strength indicator, Inertial Measurement Unit (IMU), and magnetometer sensors. We develop a novel and simplified probabilistic IMU motion model as the proposal distribution of a sequential Monte-Carlo technique to track the robot trajectory. Our algorithm can globally localize and track a robot whose location is a priori unknown, given an informative prior map of Bluetooth Low Energy (BLE) beacons. We also formulate the problem as an optimization problem that serves as the back-end to the front-end algorithm described above. By simultaneously solving for the robot trajectory and the map of BLE beacons, we recover a continuous and smooth robot trajectory, corrected locations of the BLE beacons, and the time-varying IMU bias. Hardware evaluations show that the proposed closed-loop system improves localization performance; furthermore, the system becomes robust to errors in the beacon map by feeding the optimized map back to the front-end.
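One sequential Monte-Carlo step consistent with this description, an IMU-driven proposal followed by a reweighting against RSSI observations from the beacon prior map, might look like the following; the log-distance path-loss model and every parameter value here are assumptions for illustration.

```python
import math
import random

def pf_step(particles, imu_dpos, imu_sigma, beacons, rssi_obs,
            tx_power=-59.0, path_loss_n=2.0, rssi_sigma=4.0):
    """Propagate 2-D particles with a noisy IMU displacement (the proposal),
    then reweight each particle by the likelihood of the observed beacon
    RSSI values under a log-distance path-loss model.

    Returns (particles, normalized_weights).
    """
    new_particles, weights = [], []
    for (x, y) in particles:
        # IMU motion model as the proposal distribution
        x += imu_dpos[0] + random.gauss(0.0, imu_sigma)
        y += imu_dpos[1] + random.gauss(0.0, imu_sigma)
        logw = 0.0
        for (bx, by), rssi in zip(beacons, rssi_obs):
            d = max(math.hypot(x - bx, y - by), 0.1)  # avoid log10(0)
            expected = tx_power - 10.0 * path_loss_n * math.log10(d)
            logw += -0.5 * ((rssi - expected) / rssi_sigma) ** 2
        new_particles.append((x, y))
        weights.append(math.exp(logw))
    total = sum(weights) or 1.0
    return new_particles, [w / total for w in weights]
```

Particles whose positions explain the observed signal strengths accumulate weight, so the filter converges on the true pose even when the initial location is globally unknown.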
BuFF: Burst Feature Finder for Light-Constrained 3D Reconstruction
Robots operating at night using conventional vision cameras face significant
challenges in reconstruction due to noise-limited images. Previous work has
demonstrated that burst-imaging techniques can be used to partially overcome
this issue. In this paper, we develop a novel feature detector that operates
directly on image bursts and enhances vision-based reconstruction under
extremely low-light conditions. Our approach finds keypoints with well-defined
scale and apparent motion within each burst by jointly searching in a
multi-scale and multi-motion space. Because we describe these features at a
stage where the images have higher signal-to-noise ratio, the detected features
are more accurate than the state-of-the-art on conventional noisy images and
burst-merged images and exhibit high precision, recall, and matching
performance. We show improved feature performance and camera pose estimates and
demonstrate improved structure-from-motion performance using our feature
detector in challenging light-constrained scenes. Our feature finder provides a
significant step towards robots operating in low-light scenarios and
applications including night-time operations.
Comment: 7 pages, 9 figures, 2 tables; for the associated project page, see
https://roboticimaging.org/Projects/BuFF
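The search over apparent motion within a burst can be illustrated in 1-D: for each candidate per-frame shift, align the frames and sum them, then keep the hypothesis whose merged signal has the strongest response. This is a deliberate simplification of the joint multi-scale, multi-motion search; the signal model and scoring are assumptions.

```python
def best_motion(burst, candidate_shifts):
    """Align-and-merge a 1-D burst under each candidate per-frame shift
    and return (best_shift, merged_signal).

    Summing correctly aligned frames raises the signal-to-noise ratio
    before any feature is detected or described.
    """
    n = len(burst[0])
    best = None
    for s in candidate_shifts:
        merged = [0.0] * n
        for k, frame in enumerate(burst):
            for i in range(n):
                j = i + k * s          # undo a motion of s pixels per frame
                if 0 <= j < n:
                    merged[i] += frame[j]
        score = max(merged)            # response of the strongest merged peak
        if best is None or score > best[0]:
            best = (score, s, merged)
    return best[1], best[2]
```

Only the correct motion hypothesis stacks the same scene point across frames, so its merged response dominates; the wrong hypotheses smear the signal instead.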
Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression
Visual-inertial localization is a key problem in computer vision and robotics
applications such as virtual reality, self-driving cars, and aerial vehicles.
The goal is to estimate an accurate pose of an object when either the
environment or the dynamics are known. Recent methods directly regress the pose
using convolutional and spatio-temporal networks. Absolute pose regression
(APR) techniques predict the absolute camera pose from an image input in a
known scene. Odometry methods perform relative pose regression (RPR),
predicting the relative pose from known object dynamics (visual or inertial
inputs). The localization task can be improved by combining information from
both data sources in a cross-modal setup, which is challenging due to the
contradictory objectives of the two tasks. In this work, we conduct a
benchmark to evaluate deep multimodal fusion based on pose graph optimization
(PGO) and attention networks. Auxiliary and Bayesian learning are integrated
for the APR task. We show accuracy improvements for the RPR-aided APR task and
for the RPR-RPR task for aerial vehicles and hand-held devices. We conduct
experiments on the EuRoC MAV and PennCOSYVIO datasets, and record a novel
industry dataset.
Comment: Under review