Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization
Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios
Event cameras are bio-inspired vision sensors that output pixel-level
brightness changes instead of standard intensity frames. These cameras do not
suffer from motion blur and have a very high dynamic range, which enables them
to provide reliable visual information during high-speed motions or in scenes
characterized by high dynamic range. However, event cameras output little
information when the amount of motion is limited, such as when the camera is
nearly still. Conversely, standard cameras provide rich information about the
environment most of the time (in low-speed and well-lit scenarios), but they
fail severely during fast motions or in difficult lighting, such as
high-dynamic-range or low-light scenes. In this paper, we
present the first state estimation pipeline that leverages the complementary
advantages of these two sensors by fusing in a tightly-coupled manner events,
standard frames, and inertial measurements. We show on the publicly available
Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement
of 130% over event-only pipelines, and 85% over standard-frames-only
visual-inertial systems, while still being computationally tractable.
Furthermore, we use our pipeline to demonstrate what is, to the best of our
knowledge, the first autonomous quadrotor flight using an event camera for state
estimation, unlocking flight scenarios that were not reachable with traditional
visual-inertial odometry, such as low-light environments and high-dynamic range
scenes.
Comment: 8 pages, 9 figures, 2 tables
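To make "tightly-coupled" concrete: all three modalities contribute residuals to one joint nonlinear least-squares problem, rather than being filtered separately. The sketch below is a minimal toy illustration of that structure (our own construction, not the paper's pipeline; the observations, noise levels, and 2D state are invented for the example):

```python
# Toy sketch of tightly-coupled fusion: event features, frame features,
# and an IMU prediction all constrain ONE shared state through a single
# stacked nonlinear least-squares problem. Real pipelines estimate poses,
# velocities, and IMU biases over a keyframe window instead of a 2D point.
import numpy as np
from scipy.optimize import least_squares

event_obs = np.array([[1.10, 0.90], [0.95, 1.05]])  # from event-based features
image_obs = np.array([[1.00, 1.00]])                # from standard-frame features
imu_pred = np.array([1.02, 0.98])                   # from integrating the IMU

def residuals(x):
    r_event = ((event_obs - x) / 0.10).ravel()  # event terms, sigma = 0.10
    r_image = ((image_obs - x) / 0.05).ravel()  # frame terms, sigma = 0.05
    r_imu = (imu_pred - x) / 0.20               # inertial term, sigma = 0.20
    return np.concatenate([r_event, r_image, r_imu])

sol = least_squares(residuals, x0=np.zeros(2))
print(sol.x)  # fused estimate; every modality pulled on the same unknowns
```

Because every residual shares the same unknowns, a modality that is momentarily uninformative (e.g. events while the camera is nearly still) is automatically compensated by the others.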
PL-EVIO: Robust Monocular Event-based Visual Inertial Odometry with Point and Line Features
Event cameras are motion-activated sensors that capture pixel-level
illumination changes instead of intensity images at a fixed frame rate.
Compared with standard cameras, they can provide reliable visual perception
during high-speed motions and in high-dynamic-range scenarios. However, event
cameras output little information, or even only noise, when the relative
motion between the camera and the scene is limited, such as when the camera is
still, whereas standard cameras provide rich perception information in most
scenarios, especially under good lighting conditions. The two sensors are
therefore complementary. In this paper, we propose a robust, highly accurate,
and real-time optimization-based monocular event-based visual-inertial
odometry (VIO) method with event-corner features, line-based event features,
and point-based image features. The proposed method leverages point-based
features in natural scenes and line-based features in human-made scenes to
provide additional structural constraints through well-designed feature
management. Experiments on public benchmark datasets show that our method
achieves superior performance compared with state-of-the-art image-based and
event-based VIO methods. Finally, we use our method to demonstrate onboard
closed-loop autonomous quadrotor flight and large-scale outdoor experiments.
Videos of the evaluations are
presented on our project website: https://b23.tv/OE3QM6
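As an illustration of how point and line features can enter one VIO cost function, the sketch below (our own assumptions, not PL-EVIO's actual code: a pinhole model and normalized image-line coefficients) computes a standard point reprojection residual alongside a point-to-line residual for the projected endpoints of a 3D line:

```python
import numpy as np

def project(K, R, t, p_w):
    """Pinhole projection of a world point into pixel coordinates."""
    p_c = R @ p_w + t
    uv = K @ p_c
    return uv[:2] / uv[2]

def point_residual(obs_uv, K, R, t, p_w):
    """Classic reprojection error of a point feature."""
    return obs_uv - project(K, R, t, p_w)

def line_residual(obs_line, K, R, t, p0_w, p1_w):
    """Signed distances of the projected 3D line endpoints to the detected
    image line (a, b, c), with a*u + b*v + c = 0 and a^2 + b^2 = 1."""
    n, c = obs_line[:2], obs_line[2]
    e0 = project(K, R, t, p0_w)
    e1 = project(K, R, t, p1_w)
    return np.array([n @ e0 + c, n @ e1 + c])

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
# Both residuals are zero for this consistent toy geometry.
print(point_residual(np.array([320.0, 240.0]), K, R, t, np.array([0.0, 0, 2])))
print(line_residual(np.array([0.0, 1.0, -240.0]), K, R, t,
                    np.array([-0.5, 0, 2.0]), np.array([0.5, 0, 2.0])))
```

A line contributes two scalar constraints per observation, which is what makes line features valuable in human-made scenes where distinctive corner points may be scarce.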
A Comprehensive Introduction of Visual-Inertial Navigation
In this article, a tutorial introduction to visual-inertial navigation (VIN)
is presented. Visual and inertial perception are two complementary sensing
modalities, with cameras and inertial measurement units (IMUs) as the
corresponding sensors. The low cost and light weight of camera-IMU sensor
combinations make them ubiquitous in robotic navigation. Visual-inertial
navigation is a state estimation problem that estimates the ego-motion of the
sensor platform and its local environment. This paper presents
visual-inertial navigation in the classical state estimation framework, first
illustrating the estimation problem in terms of state variables and system
models, including the parameterization of the relevant quantities, the IMU
dynamic and camera measurement models, and the corresponding probabilistic
graphical model (factor graph). Second, we investigate existing model-based
estimation methodologies, which involve filter-based and optimization-based
frameworks and the related on-manifold operations. We also discuss the
calibration of the relevant parameters and the initialization of the states of
interest in optimization-based frameworks. We then discuss the evaluation and
improvement of VIN in terms of accuracy, efficiency, and robustness. Finally,
we briefly mention recent developments in learning-based methods that may
become alternatives to traditional model-based methods.
Comment: 35 pages, 10 figures
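As a concrete instance of the IMU dynamic model such a tutorial covers, the following sketch propagates orientation, velocity, and position from one bias-corrected gyro/accelerometer sample. It is our own minimal Euler integrator on SO(3); the symbols (R, v, p, b_g, b_a, g) follow the usual VIN state definition, not necessarily the article's exact equations:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def so3_exp(phi):
    """Rodrigues' formula: rotation matrix from a rotation vector."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    a = phi / theta
    A = skew(a)
    return np.eye(3) + np.sin(theta) * A + (1 - np.cos(theta)) * (A @ A)

def propagate(R, v, p, gyro, accel, b_g, b_a, dt):
    """One on-manifold IMU propagation step (world-frame states)."""
    R_next = R @ so3_exp((gyro - b_g) * dt)  # orientation update on SO(3)
    a_world = R @ (accel - b_a) + GRAVITY    # specific force to world frame
    v_next = v + a_world * dt
    p_next = p + v * dt + 0.5 * a_world * dt**2
    return R_next, v_next, p_next
```

The on-manifold exponential map here is exactly the kind of operation the article groups under "related on-manifold operations": orientation lives on SO(3), so it is updated multiplicatively rather than by vector addition.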
Continuous-Time Fixed-Lag Smoothing for LiDAR-Inertial-Camera SLAM
Localization and mapping with heterogeneous multi-sensor fusion have been
prevalent in recent years. To adequately fuse multi-modal sensor measurements
received at different time instants and different frequencies, we estimate the
continuous-time trajectory by fixed-lag smoothing within a factor-graph
optimization framework. With the continuous-time formulation, we can query
poses at the time instants corresponding to the sensor measurements. To bound
the computation complexity of the continuous-time fixed-lag smoother, we
maintain temporal and keyframe sliding windows with constant size, and
probabilistically marginalize out control points of the trajectory and other
states, which allows preserving prior information for future sliding-window
optimization. Based on continuous-time fixed-lag smoothing, we design
tightly-coupled multi-modal SLAM algorithms with a variety of sensor
combinations, like the LiDAR-inertial and LiDAR-inertial-camera SLAM systems,
in which online time-offset calibration is also naturally supported. More
importantly, benefiting from the marginalization and our derived analytical
Jacobians for optimization, the proposed continuous-time SLAM systems can
achieve real-time performance despite the high complexity of the
continuous-time formulation. The proposed multi-modal SLAM systems have been
extensively evaluated on three public datasets and self-collected datasets.
The results demonstrate that the proposed continuous-time SLAM systems achieve
high-accuracy pose estimation and outperform existing state-of-the-art
methods. To benefit the research community, we will open-source our code at
https://github.com/APRIL-ZJU/clic
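To illustrate why a continuous-time formulation can be queried at arbitrary measurement timestamps, here is a minimal sketch assuming a uniform cubic B-spline over position control points (a common choice for continuous-time trajectories; CLIC's actual trajectory representation and spline details may differ). The position at any query time is a fixed blend of the four nearest control points:

```python
import numpy as np

# Standard basis matrix of a uniform cubic B-spline.
M = (1.0 / 6.0) * np.array([[1, 4, 1, 0],
                            [-3, 0, 3, 0],
                            [3, -6, 3, 0],
                            [-1, 3, -3, 1]])

def spline_position(ctrl_pts, t, dt):
    """Evaluate the spline at time t; ctrl_pts are (N, 3), spaced dt apart."""
    i = int(t / dt)                 # index of the active spline segment
    u = t / dt - i                  # normalized time within the segment
    U = np.array([1.0, u, u**2, u**3])
    w = U @ M                       # blending weights of 4 control points
    return w @ ctrl_pts[i:i + 4]

ctrl = np.array([[0, 0, 0], [1, 0, 0], [2, 0.5, 0], [3, 1, 0], [4, 1, 0]],
                dtype=float)
# Query a pose between knots, e.g. at a LiDAR point's exact timestamp.
print(spline_position(ctrl, t=0.25, dt=1.0))
```

Because every residual, whether from LiDAR, IMU, or camera, is a smooth function of a few local control points, the smoother can marginalize control points that leave the sliding window while keeping their prior information, as the abstract describes.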