Smart Visual Beacons with Asynchronous Optical Communications using Event Cameras
Event cameras are bio-inspired dynamic vision sensors that respond to changes
in image intensity with a high temporal resolution, high dynamic range and low
latency. These sensor characteristics are ideally suited to enable visual
target tracking in concert with a broadcast visual communication channel for
smart visual beacons with applications in distributed robotics. Visual beacons
can be constructed by high-frequency modulation of Light Emitting Diodes (LEDs)
such as vehicle headlights, Internet of Things (IoT) LEDs, smart building
lights, etc., that are already present in many real-world scenarios. The high
temporal resolution of event cameras allows them to capture visual signals at
far higher data rates than classical frame-based cameras. In this paper, we
propose a novel smart visual beacon architecture
with both LED modulation and event camera demodulation algorithms. We
quantitatively evaluate the relationship between LED transmission rate,
communication distance and the message transmission accuracy for the smart
visual beacon communication system that we prototyped. The proposed method
achieves up to 4 kbps in an indoor environment and lossless transmission over a
distance of 100 meters, at a transmission rate of 500 bps, in full sunlight,
demonstrating the potential of the technology in an outdoor environment.
Comment: 7 pages, 8 figures, accepted by IEEE International Conference on Intelligent Robots and Systems (IROS) 202
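
As a rough illustration of the demodulation step (not the paper's algorithm), the sketch below assumes an on-off-keyed beacon, a known bit period, and events already isolated to the beacon's pixel region; the function name and the majority-vote decoding rule are assumptions.

```python
# Hypothetical sketch: recover an on-off-keyed bit stream from beacon events.
# Assumes events is an (N, 2) array of (timestamp_us, polarity) with timestamps
# relative to the start of the transmission.
import numpy as np

def demodulate_ook(events, bit_period_us, n_bits):
    slot = (events[:, 0] // bit_period_us).astype(int)  # bit slot of each event
    bits = []
    for k in range(n_bits):
        pol = events[slot == k, 1]
        # More positive than negative events in the slot is decoded as 1.
        bits.append(1 if pol.size and np.sum(pol > 0) > np.sum(pol < 0) else 0)
    return bits

# e.g. bits = demodulate_ook(ev, bit_period_us=2000, n_bits=64)  # 500 bps beacon
```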
An Asynchronous Linear Filter Architecture for Hybrid Event-Frame Cameras
Event cameras are ideally suited to capture High Dynamic Range (HDR) visual
information without blur but provide poor imaging capability for static or
slowly varying scenes. Conversely, conventional image sensors measure absolute
intensity of slowly changing scenes effectively but do poorly on HDR or quickly
changing scenes. In this paper, we present an asynchronous linear filter
architecture, fusing event and frame camera data, for HDR video reconstruction
and spatial convolution that exploits the advantages of both sensor modalities.
The key idea is the introduction of a state that directly encodes the
integrated or convolved image information and that is updated asynchronously as
each event or each frame arrives from the camera. The state can be read off as
often as and whenever required to feed into subsequent vision modules for
real-time robotic systems. Our experimental results are evaluated on both
publicly available datasets with challenging lighting conditions and fast
motions, along with a new dataset with HDR reference that we provide. The
proposed AKF pipeline outperforms other state-of-the-art methods in both
absolute intensity error (69.4% reduction) and image similarity indexes (average 35.5% improvement). We also demonstrate the integration of image convolution with linear spatial kernels (Gaussian, Sobel, and Laplacian) as an application of our architecture.
Comment: 17 pages, 10 figures, accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) in August 202
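
A minimal per-pixel sketch of the asynchronous-update idea (an illustration only, not the published filter architecture): events integrate relative log-intensity changes as they arrive, frames pull the state toward an absolute measurement, and the state can be read off at any time. The class name, contrast value, and gain are assumptions.

```python
import numpy as np

class AsyncPixelFilter:
    """Illustrative asynchronously updated log-intensity state for one sensor."""
    def __init__(self, shape, contrast=0.1, frame_gain=0.3):
        self.state = np.zeros(shape)   # per-pixel log-intensity estimate
        self.contrast = contrast       # assumed nominal event contrast threshold
        self.frame_gain = frame_gain   # pull strength toward frame measurements

    def update_event(self, x, y, polarity):
        # Each event reports a signed contrast step in log intensity.
        self.state[y, x] += self.contrast * polarity

    def update_frame(self, log_frame):
        # Frames provide absolute (but possibly blurred or LDR) intensity.
        self.state += self.frame_gain * (log_frame - self.state)

    def read(self):
        # Read off the state whenever a downstream vision module needs it.
        return self.state
```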
An Asynchronous Kalman Filter for Hybrid Event Cameras
Event cameras are ideally suited to capture High Dynamic Range (HDR) visual information without
blur but perform poorly on static or slowly changing scenes. Conversely,
conventional image sensors measure absolute intensity of slowly changing scenes
effectively but do poorly on high dynamic range or quickly changing scenes. In
this paper, we present an event-based video reconstruction pipeline for HDR
scenarios. The proposed algorithm includes a frame
augmentation pre-processing step that deblurs and temporally interpolates frame
data using events. The augmented frame and event data are then fused using a
novel asynchronous Kalman filter under a unifying uncertainty model for both
sensors. Our experimental results are evaluated on both publicly available
datasets with challenging lighting conditions and fast motions and our new
dataset with HDR reference. The proposed algorithm outperforms state-of-the-art
methods in both absolute intensity error (48% reduction) and image similarity
indexes (average 11% improvement).
Comment: 12 pages, 6 figures, published in the International Conference on Computer Vision (ICCV) 202
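
A scalar, single-pixel sketch of the fusion idea (illustrative gains and noise values, not the published filter): events drive the prediction of log intensity between frames, and each frame acts as a noisy absolute measurement in a Kalman-style update.

```python
def kalman_pixel(measurements, contrast=0.1, q_event=1e-4, r_frame=1e-2):
    """measurements: time-ordered ('event', polarity) or ('frame', log_intensity)
    tuples for one pixel. Returns the estimated log-intensity trajectory."""
    x, p = 0.0, 1.0                    # state and variance (illustrative init)
    trajectory = []
    for kind, value in measurements:
        if kind == "event":            # predict: integrate the contrast step
            x += contrast * value
            p += q_event               # uncertainty grows between frames
        else:                          # update: fuse the absolute frame value
            k = p / (p + r_frame)
            x += k * (value - x)
            p *= 1.0 - k
        trajectory.append(x)
    return trajectory
```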
Overcoming Bias: Equivariant Filter Design for Biased Attitude Estimation with Online Calibration
Stochastic filters for on-line state estimation are a core technology for
autonomous systems. The performance of such filters is one of the key limiting
factors to a system's capability. Both asymptotic behavior (e.g., for regular
operation) and transient response (e.g., for fast initialization and reset) of
such filters are of crucial importance in guaranteeing robust operation of
autonomous systems.
This paper introduces a new generic formulation for a gyroscope aided
attitude estimator using N direction measurements including both body-frame and
reference-frame direction type measurements. The approach is based on an
integrated state formulation that incorporates navigation, extrinsic
calibration for all direction sensors, and gyroscope bias states in a single
equivariant geometric structure. This newly proposed symmetry allows modular
addition of different direction measurements and their extrinsic calibration
while maintaining the ability to include bias states in the same symmetry. The
subsequently proposed filter-based estimator using this symmetry noticeably
improves the transient response and the asymptotic estimation of bias and extrinsic calibration compared to state-of-the-art approaches. The estimator is verified in statistically representative simulations and tested in real-world experiments.
Comment: to be published in Robotics and Automation Letter
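
For context, a classical complementary filter with gyro-bias feedback is sketched below: it is not the equivariant filter proposed in the paper, but it illustrates the ingredients of the problem, namely gyroscope propagation, a body-frame direction measurement of a known reference direction, and a bias state. The gains and the single-measurement restriction are assumptions.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def so3_exp(w):
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def step(R, b, gyro, body_dir, ref_dir, dt, kp=1.0, ki=0.1):
    """One step: R attitude estimate, b gyro-bias estimate, body_dir the measured
    direction in the body frame, ref_dir the known reference-frame direction."""
    err = np.cross(body_dir, R.T @ ref_dir)  # innovation from the direction pair
    b = b - ki * err * dt                    # slow bias correction
    w = gyro - b + kp * err                  # bias-corrected, error-driven rate
    return R @ so3_exp(w * dt), b
```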
Equivariant Systems Theory and Observer Design for Second Order Kinematic Systems on Matrix Lie Groups
This paper presents the equivariant systems theory and observer design for
second order kinematic systems on matrix Lie groups. The state of a second
order kinematic system on a matrix Lie group is naturally posed on the tangent
bundle of the group with the inputs lying in the tangent of the tangent bundle
known as the double tangent bundle. We provide a simple parameterization of
both the tangent bundle state space and the input space (the fiber space of the
double tangent bundle) and then introduce a semi-direct product group and group
actions onto both the state and input spaces. We show that with the proposed
group actions the second order kinematics are equivariant. An equivariant lift
of the kinematics onto the symmetry group is defined and used to design a
nonlinear observer on the lifted state space using nonlinear constructive
design techniques. A simple hovercraft simulation verifies the performance of
our observer.
This work was partially supported by the Australian Research Council through the ARC Discovery Project DP160100783 “Sensing a complex world: Infinite dimensional observer theory for robots
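
A minimal sketch of the state space in question, instantiated on SO(3) for concreteness (an illustration of second order kinematics on a matrix Lie group only, not the proposed observer): the state (R, v) lives on the tangent bundle and the input a lies in the fiber of the double tangent bundle.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def integrate_second_order(R, v, a, dt):
    """Euler step of the second order kinematics R_dot = R * skew(v), v_dot = a."""
    R_next = R @ expm(skew(v) * dt)   # group update along the current velocity
    v_next = v + a * dt               # velocity update in the Lie algebra
    return R_next, v_next
```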
Non-iterative, fast SE(3) path smoothing
In this paper, we present a fast, non-iterative approach to smooth a noisy input on the Special Euclidean group SE(3) manifold. The translational part can be smoothed by a simple Gaussian convolution. We then propose a novel approach to rotation smoothing. Unlike existing rotation smoothing methods, which rely on either iterative optimization or stochastic filtering, our method allows direct computation of the smoothed result and parallelization of the computation. Furthermore, we present a comparative study against the method of Jia and Evans published in 2014 [1], and show that our method smooths an input rotation sequence better with shorter computation time. The smoothed camera path is then used for video stabilisation, which yields fluid and smooth camera motion.
Australian ARC Centre of Excellence for Robotic Vision (CE140100016
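
A simple non-iterative sketch in the spirit of the abstract (a quaternion-filtering approximation, not the paper's rotation smoother): translations are smoothed with a 1-D Gaussian convolution, and rotations are smoothed by sign-aligning the quaternion sequence, filtering its components, and renormalizing, which is a reasonable approximation when consecutive rotations are close.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_se3(translations, quaternions, sigma=2.0):
    """translations: (N, 3) array; quaternions: (N, 4) unit quaternions."""
    t_smooth = gaussian_filter1d(np.asarray(translations, float), sigma, axis=0)

    q = np.asarray(quaternions, float).copy()
    for i in range(1, len(q)):        # align signs so neighbours are not antipodal
        if np.dot(q[i], q[i - 1]) < 0:
            q[i] = -q[i]
    q_smooth = gaussian_filter1d(q, sigma, axis=0)
    q_smooth /= np.linalg.norm(q_smooth, axis=1, keepdims=True)
    return t_smooth, q_smooth
```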
Event Camera Calibration of Per-pixel Biased Contrast Threshold
Event cameras output asynchronous events to represent intensity changes with a high temporal resolution, even under extreme lighting conditions. Currently, most existing works use a single contrast threshold to estimate the intensity change of all pixels. However, complex circuit bias and manufacturing imperfections cause biased pixels and mismatched contrast thresholds among pixels, which may lead to undesirable outputs. In this paper, we propose a new event camera model and two calibration approaches which cover event-only cameras and hybrid image-event cameras. When intensity images are simultaneously provided along with events, we also propose an efficient online method to calibrate event cameras that adapts to time-varying event rates. We demonstrate the advantages of our proposed methods compared to the state-of-the-art on several different event camera datasets.
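
A minimal sketch of the calibration idea for the hybrid image-event case (an illustration, not the paper's method): if two frames bracket a burst of events, a per-pixel contrast threshold estimate is the log-intensity change divided by the signed event count at that pixel; in practice one would average over many frame pairs.

```python
import numpy as np

def estimate_thresholds(log_frame_0, log_frame_1, signed_event_counts):
    """signed_event_counts: per-pixel sum of event polarities between the frames."""
    delta = log_frame_1 - log_frame_0
    counts = signed_event_counts.astype(float)
    thresholds = np.full(delta.shape, np.nan)   # NaN where no events fired
    valid = np.abs(counts) > 0
    thresholds[valid] = delta[valid] / counts[valid]
    return thresholds
```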
MAVIS: Multi-Camera Augmented Visual-Inertial SLAM using SE2(3) Based Exact IMU Pre-integration
We present a novel optimization-based Visual-Inertial SLAM system designed
for multiple partially overlapped camera systems, named MAVIS. Our framework
fully exploits the benefits of wide field-of-view from multi-camera systems,
and the metric scale measurements provided by an inertial measurement unit
(IMU). We introduce an improved IMU pre-integration formulation based on the
exponential function of an automorphism of SE_2(3), which can effectively
enhance tracking performance under fast rotational motion and extended
integration time. Furthermore, we extend the conventional front-end tracking and
back-end optimization modules designed for monocular or stereo setups to
multi-camera systems, and introduce implementation details that contribute to
the performance of our system in challenging scenarios. The practical validity
of our approach is supported by our experiments on public datasets. Our MAVIS
won first place in all vision-IMU tracks (single and multi-session SLAM) of the Hilti SLAM Challenge 2023, achieving 1.7 times the score of the second-place entry.
Comment: video link: https://youtu.be/Q_jZSjhNFf
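
For readers unfamiliar with SE_2(3), the sketch below shows only the "extended pose" representation (rotation, velocity, position) as 5x5 matrices and its exponential map evaluated with a generic matrix exponential; it is not the exact pre-integration formulation of the paper.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def se23_hat(xi):
    """xi = (omega, nu, rho) in R^9 -> 5x5 Lie algebra matrix of SE_2(3)."""
    X = np.zeros((5, 5))
    X[:3, :3] = skew(xi[:3])
    X[:3, 3] = xi[3:6]
    X[:3, 4] = xi[6:9]
    return X

def se23_exp(xi):
    return expm(se23_hat(xi))   # extended pose: R in [:3,:3], v in [:3,3], p in [:3,4]
```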
High Frequency, High Accuracy Pointing onboard Nanosats using Neuromorphic Event Sensing and Piezoelectric Actuation
As satellites become smaller, the ability to maintain stable pointing
decreases as external forces acting on the satellite come into play. At the
same time, reaction wheels used in the attitude determination and control
system (ADCS) introduce high frequency jitter which can disrupt pointing
stability. For space domain awareness (SDA) tasks that track objects tens of
thousands of kilometres away, the pointing accuracy offered by current
nanosats, typically in the range of 10 to 100 arcseconds, is not sufficient. In
this work, we develop a novel payload that utilises a neuromorphic event sensor
(for high frequency and highly accurate relative attitude estimation) paired in
a closed loop with a piezoelectric stage (for active attitude corrections) to
provide highly stable sensor-specific pointing. Event sensors are especially
suited for space applications due to their desirable characteristics of low
power consumption, asynchronous operation, and high dynamic range. We use the
event sensor to first estimate a reference background star field from which
instantaneous relative attitude is estimated at high frequency. The
piezoelectric stage works in a closed control loop with the event sensor to
perform attitude corrections based on the discrepancy between the current and
desired attitude. Results in a controlled setting show that we can achieve a
pointing accuracy in the range of 1-5 arcseconds with our novel payload at an
operating frequency of up to 50 Hz, using a prototype built from
commercial-off-the-shelf components. Further details can be found at
https://ylatif.github.io/ultrafinestabilisatio
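
A toy sketch of the closed loop described above, with hypothetical interfaces (the class and method names, gain, and error model are all assumptions, not the flight payload software): each cycle, the event-based star-field estimate yields a small two-axis pointing error, and the piezo stage is commanded proportionally to cancel it.

```python
import numpy as np

class PointingLoop:
    def __init__(self, gain=0.8):
        self.gain = gain                  # proportional gain (illustrative)
        self.stage_angle = np.zeros(2)    # accumulated piezo tip/tilt command

    def step(self, error_arcsec):
        """error_arcsec: 2-vector offset of the current star field from the
        reference, as estimated from event data."""
        self.stage_angle += -self.gain * np.asarray(error_arcsec, float)
        return self.stage_angle           # command sent to the piezo stage

# Toy usage at a 50 Hz loop rate with a constant initial disturbance.
loop = PointingLoop()
error = np.array([3.0, -2.0])             # arcseconds
for _ in range(5):
    loop.step(error)
    error = (1 - loop.gain) * error       # residual shrinks each cycle (toy model)
```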