Towards Robust UAV Tracking in GNSS-Denied Environments: A Multi-LiDAR Multi-UAV Dataset
With the increasing prevalence of drones in various industries, the
navigation and tracking of unmanned aerial vehicles (UAVs) in challenging
environments, particularly GNSS-denied areas, have become crucial concerns. To
address this need, we present a novel multi-LiDAR dataset specifically designed
for UAV tracking. Our dataset includes data from a spinning LiDAR, two
solid-state LiDARs with different fields of view (FoV) and scan patterns, and an
RGB-D camera. This diverse sensor suite allows for research on new challenges
in the field, including limited FoV adaptability and multi-modality data
processing.
The dataset facilitates the evaluation of existing algorithms and the
development of new ones, paving the way for advances in UAV tracking
techniques. Notably, we provide data in both indoor and outdoor environments.
We also consider variable UAV sizes, from micro-aerial vehicles to more
standard commercial UAV platforms. The outdoor trajectories are selected in
close proximity to buildings, targeting research on UAV detection in urban
areas, e.g., within counter-UAV systems or docking for UAV logistics.
In addition to the dataset, we provide a baseline comparison with recent
LiDAR-based UAV tracking algorithms, benchmarking the performance with
different sensors, UAVs, and algorithms. Importantly, our dataset shows that
current methods have shortcomings and are unable to track UAVs consistently
across different scenarios.
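Benchmarking trackers against a dataset like this ultimately reduces to comparing estimated trajectories with ground truth. A minimal sketch of such a position-error metric (illustrative only, not the paper's exact evaluation protocol):

```python
import numpy as np

def tracking_errors(estimated, ground_truth):
    """Per-frame Euclidean position error between an estimated UAV
    trajectory and its ground truth (both N x 3 arrays of positions)."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return errors.mean(), errors.max()

# Toy example: an estimate offset from ground truth by 0.1 m in x.
gt = np.zeros((5, 3))
est = gt + np.array([0.1, 0.0, 0.0])
mean_err, max_err = tracking_errors(est, gt)
```

Any tracker that outputs per-frame 3D positions can be scored this way against the dataset's ground truth.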
LiDAR-Generated Images Derived Keypoints Assisted Point Cloud Registration Scheme in Odometry Estimation
Keypoint detection and description play a pivotal role in various robotics
and autonomous applications including visual odometry (VO), visual navigation,
and simultaneous localization and mapping (SLAM). While a myriad of keypoint
detectors and descriptors have been extensively studied for conventional camera
images, the effectiveness of these techniques in the context of LiDAR-generated
images, i.e., reflectivity and range images, has not been assessed. These
images have gained attention due to their resilience in adverse conditions such
as rain or fog. Additionally, they contain significant textural information
that supplements the geometric information provided by LiDAR point clouds in
the point cloud registration phase, especially when reliant solely on LiDAR
sensors. This helps address the drift encountered in LiDAR odometry (LO) in
geometrically self-similar scenarios, or where not all of the raw point cloud
is informative and parts may even be misleading. This paper aims to analyze the
applicability of conventional image key point extractors and descriptors on
LiDAR-generated images via a comprehensive quantitative investigation.
Moreover, we propose a novel approach to enhance the robustness and reliability
of LO. After extracting key points, we proceed to downsample the point cloud,
subsequently integrating it into the point cloud registration phase for the
purpose of odometry estimation. Our experiments demonstrate that the proposed
approach achieves comparable accuracy with reduced computational overhead, a
higher odometry publishing rate, and even superior performance in drift-prone
scenarios, compared to using the raw point cloud. This, in turn, lays a foundation for
subsequent investigations into the integration of LiDAR-generated images with
LO. Our code is available on GitHub:
https://github.com/TIERS/ws-lidar-as-camera-odom
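The core idea of keypoint-guided downsampling relies on the fact that each pixel of a LiDAR range image corresponds to a 3D point, so keypoints detected in the image select a subset of the cloud. A sketch of that back-projection step (the FoV bounds and image geometry here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def keypoints_to_points(range_img, keypoints,
                        v_fov=(-15.0, 15.0), h_fov=(-180.0, 180.0)):
    """Project selected (row, col) keypoint pixels of a LiDAR range image
    back to 3D points, yielding a keypoint-guided downsampled cloud.
    FoV bounds (degrees) are assumed, not taken from any specific sensor."""
    H, W = range_img.shape
    pts = []
    for r, c in keypoints:
        d = range_img[r, c]
        if d <= 0:  # no LiDAR return at this pixel
            continue
        # Spherical back-projection: azimuth from column, elevation from row.
        az = np.deg2rad(h_fov[0] + (h_fov[1] - h_fov[0]) * c / (W - 1))
        el = np.deg2rad(v_fov[1] - (v_fov[1] - v_fov[0]) * r / (H - 1))
        pts.append([d * np.cos(el) * np.cos(az),
                    d * np.cos(el) * np.sin(az),
                    d * np.sin(el)])
    return np.array(pts)

rng_img = np.full((32, 64), 5.0)  # toy range image, 5 m everywhere
cloud = keypoints_to_points(rng_img, [(16, 32), (8, 10)])
```

The resulting sparse cloud is what would then be fed to the point cloud registration stage in place of the full scan.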
Comparison of DDS, MQTT, and Zenoh in Edge-to-Edge and Edge-to-Cloud Communication for Distributed ROS 2 Systems
The growing volume of data transmission and number of devices involved in
communication among distributed systems make efficient and reliable networking
middleware both challenging and increasingly necessary. In robotics
and autonomous systems, the wide adoption of ROS 2 brings the possibility
of utilizing various networking middlewares together with DDS in ROS 2 for
better communication among edge devices or between edge devices and the cloud.
However, there is a lack of comprehensive performance comparisons of
integrating these networking middlewares with ROS 2. In this study, we
provide a quantitative analysis of the communication performance of
networking middlewares, including MQTT and Zenoh alongside DDS in ROS 2,
across a multi-host system. For a complete and reliable comparison, we calculate the
latency and throughput of these middlewares by sending distinct amounts and
types of data through different network setups including Ethernet, Wi-Fi, and
4G. To further extend the evaluation to real-world application scenarios, we
assess the drift error (the accumulated change in estimated position) over time
caused by these networking middlewares while the robot moves along an identical
square-shaped path. Our results show that CycloneDDS performs better under Ethernet while
Zenoh performs better under Wi-Fi and 4G. In the real-robot test, the
trajectory drift error accumulated over 96 s is smallest with Zenoh. We also
discuss the CPU utilization of these networking middlewares and the performance
impact of enabling the security feature in ROS 2 at the end of the paper.
Comment: 19 pages, 8 figures. Submitted to the Journal of Intelligent &
Robotic Systems. Under review.
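Latency measurements of this kind follow a common pattern: timestamp a message of a given size, transfer it, and record the elapsed time over many repeats. The sketch below shows that pattern over a local socket pair; it is a stand-in for the pub/sub measurement loop, not the study's actual ROS 2 benchmarking code:

```python
import socket
import time

def measure_latency(payload_size, repeats=50):
    """Average one-way transfer time of a payload over a local socket
    pair; illustrates the timing pattern behind latency benchmarks."""
    a, b = socket.socketpair()
    payload = b"x" * payload_size
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        a.sendall(payload)
        received = 0
        while received < payload_size:  # drain the full message
            received += len(b.recv(65536))
        samples.append(time.perf_counter() - t0)
    a.close()
    b.close()
    return sum(samples) / len(samples)

avg = measure_latency(1024)
```

In a real middleware benchmark the socket pair would be replaced by a publisher and subscriber on different hosts, with clock synchronization or round-trip halving to handle separate machines.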
Loosely Coupled Odometry, UWB Ranging, and Cooperative Spatial Detection for Relative Monte-Carlo Multi-Robot Localization
As mobile robots become more ubiquitous, their deployments grow across use
cases where GNSS positioning is either unavailable or unreliable. This has led
to increased interest in multi-modal relative localization methods.
Complementing onboard odometry, ranging allows for relative state estimation,
with ultra-wideband (UWB) ranging having gained widespread recognition due to
its low cost and centimeter-level out-of-box accuracy. Infrastructure-free
localization methods allow for more dynamic, ad-hoc, and flexible deployments,
yet they have received less attention from the research community. In this
work, we propose a cooperative relative multi-robot localization method in which we
leverage inter-robot ranging and simultaneous spatial detections of objects in
the environment. To achieve this, we equip robots with a single UWB transceiver
and a stereo camera. We propose a novel Monte-Carlo approach to estimate
relative states by either employing only UWB ranges or dynamically integrating
simultaneous spatial detections from the stereo cameras. We also address the
challenges of UWB ranging error mitigation, especially in non-line-of-sight
conditions, with a study of different LSTM networks for estimating the ranging
error. The
proposed approach has multiple benefits. First, we show that a single range is
enough to accurately estimate the relative states of two robots when fusing
odometry measurements. Second, our experiments also demonstrate that our
approach surpasses traditional methods such as multilateration in terms of
accuracy. Third, to increase accuracy even further, we allow for the
integration of cooperative spatial detections. Finally, we show how ROS 2 and
Zenoh can be integrated to build a scalable wireless communication solution for
multi-robot systems. The experimental validation includes real-time deployment
and autonomous navigation based on the relative positioning method.
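In a Monte-Carlo (particle filter) scheme like the one described, each UWB range contributes a measurement update that reweights candidate relative poses. A minimal sketch of that single-range update, assuming a Gaussian ranging noise model with an arbitrary sigma (not a value from the paper):

```python
import numpy as np

def range_update(particles, weights, anchor, measured_range, sigma=0.1):
    """Reweight particles (N x 2 candidate relative positions) by the
    Gaussian likelihood of a single UWB range to `anchor`.
    sigma is an assumed ranging noise standard deviation."""
    dists = np.linalg.norm(particles - anchor, axis=1)
    likelihood = np.exp(-0.5 * ((dists - measured_range) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

rng = np.random.default_rng(0)
particles = rng.uniform(-5, 5, size=(1000, 2))   # uniform prior
weights = np.full(1000, 1.0 / 1000)
weights = range_update(particles, weights,
                       anchor=np.zeros(2), measured_range=2.0)
best = particles[np.argmax(weights)]             # lies near the 2 m circle
```

A single range only constrains the relative position to a circle, which is why the paper fuses odometry (and optionally cooperative spatial detections) to collapse that ambiguity over time.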
Exploiting Redundancy for UWB Anomaly Detection in Infrastructure-Free Multi-Robot Relative Localization
Ultra-wideband (UWB) localization methods have emerged as a cost-effective
and accurate solution for GNSS-denied environments. There is a significant
body of previous research on the resilience of UWB ranging, including
non-line-of-sight and multipath detection methods. However, little attention
has been paid to resilience against disturbances in relative localization
systems involving multiple nodes. This paper presents an approach to detecting
range anomalies in UWB ranging measurements from the perspective of multi-robot
cooperative localization. We introduce an approach to exploiting redundancy for
relative localization in multi-robot systems, where the position of each node
is calculated using different subsets of available data. This enables us to
effectively identify nodes that present ranging anomalies and eliminate their
effect within the cooperative localization scheme. We analyze anomalies created
by timing errors in the ranging process, e.g., owing to malfunctioning
hardware. However, our method is generic and can be extended to other types of
ranging anomalies. Our approach results in a more resilient cooperative
localization framework with a negligible impact in terms of the computational
workload.
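The redundancy idea can be illustrated with a leave-one-out variant of the subset scheme: solve the localization problem with each node excluded in turn, and flag the node whose removal makes the remaining ranges most self-consistent. The sketch below uses standard linearized least-squares multilateration; it is an illustrative instance of the idea, not the paper's implementation:

```python
import numpy as np

def localize(anchors, ranges):
    """Linearized least-squares multilateration in 2D: subtract the first
    equation ||x - a_0||^2 = r_0^2 to remove the quadratic term in x."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

def residual(anchors, ranges):
    """Mean absolute mismatch between measured and reprojected ranges."""
    pos = localize(anchors, ranges)
    return np.abs(np.linalg.norm(anchors - pos, axis=1) - ranges).mean()

def flag_anomalous_node(anchors, ranges):
    """Leave-one-out redundancy check: the node whose removal shrinks
    the range residual the most is flagged as anomalous."""
    n = len(anchors)
    keep = [[j for j in range(n) if j != i] for i in range(n)]
    scores = [residual(anchors[k], ranges[k]) for k in keep]
    return int(np.argmin(scores))

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0], [2.0, 5.0]])
ranges = np.linalg.norm(anchors - np.array([1.0, 1.0]), axis=1)  # true pos (1, 1)
ranges[2] += 1.5          # inject a timing-error-like bias on node 2
flagged = flag_anomalous_node(anchors, ranges)
```

With enough redundant nodes, excluding the faulty one leaves a consistent system with near-zero residual, which is what makes the anomaly identifiable.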
Distributed Robotic Systems in the Edge-Cloud Continuum with ROS 2: a Review on Novel Architectures and Technology Readiness
Robotic systems are more connected, networked, and distributed than ever. New
architectures that comply with the de facto robotics middleware standard,
ROS 2, have recently emerged to fill the gap in terms of hybrid systems
deployed from edge to cloud. This paper reviews new architectures and
technologies that enable containerized robotic applications to seamlessly run
at the edge or in the cloud. We also overview solutions ranging from
extensions to ROS 2 tooling to the integration of Kubernetes and ROS 2.
Another important trend is robot learning, and how new simulators and cloud
simulations are enabling, e.g., large-scale reinforcement learning or
distributed federated learning solutions. This has also enabled deeper
integration of continuous integration and continuous deployment (CI/CD)
pipelines for robotic systems development, going beyond standard software unit
tests with simulated tests to build and validate code automatically. We discuss
the current technology readiness and list the potential new application
scenarios that are becoming available. Finally, we discuss the current
challenges in distributed robotic systems and list open research questions in
the field.
Determinants of Developing Stroke Among Low-Income, Rural Residents: A 27-Year Population-Based, Prospective Cohort Study in Northern China
Although stroke is the leading cause of death and disability in many countries, China still lacks long-term monitoring data on stroke incidence and risk factors. This study explored stroke risk factors in a low-income, rural population in China. The study population was derived from the Tianjin Brain Study, a population-based stroke monitoring study that began in 1985. This study documented the demographic characteristics, past medical histories, and personal lifestyles of the study participants. In addition, physical examinations, including measurements of blood pressure (BP), height, and weight, were performed. Hazard ratios (HRs) for all subtypes of stroke were estimated for each risk factor using multivariate Cox regression analyses. A total of 3906 individuals were recruited at baseline, and during 27 years of follow-up (mean, 23.16 years), 638 strokes were documented. The multivariate Cox regression analyses revealed a positive correlation between age and stroke incidence. Limited education was associated with a 1.9-fold increase in stroke risk (lowest vs. highest education level). Stroke risk was higher among former smokers than among current smokers (HR, 1.8 vs. 1.6; both P < 0.05). Moreover, stroke risk was significantly associated with sex (HR, 1.8), former alcohol drinking (HR, 2.7), baseline hypertension (HR, 3.1), and overweight (HR, 1.3). In conclusion, this study identified uncontrollable (sex and age) and controllable (education, smoking, alcohol drinking, hypertension, and overweight) risk factors for stroke in a low-income, rural population in China. It is therefore critical to control BP and weight effectively, advocate cessation of smoking and alcohol drinking, and enhance education levels in this population to prevent an increase in the burden of stroke in China.
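The hazard ratios above come from Cox regression, where HR = exp(beta) for a fitted coefficient beta. A simplified, self-contained illustration of the Cox partial likelihood for a single binary covariate on synthetic data (this is a didactic sketch, not the study's analysis, which used multivariate models on real cohort data):

```python
import numpy as np

def cox_loglik(beta, t, x):
    """Breslow partial log-likelihood for one covariate, assuming every
    subject experiences the event (no censoring) and no tied times."""
    order = np.argsort(t)
    xs = x[order]
    # Suffix sums give the risk set (subjects still event-free) per event.
    risk = np.cumsum(np.exp(xs * beta)[::-1])[::-1]
    return np.sum(xs * beta - np.log(risk))

def fit_hr(t, x):
    """Grid-search MLE of beta; the hazard ratio is exp(beta)."""
    grid = np.linspace(-3.0, 3.0, 601)
    lls = [cox_loglik(b, t, x) for b in grid]
    return float(np.exp(grid[int(np.argmax(lls))]))

# Synthetic cohort: exposed group (x = 1) has twice the baseline hazard,
# so the true hazard ratio is 2.
rng = np.random.default_rng(42)
x = np.repeat([0.0, 1.0], 200)
t = rng.exponential(scale=np.where(x == 1, 0.5, 1.0))
hr = fit_hr(t, x)
```

Real cohort analyses additionally handle censoring, ties, and multiple covariates, which is what dedicated survival-analysis packages implement.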
The 13th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN) / The 12th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH-2022) / Affiliated Workshops
The role of deep learning (DL) in robotics has significantly deepened over the last decade. Intelligent robotic systems today are highly connected systems that rely on DL for a variety of perception, control and other tasks. At the same time, autonomous robots are being increasingly deployed as part of fleets, with collaboration among robots becoming a more relevant factor. From the perspective of collaborative learning, federated learning (FL) enables continuous training of models in a distributed, privacy-preserving way. This paper focuses on vision-based obstacle avoidance for mobile robot navigation. On this basis, we explore the potential of FL for distributed systems of mobile robots enabling continuous learning via the engagement of robots in both simulated and real-world scenarios. We extend previous works by studying the performance of different image classifiers for FL, compared to centralized, cloud-based learning with a priori aggregated data. We also introduce an approach to continuous learning from mobile robots with extended sensor suites able to provide automatically labelled data while they are completing other tasks. We show that higher accuracies can be achieved by training the models in both simulation and reality, enabling continuous updates to deployed models.
Towards Lifelong Federated Learning in Autonomous Mobile Robots with Continuous Sim-to-Real Transfer
The role of deep learning (DL) in robotics has significantly deepened over
the last decade. Intelligent robotic systems today are highly connected systems
that rely on DL for a variety of perception, control, and other tasks. At the
same time, autonomous robots are being increasingly deployed as part of fleets,
with collaboration among robots becoming a more relevant factor. From the
perspective of collaborative learning, federated learning (FL) enables
continuous training of models in a distributed, privacy-preserving way. This
paper focuses on vision-based obstacle avoidance for mobile robot navigation.
On this basis, we explore the potential of FL for distributed systems of mobile
robots enabling continuous learning via the engagement of robots in both
simulated and real-world scenarios. We extend previous works by studying the
performance of different image classifiers for FL, compared to centralized,
cloud-based learning with a priori aggregated data. We also introduce an
approach to continuous learning from mobile robots with extended sensor suites
able to provide automatically labeled data while they are completing other
tasks. We show that higher accuracies can be achieved by training the models in
both simulation and reality, enabling continuous updates to deployed models.
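The distributed, privacy-preserving training mentioned above typically relies on an aggregation rule such as FedAvg, where each client (here, each robot) sends model parameters rather than raw data, and the server averages them weighted by local dataset size. A minimal sketch of that aggregation step (FedAvg is the canonical rule; the paper does not necessarily use exactly this scheme):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each model layer across clients,
    weighting every client by the size of its local dataset.
    `client_weights` is a list of per-client lists of layer arrays."""
    total = float(sum(client_sizes))
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two robots with single-layer "models" and different data volumes.
robot_a = [np.array([1.0, 1.0])]
robot_b = [np.array([3.0, 3.0])]
global_model = federated_average([robot_a, robot_b], client_sizes=[100, 300])
```

Because only parameters travel over the network, the robots' raw camera data never leaves the device, which is the privacy-preserving property FL provides.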