Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera
The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis delves into LiDAR's advancements in autonomous robotic systems, focusing on its role in simultaneous localization and mapping (SLAM) methodologies and on LiDAR-as-a-camera-based tracking of Unmanned Aerial Vehicles (UAVs).
Our contributions span two primary domains: a multi-modal LiDAR SLAM benchmark and LiDAR-as-a-camera UAV tracking. In the former, we have expanded our previous multi-modal LiDAR dataset with additional data sequences from various scenarios. In contrast to the previous dataset, we employ different ground-truth-generation approaches: we propose a new multi-modal, multi-LiDAR, SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps, and we supplement the data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud-only or image-only methods. Additionally, we evaluated the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform.
Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
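The ICP-based sensor fusion for ground-truth map generation is only named in the abstract. As an illustration of the core registration step such a pipeline relies on, here is a minimal point-to-point ICP sketch in NumPy; this is not the thesis's implementation, and the brute-force nearest-neighbour matching and function names are our own simplifications:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and re-fitting."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours (fine for a small sketch; real
        # pipelines use a k-d tree or voxel hashing)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

In a SLAM-assisted ground-truth pipeline, a registration step like this would align each sensor's local map into a common frame before fusion.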
LiDAR-Generated Images Derived Keypoints Assisted Point Cloud Registration Scheme in Odometry Estimation
Keypoint detection and description play a pivotal role in robotics and autonomous applications, including visual odometry (VO), visual navigation, and simultaneous localization and mapping (SLAM). While a myriad of keypoint detectors and descriptors have been studied extensively on conventional camera images, their effectiveness on LiDAR-generated images, i.e., reflectivity and range images, has not been assessed. These images have gained attention for their resilience in adverse conditions such as rain or fog. Additionally, they contain significant textural information that supplements the geometric information provided by LiDAR point clouds in the point cloud registration phase, especially when relying solely on LiDAR sensors. This addresses the drift encountered in LiDAR odometry (LO) in geometrically repetitive scenarios, or where not all of the raw point cloud is informative and some of it may even be misleading. This paper analyzes the applicability of conventional image keypoint extractors and descriptors on LiDAR-generated images via a comprehensive quantitative investigation. Moreover, we propose a novel approach to enhance the robustness and reliability of LO: after extracting keypoints, we downsample the point cloud around them and feed the result into the point cloud registration phase for odometry estimation. Our experiments demonstrate that the proposed approach achieves accuracy comparable to using the raw point cloud, with reduced computational overhead, a higher odometry publishing rate, and even superior performance in drift-prone scenarios. This, in turn, lays a foundation for subsequent investigations into the integration of LiDAR-generated images with LO. Our code is available on GitHub:
https://github.com/TIERS/ws-lidar-as-camera-odom
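The paper benchmarks conventional detectors (e.g., ORB or SIFT) on LiDAR reflectivity and range images. As a self-contained illustration of what such a detector computes, the sketch below implements a basic Harris corner response in NumPy; it is a stand-in for the benchmarked detectors, not the paper's code, and the helper names are our own:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response on a single-channel image (e.g. LiDAR reflectivity)."""
    # image gradients via central differences (axis 0 = rows, axis 1 = cols)
    Iy, Ix = np.gradient(img.astype(float))

    def box(a, r=1):
        """Simple box filter to smooth the structure-tensor entries."""
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out / (2 * r + 1) ** 2

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2       # high where both eigenvalues are large

def top_keypoints(img, n=100):
    """Pixel coordinates (row, col) of the n strongest Harris responses."""
    r = harris_response(img)
    idx = np.argsort(r, axis=None)[::-1][:n]
    return np.column_stack(np.unravel_index(idx, r.shape))
```

Keypoints found this way on the reflectivity image can then guide which regions of the point cloud to keep when downsampling before registration.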
Comparison of DDS, MQTT, and Zenoh in Edge-to-Edge and Edge-to-Cloud Communication for Distributed ROS 2 Systems
The growth in data transmission and in the number of devices involved in communication among distributed systems makes it challenging, yet necessary, to have efficient and reliable networking middleware. In robotics and autonomous systems, the wide adoption of ROS 2 makes it possible to utilize various networking middlewares together with DDS in ROS 2 for better communication among edge devices or between edge devices and the cloud. However, a comprehensive comparison of the communication performance of these networking middlewares integrated with ROS 2 has been lacking. In this study, we provide a quantitative analysis of the communication performance of MQTT and Zenoh alongside DDS in ROS 2 across a multi-host system. For a complete and reliable comparison, we measure the latency and throughput of these middlewares by sending distinct amounts and types of data over different network setups, including Ethernet, Wi-Fi, and 4G. To further extend the evaluation to real-world application scenarios, we assess the drift error (the position change) over time caused by these networking middlewares as the robot traverses an identical square-shaped path. Our results show that CycloneDDS performs best over Ethernet, while Zenoh performs best over Wi-Fi and 4G. In the real-robot test, the trajectory drift over time (96 s) is smallest via Zenoh. We also discuss the CPU utilization of these networking middlewares and the performance impact of enabling the ROS 2 security features at the end of the paper.
Comment: 19 pages, 8 figures. Submitted to the Journal of Intelligent & Robotic Systems. Under review.
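To illustrate the kind of metrics the study reports, here is a small sketch of computing per-message latency statistics and aggregate throughput from matched send/receive timestamps. The data format and function names are hypothetical; the actual benchmark tooling is not described in the abstract:

```python
import statistics

def latency_stats(send_ts, recv_ts):
    """Summarize one-way latencies (seconds) from matched send/receive timestamps.

    Assumes send_ts[i] and recv_ts[i] refer to the same message and that the
    two hosts' clocks are synchronized (e.g. via PTP/NTP).
    """
    lat = [r - s for s, r in zip(send_ts, recv_ts)]
    lat_sorted = sorted(lat)
    return {
        "mean": statistics.mean(lat),
        "p95": lat_sorted[int(0.95 * (len(lat) - 1))],  # nearest-rank percentile
        "max": max(lat),
    }

def throughput(payload_bytes, first_send, last_recv):
    """Aggregate throughput in MB/s over one run."""
    return payload_bytes / (last_recv - first_send) / 1e6
```

Repeating such a measurement per payload size and per network setup (Ethernet, Wi-Fi, 4G) yields the latency/throughput tables the paper compares across DDS, MQTT, and Zenoh.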
Prognostic impact of urokinase-type plasminogen activator receptor (uPAR) in cytosols and pellet extracts derived from primary breast tumours
Using a previously developed enzyme-linked immunosorbent assay (ELISA), the levels of the receptor for urokinase-type plasminogen activator (uPAR) were determined in cytosols and corresponding membrane pellets derived from 878 primary breast tumours. The levels of uPAR in the pellet extracts were more than 3-fold higher than those measured in the cytosols (P < 0.001). Moreover, the uPAR levels in the two types of extracts were weakly, though significantly, correlated with each other (rS = 0.20, P < 0.001). In Cox univariate analysis, high cytosolic levels of uPAR were significantly associated with reduced overall survival (OS) and relapse-free survival (RFS). The levels of uPAR in pellet extracts appeared not to be related to patient survival. In multivariate analysis, elevated levels of uPAR measured in cytosols and pellet extracts were found to be independent predictors of poor OS, but not of RFS. The prediction of poor prognosis on the basis of high uPAR levels emphasizes its important role in plasmin-mediated degradation of extracellular matrix proteins during cancer invasion and metastasis. © 2001 Cancer Research Campaign
Absence of Host Plasminogen Activator Inhibitor 1 Prevents Cancer Invasion and Vascularization
Acquisition of invasive/metastatic potential through protease expression is an essential event in tumor progression. High levels of components of the plasminogen activation system, including urokinase, but paradoxically also its inhibitor, plasminogen activator inhibitor 1 (PAI1), have been correlated with a poor prognosis for some cancers. We report here that deficient PAI1 expression in host mice prevented local invasion and tumor vascularization of transplanted malignant keratinocytes. When this PAI1 deficiency was circumvented by intravenous injection of a replication-defective adenoviral vector expressing human PAI1, invasion and associated angiogenesis were restored. This experimental evidence demonstrates that host-produced PAI1 is essential for cancer cell invasion and angiogenesis.
UAV Tracking with Solid-State Lidars: Dynamic Multi-Frequency Scan Integration
With the increasing use of drones across various industries, the navigation and tracking of these unmanned aerial vehicles (UAVs) in challenging environments (such as GNSS-denied environments) have become critical issues. In this paper, we propose a novel method for a ground-based UAV tracking system using a solid-state LiDAR, which dynamically adjusts the LiDAR frame integration time based on the distance to the UAV and its speed. Our method fuses two simultaneous scan integration frequencies for high accuracy and persistent tracking, enabling reliable estimates of the UAV state even in challenging scenarios. The use of the Inverse Covariance Intersection method and Kalman filters allows for better tracking accuracy and can handle challenging tracking scenarios. We have performed a number of experiments to evaluate the performance of the proposed tracking system and identify its limitations. Our experimental results demonstrate that the proposed method achieves tracking performance comparable to the established baseline method, while also providing more reliable and accurate tracking when only one of the frequencies is available, or when one of them is unreliable.
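The abstract names Inverse Covariance Intersection (ICI) for fusing the estimates from the two scan frequencies. Below is a minimal NumPy sketch of one common ICI formulation (following Noack et al.) for fusing two estimates with unknown cross-correlation; the fixed weight w is a simplification, since in practice w is often chosen to minimize, e.g., the trace of the fused covariance:

```python
import numpy as np

def ici_fuse(x_a, P_a, x_b, P_b, w=0.5):
    """Inverse Covariance Intersection of two estimates (x_a, P_a), (x_b, P_b).

    Handles unknown correlation between the two sources; w in (0, 1)
    trades off how much each source contributes to the common information.
    """
    P_g = w * P_a + (1.0 - w) * P_b          # bound on the common information
    P_g_inv = np.linalg.inv(P_g)
    P_inv = np.linalg.inv(P_a) + np.linalg.inv(P_b) - P_g_inv
    P = np.linalg.inv(P_inv)                 # fused covariance
    K = np.linalg.inv(P_a) - w * P_g_inv     # gain on estimate a
    L = np.linalg.inv(P_b) - (1.0 - w) * P_g_inv  # gain on estimate b
    x = P @ (K @ x_a + L @ x_b)              # fused state
    return x, P
```

Since K + L equals the fused information matrix, two identical inputs fuse to themselves, which is the basic consistency property a tracker relies on when one frequency becomes unreliable.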
UAV Tracking with Lidar-as-a-Camera Sensors in GNSS-Denied Environments
LiDAR has become one of the primary sensors in robotics and autonomous systems for high-accuracy situational awareness. In recent years, multi-modal LiDAR systems have emerged, and among them, LiDAR-as-a-camera sensors provide not only 3D point clouds but also fixed-resolution 360° panoramic images by encoding either depth, reflectivity, or near-infrared light in the image pixels. This potentially brings computer vision capabilities on top of the potential of LiDAR itself. In this paper, we are specifically interested in utilizing LiDARs and LiDAR-generated images for tracking Unmanned Aerial Vehicles (UAVs) in real time, which can benefit applications including docking, remote identification, and counter-UAV systems, among others. This is, to the best of our knowledge, the first work that explores the possibility of fusing the images and point cloud generated by a single LiDAR sensor to track a UAV without an a priori known initial position. We trained a custom YOLOv5 model to detect UAVs based on panoramic images collected in an indoor experiment arena with a MOCAP system. By integrating with the point cloud, we are able to continuously provide the position of the UAV. Our experiments demonstrated the effectiveness of the proposed UAV tracking approach compared with methods based only on point clouds or images. Additionally, we evaluated the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform.
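Because a LiDAR-as-a-camera sensor's panoramic image is pixel-aligned with its organized point cloud, a 2D detection can be lifted directly to a 3D position. The sketch below illustrates this integration step under that alignment assumption; the function and the median-based robustification are our own illustration, not necessarily the paper's exact method:

```python
import numpy as np

def uav_position(points, bbox):
    """Lift a 2D detection to 3D using an organized point cloud.

    points: (H, W, 3) array of XYZ values, pixel-aligned with the panorama
    bbox:   (u_min, v_min, u_max, v_max) detection box in pixel coordinates
    Returns the per-axis median 3D position of valid in-box points, which is
    robust to background pixels that fall inside the box, or None if the box
    contains no valid returns.
    """
    u0, v0, u1, v1 = bbox
    roi = points[v0:v1, u0:u1].reshape(-1, 3)
    # drop NaN/inf points and zero returns (no LiDAR echo for that pixel)
    valid = roi[np.isfinite(roi).all(axis=1) & (np.linalg.norm(roi, axis=1) > 0)]
    if len(valid) == 0:
        return None
    return np.median(valid, axis=0)
```

In a tracking loop, each YOLOv5 detection box would be passed through a function like this to produce the 3D measurement fed to the tracker.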
Biomarker expression in rectal cancer tissue before and after neoadjuvant therapy
Leonora SF Boogerd,1 Maxime JM van der Valk,1 Martin C Boonstra,1 Hendrica AJM Prevoo,1 Denise E Hilling,1 Cornelis JH van de Velde,1 Cornelis FM Sier,1 Arantza Fariña Sarasqueta,2 Alexander L Vahrmeijer1 1Department of Surgery, 2Department of Pathology, Leiden University Medical Center, Leiden, the Netherlands
Purpose: Intraoperative identification of rectal cancer (RC) can be challenging, especially because of fibrosis after treatment with preoperative chemo- and radiotherapy (CRT). Tumor-targeted fluorescence imaging can enhance the contrast between tumor and normal tissue during surgery. Promising targets for RC imaging are carcinoembryonic antigen (CEA), epithelial cell adhesion molecule (EpCAM) and the tyrosine-kinase receptor Met (c-Met). The effect of CRT on their expression determines their applicability for imaging. Therefore, we investigated whether CRT modifies expression patterns in tumors, lymph node (LN) metastases and adjacent normal rectal tissues. Patients and methods: Preoperative biopsies, primary tumor specimens and metastatic LNs were collected from 38 RC patients who did not receive CRT (cohort 1) and 34 patients who did (cohort 2). CEA, EpCAM and c-Met expression was determined using immunohistochemical staining and was semiquantified by a total immunostaining score (TIS), consisting of the percentage and intensity of stained tumor cells (0–12). Results: In both cohorts, CEA, EpCAM and c-Met were expressed at significantly higher levels than in adjacent normal epithelium in >60% of tumor tissues (T/N ratio, P < 0.01). EpCAM showed the most homogenous expression in tumors, whereas CEA showed the highest T/N ratio. Most importantly, CEA and EpCAM expression did not significantly change in normal or neoplastic RC tissue after CRT, whereas levels of c-Met changed (P = 0.02). Tissues of eight patients with a pathological complete response after CRT showed expression of all biomarkers with TIS close to normal epithelium.
Conclusion: Histological evaluation shows that CEA, EpCAM and c-Met are suitable targets for RC imaging, because all three are significantly enhanced in cancer tissue from primary tumors or LN metastases compared with normal adjacent tissue. Furthermore, the expression of CEA and EpCAM is not significantly changed after CRT. These data underscore the applicability of c-Met and, especially, CEA and EpCAM as targets for image-guided RC surgery, both before and after CRT. Keywords: imaging, tumor markers, CEA, EpCAM, c-Met, preoperative chemo- and radiotherapy