Four years of multi-modal odometry and mapping on the rail vehicles
Precise, seamless, and efficient train localization as well as long-term
railway environment monitoring are essential for the reliability,
availability, maintainability, and safety (RAMS) engineering of railroad
systems. Simultaneous localization and mapping (SLAM) is right at the core
of solving the two problems concurrently. To this end, we propose in this
paper a high-performance and versatile multi-modal framework targeted at
the odometry and mapping task for various rail vehicles. Our system is
built atop an inertial-centric state estimator that tightly couples light
detection and ranging (LiDAR), visual, and optionally satellite navigation
and map-based localization information, while retaining the convenience and
extendibility of loosely coupled methods. The inertial sensors (IMU and
wheel encoder) are treated as the primary sensors, and observations from
the subsystems are used to constrain the accelerometer and gyroscope
biases. Compared to point-only LiDAR-inertial methods, our approach
leverages more geometric information by introducing both the track plane
and electric power pillars into state estimation. The visual-inertial
subsystem also exploits environmental structure by employing both lines and
points. Besides, the method is capable of handling sensor failures through
automatic reconfiguration that bypasses the failed modules. Our proposed
method has been extensively tested in railway environments over four years,
covering general-speed, high-speed, and metro lines, with both passenger
and freight traffic investigated. Further, we aim to share openly the
experience, problems, and successes of our group with the robotics
community, so that those who work in such environments can avoid similar
errors. To this end, we open-source some of our datasets to benefit the
research community.
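The inertial-centric, loosely coupled design described above can be illustrated with a toy example (not the authors' implementation): a two-state Kalman filter in which the gyroscope drives the prediction step, while heading observations from a subsystem such as LiDAR odometry constrain the gyroscope bias. All names, dimensions, and noise parameters here are illustrative assumptions.

```python
import numpy as np

def fuse_heading(gyro_rates, headings, dt=0.01, q=1e-4, r=1e-2):
    """Toy 2-state Kalman filter: state = [heading, gyro_bias].

    The gyroscope is the primary (prediction) sensor; heading fixes
    from a subsystem (e.g. LiDAR odometry) constrain the bias,
    mirroring the "observations constrain the IMU biases" idea.
    """
    x = np.zeros(2)                          # [heading, bias]
    P = np.eye(2)                            # state covariance
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # heading += dt*(rate - bias)
    Q = q * np.eye(2)                        # process noise
    H = np.array([[1.0, 0.0]])               # subsystem observes heading
    for w, z in zip(gyro_rates, headings):
        # Predict: integrate the bias-corrected gyro rate.
        x = F @ x + np.array([dt * w, 0.0])
        P = F @ P @ F.T + Q
        # Update with the subsystem heading observation.
        y = z - H @ x
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x  # estimated [heading, gyro_bias]
```

With a constant true rate and a constant gyro bias, the filter recovers the bias purely from the heading fixes, which is the essence of using subsystem observations to constrain inertial biases.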
Hyperspectral Imaging from Ground Based Mobile Platforms and Applications in Precision Agriculture
This thesis focuses on the use of line scanning hyperspectral sensors on mobile ground based platforms and applying them to agricultural applications. First, this work deals with the geometric and radiometric calibration and correction of acquired hyperspectral data. When operating at low altitudes, changing lighting conditions are common and inevitable, complicating the retrieval of a surface's reflectance, which is solely a function of its physical structure and chemical composition. Therefore, this thesis contributes the evaluation of an approach to compensate for changes in illumination and obtain reflectance that is less labour-intensive than traditional empirical methods. Convenient field protocols are produced that only require a representative set of illumination and reflectance spectral samples. In addition, a method for determining a line scanning camera's rigid 6 degree of freedom (DOF) offset and uncertainty with respect to a navigation system is developed, enabling accurate georegistration and sensor fusion. The thesis then applies the data captured from the platform to two different agricultural applications. The first is a self-supervised weed detection framework that allows training of a per-pixel classifier using hyperspectral data without manual labelling. The experiments support the effectiveness of the framework, rivalling classifiers trained on hand labelled training data. Then the thesis demonstrates the mapping of mango maturity using hyperspectral data on an orchard-wide scale using efficient image scanning techniques, a world-first result. A novel classification, regression and mapping pipeline is proposed to generate per-tree mango maturity averages. The results confirm that maturity prediction in mango orchards is possible in natural daylight using a hyperspectral camera, despite complex micro-illumination climates under the canopy.
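The thesis's own field protocols are not reproduced here, but the standard flat-field (white-reference) correction on which such reflectance retrieval builds can be sketched as follows; the array shapes and names are assumptions for illustration.

```python
import numpy as np

def to_reflectance(radiance, white_ref, dark_ref):
    """Flat-field correction: convert raw radiance to reflectance
    using a white reference panel and a dark (shutter-closed) frame.

    radiance:  (pixels, bands) raw line-scan samples
    white_ref: (bands,) mean spectrum of the reference panel
    dark_ref:  (bands,) mean dark-current spectrum
    """
    # Guard against division by zero in dead or saturated bands.
    denom = np.clip(white_ref - dark_ref, 1e-9, None)
    return (radiance - dark_ref) / denom
```

Under changing illumination, the white reference must be re-sampled (or modelled, as the thesis evaluates) so that `white_ref` tracks the current lighting rather than a single lab calibration.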
Cost-effective robot for steep slope crops monitoring
This project aims to develop a low-cost, simple, and robust robot able to autonomously monitor crops using simple sensors. It will be required to develop robotic sub-systems and integrate them with pre-selected mechanical components, electrical interfaces, and robot systems (localization, navigation, and perception) using ROS, for wine-making regions and maize fields.
Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera
The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis delves into LiDAR's advancements in autonomous robotic systems, with a focus on its role in simultaneous localization and mapping (SLAM) methodologies and LiDAR-as-a-camera-based tracking of unmanned aerial vehicles (UAVs).
Our contributions span two primary domains: the Multi-Modal LiDAR SLAM Benchmark, and LiDAR-as-a-camera UAV tracking. In the former, we have expanded our previous multi-modal LiDAR dataset by adding more data sequences from various scenarios. In contrast to the previous dataset, we employ different ground-truth-generating approaches. We propose a new multi-modal, multi-LiDAR, SLAM-assisted, ICP-based sensor fusion method for generating ground truth maps. Additionally, we supplement our data with new open road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations, pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point cloud or image-only methods. Additionally, we evaluated the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform.
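As an illustration of the ICP-based alignment underlying such ground-truth map generation (a from-scratch sketch under simplifying assumptions, not the authors' pipeline), a minimal point-to-point ICP with a Kabsch closed-form solver looks like this:

```python
import numpy as np

def icp_point_to_point(src, dst, iters=50):
    """Minimal point-to-point ICP (nearest neighbour + Kabsch).

    src, dst: (N, 3) and (M, 3) point clouds; returns (R, t) such
    that src @ R.T + t approximately aligns src onto dst.
    """
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small clouds;
        # real pipelines use a k-d tree).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # Kabsch: optimal rigid transform between matched pairs.
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T          # reflection-safe rotation
        dt = mu_d - dR @ mu_s
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt   # accumulate the total transform
    return R, t
```

The SLAM-assisted variant described above would seed this refinement with a SLAM pose prior instead of an identity initial guess.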
Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
Point Cloud Registration for LiDAR and Photogrammetric Data: a Critical Synthesis and Performance Analysis on Classic and Deep Learning Algorithms
Recent advances in computer vision and deep learning have shown promising
performance in estimating the rigid/similarity transformation between
unregistered point clouds of complex objects and scenes. However, their
performance is mostly evaluated using a limited number of datasets from a
single sensor (e.g. Kinect or RealSense cameras), lacking a comprehensive
overview of their applicability in photogrammetric 3D mapping scenarios. In
this work, we provide a comprehensive review of state-of-the-art (SOTA)
point cloud registration methods, analyzing and evaluating them using a
diverse set of point cloud data ranging from indoor to satellite sources.
The quantitative analysis allows for exploring the strengths,
applicability, challenges, and future trends of these methods. In contrast
to existing analyses that treat point cloud registration as a holistic
process, our experimental analysis follows its inherent two-step nature to
better comprehend these approaches: feature/keypoint-based initial coarse
registration, followed by dense fine registration through cloud-to-cloud
(C2C) optimization. More than ten methods, including classic hand-crafted,
deep-learning-based feature correspondence, and robust C2C methods, were
tested. We observed that the success rate of most of the algorithms is
below 40% over the datasets we tested, and that there is still a large
margin for improvement upon existing algorithms concerning 3D sparse
correspondence search and the ability to register point clouds with complex
geometry and occlusions. With the evaluated statistics on three datasets,
we identify the best-performing methods for each step, provide our
recommendations, and outline future efforts.
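A common way to compute such registration success rates is to threshold rotation and translation errors against the ground-truth transform. The paper's exact criteria are not stated here; the 5 deg / 0.3 m thresholds below are illustrative assumptions.

```python
import numpy as np

def registration_errors(R_est, t_est, R_gt, t_gt):
    """Rotation error (degrees) and translation error between an
    estimated and a ground-truth rigid transform."""
    dR = R_est.T @ R_gt
    # Angle of the residual rotation, clipped for numerical safety.
    cos = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos))
    trans_err = np.linalg.norm(t_est - t_gt)
    return rot_err_deg, trans_err

def success_rate(results, rot_thresh_deg=5.0, trans_thresh=0.3):
    """Fraction of registrations within both error thresholds.

    results: iterable of (R_est, t_est, R_gt, t_gt) tuples.
    """
    errs = [registration_errors(*r) for r in results]
    hits = [r <= rot_thresh_deg and t <= trans_thresh for r, t in errs]
    return sum(hits) / len(hits)
```

Applied per dataset, this yields exactly the kind of per-method success percentages reported above (e.g. a method failing more than 60% of pairs scores below 40%).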