Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation
Autonomous harvesting and transportation is a long-term goal of the forest
industry. One of the main challenges is the accurate localization of both
vehicles and trees in a forest. Forests are unstructured environments where it
is difficult to find a group of significant landmarks for current fast
feature-based place recognition algorithms. This paper proposes a novel
approach in which local observations are matched to a general tree map using
Delaunay triangulation as the representation format. Instead of point-cloud-based
matching methods, we utilize a topology-based method. First, tree trunk
positions are registered during a prior run by a forest harvester. Second, the
resulting map is Delaunay triangulated. Third, a local submap of the
autonomous robot is registered, triangulated, and matched using
triangle-similarity maximization to estimate the position of the robot. We test our
method on a dataset collected at a forestry site in Lieksa, Finland. A
total of 2100 m of harvester path was recorded by an industrial
harvester with a 3D laser scanner and a geolocation unit fixed to the frame.
Our experiments show a localization accuracy of 12 cm (standard deviation) with
real-time data processing at speeds of up to 0.5 m/s. Both the accuracy and the
speed limit are realistic for forest operations.
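To make the triangulation step concrete, below is a minimal Python sketch of how a trunk map can be Delaunay triangulated and submap triangles matched by near-congruence. The side-length signature and the tolerance in match_triangles are illustrative assumptions, not the paper's exact similarity-maximization criterion.

```python
# Minimal sketch: Delaunay-triangulate 2D tree trunk positions and match
# submap triangles to map triangles by near-congruence. The side-length
# signature and tolerance are illustrative assumptions, not the paper's
# exact triangle-similarity-maximization criterion.
import numpy as np
from scipy.spatial import Delaunay

def triangle_signatures(points: np.ndarray) -> np.ndarray:
    """Sorted side lengths of every Delaunay triangle over (N, 2) points."""
    tri = Delaunay(points)
    sigs = []
    for simplex in tri.simplices:
        p = points[simplex]  # the triangle's 3 vertices
        sides = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
        sigs.append(np.sort(sides))
    return np.array(sigs)

def match_triangles(map_pts, submap_pts, tol=0.2):
    """Pair each submap triangle with the closest map triangle, if any."""
    map_sigs = triangle_signatures(map_pts)
    sub_sigs = triangle_signatures(submap_pts)
    matches = []
    for i, sig in enumerate(sub_sigs):
        d = np.linalg.norm(map_sigs - sig, axis=1)
        j = int(np.argmin(d))
        if d[j] < tol:  # accept near-congruent triangles only
            matches.append((i, j))
    return matches
```

Matched triangle pairs yield trunk correspondences, from which the robot pose can then be estimated, e.g., as a least-squares rigid transform.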
Vision-based Safe Autonomous UAV Docking with Panoramic Sensors
The remarkable growth of unmanned aerial vehicles (UAVs) has also sparked
concerns about safety measures during their missions. To advance towards safer
autonomous aerial robots, this work presents a vision-based solution for
ensuring safe autonomous UAV landings with minimal infrastructure. During
docking maneuvers, UAVs pose a hazard to people in the vicinity. In this paper,
we propose the use of a single omnidirectional panoramic camera pointing
upwards from a landing pad to detect and estimate the position of people around
the landing area. The images are processed in real time on an embedded
computer, which communicates with the onboard computer of approaching UAVs to
transition between landing, hovering, and emergency landing states. While
landing, the ground camera also aids in finding an optimal position, which can
be required in case of low battery or when hovering is no longer possible. We
use a YOLOv7-based object detection model and an XGBoost model for localizing
nearby people, and the open-source ROS and PX4 frameworks for communication,
interfacing, and control of the UAV. We present both simulation and real-world
indoor experimental results to show the efficiency of our methods.
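A minimal sketch of the ground-side safety logic is given below, assuming detector output already parsed into bounding boxes and an XGBoost regressor trained to map a box to the person's distance from the pad center. The feature layout, safety radius, and state names are illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch of the ground camera's safety logic: people detected
# in the panoramic image are localized by a trained XGBoost regressor, and
# the closest predicted distance selects the UAV state. The box features,
# safety radius, and state names are assumptions for illustration.
import numpy as np
import xgboost as xgb

SAFE_RADIUS_M = 3.0  # assumed keep-out radius around the landing pad

def person_distances(detections, regressor: xgb.XGBRegressor) -> np.ndarray:
    """Predict each person's distance (m) from the pad center given
    [x, y, w, h] bounding boxes in panoramic image coordinates."""
    boxes = np.array([d["box"] for d in detections], dtype=np.float32)
    return regressor.predict(boxes)

def uav_state(detections, regressor) -> str:
    """Choose the state command sent to the approaching UAV."""
    if not detections:
        return "LAND"  # area clear
    closest = float(person_distances(detections, regressor).min())
    return "LAND" if closest > SAFE_RADIUS_M else "HOVER"
```

In the system described, a command like this would be relayed to the UAV's onboard computer over ROS and acted on through PX4.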
LiDAR-Generated Images Derived Keypoints Assisted Point Cloud Registration Scheme in Odometry Estimation
Keypoint detection and description play a pivotal role in various robotics
and autonomous applications including visual odometry (VO), visual navigation,
and simultaneous localization and mapping (SLAM). While a myriad of keypoint
detectors and descriptors have been extensively studied for conventional camera
images, the effectiveness of these techniques on LiDAR-generated
images, i.e., reflectivity and range images, has not been assessed. These
images have gained attention due to their resilience in adverse conditions such
as rain or fog. Additionally, they contain significant textural information
that supplements the geometric information provided by LiDAR point clouds in
the point cloud registration phase, especially when reliant solely on LiDAR
sensors. This helps address the drift encountered in LiDAR odometry
(LO) in geometrically identical scenarios, or in cases where not all of the raw
point cloud is informative and parts of it may even be misleading. This paper
analyzes the applicability of conventional image keypoint extractors and
descriptors to LiDAR-generated images via a comprehensive quantitative investigation.
Moreover, we propose a novel approach to enhance the robustness and reliability
of LO. After extracting keypoints, we downsample the point cloud and integrate
it into the point cloud registration phase for odometry estimation. Our
experiments demonstrate that the proposed approach achieves comparable accuracy
with reduced computational overhead and a higher odometry publishing rate, and
even outperforms the use of the raw point cloud in scenarios prone to
drift. This, in turn, lays a foundation for
subsequent investigations into the integration of LiDAR-generated images with
LO. Our code is available on GitHub:
https://github.com/TIERS/ws-lidar-as-camera-odom
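As a sketch of the proposed downsampling idea, the snippet below detects conventional image keypoints (ORB here, as one of many possible extractors) on a LiDAR reflectivity image and keeps only the 3D points at those pixels. The organized-cloud layout is an assumption; the repository above contains the authors' actual implementation.

```python
# Sketch: extract conventional image keypoints (ORB) from a LiDAR
# reflectivity image, then keep only the 3D points at those pixels,
# producing a downsampled cloud for registration. Assumes an organized
# cloud where cloud[v, u] holds the XYZ point for image pixel (u, v).
import cv2
import numpy as np

def keypoint_downsample(reflectivity_img: np.ndarray,
                        cloud: np.ndarray,
                        n_features: int = 1000) -> np.ndarray:
    """reflectivity_img: HxW uint8; cloud: HxWx3 float32 (organized)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints = orb.detect(reflectivity_img, None)
    pts = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        xyz = cloud[v, u]
        if np.isfinite(xyz).all() and np.any(xyz != 0):  # skip invalid returns
            pts.append(xyz)
    return np.asarray(pts, dtype=np.float32)
```

The reduced cloud can then replace the full scan in a standard registration method such as ICP.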
Simulation Analysis of Exploration Strategies and UAV Planning for Search and Rescue
Aerial scans with unmanned aerial vehicles (UAVs) are becoming more widely
adopted across industries, from smart farming to urban mapping. An application
area that can leverage the strength of such systems is search and rescue (SAR)
operations. However, given the vast variability in strategies and in the
topology of application scenarios, as well as the difficulty of setting up
real-world UAV-aided SAR operations for testing, designing an optimal flight
pattern to search for and detect all victims is a challenging problem. Specifically,
the deployed UAV should be able to scan the area in the shortest amount of time
while maintaining high victim detection recall. Therefore, a low
probability of false negatives (i.e., high recall) is more important than
precision in this case. To address the issues mentioned above, we have
developed a simulation environment that emulates different SAR scenarios and
allows experimentation with flight missions to provide insight into their
efficiency. The solution was developed with the open-source ROS framework and
Gazebo simulator, with PX4 as the autopilot system for flight control, and YOLO
as the object detector.
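As one example of a flight pattern such a simulation can evaluate, the sketch below generates a boustrophedon ("lawnmower") sweep over a rectangular search area. The footprint-based track spacing and overlap are illustrative assumptions rather than the study's tested missions.

```python
# Sketch of a boustrophedon ("lawnmower") sweep over a rectangular search
# area, a common baseline pattern in UAV-aided SAR. Track spacing derived
# from camera footprint and overlap is an assumption for illustration.
import numpy as np

def lawnmower_waypoints(width_m, height_m, footprint_m, overlap=0.2):
    """Return (N, 2) waypoints covering a width x height rectangle."""
    spacing = footprint_m * (1.0 - overlap)  # distance between sweep lines
    ys = np.arange(footprint_m / 2, height_m, spacing)
    wps = []
    for i, y in enumerate(ys):
        xs = (0.0, width_m) if i % 2 == 0 else (width_m, 0.0)  # alternate direction
        wps.append((xs[0], y))
        wps.append((xs[1], y))
    return np.array(wps)

# Example: 100 m x 60 m area, 15 m camera footprint, 20% overlap
print(lawnmower_waypoints(100, 60, 15))
```

Tighter spacing raises detection recall at the cost of flight time, which is exactly the trade-off such a simulation is meant to quantify.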
- …