178 research outputs found
DRIVE: Data-driven Robot Input Vector Exploration
An accurate motion model is a fundamental component of most autonomous
navigation systems. While much work has been done on improving model
formulation, no standard protocol exists for gathering empirical data required
to train models. In this work, we address this issue by proposing Data-driven
Robot Input Vector Exploration (DRIVE), a protocol for characterizing
uncrewed ground vehicle (UGV) input limits and gathering empirical model
training data. We also propose a novel learned-slip approach that outperforms
similar acceleration-learning approaches. Our contributions are validated
through an extensive experimental evaluation, accumulating over 7 km and 1.8 h of
driving data over three distinct UGVs and four terrain types. We show that our
protocol offers increased predictive performance over common human-driven
data-gathering protocols. Furthermore, our protocol converges with 46 s of
training data, almost four times less than the shortest human data-gathering
protocol. We show that the operational limit for our model is reached in
extreme slip conditions encountered on surfaced ice. DRIVE is an efficient way
of characterizing UGV motion in its operational conditions. Our code and
dataset are both available online at: https://github.com/norlab-ulaval/DRIVE.
Comment: 6 pages, 7 figures, submitted to the 2024 IEEE International Conference
on Robotics and Automation (ICRA 2024).
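The learned-slip idea described above can be illustrated with a minimal sketch: fit a linear map from commanded body velocities to the slip observed by localization, then subtract the predicted slip from future commands. All data, dimensions, and the linear form here are invented simplifications for illustration; the paper's actual model is richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: commanded body velocities [v_x, omega_z] and the
# slip observed by localization (measured velocity minus commanded velocity).
u_cmd = rng.uniform(-1.0, 1.0, size=(200, 2))
true_w = np.array([[0.15, 0.02], [0.05, 0.30]])      # unknown slip dynamics
slip = u_cmd @ true_w.T + 0.01 * rng.standard_normal((200, 2))

# Least-squares fit of a linear slip model: slip ~ u_cmd @ W
W, *_ = np.linalg.lstsq(u_cmd, slip, rcond=None)

def predict_velocity(u_cmd_new):
    """Slip-corrected motion prediction: commanded input minus predicted slip."""
    u = np.asarray(u_cmd_new)
    return u - u @ W
```

Training inputs sampled over the full command space, as DRIVE's input-space exploration suggests, are what make such a fit well-conditioned.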
Improving Inference Speed of Perception Systems in Autonomous Unmanned Ground Vehicles
Autonomous vehicle (AV) development has become one of the largest research challenges for businesses and research institutions. While much research has been done, autonomous driving still requires extensive further research due to its immense, multi-factorial difficulty. Autonomous vehicles rely on many complex systems to function, make accurate decisions, and, above all, provide maximum safety. One of the most crucial components of autonomous driving is the perception system.
The perception system allows the vehicle to identify its surroundings and make accurate, but safe, decisions through the use of computer vision techniques like object detection, image segmentation, and path planning. Due to the recent advances in deep learning, state-of-the-art computer vision algorithms have made exceptional progress. However, for production-ready autonomous driving, these algorithms must be nearly perfect. Furthermore, even though perception systems are a widely researched area, most research focuses on urban environments, and there exists a great need for autonomy in other areas. Specifically, autonomy for unmanned ground vehicles (UGVs) needs to be explored. Autonomous UGVs allow for a wide range of applications like military usage, extreme climate exploration, and rescue missions.
This research aims to investigate bottlenecks within a perception system of autonomous UGVs and methods of improving them. The primary investigation focuses on the inference speed of semantic segmentation using deep learning techniques. Unlike object detection, semantic segmentation provides a much better understanding of the environment by providing pixel-wise classification rather than only creating bounding boxes. However, semantic segmentation comes at a much higher computational cost. Secondly, this thesis looks at reducing the image transfer time from a mounted camera to a video processing unit (which serves the deep learning model) using lossy compression. Due to the nature of lossy compression, we must also understand how lossy compression affects the classification accuracy of semantic segmentation. Finally, the challenges faced and preliminary results from future work relating to temporal consistency are discussed.
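Since the thesis measures how lossy compression degrades segmentation accuracy, the relevant metric is mean intersection-over-union (mIoU) between predictions on original and compressed frames. A minimal, self-contained mIoU computation (toy labels, not the thesis's datasets) might look like:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes, skipping absent classes."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy label maps: e.g. segmentation of an original frame vs a compressed one.
target = np.array([[0, 0, 1], [1, 2, 2]])
pred   = np.array([[0, 0, 1], [1, 2, 1]])
score = mean_iou(pred, target, num_classes=3)
```

Sweeping a compression-quality parameter and plotting this score against it is one straightforward way to quantify the accuracy/bandwidth trade-off the thesis studies.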
Methods for the improvement of power resource prediction and residual range estimation for off-road unmanned ground vehicles
Unmanned Ground Vehicles (UGVs) are becoming more widespread in their
deployment. Advances in technology have improved not only their reliability but also
their ability to perform complex tasks. UGVs are particularly attractive for operations
that are considered unsuitable for human operatives. These include dangerous
operations such as explosive ordnance disarmament, as well as situations where
human access is limited including planetary exploration or search and rescue missions
involving physically small spaces. As technology advances, UGVs are gaining increased
capabilities and commensurately increased complexity, allowing them to participate in
an increasingly wide range of scenarios.
UGVs have limited power reserves that can restrict a UGV's mission duration and also
the range of capabilities that it can deploy. As UGVs tend towards increased
capabilities and complexity, extra burden is placed on the already stretched power
resources. Electric drives and an increasing array of processors, sensors and effectors,
all need sufficient power to operate. Accurate prediction of mission power
requirements is therefore of utmost importance, especially in safety critical scenarios
where the UGV must complete an atomic task or risk the creation of an unsafe
environment due to failure caused by depleted power.
Live energy prediction for vehicles that traverse typical road surfaces is a
well-researched topic. However, this is not sufficient for modern UGVs, as they are required
to traverse a wide variety of terrains that may change considerably with prevailing
environmental conditions. This thesis addresses the gap by presenting a novel
approach to both off and on-line energy prediction that considers the effects of
weather conditions on a wide variety of terrains. The prediction is based upon
non-linear polynomial regression using live sensor data to improve upon the
accuracy provided by current methods.
The new approach is evaluated and compared to existing algorithms using a custom
'UGV mission power' simulation tool. The tool allows the user to test the accuracy of
various mission energy prediction algorithms over specified mission routes that
include a variety of terrains and prevailing weather conditions.
A series of experiments that test and record the 'real-world' power use of a typical
small electric drive UGV are also performed. The tests are conducted for a variety of
terrains and weather conditions and the empirical results are used to validate the
results of the simulation tool.
The new algorithm showed a significant improvement compared with current
methods, allowing UGVs deployed in real-world scenarios, where they must
contend with a variety of terrains and changeable weather conditions, to make
accurate energy-use predictions. This enables more capabilities to be deployed
with a known impact on the remaining mission power requirement, more efficient
mission durations through avoiding the need to maintain excessive estimated
power reserves, and increased safety through reduced risk of aborting atomic
operations in safety-critical scenarios.
As a supplementary contribution, this work created a power resource usage and
prediction test-bed UGV and resulting datasets, as well as a novel simulation
tool for UGV mission energy prediction. The tool implements a UGV model with
accurate power-use characteristics, confirmed by an empirical test series. The
tool can be used to test a wide variety of scenarios and power prediction
algorithms, and could be used for the development of further mission energy
prediction technology or as a mission energy planning tool.
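The on-line prediction described above can be sketched as a polynomial fit refreshed from live sensor data, with residual range derived from the remaining battery energy. The sensor feature, coefficients, and battery figure below are invented for illustration and are not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical live sensor feature (e.g. a windowed wheel-motor current) and
# measured energy use per metre over varying terrain and weather conditions.
x = np.linspace(0.5, 3.0, 30)
y = 4.0 + 2.5 * x + 1.2 * x**2 + 0.05 * rng.standard_normal(30)   # J/m

# Non-linear (here quadratic) polynomial regression, refit as live data arrives.
coeffs = np.polyfit(x, y, deg=2)
predict_energy_per_m = np.poly1d(coeffs)

remaining_energy = 5000.0                                    # J left in battery
residual_range = remaining_energy / predict_energy_per_m(2.0)  # metres at current feature
```

Refitting on a sliding window of recent samples is one way such a model could track the prevailing terrain and weather, in the spirit of the live prediction the thesis proposes.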
GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments
We present a new learning-based method for identifying safe and navigable
regions in off-road terrains and unstructured environments from RGB images. Our
approach consists of classifying groups of terrain classes based on their
navigability levels using coarse-grained semantic segmentation. We propose a
bottleneck transformer-based deep neural network architecture that uses a novel
group-wise attention mechanism to distinguish between navigability levels of
different terrains. Our group-wise attention heads enable the network to
explicitly focus on the different groups and improve the accuracy. In addition,
we propose a dynamic weighted cross entropy loss function to handle the
long-tailed nature of the dataset. We show through extensive evaluations on the
RUGD and RELLIS-3D datasets that our learning algorithm improves the accuracy
of visual perception in off-road terrains for navigation. We compare our
approach with prior work on these datasets and achieve an improvement over the
state-of-the-art mIoU by 6.74-39.1% on RUGD and 3.82-10.64% on RELLIS-3D.
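The weighting idea behind the proposed loss can be sketched as cross-entropy scaled by inverse class frequency, a common remedy for long-tailed label distributions. The paper's loss is additionally dynamic (weights evolve during training), which this static numpy sketch omits; the logits and counts are toy values:

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_counts):
    """Cross-entropy with per-class weights inversely proportional to class
    frequency, so rare classes contribute more to the loss."""
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    # Numerically stable softmax over the class axis.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.mean(weights[labels] * nll))

logits = np.array([[2.0, 0.1, 0.1], [0.2, 1.5, 0.3]])
labels = np.array([0, 1])
counts = np.array([900, 90, 10])   # long-tailed pixel counts per class
loss = weighted_cross_entropy(logits, labels, counts)
```

Here the sample labelled with the rarer class 1 dominates the loss, which is the intended effect on a long-tailed segmentation dataset.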
Reinforcement and Curriculum Learning for Off-Road Navigation of a UGV with a 3D LiDAR
This paper presents the use of deep Reinforcement Learning (RL) for autonomous navigation
of an Unmanned Ground Vehicle (UGV) with an onboard three-dimensional (3D) Light Detection
and Ranging (LiDAR) sensor in off-road environments. For training, both the robotic simulator
Gazebo and the Curriculum Learning paradigm are applied. Furthermore, an Actor-Critic Neural
Network (NN) scheme is chosen with a suitable state and a custom reward function. To employ the
3D LiDAR data as part of the input state of the NNs, a virtual two-dimensional (2D) traversability
scanner is developed. The resulting Actor NN has been successfully tested in both real and simulated
experiments and favorably compared with a previous reactive navigation approach on the same UGV.
Partial funding for open access charge: Universidad de Málaga.
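A virtual 2D traversability scanner of the kind described can be approximated by collapsing the 3D cloud into per-azimuth ranges to the nearest non-traversable point. The step threshold, beam count, and geometry below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def virtual_2d_scan(points, n_beams=36, step_thresh=0.3, max_range=10.0):
    """Collapse a 3D point cloud (N x 3, robot frame) into a 2D scan: for each
    azimuth sector, the range to the nearest point higher than step_thresh
    (treated as non-traversable), or max_range if the sector is clear."""
    x, y, z = points.T
    az = np.arctan2(y, x)
    r = np.hypot(x, y)
    bins = ((az + np.pi) / (2 * np.pi) * n_beams).astype(int) % n_beams
    scan = np.full(n_beams, max_range)
    for b, ri, zi in zip(bins, r, z):
        if zi > step_thresh and ri < scan[b]:
            scan[b] = ri
    return scan

# Toy cloud: flat ground plus a 0.5 m-tall obstacle 2 m ahead (+x direction).
ground = np.array([[1.0, 0.0, 0.0], [0.0, 3.0, 0.05]])
obstacle = np.array([[2.0, 0.0, 0.5]])
scan = virtual_2d_scan(np.vstack([ground, obstacle]))
```

A fixed-size vector like this is convenient as part of the NN input state, since it is far more compact than the raw 3D cloud.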
Assessment of simulated and real-world autonomy performance with small-scale unmanned ground vehicles
Off-road autonomy is a challenging topic that requires robust systems to both understand and navigate complex environments. While on-road autonomy has seen a major expansion in recent years in the consumer space, off-road systems are mostly relegated to niche applications. However, these applications can provide safety and navigation in dangerous areas that are among the best suited for autonomy tasks. Traversability analysis is at the core of many of the algorithms employed in these topics. In this thesis, a Clearpath Robotics Jackal vehicle is equipped with a 3D Ouster laser scanner to define and traverse off-road environments. The Mississippi State University Autonomous Vehicle Simulator (MAVS) and the Navigating All Terrains Using Robotic Exploration (NATURE) autonomy stack are used in conjunction with the small-scale vehicle platform to traverse uneven terrain and collect data. Additionally, the NATURE stack is used as a point of comparison between a MAVS-simulated and a physical Clearpath Robotics Jackal vehicle in testing.
Cooperative heterogeneous robots for an autonomous insect trap monitoring system in a precision agriculture scenario
The recent advances in precision agriculture are due to the emergence of modern robotics systems. For instance, unmanned aerial systems (UASs) give new possibilities that advance the solution of existing problems in this area in many different aspects, owing to these platforms' ability to perform activities at varying levels of complexity. Therefore, this research presents a multiple-cooperative-robot solution in which UAS and unmanned ground vehicle (UGV) systems jointly inspect olive grove insect traps. This work evaluated UAS and UGV vision-based navigation using a yellow fly trap fixed in the trees to provide visual position data via the You Only Look Once (YOLO) algorithms. The experimental setup evaluated the fuzzy control algorithm applied to the UAS to make it reach the trap efficiently. Experimental tests were conducted in a realistic simulation environment using the Robot Operating System (ROS) and CoppeliaSim platforms to verify the methodology's performance, and all tests considered specific real-world environmental conditions. A search-and-landing algorithm based on augmented reality tag (AR-Tag) visual processing was evaluated to allow the return and landing of the UAS on the UGV base. The outcomes obtained in this work demonstrate the robustness and feasibility of the multiple-cooperative-robot architecture for UGVs and UASs applied in the olive inspection scenario.
The authors would like to thank the Foundation for Science and Technology (FCT, Portugal) for financial support through national funds FCT/MCTES (PIDDAC) to CeDRI (UIDB/05757/2020 and UIDP/05757/2020) and SusTEC (LA/P/0007/2021). In addition, the authors would like to thank the following Brazilian agencies: CEFET-RJ, CAPES, CNPq, and FAPERJ.
In addition, the authors also want to thank the Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança (IPB) - Campus de Santa Apolónia, Portugal, Laboratório Associado para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Portugal, INESC Technology and Science - Porto, Portugal, and Universidade de Trás-os-Montes e Alto Douro - Vila Real, Portugal. This work was carried out under the project "OleaChain: Competências para a sustentabilidade e inovação da cadeia de valor do olival tradicional no Norte Interior de Portugal" (NORTE-06-3559-FSE-000188), an operation used to hire highly qualified human resources, funded by NORTE 2020 through the European Social Fund (ESF).
A Review of Radio Frequency Based Localization for Aerial and Ground Robots with 5G Future Perspectives
Efficient localization plays a vital role in many modern applications of
Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), and would
contribute to improved control, safety, power economy, etc. The ubiquitous 5G
NR (New Radio) cellular network will provide new opportunities for enhancing
localization of UAVs and UGVs. In this paper, we review the radio frequency
(RF) based approaches for localization. We review the RF features that can be
utilized for localization and investigate the current methods suitable for
unmanned vehicles under two general categories: range-based and fingerprinting.
The existing state-of-the-art literature on RF-based localization for both UAVs
and UGVs is examined, the envisioned role of 5G NR in localization enhancement
is discussed, and future research directions are explored.
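A minimal example of the range-based category: 2D trilateration from anchor ranges (the anchors standing in for hypothetical 5G base stations) by linearizing the quadratic range equations. This is a textbook sketch, not a method from the review:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Range-based 2D position estimate from >= 3 anchors via linear least
    squares: subtracting the first range equation from the others removes
    the quadratic |p|^2 term, leaving a linear system in the position p."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0), (10, 0), (0, 10)]      # hypothetical base-station positions
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
est = trilaterate(anchors, ranges)
```

With noisy RF-derived ranges (e.g. from time-of-arrival), the same least-squares form simply averages out the errors across additional anchors; fingerprinting methods instead match measured RF features against a pre-surveyed database.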
- …