Simulation of an autonomous vehicle with a vision-based navigation system in unstructured terrains using OctoMap
Design and implementation of autonomous vehicles is a very complex task. An important step in building autonomous navigation systems is to test them first in simulation. We present here a vision-based autonomous navigation approach in unstructured terrains for a car-like vehicle. We modelled the vehicle and the scenario in a realistic physics simulation with the same constraints as a real car and uneven terrain with vegetation. We use stereo vision to build a navigation cost map grid based on a probabilistic occupancy space represented by an OctoMap. Localization is based on GPS and compass readings integrated with wheel odometry. A global plan is computed and continuously updated with the information added to the cost map while the vehicle moves. In our simulations we were able to autonomously navigate the vehicle through obstructed spaces, avoiding collisions and generating feasible trajectories. This system will be validated in the near future using our autonomous vehicle testing platform, CaRINA.
FAPESP - processo #2012/04555-
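The abstract describes collapsing a probabilistic OctoMap occupancy space into a 2D navigation cost map. A minimal sketch of that idea, assuming the occupancy map is available as a dictionary of voxel centers to occupancy probabilities (the representation, resolution, and threshold are illustrative, not the paper's exact implementation):

```python
# Sketch: collapse a 3D probabilistic occupancy map (OctoMap-style) into a
# 2D navigation cost grid. The voxel dictionary, resolution, and threshold
# are illustrative assumptions, not the paper's actual data structures.

def build_cost_map(voxels, resolution=0.2, occ_threshold=0.7):
    """voxels: dict mapping (x, y, z) world coordinates -> occupancy probability."""
    cost_map = {}
    for (x, y, z), p_occ in voxels.items():
        cell = (round(x / resolution), round(y / resolution))
        # A column inherits the cost of its most likely obstacle voxel;
        # confidently occupied voxels saturate the cell cost.
        cost = 1.0 if p_occ >= occ_threshold else p_occ
        cost_map[cell] = max(cost_map.get(cell, 0.0), cost)
    return cost_map
```

A planner can then treat saturated cells as untraversable and use intermediate probabilities as soft penalties, which matches the continuous cost-map updating the abstract describes.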
A flexible hardware-in-the-loop architecture for UAVs
As robotic technology matures, fully autonomous robots become a realistic possibility, but they demand very complex solutions to be rapidly engineered. In order to quickly set up a working autonomous system, and to reduce the gap between simulated and real experiments, we propose a modular, upgradeable, and flexible hardware-in-the-loop (HIL) architecture, which hybridizes simulated and real settings. We take as our use case the autonomous exploration of dense forests with UAVs, with the aim of creating useful maps for forest inspection, cataloging, or computing other
metrics such as total wood volume. As a first step in the development of the full system, in this paper we implement a fraction of this architecture, comprising assisted localization and automatic methods for mapping, planning, and motion execution. Specifically, we are able to simulate the use of a 3D LIDAR mounted below an actual UAV autonomously navigating among simulated obstacles, so that platform safety is not compromised. The full system is modular and takes advantage of components that are either publicly available or easily programmed. We highlight the flexibility of the proposed HIL architecture to rapidly configure different experimental setups with a UAV in challenging terrain. Moreover, it can be extended to other robotic fields without further design. The HIL system uses the multi-platform ROS capabilities and only needs a motion capture system as extra external hardware, which is becoming standard equipment in research labs dealing with mobile robots.
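The core HIL trick here is combining a real pose (from motion capture) with a simulated sensor (the LIDAR ray-cast against virtual obstacles). A minimal 2D sketch of that combination, with an illustrative disc obstacle model and beam layout that are assumptions, not the paper's sensor model:

```python
import math

# Sketch of the hardware-in-the-loop idea: the UAV pose comes from a real
# motion-capture system, while LIDAR returns are ray-cast against purely
# simulated obstacles. The disc obstacles and beam model are illustrative.

def simulate_lidar_scan(pose, obstacles, n_beams=8, max_range=10.0):
    """pose: (x, y, yaw) from motion capture; obstacles: list of (cx, cy, r) discs."""
    x, y, yaw = pose
    ranges = []
    for i in range(n_beams):
        angle = yaw + 2.0 * math.pi * i / n_beams
        dx, dy = math.cos(angle), math.sin(angle)
        hit = max_range
        for cx, cy, r in obstacles:
            # Project the obstacle center onto the beam direction.
            t = (cx - x) * dx + (cy - y) * dy
            if t <= 0:
                continue  # obstacle is behind this beam
            # Squared perpendicular distance from the center to the beam.
            d2 = (cx - x - t * dx) ** 2 + (cy - y - t * dy) ** 2
            if d2 <= r * r:
                hit = min(hit, t - math.sqrt(r * r - d2))
        ranges.append(hit)
    return ranges
```

Because the vehicle never approaches a physical obstacle, collisions in the planner's world cost nothing, which is exactly the safety argument the abstract makes.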
Learning-based Uncertainty-aware Navigation in 3D Off-Road Terrains
This paper presents a safe, efficient, and agile ground vehicle navigation
algorithm for 3D off-road terrain environments. Off-road navigation is subject
to uncertain vehicle-terrain interactions caused by different terrain
conditions on top of 3D terrain topology. Existing works are limited to
overly simplified vehicle-terrain models. The proposed algorithm learns
the terrain-induced uncertainties from driving data and encodes the learned
uncertainty distribution into the traversability cost for path evaluation. The
navigation path is then designed to optimize the uncertainty-aware
traversability cost, resulting in a safe and agile vehicle maneuver. To assure
real-time execution, the algorithm is further implemented within a parallel
computation architecture running on graphics processing units (GPUs).
Comment: 6 pages, 6 figures, submitted to the International Conference on Robotics and Automation (ICRA 2023).
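The abstract's key idea is folding a learned uncertainty distribution into the traversability cost used for path evaluation. A minimal sketch of one common form of such a cost, a mean-plus-deviation penalty per cell; the per-cell statistics, dictionary representation, and weighting `kappa` are illustrative assumptions, not the paper's learned model:

```python
import math

# Sketch of an uncertainty-aware traversability cost: each terrain cell
# carries a learned mean and variance of the vehicle-terrain interaction,
# and a path is penalized both for expected difficulty and for uncertainty.
# The dict-based cells and the weight kappa are illustrative assumptions.

def path_cost(path, mean, var, kappa=2.0):
    """path: list of cell ids; mean/var: per-cell learned statistics."""
    total = 0.0
    for cell in path:
        # Higher kappa trades agility for safety by avoiding uncertain cells.
        total += mean[cell] + kappa * math.sqrt(var[cell])
    return total
```

A planner then picks the candidate path minimizing this total, so uncertain terrain is avoided even when its expected cost looks low.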
Simulation Framework for Mobile Robots in Planetary-Like Environments
In this paper we present a simulation framework for the evaluation of the
navigation and localization metrological performances of a robotic platform.
The simulator, based on ROS (Robot Operating System) and Gazebo, is targeted at a
planetary-like research vehicle and allows testing various perception and
navigation approaches under specific environment conditions. The possibility of
simulating arbitrary sensor setups comprising cameras, LiDARs (Light Detection
and Ranging) and IMUs makes Gazebo an excellent resource for rapid prototyping.
In this work we evaluate a variety of open-source visual and LiDAR SLAM
(Simultaneous Localization and Mapping) algorithms in a simulated Martian
environment. Datasets are captured by driving the rover and recording sensors
outputs as well as the ground truth for a precise performance evaluation.
Comment: To be presented at the 7th IEEE International Workshop on Metrology for Aerospace (MetroAerospace).
Traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data
Scene perception and traversability analysis are real challenges for autonomous driving systems. In the context of off-road autonomy, there are additional challenges due to the unstructured environments and the presence of various vegetation types. Autonomous Ground Vehicles (AGVs) must be able to identify obstacles and load-bearing surfaces in the terrain to ensure safe navigation (McDaniel et al. 2012). The presence of vegetation in off-road autonomy applications presents unique challenges for scene understanding: 1) understory vegetation makes it difficult to detect obstacles or to identify load-bearing surfaces; and 2) trees are usually regarded as obstacles even though only their trunks pose a collision risk in navigation. The overarching goal of this dissertation was to study traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data. More specifically, to address the aforementioned challenges, this dissertation studied the impact of understory vegetation density on the solid-obstacle detection performance of off-road autonomous systems. Leveraging a physics-based autonomous driving simulator, a classification-based machine learning framework was proposed for obstacle detection based on point cloud data captured by LIDAR. Features were extracted using a cumulative approach, meaning that the information related to each feature was updated at each time step as new data were collected by the LIDAR. It was concluded that an increase in the density of understory vegetation adversely affected the classification performance in correctly detecting solid obstacles. Additionally, a regression-based framework was proposed for estimating understory vegetation density for safe path planning, in which the traversability risk level was regarded as a function of the estimated density.
Thus, the higher the predicted vegetation density of an area, the higher the risk of collision if the AGV traversed that area. Finally, for the trees in the terrain, the dissertation investigated statistical features that can be used in machine learning algorithms to differentiate trees from solid obstacles in forested off-road scenes. Using the proposed features, the classification algorithm was able to generate high-precision results for differentiating trees from solid obstacles. Such differentiation can lead to more optimized path planning in off-road applications.
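The cumulative feature extraction described above, updating per-cell statistics each time a new LIDAR frame arrives instead of recomputing from the full point history, can be sketched as follows. The cell indexing and the particular statistics (running mean and maximum height) are illustrative assumptions, not the dissertation's exact feature set:

```python
# Sketch of cumulative feature extraction: per-cell statistics are updated
# incrementally with each new LIDAR frame rather than recomputed from all
# past points. The cell grid and chosen statistics are illustrative.

class CellStats:
    def __init__(self):
        self.n = 0
        self.mean_z = 0.0
        self.max_z = float("-inf")

    def update(self, z):
        self.n += 1
        self.mean_z += (z - self.mean_z) / self.n  # incremental running mean
        self.max_z = max(self.max_z, z)

def accumulate(frames, cell_size=0.5):
    """frames: iterable of LIDAR frames, each a list of (x, y, z) points."""
    cells = {}
    for frame in frames:  # one frame per LIDAR time step
        for x, y, z in frame:
            key = (int(x // cell_size), int(y // cell_size))
            cells.setdefault(key, CellStats()).update(z)
    return cells
```

Each incoming frame costs time proportional only to its own point count, which is what makes the cumulative approach suitable for online obstacle classification.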
EVORA: Deep Evidential Traversability Learning for Risk-Aware Off-Road Autonomy
Traversing terrain with good traction is crucial for achieving fast off-road
navigation. Instead of manually designing costs based on terrain features,
existing methods learn terrain properties directly from data via
self-supervision, but challenges remain to properly quantify and mitigate risks
due to uncertainties in learned models. This work efficiently quantifies both
aleatoric and epistemic uncertainties by learning discrete traction
distributions and probability densities of the traction predictor's latent
features. Leveraging evidential deep learning, we parameterize Dirichlet
distributions with the network outputs and propose a novel uncertainty-aware
squared Earth Mover's distance loss with a closed-form expression that improves
learning accuracy and navigation performance. The proposed risk-aware planner
simulates state trajectories with the worst-case expected traction to handle
aleatoric uncertainty, and penalizes trajectories moving through terrain with
high epistemic uncertainty. Our approach is extensively validated in simulation
and on wheeled and quadruped robots, showing improved navigation performance
compared to methods that assume no slip, assume the expected traction, or
optimize for the worst-case expected cost.
Comment: Under review. Journal extension of arXiv:2210.00153. Project website: https://xiaoyi-cai.github.io/evora
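The squared Earth Mover's distance between two discrete distributions over ordered bins has a well-known closed form as the sum of squared CDF differences, which is the kind of expression that makes such a loss cheap to compute during training. A minimal sketch of that identity (the uniform binning is an assumption; this is the standard 1D identity, not EVORA's exact uncertainty-aware variant):

```python
# Sketch: closed-form squared Earth Mover's distance between two discrete
# distributions over the same ordered bins (e.g. traction histograms).
# For 1D histograms with unit bin spacing, EMD^2 reduces to the sum of
# squared running CDF differences. Binning here is illustrative.

def squared_emd(p, q):
    """p, q: sequences of probabilities over identical ordered bins."""
    cdf_diff = 0.0
    total = 0.0
    for pi, qi in zip(p, q):
        cdf_diff += pi - qi  # running difference of the two CDFs
        total += cdf_diff ** 2
    return total
```

Unlike a per-bin cross-entropy, this loss grows with how far mass must move across bins, so predicting slightly wrong traction is penalized less than predicting wildly wrong traction.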
Contributions to Intelligent Scene Understanding of Unstructured Environments from 3D lidar sensors
Thesis abstract:
3D lidar sensors are a key technology for navigation, localization, mapping, and scene understanding in unmanned vehicles and mobile robots. This technology, which provides dense point clouds, can be especially well suited to new applications in natural or unstructured environments, such as search and rescue, planetary exploration, agriculture, or off-road exploration. This is a challenging research area that spans disciplines ranging from sensor design to artificial intelligence and machine learning. In this context, this thesis proposes contributions to intelligent scene understanding of unstructured environments based on ground-level 3D range measurements. Specifically, the main contributions include new methodologies for spatial feature classification, object segmentation, and traversability assessment in natural and urban environments, as well as the design and development of a new rotating multi-beam lidar (MBL).
Spatial feature classification is highly relevant because it is widely required as a fundamental step preceding high-level scene understanding problems. The thesis contributions in this respect aim to improve the efficiency, in both computational load and accuracy, of supervised-learning classification of spatial shape features (tubular, planar, or scattered) obtained through principal component analysis (PCA). This has been achieved by proposing an efficient voxel-based neighborhood concept in an original contribution that defines the offline learning and online classification procedures, together with five alternative definitions of PCA-based feature vectors. Furthermore, the feasibility of this approach is evaluated through the implementation of four types of supervised-learning classifiers found in scene processing methods: neural networks, support vector machines, Gaussian processes, and Gaussian mixture models.
Object segmentation is a further step toward scene understanding, in which sets of 3D points corresponding to the ground and other scene objects are isolated. The thesis proposes new contributions to point cloud segmentation based on geometrically characterized voxel maps. Specifically, the proposed methodology consists of two steps: first, a ground segmentation especially designed for natural environments; and second, the subsequent isolation of individual objects. Moreover, the ground segmentation method is integrated into a new occupancy-grid traversability map technique that can be suitable for mobile robots in natural environments.
The design and development of a new, affordable, high-resolution 3D lidar sensor is also proposed in the thesis. New MBLs, such as those developed by Velodyne, are becoming an increasingly affordable and popular type of 3D sensor, offering a high data rate within a limited vertical field of view (FOV). The proposed design consists of a rotating platform that improves the resolution and vertical FOV of a 16-beam Velodyne VLP-16. Furthermore, the complex scan patterns produced by rotating MBL configurations are analyzed in both hollow-sphere simulations and real scans of representative environments.
Thesis defense date: July 11, 2018. Systems Engineering and Automation.
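The PCA-based spatial shape features mentioned above (tubular, planar, scattered) are classically derived from the sorted eigenvalues of a neighborhood's covariance matrix. A minimal sketch of those descriptors; the exact feature-vector definitions in the thesis differ (it proposes five alternatives), so this shows only the standard eigenvalue-based form:

```python
import numpy as np

# Sketch: classic PCA shape descriptors for a voxel neighborhood. With
# covariance eigenvalues l1 >= l2 >= l3, linearity/planarity/sphericity
# indicate tubular, planar, or scattered structure. This is the standard
# formulation, not the thesis's five specific feature-vector definitions.

def shape_features(points):
    """points: (N, 3) array-like of 3D points in one voxel neighborhood."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return {
        "linearity": (l1 - l2) / l1,   # close to 1 for tubular shapes
        "planarity": (l2 - l3) / l1,   # close to 1 for planar patches
        "sphericity": l3 / l1,         # close to 1 for scattered points
    }
```

Computing these per voxel rather than per point is what makes the voxel-based neighborhood concept efficient: each covariance is estimated once and shared by all points in the cell.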
Present and Future of SLAM in Extreme Underground Environments
This paper reports on the state of the art in underground SLAM by discussing
different SLAM strategies and results across six teams that participated in the
three-year-long SubT competition. In particular, the paper has four main goals.
First, we review the algorithms, architectures, and systems adopted by the
teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to
approach for virtually all teams in the competition), heterogeneous multi-robot
operation (including both aerial and ground robots), and real-world underground
operation (from the presence of obscurants to the need to handle tight
computational constraints). We do not shy away from discussing the dirty
details behind the different SubT SLAM systems, which are often omitted from
technical papers. Second, we discuss the maturity of the field by highlighting
what is possible with the current SLAM systems and what we believe is within
reach with some good systems engineering. Third, we outline what we believe are
fundamental open problems that are likely to require further research to break
through. Finally, we provide a list of open-source SLAM implementations and
datasets that have been produced during the SubT challenge and related efforts,
and constitute a useful resource for researchers and practitioners.
Comment: 21 pages including references. This survey paper is submitted to IEEE Transactions on Robotics for pre-approval.