A Near-to-Far Learning Framework for Terrain Characterization Using an Aerial/Ground-Vehicle Team
In this thesis, a novel framework for adaptive terrain characterization of untraversed far terrain in natural outdoor settings is presented. The system learns the association between the visual appearance of different terrains and their proprioceptive characteristics in a self-supervised manner. The proprioceptive characteristics are acquired from inertial sensors recording one-second traversals; the measurements are mapped into the frequency domain and classified into discrete proprioceptive classes by a clustering technique. These class labels then serve as training inputs to the adaptive visual classifier. The visual classifier uses images captured by an aerial vehicle scouting ahead of the ground vehicle and extracts local and global descriptors from image patches. An incremental SVM is trained on the images and their training labels as they arrive sequentially. The framework is experimentally validated in an outdoor environment. Compared with an offline a priori classification approach, the adaptive approach yields an average 12% increase in accuracy in outdoor settings. The adaptive classifier gradually learns the association between proprioceptive characteristics and visual features of newly encountered terrain and updates its decision boundaries accordingly.
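To make the self-supervised near-to-far loop concrete, here is a minimal, hypothetical sketch (not the thesis implementation): inertial frequency-domain features from one-second traversals are clustered into proprioceptive classes, and those cluster labels incrementally train a linear-SVM-style visual classifier. All feature shapes and the synthetic data are assumptions.

```python
# Hypothetical sketch of the near-to-far self-supervised loop described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import SGDClassifier  # hinge loss ~ linear SVM, supports partial_fit

def inertial_features(accel_window):
    """Map a 1 s, 3-axis acceleration window to frequency-domain features (magnitude spectrum)."""
    return np.abs(np.fft.rfft(accel_window, axis=0)).flatten()

rng = np.random.default_rng(0)

# --- Proprioceptive side: derive discrete terrain labels from traversal windows ---
windows = rng.normal(size=(200, 100, 3))             # 200 one-second traversals (synthetic)
prop_feats = np.array([inertial_features(w) for w in windows])
prop_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(prop_feats)

# --- Visual side: incrementally train the classifier as image patches arrive ---
visual_clf = SGDClassifier(loss="hinge", alpha=1e-4)
classes = np.unique(prop_labels)
for start in range(0, len(prop_labels), 20):
    patch_descriptors = rng.normal(size=(20, 64))    # stand-in for local+global patch descriptors
    visual_clf.partial_fit(patch_descriptors, prop_labels[start:start + 20], classes=classes)

# The incrementally trained classifier can now label far-terrain patches before traversal.
```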
Assessment of simulated and real-world autonomy performance with small-scale unmanned ground vehicles
Off-road autonomy is a challenging topic that requires robust systems to both understand and navigate complex environments. While on-road autonomy has seen a major expansion in the consumer space in recent years, off-road systems are mostly relegated to niche applications. However, those applications, such as providing safe navigation into dangerous areas, are among the best suited for autonomy. Traversability analysis is at the core of many of the algorithms employed for these tasks. In this thesis, a Clearpath Robotics Jackal vehicle is equipped with a 3D Ouster laser scanner to characterize and traverse off-road environments. The Mississippi State University Autonomous Vehicle Simulator (MAVS) and the Navigating All Terrains Using Robotic Exploration (NATURE) autonomy stack are used in conjunction with the small-scale vehicle platform to traverse uneven terrain and collect data. Additionally, the NATURE stack is used to compare a MAVS-simulated and a physical Clearpath Robotics Jackal vehicle in testing.
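For illustration only (this is not the NATURE stack's actual algorithm), a grid-based traversability estimate from a 3D LiDAR point cloud can be sketched as follows: per-cell height spread serves as a crude roughness/step proxy. The cell size and step threshold are assumptions.

```python
# Minimal slope/step-based traversability grid from a LiDAR point cloud (illustrative sketch).
import numpy as np

def traversability_grid(points, cell=0.25, max_step=0.15):
    """points: (N, 3) in the vehicle frame. Returns cell centers and a boolean traversable flag."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        cells.setdefault((i, j), []).append(z)
    centers, ok = [], []
    for (i, j), zs in cells.items():
        zs = np.asarray(zs)
        centers.append(((i + 0.5) * cell, (j + 0.5) * cell))
        ok.append((zs.max() - zs.min()) < max_step)   # large height spread -> step or obstacle
    return np.array(centers), np.array(ok)

# Example on synthetic data: flat ground plus a box-shaped obstacle.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 10, 5000), rng.uniform(-5, 5, 5000), rng.normal(0, 0.02, 5000)])
box = np.column_stack([rng.uniform(4, 5, 500), rng.uniform(-1, 0, 500), rng.uniform(0, 0.5, 500)])
centers, ok = traversability_grid(np.vstack([ground, box]))
print(f"{(~ok).sum()} of {ok.size} cells flagged non-traversable")
```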
Learning to Model and Plan for Wheeled Mobility on Vertically Challenging Terrain
Most autonomous navigation systems assume wheeled robots are rigid bodies and that their 2D planar workspaces can be divided into free spaces and obstacles. However, recent wheeled mobility research, showing that wheeled platforms have the potential to move over vertically challenging terrain (e.g., rocky outcroppings, rugged boulders, and fallen tree trunks), invalidates both assumptions. Navigating off-road vehicle chassis with long suspension travel and low tire pressure in places where the boundary between obstacles and free spaces is blurry requires precise 3D modeling of the interaction between the chassis and the terrain, which is complicated by suspension and tire deformation, varying tire-terrain friction, vehicle weight distribution and momentum, etc. In this paper, we present a learning approach to model wheeled mobility, i.e., vehicle-terrain forward dynamics, and to plan feasible, stable, and efficient motion to drive over vertically challenging terrain without rolling over or getting stuck. We present physical experiments on two wheeled robots and show that planning with our learned model can achieve up to a 60% improvement in navigation success rate and a 46% reduction in unstable chassis roll and pitch angles.
Comment: https://www.youtube.com/watch?v=VzpRoEZeyWk https://cs.gmu.edu/~xiao/Research/Verti-Wheelers
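A hedged sketch of the general idea (not the paper's actual model or planner): fit a forward-dynamics regressor over (state, action, terrain patch) that predicts the next pose including roll and pitch, then sample candidate actions and reject those whose predicted rollout exceeds a tilt limit. All state layouts, shapes, and the synthetic training data below are assumptions.

```python
# Learned forward dynamics + sampling-based action selection (illustrative sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Fake logged data: state = [x, y, yaw, roll, pitch], action = [v, steer],
# terrain = flattened local 8x8 elevation patch.
N = 2000
states = rng.normal(size=(N, 5))
actions = rng.uniform(-1, 1, size=(N, 2))
terrain = rng.normal(size=(N, 64))
next_states = states + 0.1 * np.concatenate([actions, rng.normal(scale=0.05, size=(N, 3))], axis=1)

X = np.concatenate([states, actions, terrain], axis=1)
dynamics = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
dynamics.fit(X, next_states)

def plan(state, terrain_patch, n_samples=64, max_tilt=0.3):
    """Sample candidate actions, reject those predicted to exceed a roll/pitch limit,
    and return the fastest remaining one (a crude stand-in for the planner)."""
    cand = rng.uniform(-1, 1, size=(n_samples, 2))
    X_cand = np.concatenate([np.tile(state, (n_samples, 1)), cand,
                             np.tile(terrain_patch, (n_samples, 1))], axis=1)
    pred = dynamics.predict(X_cand)
    stable = np.max(np.abs(pred[:, 3:5]), axis=1) < max_tilt
    if not stable.any():
        return np.zeros(2)                       # stop if nothing is predicted stable
    return cand[stable][np.argmax(cand[stable, 0])]

print(plan(states[0], terrain[0]))
```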
Underwater Exploration and Mapping
This paper analyzes the open challenges of exploring and mapping in the underwater realm with the goal of identifying research opportunities that will enable an Autonomous Underwater Vehicle (AUV) to robustly explore different environments. A taxonomy of environments based on their 3D structure is presented, together with an analysis of how that structure influences camera placement. The difference between exploration and coverage is presented, and how they dictate different motion strategies. Loop closure, while critical for the accuracy of the resulting map, proves to be particularly challenging due to the limited field of view and the sensitivity to viewing direction. Experimental results of enforcing loop closures in underwater caves demonstrate a novel navigation strategy. Dense 3D mapping, both online and offline, as well as other sensor configurations are discussed following the presented taxonomy. Experimental results from field trials illustrate the above analysis.
Robot Mapping and Navigation in Real-World Environments
Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to completing their mission. Essential capabilities for such robots are mapping, localization and navigation. Solving all of these tasks robustly is a substantial difficulty, as the components are usually interconnected: a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze its surroundings and plan a path to efficiently explore the unknown environment. In addition to these interconnections, the tasks depend heavily on the sensors used by the robot and on the type of environment in which it operates. For example, an RGB camera can be used outdoors to compute visual odometry or to detect dynamic objects, but becomes less useful in an environment without enough light for cameras to operate. The software controlling the robot must seamlessly process all the data coming from different sensors, which often leads to systems tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that work seamlessly with different types of sensors, both in indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that operates robustly indoors and outdoors equally well and thus extends the application areas of mobile robots. The techniques presented in this thesis aim to be usable with both RGBD and LiDAR sensors, without adaptations for individual sensor models, by using a range image representation, and to provide methods for navigation and scene interpretation in both static and dynamic environments.

For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment with a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and is able to accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the rest of the mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain in order to guide an autonomous robot towards the safe parts of the surrounding environment. One danger when navigating difficult-to-access sites is that the robot may fail to build a correct map of the environment. This dramatically impacts its ability to navigate towards its goal robustly, so it is important for the robot to detect such situations and find its way home without relying on any map. To address this challenge, we present a method for analyzing the quality of the map the robot has built so far, and for safely returning the robot to its starting point if the map is found to be inconsistent.

Scenes in dynamic environments are vastly different from those experienced in static ones. In a dynamic setting, objects can be moving, making static traversability estimates insufficient. With the approaches developed in this thesis, we aim at identifying distinct objects and tracking them to aid navigation and scene understanding. We target these challenges by providing a method for clustering a scene taken with a LiDAR scanner, together with a similarity measure between clustered objects that aids tracking performance. All methods presented in this thesis are capable of supporting real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles, and most of the presented contributions have been released publicly as open-source software.
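As a minimal sketch of the range-image representation mentioned above (the thesis's own pipeline is more involved), a 3D point cloud can be spherically projected into a 2D range image, which lets RGBD and LiDAR data be handled with the same image-like machinery. Resolution and field-of-view values below are arbitrary assumptions.

```python
# Spherical projection of a point cloud into a range image (illustrative sketch).
import numpy as np

def to_range_image(points, h=64, w=900, fov_up=np.radians(15.0), fov_down=np.radians(-15.0)):
    """points: (N, 3) array. Returns an (h, w) range image; empty pixels stay 0."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                        # [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w         # column from azimuth
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)  # row from elevation
    valid = (v >= 0) & (v < h) & (r > 0)
    order = np.argsort(-r[valid])                                 # far to near: near returns overwrite far ones
    img = np.zeros((h, w))
    img[v[valid][order], u[valid][order]] = r[valid][order]
    return img

# Example on a random cloud; real data would come from an RGBD or LiDAR frame.
rng = np.random.default_rng(0)
print(to_range_image(rng.normal(size=(10000, 3)) * [10.0, 10.0, 1.0]).shape)
```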
AutoMerge: A Framework for Map Assembling and Smoothing in City-scale Environments
We present AutoMerge, a LiDAR data processing framework for assembling a large number of map segments into a complete map. Traditional large-scale map merging methods are fragile to incorrect data associations and are primarily limited to working offline. AutoMerge utilizes multi-perspective fusion and adaptive loop closure detection for accurate data associations, and it uses incremental merging to assemble large maps from individual trajectory segments given in random order and with no initial estimates. Furthermore, after assembling the segments, AutoMerge performs fine matching and pose-graph optimization to globally smooth the merged map. We demonstrate AutoMerge on both city-scale merging (120 km) and campus-scale repeated merging (4.5 km x 8). The experiments show that AutoMerge (i) surpasses the second- and third-best methods by 14% and 24% recall in segment retrieval, (ii) achieves comparable 3D mapping accuracy for 120 km large-scale map assembly, and (iii) is robust to temporally-spaced revisits. To the best of our knowledge, AutoMerge is the first mapping approach that can merge hundreds of kilometers of individual segments without the aid of GPS.
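This is not AutoMerge's actual method, just a hedged illustration of the retrieval step that map merging relies on: match global place descriptors between two trajectory segments with a nearest-neighbour search and a ratio test, producing candidate loop closures that a pose-graph optimizer could then verify and use. Descriptor dimensions and the synthetic data are assumptions.

```python
# Descriptor-based segment retrieval for candidate loop closures (illustrative sketch).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def candidate_loop_closures(desc_a, desc_b, ratio=0.8):
    """desc_a, desc_b: (Na, D) and (Nb, D) global descriptors for two segments.
    Returns index pairs (i, j) that pass Lowe-style ratio filtering."""
    nn = NearestNeighbors(n_neighbors=2).fit(desc_b)
    dist, idx = nn.kneighbors(desc_a)
    keep = dist[:, 0] < ratio * dist[:, 1]
    return [(i, idx[i, 0]) for i in np.flatnonzero(keep)]

rng = np.random.default_rng(0)
seg_a = rng.normal(size=(50, 256))
seg_b = np.vstack([seg_a[10:20] + 0.01 * rng.normal(size=(10, 256)),   # revisited places
                   rng.normal(size=(40, 256))])
print(candidate_loop_closures(seg_a, seg_b)[:5])
```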
Slip prediction using visual information
This paper considers prediction of slip from a distance for wheeled ground robots, using visual information as input. Large amounts of slippage, which can occur on certain surfaces such as sandy slopes, negatively affect rover mobility. Therefore, obtaining information about slip before entering a particular terrain can be very useful for better planning and for avoiding terrains with large slip. The proposed method is based on learning from experience and consists of terrain type recognition and nonlinear regression modeling. After learning, slip prediction is done remotely using only visual information as input. The method has been implemented and tested offline on several off-road terrains including soil, sand, gravel, and woodchips. The slip prediction error is about 20% of the step size.
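A hedged, simplified sketch of the two-stage idea described above (not the paper's implementation): classify terrain type from visual features, then apply a per-terrain nonlinear regression from slope angle to slip. The visual descriptors and slip curves below are synthetic assumptions.

```python
# Terrain classification + per-terrain nonlinear slip regression (illustrative sketch).
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
terrains = ["soil", "sand", "gravel", "woodchips"]

# Stage 1: terrain recognition from (synthetic) visual descriptors.
X_vis = rng.normal(size=(400, 32)) + np.repeat(np.arange(4), 100)[:, None]
y_ter = np.repeat(np.arange(4), 100)
terrain_clf = SVC().fit(X_vis, y_ter)

# Stage 2: per-terrain slip-vs-slope regression (synthetic saturating slip curves).
slip_models = {}
for t in range(4):
    slope = rng.uniform(0, 20, 100)[:, None]
    slip = (5 + 10 * t) * (1 - np.exp(-slope[:, 0] / 10)) + rng.normal(0, 1, 100)
    slip_models[t] = GaussianProcessRegressor().fit(slope, slip)

def predict_slip(visual_descriptor, slope_deg):
    """Predict terrain type from appearance, then slip from slope for that terrain."""
    t = int(terrain_clf.predict(visual_descriptor[None, :])[0])
    return terrains[t], float(slip_models[t].predict([[slope_deg]])[0])

print(predict_slip(X_vis[250], 12.0))   # a descriptor from the third synthetic class
```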
Adaptive Localization and Mapping for Planetary Rovers
Future rovers will be equipped with substantial onboard autonomy as space agencies and industry proceed with mission studies and technology development in preparation for the next planetary exploration missions. Simultaneous Localization and Mapping (SLAM) is a fundamental part of autonomous capabilities, has close connections to robot perception, planning and control, and positively affects rover operations and mission success. The SLAM community has made great progress in the last decade by enabling real-world solutions in terrestrial applications and is nowadays addressing important challenges in robust performance, scalability, high-level understanding, resource awareness and domain adaptation. In this thesis, an adaptive SLAM system is proposed to improve rover navigation performance while adapting to the navigation demands. This research presents a novel localization and mapping solution following a bottom-up approach: it starts with an Attitude and Heading Reference System (AHRS), continues with a 3D dead-reckoning odometry solution and builds up to a full graph optimization scheme that uses visual odometry and takes into account rover traction performance, bringing scalability to modern SLAM solutions. A design procedure is presented to incorporate inertial sensors into the AHRS; it follows three steps: error characterization, model derivation and filter design. A complete kinematic model of the rover locomotion subsystem is developed to improve the wheel odometry solution; the parametric model predicts delta poses by solving a system of equations with weighted least squares. In addition, an odometry error model is learned using Gaussian processes (GPs) to predict non-systematic errors induced by poor traction of the rover with the terrain. The odometry error model complements the parametric solution by adding an estimate of the error. The gained information serves to adapt the localization and mapping solution to the current navigation demands (domain adaptation). The adaptation strategy is designed to adjust the computational load of visual odometry (active perception) and to influence the optimization back-end by including highly informative keyframes in the graph (adaptive information gain). Following this strategy, the solution is adapted to the navigation demands, providing an adaptive SLAM system driven by the navigation performance and by the conditions of the interaction with the terrain. The proposed methodology is experimentally verified on a representative planetary rover under realistic field test scenarios. This thesis introduces a modern SLAM system that adapts the estimated pose and map to the predicted error. The system maintains accuracy with fewer nodes, taking the best of both wheel and visual methods in a consistent graph-based smoothing approach.
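A minimal, hypothetical sketch of the GP odometry-error idea described above (not the thesis's actual model): regress non-systematic wheel-odometry error from proprioceptive inputs, then use the prediction to decide when the more expensive visual odometry and keyframe insertion are worthwhile. The input features, thresholds, and synthetic data are assumptions.

```python
# GP odometry-error model gating visual odometry (illustrative sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Features: [commanded speed, wheel-slip proxy, terrain inclination].
X = rng.uniform([0.0, 0.0, 0.0], [0.1, 0.5, 15.0], size=(300, 3))
# Non-systematic translational error grows with slip and slope (synthetic).
y = 0.02 * X[:, 1] + 0.001 * X[:, 2] + rng.normal(0, 0.002, 300)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)

def use_visual_odometry(features, threshold=0.01):
    """Trigger the more expensive visual pipeline only when the predicted
    wheel-odometry error per step exceeds a threshold."""
    pred_error = gp.predict(features[None, :])[0]
    return pred_error > threshold

print(use_visual_odometry(np.array([0.05, 0.4, 10.0])))
```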
Exploiting graph structure in Active SLAM
Applying analyses from graph theory, spectral graph theory and online graph exploration, we build an active SLAM system that comprises path planning under uncertainty, extraction of topological graphs of environments, and optimal active SLAM. In path planning under uncertainty, we include an analysis of the probability of correct data association. Recognizing the stochastic nature of uncertainty, we show that planning to minimize its expected value is more reliable than current path-planning-under-uncertainty algorithms. By considering the environment as a set of connected convex regions, robotic exploration can be treated as online graph exploration: full coverage is guaranteed if the robot visits every region. Most methods for segmenting the environment are pixel-based, do not guarantee that the resulting regions are convex, and few are incremental. We therefore modify a contour-based algorithm in which the environment is represented as a set of polygons to be segmented into pseudo-convex polygons. The result is a segmentation algorithm that produces pseudo-convex regions which are robust to noise, stable, and perform well on the benchmark datasets. The quality of an algorithm can be measured by how close its performance is to the optimum. With this motivation, we define the essence of the exploration task in active SLAM, where the only variables are the distance traveled and the quality of the reconstruction. Restricting the domain to the graph that represents the environment, and proving the relation between the matrix associated with the exploration and the matrix associated with the underlying graph, we can compute the optimal exploration route. Unlike most of the active SLAM literature, we propose that the heuristic for graph exploration should be to traverse every edge once. We show that the resulting class of graphs performs very well with respect to the optimal trajectory, exceeding 97% of the optimum in some quality measures. The TIGRE active SLAM algorithm integrates the proposed graph-extraction algorithm with our version of the incremental exploration algorithm that traverses every edge once. Our algorithm builds on a modification of Tarry's classical maze-search algorithm, which achieves the lower bound on the approximation ratio for an incremental algorithm. We test our incremental system in a typical exploration scenario and show that it achieves performance similar to offline methods; even the optimal node-visiting route computed offline performs worse than ours.
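For background, here is a short sketch of the classical Tarry maze-traversal rule that the thesis's incremental exploration algorithm modifies (this is the textbook version, which traverses each edge once per direction, not the thesis's variant): never traverse an edge twice in the same direction, and leave a node via its first-arrival edge only when no other untraversed exit remains.

```python
def tarry_walk(adj, start):
    """Classical Tarry traversal on an undirected graph given as an adjacency
    dict {node: [neighbours]}. Each edge is used at most once per direction; the
    entry edge of a node is taken only as a last resort. Returns the node sequence."""
    used = set()          # directed edges already traversed
    entry = {}            # first-arrival predecessor per node
    walk = [start]
    node = start
    while True:
        options = [n for n in adj[node] if (node, n) not in used]
        deferred = entry.get(node)
        preferred = [n for n in options if n != deferred]
        nxt = preferred[0] if preferred else (deferred if deferred in options else None)
        if nxt is None:
            break          # back at start with every edge used in both directions
        used.add((node, nxt))
        if nxt not in entry and nxt != start:
            entry[nxt] = node
        walk.append(nxt)
        node = nxt
    return walk

# Example: a small topological graph of four regions.
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(tarry_walk(graph, "A"))   # e.g. ['A', 'B', 'D', 'C', 'A', 'C', 'D', 'B', 'A']
```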
Locomotion Policy Guided Traversability Learning using Volumetric Representations of Complex Environments
Despite the progress in legged robotic locomotion, autonomous navigation in unknown environments remains an open problem. Ideally, the navigation system utilizes the full potential of the robot's locomotion capabilities while operating within safety limits under uncertainty. The robot must sense and analyze the traversability of the surrounding terrain, which depends on the hardware, the locomotion control, and the terrain properties; it may encode the risk, energy, or time needed to traverse the terrain. To avoid hand-crafted traversability cost functions, we propose to collect traversability information about the robot and its locomotion policy by simulating traversals over randomly generated terrains using a physics simulator. Thousands of robots are simulated in parallel, controlled by the same locomotion policy used in reality, to acquire the equivalent of 57 years of real-world locomotion experience. For deployment on the real robot, a sparse convolutional network is trained to predict the simulated traversability cost, which is tailored to the deployed locomotion policy, from an entirely geometric representation of the environment in the form of a 3D voxel-occupancy map. This representation avoids the need for commonly used elevation maps, which are error-prone in the presence of overhanging obstacles and in multi-floor or low-ceiling scenarios. The effectiveness of the proposed traversability prediction network is demonstrated for path planning for the legged robot ANYmal in various indoor and natural environments.
Comment: accepted for the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022).
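As a small sketch of the geometric input representation discussed above (the sparse convolutional network itself is omitted, and the voxel size and bounds are assumptions), a point cloud can be voxelized into a 3D occupancy grid that, unlike a 2.5D elevation map, preserves overhangs and multiple floors.

```python
# Point cloud to 3D voxel-occupancy grid (illustrative sketch).
import numpy as np

def voxelize(points, voxel=0.1, bounds=((-5, 5), (-5, 5), (-1, 2))):
    """points: (N, 3) in the robot frame. Returns a boolean occupancy grid."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    shape = np.ceil((hi - lo) / voxel).astype(int)
    idx = np.floor((points - lo) / voxel).astype(int)
    inside = np.all((idx >= 0) & (idx < shape), axis=1)
    grid = np.zeros(shape, dtype=bool)
    grid[tuple(idx[inside].T)] = True
    return grid

# Example: a floor plus an overhanging slab that an elevation map would conflate.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-5, 5, 20000), rng.uniform(-5, 5, 20000), np.zeros(20000)])
slab = np.column_stack([rng.uniform(0, 2, 2000), rng.uniform(0, 2, 2000), np.full(2000, 1.5)])
occ = voxelize(np.vstack([floor, slab]))
print(occ.shape, occ.sum(), "occupied voxels")
```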