Present and Future of SLAM in Extreme Underground Environments
This paper reports on the state of the art in underground SLAM by discussing
different SLAM strategies and results across six teams that participated in the
three-year-long SubT competition. In particular, the paper has four main goals.
First, we review the algorithms, architectures, and systems adopted by the
teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to
approach for virtually all teams in the competition), heterogeneous multi-robot
operation (including both aerial and ground robots), and real-world underground
operation (from the presence of obscurants to the need to handle tight
computational constraints). We do not shy away from discussing the dirty
details behind the different SubT SLAM systems, which are often omitted from
technical papers. Second, we discuss the maturity of the field by highlighting
what is possible with the current SLAM systems and what we believe is within
reach with some good systems engineering. Third, we outline what we believe are
fundamental open problems that are likely to require further research to break
through. Finally, we provide a list of open-source SLAM implementations and
datasets that have been produced during the SubT challenge and related efforts,
and constitute a useful resource for researchers and practitioners.
Comment: 21 pages including references. This survey paper is submitted to IEEE Transactions on Robotics for pre-approval.
Flexible Supervised Autonomy for Exploration in Subterranean Environments
While the capabilities of autonomous systems have been steadily improving in
recent years, these systems still struggle to rapidly explore previously
unknown environments without the aid of GPS-assisted navigation. The DARPA
Subterranean (SubT) Challenge aimed to fast-track the development of autonomous
exploration systems by evaluating their performance in real-world underground
search-and-rescue scenarios. Subterranean environments present a plethora of
challenges for robotic systems, such as limited communications, complex
topology, visually-degraded sensing, and harsh terrain. The presented solution
enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic
decisions. We also discuss lessons learned from fielding our system at the SubT
Final Event, relating to vehicle versatility, system adaptability, and
re-configurable communications.
Comment: Field Robotics special issue: DARPA Subterranean Challenge, Advancement and Lessons Learned from the Finals.
ArtPlanner: Robust Legged Robot Navigation in the Field
Due to the highly complex environment present during the DARPA Subterranean
Challenge, all six funded teams relied on legged robots as part of their
robotic team. Their unique locomotion skills, such as the ability to step over obstacles, require special consideration in navigation planning. In this work,
we present and examine ArtPlanner, the navigation planner used by team CERBERUS
during the Finals. It is based on a sampling-based method that determines valid
poses with a reachability abstraction and uses learned foothold scores to
restrict areas considered safe for stepping. The resulting planning graph is
assigned learned motion costs by a neural network trained in simulation to
minimize traversal time and limit the risk of failure. Our method achieves
real-time performance with a bounded computation time. We present extensive
experimental results gathered during the Finals event of the DARPA Subterranean
Challenge, where this method contributed to team CERBERUS winning the
competition. It powered navigation of four ANYmal quadrupeds for 90 minutes of
autonomous operation without a single planning or locomotion failure.
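The abstract does not give implementation details, so the following is only a small illustrative sketch of the general idea of attaching learned motion costs to a sampling-based planning graph and querying it for the cheapest path. It is not team CERBERUS's code: the cost function, nominal speed, foothold scores and toy graph below are invented placeholders.

```python
# Sketch: learned edge costs on a sampling-based planning graph + Dijkstra query.
import heapq
import math

def learned_motion_cost(p, q, foothold_score):
    """Placeholder for a learned cost: traversal time plus a risk penalty."""
    distance = math.dist(p, q)
    nominal_speed = 0.7                            # m/s, assumed walking speed
    risk_penalty = 5.0 * (1.0 - foothold_score)    # penalise unsafe footholds
    return distance / nominal_speed + risk_penalty

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbour, cost), ...]}; returns (total_cost, path)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, edge_cost in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (cost + edge_cost, nbr, path + [nbr]))
    return math.inf, []

# Toy graph over sampled poses (2D positions with assumed foothold scores).
poses = {"A": (0.0, 0.0), "B": (1.0, 0.2), "C": (1.1, 1.0), "D": (2.0, 1.0)}
foothold = {("A", "B"): 0.9, ("B", "C"): 0.4, ("B", "D"): 0.8, ("C", "D"): 0.9}
graph = {}
for (u, v), score in foothold.items():
    c = learned_motion_cost(poses[u], poses[v], score)
    graph.setdefault(u, []).append((v, c))
    graph.setdefault(v, []).append((u, c))

print(dijkstra(graph, "A", "D"))
```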
GPGM-SLAM: a Robust SLAM System for Unstructured Planetary Environments with Gaussian Process Gradient Maps
Simultaneous Localization and Mapping (SLAM) techniques play a key role in the long-term autonomy of mobile robots thanks to their ability to correct localization errors and produce consistent maps of an environment over time. In contrast to urban or man-made environments, where unique objects and structures offer distinctive cues for localization, the appearance of unstructured natural environments is often ambiguous and self-similar, hindering the performance of loop closure detection. In this paper, we present an approach to improve the robustness of place recognition in the context of a submap-based stereo SLAM system built on Gaussian Process Gradient Maps (GPGMaps). GPGMaps embed a continuous representation of the gradients of the local terrain elevation by means of Gaussian Process regression and Structured Kernel Interpolation, given solely noisy elevation measurements. We leverage the image-like structure of GPGMaps to detect loop closures using traditional visual features and Bag of Words. GPGMap matching is performed as an SE(2) alignment to establish loop closure constraints within a pose graph. We evaluate the proposed pipeline on a variety of datasets recorded on Mt. Etna, Sicily, and in the Moroccan desert, Moon- and Mars-like environments respectively, and we compare the localization performance with state-of-the-art approaches for visual SLAM and visual loop closure detection.
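As a rough illustration of the GPGMap idea, the sketch below regresses noisy elevation samples with a plain Gaussian Process (not the Structured Kernel Interpolation used by the authors) and converts the predicted elevation into an image-like gradient map; the data, kernel parameters and grid sizes are assumptions.

```python
# Sketch: GP regression of noisy elevation samples, then a gradient map.
import numpy as np

def rbf_kernel(A, B, length_scale=0.5, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

# Noisy elevation measurements at scattered 2D positions (synthetic data).
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(200, 2))
z = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(200)

noise = 0.05 ** 2
K = rbf_kernel(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, z)

# Predict elevation on a regular grid (the "image-like" representation).
gx, gy = np.meshgrid(np.linspace(0, 5, 64), np.linspace(0, 5, 64))
Xq = np.column_stack([gx.ravel(), gy.ravel()])
elev = (rbf_kernel(Xq, X) @ alpha).reshape(64, 64)

# Gradient map: spatial derivatives of the predicted elevation surface.
dz_dy, dz_dx = np.gradient(elev, 5 / 63)
gradient_magnitude = np.hypot(dz_dx, dz_dy)
print(gradient_magnitude.shape)  # (64, 64) raster usable for visual features
```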
Deep Perception Without a Camera: Enabling 3D Reconstruction and Object Recognition using Lidar and Sonar Sensing
Deep learning has recently revolutionized robot perception in many canonical robotic applications, such as autonomous driving. However, a similar transformation has yet to occur in harsher environments, including underwater and underground. This is due in part to the difficulty of deploying robots in these environments, which lack large real training datasets and often necessitate the use of non-traditional sensors for deep learning (e.g. imaging sonars and lidars). In this dissertation we demonstrate that by explicitly accounting for the sensor noise introduced by challenging environments and by incorporating synthetic data in the training process, the power of deep learning can be leveraged for deployment in these harsh environments.
In our first contribution we develop a framework that enables the real-time 3D reconstruction of underwater environments using features from 2D sonar images. Due to noisy and low-resolution imagery as compared with standard cameras, accurate sonar image analysis necessitates the explicit consideration of noise. While deep learning by using Convolutional Neural Networks (CNNs) has been leveraged on sonar images, previous CNN-based methods do not explicitly consider the noise (from factors such as multi-pathing or irregular surfaces) often present in the images. In this contribution our key insight is to use atrous convolution, which has a larger field of context than standard convolution and is thus not misled as much by localized noise. We demonstrate that atrous convolution, as well as human-in-the-loop feature annotation, provides real-time reconstruction capability on datasets captured onboard our underwater vehicle while operating in a variety of environments.
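For readers unfamiliar with atrous convolution, the snippet below contrasts a standard and a dilated 3x3 convolution in PyTorch; the layer widths and sonar image size are assumptions, not the architecture used in the dissertation.

```python
# Sketch: atrous (dilated) convolution widens the receptive field at no extra
# parameter cost, which is the property the text credits for robustness to
# localized sonar noise.
import torch
import torch.nn as nn

standard = nn.Conv2d(1, 16, kernel_size=3, padding=1)              # 3x3 context
atrous   = nn.Conv2d(1, 16, kernel_size=3, padding=2, dilation=2)  # 5x5 context

sonar_image = torch.randn(1, 1, 96, 512)   # fake polar sonar image (range x bearing)
print(standard(sonar_image).shape, atrous(sonar_image).shape)
# Both outputs are (1, 16, 96, 512); the atrous filter simply "sees" a wider
# neighbourhood per output pixel, so isolated noisy returns matter less.
```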
In our second contribution we remove the human from the loop and develop an approach which leverages deep learning for a fully automated 3D underwater reconstruction algorithm using 2D sonar images as input. Our algorithm is able to produce accurate estimates even when common physical models break down due to phenomena such as non-diffuse reflections. Inspired by our success in the previous contribution, we propose the utilization of CNNs as a powerful method to extract meaningful information without being misled by noisy data. To ensure training convergence, we also introduce a self-supervised method that uses the physics of the sonar sensor to train the network on real data without ground-truth information. Our method can produce accurate 3D estimates given only a single image. We demonstrate that our method produces 3D reconstructions with an 80% reduction in Root Mean Square Error compared to previous approaches, both in simulation and on real data.
We then extend this approach to leverage the series of images the robot collects as it moves through the environment. Specifically, we develop two CNNs that take as input multiple images captured at different points in time and output a more accurate prediction than just using a single image as input. To our knowledge this is the first such multi-sonar-image CNN designed for the 3D underwater reconstruction task. We validate this extension on synthetic and real data and show up to a 5% improvement over competing methods.
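A minimal sketch of the multi-image idea, assuming successive frames are simply stacked as input channels; the dissertation's actual network design is not specified here, so the layer sizes and frame count below are placeholders.

```python
# Sketch: fuse several sonar frames as channels and regress per-pixel elevation.
import torch
import torch.nn as nn

class MultiFrameSonarNet(nn.Module):
    def __init__(self, num_frames=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),            # one elevation value per pixel
        )

    def forward(self, frames):              # frames: (B, num_frames, H, W)
        return self.net(frames)

frames = torch.randn(2, 3, 96, 512)         # batch of 3-frame sonar sequences
print(MultiFrameSonarNet()(frames).shape)   # (2, 1, 96, 512)
```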
Finally, we develop an improved method for incorporating synthetic data into the training process. This takes our previous contribution a step further by more tightly coupling synthetic and real point cloud feature extraction. We develop an adversarial training technique, which along with the standard object detection loss provides a training signal that encourages similar feature extraction from both synthetic and real clouds. This brings the training process closer to the preferred scenario, in which the synthetic point clouds contain features that are very similar to those found in the real scans. We validate our approach in the context of the data-limited DARPA Subterranean Challenge and demonstrate that our 3D adversarial training architecture improves 3D object detection performance by up to 15% depending on the data representation.
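The following sketch illustrates one common way to implement such adversarial feature alignment, a gradient-reversal layer feeding a domain discriminator alongside the detection loss; the module names, feature dimensions and losses are assumptions rather than the dissertation's architecture.

```python
# Sketch: shared feature extractor, detection loss on synthetic data, and a
# domain discriminator behind a gradient-reversal layer for domain confusion.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

feature_extractor = nn.Sequential(nn.Linear(1024, 256), nn.ReLU())
detection_head    = nn.Linear(256, 8)    # stand-in for the 3D detection head
domain_classifier = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

def training_step(real_in, synth_in, synth_targets, lamb=0.1):
    """Detection loss on synthetic (labelled) data + domain-confusion loss."""
    f_real, f_synth = feature_extractor(real_in), feature_extractor(synth_in)

    det_loss = nn.functional.mse_loss(detection_head(f_synth), synth_targets)

    feats  = torch.cat([f_real, f_synth])
    labels = torch.cat([torch.ones(len(f_real), 1), torch.zeros(len(f_synth), 1)])
    logits = domain_classifier(GradReverse.apply(feats, lamb))
    dom_loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)

    return det_loss + dom_loss

loss = training_step(torch.randn(4, 1024), torch.randn(4, 1024), torch.randn(4, 8))
loss.backward()
print(float(loss))
```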
Vision-based legged robot navigation: localisation, local planning, learning
The recent advances in legged locomotion control have made legged robots walk up staircases, go deep into underground caves, and walk in the forest. Nevertheless, autonomously achieving this task is still a challenge. Navigating and accomplishing missions in the wild relies not only on robust low-level controllers but also on higher-level representations and perceptual systems that are aware of the robot's capabilities.
This thesis addresses the navigation problem for legged robots. The contributions are four systems designed to exploit unique characteristics of these platforms, from the sensing setup to their advanced mobility skills over different terrain. The systems address localisation, scene understanding, and local planning, and advance the capabilities of legged robots in challenging environments.
The first contribution tackles localisation with multi-camera setups available on legged platforms. It proposes a strategy to actively switch between the cameras and stay localised while operating in a visual teach and repeat context, in spite of transient changes in the environment. The second contribution focuses on local planning, effectively adding a safety layer for robot navigation. The approach uses a local map built on-the-fly to generate efficient vector field representations that enable fast and reactive navigation. The third contribution demonstrates how to improve local planning in natural environments by learning robot-specific traversability from demonstrations. The approach leverages classical and learning-based methods to enable online, onboard traversability learning. These systems are demonstrated via robot deployments in industrial facilities, underground mines, and parklands.
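As a rough illustration of what a vector-field local planner can look like, the sketch below combines an attractive term toward the goal with repulsive terms from nearby obstacle cells; it is a generic potential-field style construction with assumed gains, not the representation developed in the thesis.

```python
# Sketch: reactive direction command from a local map (potential-field style).
import numpy as np

def vector_field_command(robot_xy, goal_xy, obstacle_xy, influence=1.5):
    """Return a unit direction for the next step; all inputs in metres."""
    attract = goal_xy - robot_xy
    attract = attract / (np.linalg.norm(attract) + 1e-9)

    repulse = np.zeros(2)
    for obs in obstacle_xy:                      # occupied cells of the local map
        diff = robot_xy - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < influence:                 # only nearby obstacles push back
            repulse += (1.0 / d - 1.0 / influence) * diff / d

    command = attract + 0.5 * repulse
    return command / (np.linalg.norm(command) + 1e-9)

robot = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = np.array([[1.0, 0.1], [2.5, -0.2]])
print(vector_field_command(robot, goal, obstacles))
```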
The thesis concludes by presenting a real-world application: an autonomous forest inventory system with legged robots. This last contribution presents a mission planning system for autonomous surveying as well as a data analysis pipeline to extract forestry attributes. The approach was experimentally validated in a field campaign in Finland, evidencing the potential that legged platforms offer for future applications in the wild.
Airborne laser scanning raster data visualization
This guide provides an insight into a range of visualization techniques for high-resolution digital elevation models (DEMs). It is written in the context of the investigation and interpretation of various types of historical and modern, cultural and natural small-scale relief features and landscape structures. It also provides concise guidance for selecting the best techniques when looking at a specific type of landscape and/or looking for particular kinds of forms. The three main sections, descriptions of visualization techniques, guidance for selecting the techniques, and visualization tools, are accompanied by examples of visualizations, exemplar archaeological and geomorphological case studies, a glossary of terms, and a list of references and recommendations for further reading. The structure helps readers of different academic backgrounds and levels of expertise to understand the different visualizations, how to read them, how to manipulate the settings in a calculation, and how to choose the one best suited to the purpose of the intended investigation. A smaller number of copies is also available in hardcover (ISBN 978-961-05-0011-7, 24 EUR).
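As a small example of the kind of technique such a guide covers, the sketch below computes a classic analytical hillshade from a DEM with an assumed sun azimuth and altitude; the parameter values and the synthetic DEM are illustrative only.

```python
# Sketch: Lambertian hillshade of a DEM raster with numpy.
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Classic analytical hillshade; returns values in [0, 1]."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)         # surface derivatives
    slope = np.arctan(np.hypot(dz_dx, dz_dy))          # steepest slope angle
    aspect = np.arctan2(-dz_dx, dz_dy)                 # approximate downslope direction
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Synthetic DEM: a single Gaussian hill on a 1 m grid.
x, y = np.meshgrid(np.linspace(-50, 50, 101), np.linspace(-50, 50, 101))
dem = 20.0 * np.exp(-(x**2 + y**2) / (2 * 15.0**2))
print(hillshade(dem).shape)   # (101, 101) raster ready for display
```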
Three Gorges Dam, China
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2001. Includes bibliographical references (leaf 66). By Elizabeth W. Craun. M.Eng.
Quantitative Analysis of Non-Linear Probabilistic State Estimation Filters for Deployment on Dynamic Unmanned Systems
The work conducted in this thesis is part of an EU Horizon 2020 research initiative project known as DigiArt. This part of the DigiArt project presents and explores the design, formulation and implementation of probabilistically orientated state estimation algorithms with a focus on unmanned system positioning and three-dimensional (3D) mapping. State estimation algorithms are an influential aspect of any dynamic system with autonomous capabilities: the ability to predictively estimate future conditions enables effective decision making and the anticipation of possible changes in the environment. Initial experimental procedures utilised a wireless ultra-wide band (UWB) based communication network. This system functioned through statically situated beacon nodes used to localise a dynamically operating node. The simultaneous deployment of this UWB network, an unmanned system and a Robotic Total Station (RTS) with active and remote tracking features enabled the characterisation of the range measurement errors associated with the UWB network. These range error metrics were then integrated into a Range-based Extended Kalman Filter (R-EKF) state estimation algorithm with active outlier identification, which outperformed the native approach used by the UWB system for two-dimensional (2D) pose estimation.

The study was then expanded to focus on state estimation in 3D, where a Six Degree-of-Freedom EKF (6DOF-EKF) was designed using Light Detection and Ranging (LiDAR) as its primary observation source. A two-step method was proposed which extracted information between consecutive LiDAR scans. Firstly, motion estimation concerning Cartesian states x, y and the unmanned system's heading (ψ) was achieved through a 2D feature matching process. Secondly, the extraction and alignment of ground planes from the LiDAR scans enabled motion extraction for Cartesian position z and the attitude angles roll (θ) and pitch (φ). Results showed that the ground plane alignment failed when two scans were at a 10.5° offset. To overcome this limitation, an Error State Kalman Filter (ES-KF) was formulated and deployed as a sub-system within the 6DOF-EKF, enabling the successful tracking of roll and pitch and the calculation of z. The 6DOF-EKF outperformed the R-EKF and the native UWB approach, as it was much more stable, produced less noise in its position estimates and provided 3D pose estimation.
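To make the filtering concepts concrete, the sketch below shows a generic range-based EKF step: a constant-velocity prediction followed by an update from a single UWB range to a beacon at a known position. The motion model, noise values and beacon layout are assumptions, not those used in the thesis.

```python
# Sketch: EKF predict step (constant velocity) and update from one UWB range.
import numpy as np

def predict(x, P, dt, q=0.1):
    """State x = [px, py, vx, vy]; constant-velocity motion model."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)
    Q = q * np.eye(4) * dt
    return F @ x, F @ P @ F.T + Q

def update_range(x, P, beacon, z, r=0.05):
    """Update with one scalar range measurement z to a beacon (2D position)."""
    dx, dy = x[0] - beacon[0], x[1] - beacon[1]
    rng = np.hypot(dx, dy)
    H = np.array([[dx / rng, dy / rng, 0.0, 0.0]])   # Jacobian of the range
    y = z - rng                                       # innovation
    S = H @ P @ H.T + r
    K = P @ H.T / S                                   # Kalman gain, shape (4, 1)
    x_new = x + (K * y).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

x, P = np.array([0.0, 0.0, 0.5, 0.0]), np.eye(4)
x, P = predict(x, P, dt=0.1)
x, P = update_range(x, P, beacon=(3.0, 4.0), z=4.9)
print(x)
```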
Proceedings, High Altitude Revegetation Workshop no. 5: Colorado State University, Fort Collins, Colorado, March 8-9, 1982
Includes bibliographies. High Altitude Revegetation Workshop (5th : 1982 : Fort Collins, Colo.)