NeBula: TEAM CoSTAR's robotic autonomy solution that won Phase II of the DARPA Subterranean Challenge
This paper presents and discusses the algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots) for the DARPA Subterranean Challenge. Specifically, it presents the techniques used in the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims to enable resilient and modular autonomy by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss the components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, along with the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge.
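The key idea named above, reasoning in belief space, can be illustrated with a minimal discrete Bayes filter: instead of committing to a single state estimate, the system carries a probability distribution over states and updates it on every motion and measurement. The Python/NumPy sketch below is a generic illustration of that idea, not CoSTAR's implementation; the three-cell corridor, transition model, and sensor likelihood are invented for the example.

    # Minimal belief-space bookkeeping: a discrete Bayes filter.
    import numpy as np

    def bayes_update(belief, likelihood, transition):
        """One predict/update cycle over N discrete states.

        belief     -- current distribution over states, shape (N,)
        likelihood -- P(observation | state), shape (N,)
        transition -- P(next state | state), shape (N, N), rows sum to 1
        """
        predicted = transition.T @ belief    # motion (prediction) step
        posterior = likelihood * predicted   # measurement update
        return posterior / posterior.sum()   # renormalize

    # Hypothetical example: the robot is in one of three corridor cells.
    belief = np.array([0.80, 0.15, 0.05])      # fairly sure it is in cell 0
    transition = np.array([[0.1, 0.9, 0.0],    # it usually advances one cell
                           [0.0, 0.1, 0.9],
                           [0.0, 0.0, 1.0]])
    likelihood = np.array([0.1, 0.7, 0.2])     # sensor favors cell 1
    belief = bayes_update(belief, likelihood, transition)

Downstream planning and risk assessment then operate on the full distribution rather than on a single best guess, which is what makes such a framework uncertainty-aware.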
Fast Iterative 3D Mapping for Large-Scale Outdoor Environments with Local Minima Escape Mechanism
This paper introduces a novel iterative 3D mapping framework for large-scale natural terrain and complex environments. The framework is based on an Iterative Closest Point (ICP) algorithm and an iterative error-minimization mechanism, allowing robust 3D map registration. This is accomplished by performing pairwise scan registrations without any prior pose estimate, while accounting for the measurement uncertainty in the 6D pose (translation and rotation) of the acquired scans. Since the ICP algorithm is not guaranteed to escape local minima during mapping, new algorithms for local-minima estimation and local-minima escape are proposed. The proposed framework is validated on large-scale field-test data sets. The experimental results are compared with those of standard, generalized, and non-linear ICP registration methods, showing improved performance of the proposed 3D mapping framework.
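As background for the registration step the framework builds on, here is a minimal point-to-point ICP loop in Python (NumPy/SciPy). It is a generic sketch, not the authors' implementation; note that the convergence test stops wherever the error plateaus, which is exactly where a local minimum can trap the alignment.

    # Minimal point-to-point ICP: align source (N, 3) onto target (M, 3).
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=50, tol=1e-6):
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        prev_err = np.inf
        for _ in range(iters):
            moved = source @ R.T + t
            dist, idx = tree.query(moved)            # nearest-neighbor matches
            matched = target[idx]
            # Closed-form rigid alignment of the matched pairs (Kabsch/SVD).
            mu_s, mu_t = moved.mean(0), matched.mean(0)
            H = (moved - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T
            t_step = mu_t - R_step @ mu_s
            R, t = R_step @ R, R_step @ t + t_step   # compose transforms
            err = dist.mean()
            if abs(prev_err - err) < tol:            # plateau: converged,
                break                                # possibly to a local minimum
            prev_err = err
        return R, t

A simple stand-in for an escape mechanism is to restart this loop from several perturbed initial poses and keep the alignment with the lowest residual; the paper's contribution is to estimate when such a minimum has been reached and to escape it more systematically.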
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous: the individual robot sensors operate autonomously within the limits of the task assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operation and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface must provide the human supervisor and interveners with a reduced but relevant overview of the ground situation, including the robots and the human rescue workers operating there.
Visual Prediction of Rover Slip: Learning Algorithms and Field Experiments
Perception of the surrounding environment is an essential tool for intelligent navigation in any autonomous vehicle. In the context of Mars exploration, there is a strong motivation to enhance the perception of the rovers beyond geometry-based obstacle avoidance, so as to be able to predict potential interactions with the terrain. In this thesis we propose to remotely predict the amount of slip, which reflects the mobility of the vehicle on future terrain. The method is based on learning from experience and uses visual information from stereo imagery as input. We test the algorithm on several robot platforms and in different terrains. We also demonstrate its usefulness in an integrated system, onboard a Mars prototype rover in the JPL Mars Yard.
Another desirable capability for an autonomous robot is to be able to learn about its interactions with the environment in a fully automatic fashion. We propose an algorithm which uses the robot's sensors as supervision for vision-based learning of different terrain types. This algorithm can work with noisy and ambiguous signals provided by onboard sensors. To cope with rich, high-dimensional visual representations, we propose a novel nonlinear dimensionality reduction technique which exploits automatic supervision. The method is the first to consider supervised nonlinear dimensionality reduction in a probabilistic framework using supervision which can be noisy or ambiguous.
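As a concrete reading of this self-supervision idea, the Python sketch below lets a slip signal measured after the fact (for instance, wheel odometry disagreeing with visual odometry) label the appearance of the terrain the rover has just crossed, so a regressor learns to predict slip from appearance alone. The feature descriptor, the k-NN regressor, and the random data are illustrative stand-ins, not the thesis's actual pipeline.

    # Self-supervised slip learning: proprioception labels vision.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    def visual_features(patch):
        """Toy appearance descriptor for an image patch (H, W):
        mean intensity, contrast, and gradient energy."""
        gy, gx = np.gradient(patch.astype(float))
        return np.array([patch.mean(), patch.std(), (gx**2 + gy**2).mean()])

    # Training: pair each patch the rover crossed with the slip it measured
    # there (noisy supervision, no human labels). Data here is random.
    rng = np.random.default_rng(0)
    patches = [rng.random((32, 32)) for _ in range(200)]
    measured_slip = rng.random(200)                  # placeholder slip labels
    X = np.stack([visual_features(p) for p in patches])
    model = KNeighborsRegressor(n_neighbors=5).fit(X, measured_slip)

    # Prediction: estimate slip on terrain ahead, before driving onto it.
    ahead = rng.random((32, 32))
    predicted_slip = model.predict(visual_features(ahead)[None])

The same pattern carries over to terrain-type learning: the noisy onboard signal supplies the labels, so no human annotation is required.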
Finally, we consider the problem of learning to recognize different terrains, which addresses the time constraints of an onboard autonomous system. We propose a method which automatically learns a variable-length feature representation depending on the complexity of the classification task. The proposed approach achieves a good trade-off between reduced computational time and recognition performance.
Learning Ground Traversability from Simulations
Mobile ground robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse such a patch from left to right. The classifier is trained for a specific robot model (wheeled, tracked, legged, snake-like) using simulation data on procedurally generated training terrains; the trained classifier can be applied to unseen large heightmaps to yield oriented traversability maps, and then plan traversable paths. We extensively evaluate the approach in simulation on six real-world elevation datasets, and run a real-robot validation in one indoor and one outdoor environment.
Webpage: http://romarcg.xyz/traversability_estimation
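A minimal PyTorch sketch of this formulation is given below: a small CNN takes a single-channel heightmap patch and emits a logit for "traversable from left to right". The patch size, layer widths, and random training batch are illustrative assumptions, not the paper's exact architecture; in the paper, labels come from simulating the robot model on each patch.

    # Heightmap-patch traversability classifier (illustrative architecture).
    import torch
    import torch.nn as nn

    class TraversabilityNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
                nn.Linear(64, 1),              # logit: traversable or not
            )

        def forward(self, heightmap):          # (B, 1, 32, 32) patches
            return self.head(self.features(heightmap))

    net = TraversabilityNet()
    patch = torch.randn(4, 1, 32, 32)          # stand-in heightmap batch
    labels = torch.ones(4, 1)                  # stand-in simulation outcomes
    loss = nn.BCEWithLogitsLoss()(net(patch), labels)
    loss.backward()

Evaluating each map location under several patch rotations then yields the oriented traversability maps the abstract mentions.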