2,512 research outputs found
Ten years of cooperation between mobile robots and sensor networks
This paper presents an overview of the work carried out by
the Group of Robotics, Vision and Control (GRVC) at the
University of Seville on the cooperation between mobile
robots and sensor networks. The GRVC, led by Professor
Anibal Ollero, has been working over the last ten years on
techniques where robots and sensor networks exploit
synergies and collaborate tightly, developing numerous
research projects on the topic. In this paper, based on our
research, we introduce what we consider some relevant
challenges when combining sensor networks with mobile
robots. Then, we describe our developed techniques and
main results for these challenges. In particular, the paper
focuses on autonomous self-deployment of sensor networks;
cooperative localization and tracking; self-localization
and mapping; and large-scale scenarios. Extensive
experimental results and lessons learnt are also discussed
in the paper.
NeBula: TEAM CoSTAR's robotic autonomy solution that won phase II of the DARPA Subterranean Challenge
This paper presents and discusses the algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, and the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.
Agha, A., Otsu, K., Morrell, B., Fan, D. D., Thakker, R., Santamaria-Navarro, A., Kim, S.-K., Bouman, A., Lei, X., Edlund, J., Ginting, M. F., Ebadi, K., Anderson, M., Pailevanian, T., Terry, E., Wolf, M., Tagliabue, A., Vaquero, T. S., Palieri, M., Tepsuporn, S., Chang, Y., Kalantari, A., Chavez, F., Lopez, B., Funabiki, N., Miles, G., Touma, T., Buscicchio, A., Tordesillas, J., Alatur, N., Nash, J., Walsh, W., Jung, S., Lee, H., Kanellakis, C., Mayo, J., Harper, S., Kaufmann, M., Dixit, A., Correa, G. J., Lee, C., Gao, J., Merewether, G., Maldonado-Contreras, J., Salhotra, G., Da Silva, M. S., Ramtoula, B., Fakoorian, S., Hatteland, A., Kim, T., Bartlett, T., Stephens, A., Kim, L., Bergh, C., Heiden, E., Lew, T., Cauligi, A., Heywood, T., Kramer, A., Leopold, H. A., Melikyan, H., Choi, H. C., Daftry, S., Toupet, O., Wee, I., Thakur, A., Feras, M., Beltrame, G., Nikolakopoulos, G., Shim, D., Carlone, L., & Burdick, J.
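NeBula reasons in the belief space, i.e., over probability distributions of the robot and world states rather than point estimates. As a hedged illustration of the predict/update cycle such frameworks build on (not CoSTAR's actual implementation, which handles continuous states, semantics, and planning), a minimal discrete Bayes filter over a one-dimensional corridor looks like this:

```python
# Minimal discrete Bayes filter: the belief is a probability distribution
# over the robot's position in a 1-D corridor of n cells. Illustrative
# sketch only; motion and sensor models here are invented for the example.

def predict(belief, move_prob=0.8):
    """Motion update: robot is commanded one cell right; succeeds with move_prob."""
    n = len(belief)
    new = [0.0] * n
    for i, p in enumerate(belief):
        new[min(i + 1, n - 1)] += p * move_prob   # intended move
        new[i] += p * (1.0 - move_prob)           # slip: robot stays put
    return new

def update(belief, likelihood):
    """Measurement update: multiply by the sensor likelihood, renormalize."""
    posterior = [p * l for p, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Robot starts certain at cell 0, moves once, then a sensor reading
# strongly suggests cell 1.
belief = [1.0, 0.0, 0.0, 0.0]
belief = predict(belief)
belief = update(belief, likelihood=[0.1, 0.8, 0.1, 0.1])
```

The resulting belief concentrates on cell 1 while retaining mass on cell 0, which is exactly the residual uncertainty a belief-space planner can act on.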
Improving perception and locomotion capabilities of mobile robots in urban search and rescue missions
Deployment of mobile robots in search and rescue missions is a way to make the job of human rescuers safer and more efficient. Such missions, however, require robots to be resilient to the harsh conditions of natural disasters and human-inflicted accidents. They have to operate on unstable, rough terrain, in confined spaces, or in sensory-deprived environments filled with smoke or dust. Localization, a common task in mobile robotics that involves determining position and orientation with respect to a given coordinate frame, faces these conditions as well. In this thesis, we describe the development of a localization system for a tracked mobile robot intended for search and rescue missions. We present a proprioceptive six-degrees-of-freedom localization system, which arose from an experimental comparison of several possible sensor fusion architectures. The system was then extended with exteroceptive velocity measurements, which significantly improve accuracy by reducing localization drift. Special attention was given to potential sensor outages and failures, to the track slippage that inevitably occurs with this type of robot, to the computational demands of the system, and to the different sampling rates at which sensory data arrive. Additionally, we addressed the problem of kinematic models for tracked odometry on rough terrain containing vertical obstacles, another source of localization error for tracked robots. Thanks to the research projects the robot was designed for, we had access to training facilities used by fire brigades in Italy, Germany, and the Netherlands, so the accuracy and robustness of the proposed localization system were tested in conditions closely resembling those of earthquake aftermaths and industrial accidents.
The datasets of sensor measurements and reference poses that we created to test our algorithms are publicly available, and we consider them one of the contributions of this thesis. The thesis takes the form of a compilation of three published journal papers and one paper under review.
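The thesis fuses high-rate proprioceptive odometry with lower-rate exteroceptive velocity measurements to bound drift. As a hedged sketch of that idea (not the thesis' actual filter; the noise values and rates below are invented), a one-dimensional Kalman filter can let track odometry drive the predict step while an occasional exteroceptive velocity fix corrects it:

```python
# 1-D Kalman filter over forward velocity: track odometry drives the
# high-rate predict step; an exteroceptive velocity fix (e.g., from visual
# odometry) arrives less often and corrects accumulated drift.
# Illustrative sketch only -- all noise parameters are assumed values.

class VelocityFilter:
    def __init__(self, q=0.05, r=0.01):
        self.v, self.p = 0.0, 1.0  # velocity estimate and its variance
        self.q, self.r = q, r      # process and measurement noise variances

    def predict(self, odom_v):
        """High-rate step: adopt the odometry value, grow uncertainty (slip)."""
        self.v = odom_v
        self.p += self.q

    def correct(self, ext_v):
        """Low-rate step: exteroceptive velocity shrinks the variance."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.v += k * (ext_v - self.v)
        self.p *= (1.0 - k)

f = VelocityFilter()
for _ in range(10):          # ten odometry ticks while the tracks slip
    f.predict(odom_v=0.55)   # odometry over-reports forward speed
f.correct(ext_v=0.50)        # exteroceptive fix: true speed is lower
```

Because uncertainty grows with every proprioceptive-only step, the single exteroceptive correction receives a high gain and pulls the estimate close to the measured 0.50 m/s, which is the drift-limiting behavior the thesis describes.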
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The interface has to provide the human supervisor and interveners with a reduced but relevant overview of the ground and of the robots and human rescue workers operating there.
System Development of an Unmanned Ground Vehicle and Implementation of an Autonomous Navigation Module in a Mine Environment
There are numerous benefits to the insights gained from the exploration and exploitation of underground mines. There are also great risks and challenges involved, such as accidents that have claimed many lives. To avoid these accidents, inspections of large mines have traditionally been carried out by miners, which is not always economically feasible and puts the inspectors' safety at risk. Despite progress in robotic systems and in autonomous navigation, localization, and mapping algorithms, these environments remain particularly demanding for such systems. A successful autonomous unmanned system will allow mine workers to assess the structural integrity of roofs and pillars through the generation of high-fidelity 3D maps. These maps will allow miners to respond rapidly to growing hazards with proactive measures, such as sending workers to build or rebuild support structures to prevent accidents. The objective of this research is the development, implementation, and testing of a robust unmanned ground vehicle (UGV) that can operate in mine environments for extended periods of time. To achieve this, a custom skid-steer four-wheeled UGV was designed for these challenging underground mine environments. To navigate autonomously, the UGV uses a Light Detection and Ranging (LiDAR) sensor and a tactical-grade inertial measurement unit (IMU) for localization and mapping through the tightly-coupled LiDAR Inertial Odometry via Smoothing and Mapping framework (LIO-SAM). The autonomous navigation module was implemented based on a fast likelihood-based collision avoidance method, extended with human-guided navigation and a terrain traversability analysis framework. To verify its robustness, the system was rigorously tested in different environments and terrain types.
To assess its capabilities, several localization, mapping, and autonomous navigation missions were carried out in a coal mine environment. These tests allowed the system to be verified and tuned so that it can successfully navigate autonomously and generate high-fidelity maps.
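A skid-steer platform like the one described steers by driving its left and right wheels at different speeds, and its wheel odometry is a natural complement to the LiDAR-inertial estimate. As a hedged sketch (the track width and time step below are assumed values, and slip is ignored, which is precisely why such platforms also rely on LiDAR-inertial methods like LIO-SAM), planar dead-reckoning for a skid-steer vehicle can be written as:

```python
import math

# Planar dead-reckoning for a skid-steer UGV: left/right wheel speeds map
# to body linear and angular velocity, integrated into a pose (x, y, theta).
# Illustrative only -- geometry and rates are invented for the example.

def skid_steer_step(x, y, theta, v_left, v_right, track_width=0.6, dt=0.1):
    v = 0.5 * (v_left + v_right)            # forward velocity [m/s]
    w = (v_right - v_left) / track_width    # yaw rate [rad/s]
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

# Drive straight for 1 s (10 steps of 0.1 s), both sides at 1 m/s.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = skid_steer_step(*pose, v_left=1.0, v_right=1.0)
```

Commanding unequal wheel speeds instead would make the yaw term nonzero and curve the trajectory; on real terrain, slip makes this estimate drift, motivating the tightly-coupled LiDAR-inertial correction.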