2,418 research outputs found

    Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments

    To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: costs must be reduced and the quality and usefulness of robot services must be enhanced. Unfortunately, the deployment of robots and the adaptation of their services to new environments are currently tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots that is easy and fast to deploy in different environments. The cameras enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras support the movement of the robots, enabling them to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal. This work was supported by the research projects TIN2009-07737, INCITE08PXIB262202PR and TIN2012-32262, the grant BES-2010-040813 FPI-MICINN, and by the grant “Consolidation of Competitive Research Groups, Xunta de Galicia ref. 2010/6”.
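
    A minimal sketch of the camera-assisted navigation idea described in this abstract (purely illustrative, not the authors' algorithm): camera nodes with overlapping views form an adjacency graph, the camera that detects an event floods a hop-count gradient through the network, and a robot is routed camera-to-camera down that gradient without any metric map. All node names and the topology below are assumptions.

```python
# Illustrative only: hop-count gradient over a camera network, no metric map.
from collections import deque

def hop_gradient(adjacency, event_node):
    """Breadth-first flood: hop distance from every camera to the event camera."""
    dist = {event_node: 0}
    queue = deque([event_node])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def route_robot(adjacency, dist, robot_node):
    """Greedy descent of the hop gradient yields a camera-to-camera route."""
    path = [robot_node]
    while dist[path[-1]] > 0:
        path.append(min(adjacency[path[-1]], key=dist.get))
    return path

cameras = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B", "E"], "E": ["D"]}
gradient = hop_gradient(cameras, event_node="E")       # e.g. camera E saw a person
print(route_robot(cameras, gradient, robot_node="A"))  # ['A', 'B', 'D', 'E']
```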

    NeBula: TEAM CoSTAR’s robotic autonomy solution that won Phase II of the DARPA Subterranean Challenge

    This paper presents and discusses the algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques used in the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR’s demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, as well as the specific results and lessons learned from fielding this solution on the challenging courses of the DARPA Subterranean Challenge competition.
    Agha, A., Otsu, K., Morrell, B., Fan, D. D., Thakker, R., Santamaria-Navarro, A., Kim, S.-K., Bouman, A., Lei, X., Edlund, J., Ginting, M. F., Ebadi, K., Anderson, M., Pailevanian, T., Terry, E., Wolf, M., Tagliabue, A., Vaquero, T. S., Palieri, M., Tepsuporn, S., Chang, Y., Kalantari, A., Chavez, F., Lopez, B., Funabiki, N., Miles, G., Touma, T., Buscicchio, A., Tordesillas, J., Alatur, N., Nash, J., Walsh, W., Jung, S., Lee, H., Kanellakis, C., Mayo, J., Harper, S., Kaufmann, M., Dixit, A., Correa, G. J., Lee, C., Gao, J., Merewether, G., Maldonado-Contreras, J., Salhotra, G., Da Silva, M. S., Ramtoula, B., Fakoorian, S., Hatteland, A., Kim, T., Bartlett, T., Stephens, A., Kim, L., Bergh, C., Heiden, E., Lew, T., Cauligi, A., Heywood, T., Kramer, A., Leopold, H. A., Melikyan, H., Choi, H. C., Daftry, S., Toupet, O., Wee, I., Thakur, A., Feras, M., Beltrame, G., Nikolakopoulos, G., Shim, D., Carlone, L., & Burdick, J.
    The work was partially supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004), and by the Defense Advanced Research Projects Agency (DARPA).
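
    To make the belief-space idea concrete, here is a toy, hedged example (not NeBula's planner): a belief is represented as a discrete probability distribution over world states, and the robot selects the action that minimizes expected cost under that belief rather than the cost under the single most likely state. All states, actions and costs below are invented for illustration.

```python
# Illustrative sketch of decision making in the belief space (made-up numbers).
def expected_cost(action, belief, cost):
    """E[cost(action)] under a discrete belief {state: probability}."""
    return sum(p * cost[(action, state)] for state, p in belief.items())

belief = {"passage_clear": 0.7, "passage_blocked": 0.3}
cost = {
    ("go_direct", "passage_clear"): 10.0,
    ("go_direct", "passage_blocked"): 200.0,   # risky if the passage is blocked
    ("take_detour", "passage_clear"): 40.0,
    ("take_detour", "passage_blocked"): 40.0,  # safe but longer
}
actions = ["go_direct", "take_detour"]
best = min(actions, key=lambda a: expected_cost(a, belief, cost))
print(best, {a: expected_cost(a, belief, cost) for a in actions})
# go_direct: 0.7*10 + 0.3*200 = 67 ; take_detour: 40 -> the detour is chosen,
# even though "go_direct" is best under the most likely state alone.
```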

    Collaborative autonomy in heterogeneous multi-robot systems

    As autonomous mobile robots become increasingly connected and widely deployed in different domains, managing multiple robots and their interaction is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; many robots today are deployed as part of larger fleets or in teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are numerous: significantly higher degrees of situational awareness and understanding of the environment can be achieved when robots with different operational capabilities are deployed together. Examples include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, or the highly heterogeneous robot teams that explored caves and other complex environments during the recent DARPA Subterranean (SubT) Challenge. This thesis delves into the wide topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for achieving robust collaboration: solving collaborative decision-making problems; securing their operation, management and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework to achieve collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner. I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations. I also study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new algorithms for formation control that require zero to minimal communication, provided that a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new approach to cooperative path planning with UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.
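
    As an illustration of the UWB-based localization discussed above (a sketch under assumptions, not the thesis code), the snippet below estimates a tag position from noisy ranges to anchors at known positions by linearizing the range equations against the first anchor and solving a small least-squares problem; the anchor layout and noise level are made up.

```python
# Illustrative UWB-style multilateration from ranges to known anchors.
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a position from ranges to at least dim+1 anchors at known positions."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    # |x - a_i|^2 = r_i^2 minus the same equation for anchor 0 is linear in x.
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(0)
anchors = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]]
true_pos = np.array([3.0, 7.0])
ranges = [np.linalg.norm(true_pos - a) + rng.normal(0, 0.05) for a in anchors]
print(multilaterate(anchors, ranges))   # close to [3, 7]
```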

    Range-only SLAM schemes exploiting robot-sensor network cooperation

    Simultaneous localization and mapping (SLAM) is a key problem in robotics: a robot with no previous knowledge of the environment builds a map of that environment and localizes itself in it. Range-only SLAM is a particularization of the SLAM problem that uses only the information provided by range sensors. This PhD Thesis describes the design, integration, evaluation and validation of a set of schemes for accurate and efficient range-only simultaneous localization and mapping (RO-SLAM) exploiting the cooperation between robots and sensor networks. The Thesis proposes a general architecture for RO-SLAM with cooperation between robots and sensor networks. The adopted architecture has two main characteristics. First, it exploits the sensing, computational and communication capabilities of sensor network nodes: both the robot and the beacons actively participate in the execution of the RO-SLAM filter. Second, it integrates not only robot-beacon measurements but also range measurements between two different beacons, the so-called inter-beacon measurements. Most reported RO-SLAM methods are executed in a centralized manner in the robot, where all tasks are performed, including measurement gathering, integration of measurements in RO-SLAM and the Prediction stage. These fully centralized RO-SLAM methods impose a high computational burden on the robot and have very poor scalability.
    This Thesis proposes three different schemes that work under the aforementioned architecture. These schemes exploit the advantages of cooperation between robots and sensor networks and aim to minimize the drawbacks of this cooperation. The first scheme is a RO-SLAM scheme with dynamically configurable measurement gathering. Integrating inter-beacon measurements in RO-SLAM significantly improves map estimation but involves a high consumption of resources, such as the energy required to gather and transmit measurements, the bandwidth required by the measurement collection protocol and the computational burden necessary to integrate the larger number of measurements. The objective of this scheme is to reduce the increase in resource consumption resulting from the integration of inter-beacon measurements by adopting a centralized mechanism, running in the robot, that adapts measurement gathering. The second scheme is a distributed RO-SLAM scheme based on the Sparse Extended Information Filter (SEIF). It reduces the increase in resource consumption by adopting a distributed SLAM filter in which each beacon is responsible for gathering its measurements to the robot and to other beacons and for computing the SLAM Update stage in order to integrate its measurements in SLAM; moreover, it inherits the scalability of the SEIF. The third scheme is a resource-constrained RO-SLAM scheme based on the distributed SEIF previously presented. It combines the two mechanisms developed in the previous contributions (measurement gathering control and distribution of the RO-SLAM Update stage among beacons) in order to reduce the increase in resource consumption resulting from the integration of inter-beacon measurements, and it exploits robot-beacon cooperation to improve SLAM accuracy and efficiency while meeting a given resource consumption bound.
    The resource consumption bound is expressed in terms of the maximum number of measurements that can be integrated in SLAM per iteration: the sensing channel capacity used, the beacon energy consumed and the computational capacity employed, among others, are proportional to the number of measurements that are gathered and integrated in SLAM. The performance of the proposed schemes has been analyzed and compared with each other and with existing works, and the schemes have been validated in real experiments with aerial robots. This Thesis shows that the cooperation between robots and sensor networks provides many advantages for solving the RO-SLAM problem: resource consumption is an important constraint in sensor networks, the proposed architecture allows the advantages of cooperation to be exploited, and the proposed schemes address the resource limitations without degrading performance.
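
    To illustrate how a range measurement can be fused into a SLAM filter (a minimal EKF-style sketch under assumptions, not the SEIF-based filters of the thesis), the snippet below stacks the robot position and one beacon position in a single state vector and applies the standard EKF update for a robot-beacon range; an inter-beacon measurement would use the same Jacobian structure with both endpoints taken from the state. All numbers are invented.

```python
# Minimal range-only EKF update (illustrative only).
import numpy as np

def range_update(x, P, z, i, j, R=0.05**2):
    """Fuse a range z measured between state entries [i:i+2] and [j:j+2]."""
    d = x[i:i+2] - x[j:j+2]
    pred = np.linalg.norm(d)        # predicted range
    H = np.zeros((1, len(x)))
    H[0, i:i+2] = d / pred          # d(range)/d(first endpoint)
    H[0, j:j+2] = -d / pred         # d(range)/d(second endpoint)
    S = H @ P @ H.T + R             # innovation covariance (1x1)
    K = P @ H.T / S                 # Kalman gain
    x = x + (K * (z - pred)).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [robot_x, robot_y, beacon_x, beacon_y]; beacon poorly known at first.
x = np.array([0.0, 0.0, 4.0, 1.0])
P = np.diag([0.01, 0.01, 4.0, 4.0])
x, P = range_update(x, P, z=5.0, i=0, j=2)   # one measured robot-beacon range
print(x, np.diag(P))                         # beacon uncertainty shrinks along the range direction
```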