
    Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments

    To bring cutting-edge robotics from research centres into social environments, the robotics community must start providing affordable solutions: costs must be reduced and the quality and usefulness of robot services must be enhanced. Unfortunately, deploying robots and adapting their services to new environments are currently tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras support the movement of the robots, enabling them to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed, and based on self-organization processes. Our system is scalable, robust, and adaptable to the environment. We carried out several real-world experiments, which show the good performance of our proposal. This work was supported by the research projects TIN2009-07737, INCITE08PXIB262202PR, and TIN2012-32262, the grant BES-2010-040813 FPI-MICINN, and the grant “Consolidation of Competitive Research Groups, Xunta de Galicia ref. 2010/6”.
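    The abstract describes camera-supported navigation without maps. A minimal sketch of one way such map-free routing could work is shown below, assuming the self-organized cameras expose a simple adjacency graph of overlapping fields of view; the camera names and graph are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' implementation): route a robot through
# an unmapped building by hopping between camera fields of view.
from collections import deque

# Assumed self-organized topology: each camera knows which cameras have
# neighbouring or overlapping fields of view.
camera_graph = {
    "cam_entrance": ["cam_hall"],
    "cam_hall": ["cam_entrance", "cam_lab", "cam_office"],
    "cam_lab": ["cam_hall"],
    "cam_office": ["cam_hall"],
}

def camera_route(graph, start, goal):
    """Breadth-first search over the camera graph: the robot moves from the
    field of view of one camera to the next until it reaches the goal area."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# The robot is currently seen by cam_entrance and must reach the office area.
print(camera_route(camera_graph, "cam_entrance", "cam_office"))
# -> ['cam_entrance', 'cam_hall', 'cam_office']
```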

    Ten years of cooperation between mobile robots and sensor networks

    This paper presents an overview of the work carried out by the Group of Robotics, Vision and Control (GRVC) at the University of Seville on the cooperation between mobile robots and sensor networks. The GRVC, led by Professor Anibal Ollero, has been working over the last ten years on techniques in which robots and sensor networks exploit synergies and collaborate tightly, developing numerous research projects on the topic. In this paper, based on our research, we introduce what we consider some relevant challenges when combining sensor networks with mobile robots. We then describe the techniques we have developed and our main results for these challenges. In particular, the paper focuses on autonomous self-deployment of sensor networks; cooperative localization and tracking; self-localization and mapping; and large-scale scenarios. Extensive experimental results and lessons learnt are also discussed in the paper.
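    One of the listed topics is cooperative localization, where fixed sensor nodes help a robot estimate its own position. The sketch below is an illustrative, generic range-based trilateration, not the GRVC method; the anchor positions and measured distances are made-up values.

```python
# Illustrative sketch: localize a mobile robot from range measurements taken
# by three fixed sensor-network nodes at known positions.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])  # node positions (m)
ranges = np.array([5.0, 8.06, 5.0])                        # measured distances (m)

def trilaterate(anchors, ranges):
    """Linearize the range equations against the first anchor and solve the
    resulting system in a least-squares sense."""
    x0, y0 = anchors[0]
    r0 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return est

print(trilaterate(anchors, ranges))   # approx. robot position [3.0, 4.0]
```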

    Recognizing Objects In-the-wild: Where Do We Stand?

    The ability to recognize objects is an essential skill for a robotic system acting in human-populated environments. Despite decades of effort from the robotics and vision research communities, robots still lack good visual perception systems, preventing the use of autonomous agents in real-world applications. Progress is slowed by the lack of a testbed able to accurately represent the world perceived by a robot in the wild. To fill this gap, we introduce a large-scale, multi-view object dataset collected with an RGB-D camera mounted on a mobile robot. The dataset embeds the challenges faced by a robot in a real-life application and provides a useful tool for validating object recognition algorithms. Besides describing the characteristics of the dataset, the paper evaluates the performance of a collection of well-established deep convolutional networks on the new dataset and analyzes the transferability of deep representations from Web images to robotic data. Despite the promising results obtained with such representations, the experiments demonstrate that object classification with real-life robotic data is far from being solved. Finally, we provide a comparative study to analyze and highlight the open challenges in robot vision, explaining the discrepancies in performance.
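    The transferability question in the abstract is commonly studied by reusing a network pretrained on Web images as a frozen feature extractor and training only a new classification head on robot data. The sketch below illustrates that generic setup with torchvision; the class count and the random batch are placeholders, and this is not the paper's exact protocol.

```python
# Illustrative transfer-learning sketch: ImageNet-pretrained backbone reused
# as a frozen feature extractor, with only a new linear head trained on
# robot-collected frames.
import torch
import torch.nn as nn
from torchvision import models

NUM_ROBOT_CLASSES = 50  # placeholder: number of classes in the robot dataset

backbone = models.resnet18(weights="IMAGENET1K_V1")  # Web-image representation
for param in backbone.parameters():
    param.requires_grad = False                       # keep the backbone frozen
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ROBOT_CLASSES)  # new head

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimisation step on a batch of robot images (NCHW tensor, labels N)."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with a random batch standing in for real robot frames:
loss = train_step(torch.randn(8, 3, 224, 224),
                  torch.randint(0, NUM_ROBOT_CLASSES, (8,)))
```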

    NeBula: TEAM CoSTAR’s robotic autonomy solution that won phase II of DARPA subterranean challenge

    This paper presents and discusses the algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR’s demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, and present the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.
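    The core idea named in the abstract, reasoning in the belief space, can be illustrated with a toy discrete example: keep a probability distribution over world states, update it with Bayes' rule after an observation, and pick the action with the lowest expected risk under that belief. The states, observation model, and costs below are invented for illustration and are not part of NeBula.

```python
# Toy belief-space decision sketch (not the NeBula framework itself).
import numpy as np

states = ["passage_clear", "passage_blocked"]
belief = np.array([0.5, 0.5])                   # prior over world states

# Assumed observation model P(observation | state)
likelihood = {"opening_seen": np.array([0.9, 0.2]),
              "no_opening":   np.array([0.1, 0.8])}

def update_belief(belief, observation):
    """Bayes update of the belief after receiving an observation."""
    posterior = likelihood[observation] * belief
    return posterior / posterior.sum()

# Risk (cost) of each action in each state: rows = actions, cols = states
actions = ["drive_through", "detour"]
cost = np.array([[1.0, 20.0],    # driving through a blocked passage is costly
                 [5.0,  5.0]])   # detour has a fixed moderate cost

belief = update_belief(belief, "opening_seen")
expected_cost = cost @ belief
print(dict(zip(actions, expected_cost)),
      "->", actions[int(np.argmin(expected_cost))])
```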

    Sparse robot swarms: Moving swarms to real-world applications

    Robot swarms are groups of robots that each act autonomously based only on local perception and coordination with neighbouring robots. While current swarm implementations can be large in size (e.g. 1000 robots), they are typically constrained to working in highly controlled indoor environments. Moreover, a common property of swarms is the underlying assumption that the robots act in close proximity to each other (e.g. 10 body lengths apart), typically employing uninterrupted, situated, close-range communication for coordination. Many real-world applications, including environmental monitoring and precision agriculture, however, require scalable groups of robots to act jointly over large distances (e.g. 1000 body lengths), rendering the use of dense swarms impractical. Using a dense swarm for such applications would be invasive to the environment and unrealistic in terms of mission deployment, maintenance and post-mission recovery. To address this problem, we propose the sparse swarm concept and illustrate its use in the context of four application scenarios. For one scenario, which requires a group of rovers to traverse and monitor a forest environment, we identify the challenges involved at all levels in developing a sparse swarm, from the hardware platform to communication-constrained coordination algorithms, and discuss potential solutions. We outline open questions of theoretical and practical nature, which we hope will bring the concept of sparse swarms to fruition.
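    As a rough illustration of communication-constrained coordination in a sparse swarm, the sketch below has each rover keep only the last, possibly stale, position report received from its neighbours and move to maintain a target spacing. This is a hypothetical example, not the paper's algorithm; the spacing, gain, and positions are made up.

```python
# Hypothetical sparse-swarm spacing sketch under intermittent communication.
import numpy as np

TARGET_SPACING = 100.0   # metres, i.e. many body lengths apart
GAIN = 0.1               # how strongly a rover reacts to being too close

class Rover:
    def __init__(self, position):
        self.position = np.asarray(position, dtype=float)
        self.last_heard = {}            # rover id -> last reported position

    def receive(self, rover_id, reported_position):
        """Store an infrequent (possibly stale) position message from a neighbour."""
        self.last_heard[rover_id] = np.asarray(reported_position, dtype=float)

    def step(self):
        """Move away from neighbours that are closer than the target spacing."""
        move = np.zeros(2)
        for pos in self.last_heard.values():
            offset = self.position - pos
            dist = np.linalg.norm(offset)
            if 0 < dist < TARGET_SPACING:
                move += GAIN * (TARGET_SPACING - dist) * offset / dist
        self.position += move
        return self.position

rover = Rover([0.0, 0.0])
rover.receive("rover_b", [30.0, 0.0])    # a neighbour reported itself 30 m away
print(rover.step())                      # rover nudges away to restore spacing
```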