
    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
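    The standard formulation mentioned above poses SLAM as maximum-a-posteriori estimation over the robot's trajectory given relative measurements. As a minimal illustrative sketch (our own toy example, not the paper's formulation), a 1D pose graph with noisy odometry and one loop closure reduces to a linear least-squares problem:

```python
import numpy as np

# Toy 1D pose-graph SLAM: estimate poses x_0..x_3 from odometry factors and
# one loop-closure factor. All measurement values below are hypothetical.

# Odometry: (i, j, z_ij) meaning x_j - x_i was measured as z_ij.
odometry = [(0, 1, 1.1), (1, 2, 1.0), (2, 3, 0.9)]
# Loop closure: the robot re-observes its start, measuring x_3 - x_0 = 3.2.
loop = [(0, 3, 3.2)]

n = 4
rows, rhs = [], []
# Prior anchoring x_0 = 0 (removes the gauge freedom of the graph).
prior = np.zeros(n)
prior[0] = 1.0
rows.append(prior)
rhs.append(0.0)
for i, j, z in odometry + loop:
    r = np.zeros(n)
    r[j], r[i] = 1.0, -1.0          # residual: (x_j - x_i) - z_ij
    rows.append(r)
    rhs.append(z)

A, b = np.array(rows), np.array(rhs)
# MAP estimate under unit-variance Gaussian noise = linear least squares.
x_map, *_ = np.linalg.lstsq(A, b, rcond=None)
# The 0.2 m disagreement between the odometry chain (3.0) and the loop
# closure (3.2) is spread across all factors: x_map ≈ [0, 1.15, 2.2, 3.15].
```

    In real SLAM systems the factors are nonlinear (rotations, landmarks) and the same least-squares structure is solved iteratively, but the graph-of-constraints view is identical.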

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided a reduced but clear and relevant overview of the ground and the robots and human rescue workers therein.

    Vision-model-based Real-time Localization of Unmanned Aerial Vehicle for Autonomous Structure Inspection under GPS-denied Environment

    UAVs have been widely used in visual inspections of buildings, bridges, and other structures. In both outdoor autonomous and semi-autonomous flight missions, a strong GPS signal is vital for a UAV to locate its own position. However, a strong GPS signal is not always available: it can degrade or be fully lost underneath large structures or close to power lines, which can cause serious control issues or even UAV crashes. Such limitations highly restrict the application of UAVs as a routine inspection tool in various domains. In this paper, a vision-model-based real-time self-positioning method is proposed to support autonomous aerial inspection without the need for GPS support. Compared to other localization methods that require additional onboard sensors, the proposed method uses a single camera to continuously estimate the in-flight poses of the UAV. Each step of the proposed method is discussed in detail, and its performance is tested through an indoor test case. Comment: 8 pages, 5 figures, submitted to i3ce 201
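    The core idea of model-based self-positioning is to match image observations against known 3D points of the structure. As a heavily simplified sketch (our own, with assumed values; the rotation is taken as known/identity so the problem becomes linear in the translation, whereas the paper's method also estimates orientation):

```python
import numpy as np

# Recover a camera's position t from known 3D model points and their pixel
# projections, assuming a pinhole camera with rotation R = I. Hypothetical
# illustration only; focal length and point coordinates are made up.

f = 500.0                                    # focal length in pixels (assumed)
# Known model points (X, Y, Z) on the structure, in the structure frame.
P = np.array([[ 1.0,  0.5, 10.0],
              [-2.0,  1.0, 12.0],
              [ 0.5, -1.5,  9.0]])
t_true = np.array([0.3, -0.2, 1.0])          # ground-truth camera position

# Pinhole projection with R = I:
#   u = f*(X - tx)/(Z - tz),  v = f*(Y - ty)/(Z - tz)
uv = np.array([[f*(X - t_true[0])/(Z - t_true[2]),
                f*(Y - t_true[1])/(Z - t_true[2])] for X, Y, Z in P])

# Rearranging each projection gives two equations linear in (tx, ty, tz):
#   f*tx        - u*tz = f*X - u*Z
#          f*ty - v*tz = f*Y - v*Z
A, b = [], []
for (X, Y, Z), (u, v) in zip(P, uv):
    A.append([f, 0.0, -u]); b.append(f*X - u*Z)
    A.append([0.0, f, -v]); b.append(f*Y - v*Z)
t_est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
# With noise-free observations, t_est matches t_true to numerical precision.
```

    The full pose problem (rotation plus translation from 2D-3D correspondences) is the classical Perspective-n-Point problem, solved iteratively in practice.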

    Autonomous navigation of mobile robot using kinect sensor

    The problem of achieving real-time processing in depth-camera applications, in particular when used for indoor mobile robot localization and navigation, is far from solved. Thus, this paper presents autonomous navigation of a mobile robot using a Kinect sensor. With a Microsoft Kinect XBOX 360 as the main sensor, the robot is expected to navigate and avoid obstacles safely. Using depth data, 3D point clouds, and filtering and clustering processes, the Kinect sensor is expected to be able to differentiate obstacles from the traversable path in order to navigate safely. This research therefore proposes the creation of a low-cost autonomous mobile robot that can navigate safely.
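    The depth-to-point-cloud step in such a pipeline can be sketched as follows (an illustrative toy, not the paper's implementation; the intrinsics are approximate Kinect v1 values and the obstacle rule is a deliberately crude range threshold):

```python
import numpy as np

# Back-project a Kinect depth image into a 3D point cloud, then flag nearby
# points as obstacles. Intrinsics are typical Kinect v1 values (assumed).

fx = fy = 575.8            # focal lengths in pixels (approximate)
cx, cy = 320.0, 240.0      # principal point for a 640x480 depth image

def depth_to_cloud(depth_m):
    """Convert an HxW depth image (metres) to an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

# Toy depth frame: a wall at 2 m plus an 80x40-pixel 'box' patch at 1 m.
depth = np.full((480, 640), 2.0)
depth[200:280, 300:340] = 1.0
cloud = depth_to_cloud(depth)

# Crude obstacle rule for the sketch: anything nearer than 1.5 m. A real
# pipeline would instead filter and cluster the cloud as the paper describes.
obstacles = cloud[cloud[:, 2] < 1.5]   # the 80*40 = 3200 box points
```

    Real pipelines follow this with voxel/statistical filtering and Euclidean clustering so each obstacle becomes a distinct object for the planner.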

    Mobile Robot Range Sensing through Visual Looming

    This article describes and evaluates visual looming as a monocular range sensing method for mobile robots. The looming algorithm is based on the relationship between the displacement of a camera relative to an object and the resulting change in the size of the object's image on the focal plane of the camera. We have carried out systematic experiments to evaluate the ranging accuracy of the looming algorithm using a Pioneer I mobile robot equipped with a color camera. We have also performed a noise sensitivity analysis for the looming algorithm, obtaining theoretical error bounds on the range estimates for given levels of odometric and visual noise, which were verified through experimental data. Our results suggest that looming can be used as a robust, inexpensive range sensor as a complement to sonar. Defense Advanced Research Projects Agency; Office of Naval Research; Navy Research Laboratory (00014-96-1-0772, 00014-95-1-0409)
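    The size-versus-displacement relationship behind looming can be written down directly (our notation, not the paper's): for a pinhole camera, an object of fixed width images with size inversely proportional to range, so advancing a known odometric distance d while the image size grows from s1 to s2 determines the range.

```python
# Pinhole model: image size s = f*W/R, so s1*R1 = s2*R2 with R2 = R1 - d.
# Solving for the initial range:  R1 = d * s2 / (s2 - s1).

def looming_range(s1, s2, d):
    """Initial range to the object, from image sizes before/after a move d."""
    assert s2 > s1 > 0, "object must loom (grow) as the camera approaches"
    return d * s2 / (s2 - s1)

# Example: image width grows from 40 to 50 pixels after advancing 1.0 m,
# so R1 = 1.0 * 50 / (50 - 40) = 5.0 m (and the object is now 4.0 m away).
print(looming_range(40, 50, 1.0))   # → 5.0
```

    Note the division by (s2 - s1): small size changes amplify pixel noise, which is why the paper's error bounds depend on both visual and odometric noise levels.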

    Proceedings of the 4th field robot event 2006, Stuttgart/Hohenheim, Germany, 23-24th June 2006

    Very extensive report of the 4th Field Robot Event, held on 23 and 24 June 2006 in Stuttgart/Hohenheim.