
    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
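    As a point of reference, the de-facto standard formulation mentioned above is maximum-a-posteriori estimation over a factor graph. A minimal sketch of that objective, with notation assumed here rather than taken from the paper, is:

        \hat{\mathcal{X}} = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
                          = \arg\min_{\mathcal{X}} \sum_{k} \big\| h_k(\mathcal{X}_k) - z_k \big\|^2_{\Omega_k}

    where \mathcal{X} stacks the robot poses (and landmarks), z_k are the individual measurements, h_k their measurement models, and \Omega_k the corresponding information matrices; the front-end produces the factors and the back-end solves the resulting nonlinear least-squares problem.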

    The simultaneous localization and mapping (SLAM):An overview

    Positioning is a requirement for many applications related to mapping and navigation, in both civilian and military domains. Significant developments in satellite-based techniques, sensors, telecommunications, computer hardware and software, image processing, and related fields have made it possible to solve the positioning problem efficiently and in real time. These developments have, in turn, enabled the application and advancement of autonomous navigation. One of the most interesting positioning techniques is what is known in robotics as Simultaneous Localization and Mapping (SLAM). Solutions to the SLAM problem have improved rapidly over the last decades, using either active sensors such as Radio Detection and Ranging (radar) and Light Detection and Ranging (LiDAR), or passive sensors such as cameras. Positioning and mapping are among the main tasks of Geomatics engineers, so it is important for them to understand SLAM; this is not easy given the vast literature and the many algorithms available, and the variety of SLAM solutions in terms of mathematical models, complexity, sensors used, and types of application. This paper introduces a clear and simplified explanation of SLAM from a Geomatics viewpoint, avoiding the complicated algorithmic details behind the presented techniques. A general overview of SLAM is presented, showing the relationship between its different components and stages, such as the core front-end and back-end, and their relation to the SLAM paradigm. Furthermore, we explain the major mathematical techniques of filtering and pose graph optimization for both visual and LiDAR SLAM, and summarise the contribution of deep learning to the SLAM problem. Finally, we give examples of existing practical applications of SLAM.
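    To illustrate the pose-graph back-end idea referred to above, the toy example below treats poses as unknowns and relative measurements (odometry and a loop closure) as constraints, then solves the resulting least-squares problem. It is a generic illustration, not code from the paper; 1-D poses are used so the problem stays linear, whereas real back-ends solve the nonlinear SE(2)/SE(3) version iteratively.

    # Minimal pose-graph back-end: poses are unknowns, relative measurements
    # (odometry, loop closures) are constraints, and the back-end solves a
    # least-squares problem over them. Toy 1-D example, so it is linear.
    import numpy as np

    # Relative constraints: (from_index, to_index, measured_displacement, weight)
    constraints = [
        (0, 1, 1.0, 1.0),   # odometry
        (1, 2, 1.1, 1.0),   # odometry (slightly biased)
        (2, 3, 0.9, 1.0),   # odometry
        (0, 3, 3.0, 10.0),  # loop closure: strong, trusted measurement
    ]
    n = 4

    # Build the linear system A x = b with a prior fixing pose 0 at the origin.
    A, b = [], []
    A.append(np.eye(1, n, 0)[0] * 100.0); b.append(0.0)  # prior on x0
    for i, j, z, w in constraints:
        row = np.zeros(n)
        row[j], row[i] = 1.0, -1.0          # x_j - x_i should equal z
        A.append(np.sqrt(w) * row)
        b.append(np.sqrt(w) * z)

    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    print("optimised poses:", np.round(x, 3))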

    Contributions to autonomous robust navigation of mobile robots in industrial applications

    One aspect in which current mobile platforms lag behind the level already reached elsewhere in industry is precision. The fourth industrial revolution brought with it the deployment of machinery in most industrial processes, and a strength of such machinery is its repeatability. Autonomous mobile robots, which offer the greatest flexibility, lack this capability, mainly due to the noise inherent in sensor readings and the dynamic nature of most environments. For this reason, a large part of this work focuses on quantifying the error committed by the main mapping and localisation methods for mobile robots, and on offering different alternatives for improving positioning. The main sources of information with which mobile robots perform these functions are exteroceptive sensors, which measure the environment rather than the state of the robot itself. For this reason, some methods are highly dependent on the scenario in which they were developed and do not obtain the same results when it changes. Most mobile platforms generate a map representing their surrounding environment and base many of their calculations on it to carry out actions such as navigation. This map generation is a process that in most cases requires human intervention and has a great impact on the subsequent operation of the robot. In the last part of this work, a method is proposed that aims to optimise this step, generating a richer model of the environment without requiring additional time to do so.
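    A common way to quantify the kind of positioning error discussed above, not necessarily the metric used in this thesis, is the absolute trajectory error: the RMSE between estimated and ground-truth positions at matched timestamps. A minimal sketch:

    # Generic absolute trajectory error (ATE) computation; illustrative only,
    # not necessarily the thesis's evaluation protocol. Trajectories are assumed
    # already time-aligned (and, if needed, rigidly aligned) before comparison.
    import numpy as np

    def ate_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
        """Both arrays are (N, 2) or (N, 3) positions at matched timestamps."""
        diff = estimated - ground_truth
        return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))

    # Example: a short 2-D trajectory with small estimation noise.
    gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.1]])
    est = gt + np.array([[0.00, 0.01], [0.02, -0.01], [-0.01, 0.02], [0.03, 0.00]])
    print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")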

    Advances towards behaviour-based indoor robotic exploration

    The main contributions of this research work lie in object recognition by computer vision on the one hand, and in robot localisation and mapping on the other. The first contribution area addresses object recognition in mobile robots. In this area, door handle recognition is of great importance, as it helps the robot to identify doors in places where the camera cannot view the whole door. In this research, a new two-step algorithm based on feature extraction is presented, aimed at improving the extracted features so as to reduce the number of superfluous keypoints to be compared while increasing efficiency, improving accuracy and reducing computational time. In contrast to segmentation-based paradigms, the feature-extraction-based two-step method can easily be generalised to other types of handles, or even to other types of objects such as road signs. Experiments have shown very good accuracy when tested in real environments with different kinds of door handles. With respect to the second contribution, a new technique is presented for constructing a topological map during the exploration phase a robot would perform in an unseen office-like environment. Firstly, a preliminary approach is proposed to merge Markovian localisation into a distributed system, which requires low storage and computational resources and is suitable for dynamic environments. In the same area, a second contribution to terrain-inspection-level behaviour-based navigation concerns the development of an automatic mapping method for acquiring the procedural topological map. The new approach is based on a typicality test called INCA to perform the so-called loop-closing action. The method was integrated into a behaviour-based control architecture and tested in both simulated and real robot/environment systems. The developed system also proved to be useful for localisation purposes.
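    The Markovian localisation mentioned above is, in its simplest form, a discrete Bayes filter over a grid of candidate robot positions. The following is a minimal generic sketch of that filter, not the thesis's distributed implementation; the corridor map, motion model and sensor probabilities are assumptions for illustration.

    # Grid-based Markov localisation: keep a probability for each cell of a
    # 1-D corridor and update it with motion (predict) and sensor readings
    # (update). Generic sketch; parameters are illustrative assumptions.
    import numpy as np

    n_cells = 10
    belief = np.full(n_cells, 1.0 / n_cells)          # start fully uncertain
    doors = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # map: where doors are

    def predict(belief, move=1, p_correct=0.8):
        """Shift the belief by `move` cells, keeping some probability in place."""
        shifted = np.roll(belief, move)
        return p_correct * shifted + (1.0 - p_correct) * belief

    def update(belief, saw_door, p_hit=0.9, p_miss=0.2):
        """Weight each cell by how well it explains the door observation."""
        likelihood = np.where(doors == (1 if saw_door else 0), p_hit, p_miss)
        posterior = belief * likelihood
        return posterior / posterior.sum()

    # The robot sees a door, moves one cell, sees no door, moves, sees a door.
    for saw_door in (True, False, True):
        belief = update(belief, saw_door)
        belief = predict(belief, move=1)
    print("most likely cell:", int(np.argmax(belief)))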

    Learning cognitive maps: Finding useful structure in an uncertain world

    In this chapter we describe the central mechanisms that influence how people learn about large-scale space. We focus particularly on how these mechanisms enable people to cope effectively both with the uncertainty inherent in a constantly changing world and with the high information content of natural environments. The major lessons are that humans get by with a "less is more" approach to building structure, and that they are able to adapt quickly to environmental changes thanks to a range of general-purpose mechanisms. By looking at abstract principles, instead of concrete implementation details, it is shown that the study of human learning can provide valuable lessons for robotics. Finally, these issues are discussed in the context of an implementation on a mobile robot. © 2007 Springer-Verlag Berlin Heidelberg

    Visual Localisation of Quadruped Walking Robots


    SeDAR: Reading Floorplans Like a Human—Using Deep Learning to Enable Human-Inspired Localisation

    The use of human-level semantic information to aid robotic tasks has recently become an important area for both Computer Vision and Robotics. This has been enabled by advances in Deep Learning that allow consistent and robust semantic understanding. Leveraging this semantic vision of the world has allowed human-level understanding to naturally emerge from many different approaches. In particular, the use of semantic information to aid localisation and reconstruction has been at the forefront of both fields. Like robots, humans also require the ability to localise within a structure. To aid this, humans have designed high-level semantic maps of our structures called floorplans. We are extremely good at localising in them, even with limited access to the depth information used by robots. This is because we focus on the distribution of semantic elements rather than geometric ones. Evidence of this is that humans are normally able to localise in a floorplan that has not been scaled properly. In order to grant this ability to robots, it is necessary to use localisation approaches that leverage the same semantic information humans use. In this paper, we present a novel method for semantically enabled global localisation. Our approach relies on the semantic labels present in the floorplan. Deep Learning is leveraged to extract semantic labels from RGB images, which are compared to the floorplan for localisation. While our approach is able to use range measurements if available, we demonstrate that they are unnecessary, as we can achieve results comparable to the state of the art without them. Funded by EPSRC, Innovate UK, and NVIDIA Corporation.
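    One way to picture the semantic matching described above is a particle filter in which each candidate pose is scored by how well the semantic labels observed from the camera agree with the labels the floorplan predicts at that pose. The sketch below is a generic illustration of that scoring step, not the SeDAR implementation; the label set and probabilities are assumptions.

    # Generic sketch of semantically weighted particle scoring: each particle
    # (candidate pose) is weighted by how often the semantic label seen along a
    # viewing direction matches the label the floorplan predicts for that pose.
    import numpy as np

    LABELS = {"wall": 0, "door": 1, "window": 2}

    def particle_weight(observed_labels, predicted_labels, p_match=0.8, p_mismatch=0.1):
        """Likelihood of the observation given the labels the floorplan predicts."""
        obs, pred = np.asarray(observed_labels), np.asarray(predicted_labels)
        per_ray = np.where(obs == pred, p_match, p_mismatch)
        return float(np.prod(per_ray))

    # Observation from the camera (one label per viewing direction) ...
    observed = [LABELS["wall"], LABELS["door"], LABELS["wall"], LABELS["window"]]
    # ... compared against what two candidate poses predict from the floorplan.
    pose_a_pred = [LABELS["wall"], LABELS["door"], LABELS["wall"], LABELS["window"]]
    pose_b_pred = [LABELS["wall"], LABELS["wall"], LABELS["door"], LABELS["wall"]]

    weights = np.array([particle_weight(observed, p) for p in (pose_a_pred, pose_b_pred)])
    weights /= weights.sum()   # normalised particle weights
    print("pose A vs pose B:", np.round(weights, 3))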

    Formation Navigation and Relative Localisation of Multi-Robot Systems

    When proceeding from single to multiple robots, cooperative action is one of the most relevant topics. The domain of robotic security systems contains typical applications for a multi-robot system (MRS); possible scenarios are safety and security tasks at airports, harbours, large industrial plants or museums, and environmental supervision is an emerging field. Inherent to these applications is the need for organised and coordinated navigation of the robots, and a vital prerequisite for any coordinated movement is good localisation. This dissertation presents novel approaches to the problems of formation navigation and relative localisation with multiple ground-based mobile robots. It also looks into the question of what kind of metric is applicable to multi-robot navigation problems. The focus of this work is on two aspects:
    1. Coordinated navigation and movement. A new potential-field-based approach to formation navigation is presented. In contrast to classical potential-field-based formation approaches, the proposed method also uses the orientation between neighbours in the formation, so each robot has a designated position within the formation; the new method is therefore called the directed potential field approach (a minimal sketch of this idea follows the abstract). Extensive experiments show that the method is capable of generating all kinds of formation shapes, even in the presence of dense obstacles. All tests were conducted with simulated and real robots and successfully guided the robot formation through environments with varying obstacle configurations. In comparison, the non-directed potential field approach turns out to be unstable with respect to the positions of the robots within formations: the robots tend to switch positions, e.g. when passing through narrow passages. Under such conditions the directed approach shows a preferable behaviour, called "breathing": the formation shrinks or inflates depending on the obstacle situation while trying to maintain its shape and keep the robots at their desired positions inside the formation. For a more precise comparison of formation algorithms it is important to have measures that allow a meaningful evaluation of the experimental data, and for this purpose a new formation metric is developed. If there are many obstacles, the formation error must be scaled down to be comparable to an empty environment, where the error would be small. Assuming that the environment is unknown and possibly non-static, only current sensor information can be used for these calculations. We developed a special weighting factor, which is inversely proportional to the "density" of obstacles and which turns out to model the influence of the environment adequately.
    2. Relative localisation. A new method for relative localisation between the members of a robot group is introduced. This approach uses mutual sensor observations to localise the robots with respect to each other, without requiring an environment model. Techniques such as the Extended Kalman Filter (EKF) have proven to be powerful tools in single-robot applications; this work presents extensions of these algorithms for use in MRS, investigated and combined with the aim of improving and stabilising the performance of the localisation and navigation process. Most common localisation approaches use maps and/or landmarks with the intention of generating a globally consistent world-coordinate system for the robot group. The aim of the relative localisation approach presented here, on the other hand, is to maintain only relative positioning between the robots. The presented method enables a group of mobile robots to start at an unknown location in an unknown environment and then incrementally estimate their own positions and the relative locations of the other robots using only sensor information. The result is a robust, fast and precise approach, which does not need any preconditions or special assumptions about the environment. To validate the approach, extensive tests with both real and simulated robots were conducted. For a more specific evaluation, the Mean Localisation Error (MLE) is introduced. The conducted experiments include a comparison between the proposed Extended Kalman Filter and a standard SLAM-based approach. The developed method robustly delivered an accuracy better than 2 cm and performed at least as well as the SLAM approach. The algorithm coped with scattered groups of robots moving on arbitrarily shaped paths.
    In summary, this thesis presents novel approaches to the field of coordinated navigation in multi-robot systems. The results facilitate cooperative movements of robot groups as well as relative localisation among the group members. In addition, a solid foundation for an environment-independent metric for formation navigation is introduced.
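    To make point 1 above concrete, here is a minimal generic sketch of a directed potential-field force for one robot in a formation: attraction toward the robot's designated slot, defined by a desired offset from a neighbour, plus repulsion from nearby obstacles. The decomposition, gains and force law are assumptions for illustration, not the dissertation's exact formulation.

    # Directed potential-field force for one formation member: attraction toward
    # the robot's designated slot (a desired offset from a neighbour, so positions
    # within the formation are not interchangeable) plus repulsion from obstacles.
    # Illustrative sketch only; gains and the force law are assumptions.
    import numpy as np

    def formation_force(pos, neighbour_pos, desired_offset, obstacles,
                        k_att=1.0, k_rep=0.5, influence=2.0):
        """Return a 2-D force vector steering the robot toward its slot."""
        pos, neighbour_pos = np.asarray(pos, float), np.asarray(neighbour_pos, float)
        # Attraction toward the designated slot relative to the neighbour.
        slot = neighbour_pos + np.asarray(desired_offset, float)
        force = k_att * (slot - pos)
        # Repulsion from each obstacle closer than the influence radius.
        for obs in obstacles:
            diff = pos - np.asarray(obs, float)
            dist = np.linalg.norm(diff)
            if 1e-6 < dist < influence:
                force += k_rep * (1.0 / dist - 1.0 / influence) * diff / dist**2
        return force

    # Robot at (0, 0), neighbour at (1, 0), desired slot 1 m behind the neighbour,
    # one obstacle just to the robot's left.
    f = formation_force([0.0, 0.0], [1.0, 0.0], [-1.0, 0.0], obstacles=[[0.0, 0.5]])
    print("steering force:", np.round(f, 3))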

    Robotic navigation and inspection of bridge bearings

    This thesis focuses on the development of a robotic platform for bridge bearing inspection. The existing literature on this topic highlights an aspiration for increased automation of bridge inspection, due to an increasing amount of ageing infrastructure and costly inspection. Furthermore, bridge bearings are highlighted as being among the most costly components of a bridge to maintain. However, although autonomous robotic inspection is often stated as an aspiration, the existing literature on robotic bridge inspection often neglects the requirement for autonomous navigation. To achieve autonomous inspection, some method for mapping and localising within the bridge structure is required. This thesis compares existing methods for simultaneous localisation and mapping (SLAM) with localisation-only methods. In addition, a method for using pre-existing data to create maps for localisation is proposed. A robotic platform was developed, and these methods for localisation and mapping were compared first in a laboratory environment and then in a real bridge environment. The errors in the bridge environment are greater than in the laboratory environment, but remain within a defined error bound. A combined approach is suggested as a way of pairing the lower errors of a SLAM approach with the advantages of a localisation approach for defining existing goals; longer-term testing in a real bridge environment is still required. The use of existing inspection data is then extended to the creation of a simulation environment, with the goal of creating a methodology for testing different configurations of bridges or robots in a more realistic setting than laboratory testing or other existing simulation environments. Finally, the inspection of the structure surrounding the bridge bearing is considered, with a particular focus on the detection and segmentation of cracks in concrete. A deep learning approach is used to segment cracks from an existing dataset and compared to an existing machine learning approach, with the deep learning approach achieving higher performance under a pixel-based evaluation. Other evaluation methods were also compared that take into account the structure of the crack and other related datasets. The generalisation of the crack segmentation approach is evaluated by comparing the results of models trained on different datasets. Finally, recommendations for improving the datasets to allow better comparisons in future work are given.
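    The pixel-based evaluation mentioned above typically reduces to comparing a predicted binary crack mask against a ground-truth mask pixel by pixel. The sketch below shows the usual form of such a comparison (precision, recall, F1); it is a generic illustration, not necessarily the thesis's exact protocol.

    # Pixel-based evaluation of a binary crack-segmentation mask against ground
    # truth: precision, recall and F1 computed over all pixels. Generic sketch.
    import numpy as np

    def pixel_scores(pred: np.ndarray, truth: np.ndarray):
        """Both inputs are boolean arrays of the same shape (True = crack pixel)."""
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # Tiny example: a 3x3 patch with a diagonal crack, one pixel missed and one spurious.
    truth = np.eye(3, dtype=bool)
    pred = truth.copy(); pred[2, 2] = False; pred[0, 1] = True
    print("precision, recall, F1:", [round(float(v), 2) for v in pixel_scores(pred, truth)])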