
    Visual Place Recognition in Changing Environments

    Localization is an essential capability of mobile robots, and place recognition is an important component of localization. Only with precise localization can robots reliably plan, navigate, and understand the environment around them. The main task of a visual place recognition algorithm is to decide, based on visual input, whether the robot has previously seen a given place in the environment. Cameras are among the most popular sensors from which robots obtain information: they are lightweight, affordable, and provide detailed descriptions of the environment in the form of images. They have proven useful for a wide variety of emerging applications, from virtual and augmented reality to autonomous cars and even fleets of autonomous cars, all of which require precise localization. State-of-the-art methods can nowadays reliably estimate the position of a robot from image streams. One of the remaining challenges, however, is localizing a camera from an image stream in the presence of drastic visual appearance changes in the environment. Such changes may have many causes, ranging from camera-related factors such as changes in exposure time, to viewpoint-related factors such as observing the scene from a different position or viewing angle and occlusions, to natural factors such as seasonal changes, different weather conditions, and illumination changes. These effects alter how the same place appears in an image and can make it hard even for humans to recognize a place. The performance of traditional visual localization approaches such as FAB-MAP or DBoW also decreases dramatically in the presence of strong visual appearance changes. The techniques presented in this thesis aim at improving the visual place recognition capabilities of robotic systems under dramatic visual appearance changes. To reduce the effect of visual changes on image matching performance, we exploit sequences of images rather than individual images. This is possible because robotic systems collect data sequentially, not in random order. We formulate visual place recognition under strong appearance changes as the problem of matching image sequences collected by a robotic system at different points in time. A key insight is that matching sequences reduces the ambiguity in the data associations. This allows us to establish image correspondences between different sequences and thus to recognize whether two images show the same place in the environment. To search for image correspondences, we construct a graph that encodes the potential matches between the sequences while preserving the sequentiality of the data; the shortest path through this data association graph yields the valid image correspondences between the sequences. Robots operating reliably in an environment should be able to recognize a place in an online manner, not only after all data has been recorded. As opposed to collecting image sequences and determining the associations between them offline, a real-world system should be able to make a decision for every incoming image. In this thesis, we therefore propose an algorithm that performs visual place recognition in changing environments in an online fashion between the query sequence and previously recorded reference sequences.
    For every incoming query image, our algorithm then checks whether the robot is in the previously seen environment, i.e., whether a matching image exists in the reference sequence, and whether the current measurement is consistent with previously obtained query images. To recognize places online, a robot additionally needs to detect that it has left the previously mapped area and to relocalize when it re-enters the environment covered by the reference sequence. We therefore relax the assumption that the robot always travels within the previously mapped area and propose an improved graph-based matching procedure that allows for visual place recognition with partially overlapping image sequences. To achieve long-term autonomy, we further increase the robustness of our place recognition algorithm by incorporating information from multiple image sequences collected along different overlapping and non-overlapping routes. This grows the coverage of the environment both in area and in the variety of scene appearances. The reference dataset then contains more images to match against, which increases the probability of finding a matching image and can lead to improved localization. To deploy a robot that localizes in large-scale environments over extended periods of time, however, collecting a reference dataset may be a tedious, resource-consuming, and in some cases intractable task. Avoiding an explicit map collection stage enables faster deployment of robotic systems in the real world, since no map has to be collected beforehand. With our visual place recognition approach the map collection stage can be skipped: thanks to its general formulation, we can incorporate information from a publicly available source, e.g., Google Street View, into our framework. This enables place recognition on already existing, publicly available data and thus avoids a costly mapping phase. In this thesis, we additionally show how to organize the images from such a publicly available source into sequences to perform out-of-the-box visual place recognition at city scale, without previously collecting the otherwise required reference image sequences. All approaches described in this thesis have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been released publicly as open source software.
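
    The core matching step can be illustrated with a small sketch: given a matrix of matching costs between query and reference images (for example, one minus a descriptor similarity), a sequence-consistent assignment corresponds to the cheapest path through a data association graph. The dynamic-programming formulation below is a simplified illustration under assumed node and edge definitions, not the exact graph construction used in the thesis.

```python
# Illustrative sketch: sequence-based place recognition as a shortest path
# through a data association graph. Assumes cost[i, j] is a matching cost
# (e.g. 1 - cosine similarity) between query image i and reference image j.
import numpy as np

def match_sequences(cost: np.ndarray, max_jump: int = 3):
    """Return, for each query image, the reference index on the cheapest
    sequence-consistent path through the cost matrix."""
    n_query, n_ref = cost.shape
    acc = np.full((n_query, n_ref), np.inf)       # accumulated path cost
    back = np.zeros((n_query, n_ref), dtype=int)  # back-pointers
    acc[0] = cost[0]
    for i in range(1, n_query):
        for j in range(n_ref):
            # sequentiality: the previous match may only lie a few frames back
            lo = max(0, j - max_jump)
            prev = acc[i - 1, lo:j + 1]
            k = int(np.argmin(prev)) + lo
            acc[i, j] = cost[i, j] + acc[i - 1, k]
            back[i, j] = k
    # trace the cheapest path back from the best terminal node
    path = [int(np.argmin(acc[-1]))]
    for i in range(n_query - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]  # reference index matched to each query image

# Usage sketch: path = match_sequences(1.0 - similarity_matrix)
```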

    X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments

    Modern robotic systems are required to operate in challenging environments, which demands reliable localization under adverse conditions. LiDAR-based localization methods, such as the Iterative Closest Point (ICP) algorithm, can suffer in geometrically uninformative environments, which are known to degrade point cloud registration performance and to push the optimization toward divergence along weakly constrained directions. To overcome this issue, this work proposes i) a robust fine-grained localizability detection module, and ii) a localizability-aware constrained ICP optimization module coupled with the localizability detection module in a unified manner. Localizability detection uses the correspondences between the scan and the map to analyze the alignment strength along the principal directions of the optimization as part of a fine-grained LiDAR localizability analysis. This localizability analysis is then integrated into the scan-to-map point cloud registration to generate drift-free pose updates by enforcing controlled updates or by leaving the degenerate directions of the optimization unchanged. The proposed method is thoroughly evaluated and compared to state-of-the-art methods in simulated and real-world experiments, demonstrating improved performance and reliability in LiDAR-challenging environments. In all experiments, the proposed framework provides accurate and generalizable localizability detection and robust pose estimation without environment-specific parameter tuning.
    Comment: 20 pages, 20 figures. Submitted to IEEE Transactions on Robotics. Supplementary Video: https://youtu.be/SviLl7q69aA Project Website: https://sites.google.com/leggedrobotics.com/x-ic
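
    A generic sketch of the underlying idea, not the X-ICP implementation itself: the eigenvalues of the point-to-plane Gauss-Newton Hessian indicate how well each direction of the 6-DoF optimization is constrained, and the pose update can be restricted to the well-constrained subspace. The Jacobian layout and the eigenvalue threshold below are illustrative assumptions.

```python
# Generic degeneracy-aware registration step (not the X-ICP code): eigen-analysis
# of the Gauss-Newton Hessian reveals weakly constrained directions, and the pose
# update is solved only in the well-constrained subspace.
import numpy as np

def localizability_filter(J: np.ndarray, r: np.ndarray, eig_thresh: float = 1e2):
    """J: (N, 6) stacked point-to-plane Jacobians, r: (N,) residuals.
    Returns a 6-DoF update with degenerate directions left unchanged."""
    H = J.T @ J                       # 6x6 Gauss-Newton Hessian approximation
    g = J.T @ r                       # gradient
    w, V = np.linalg.eigh(H)          # eigenvalues ascending, eigenvectors in columns
    informative = w > eig_thresh      # directions with enough alignment strength
    delta = np.zeros(6)
    if informative.any():
        Vi = V[:, informative]        # basis of the well-constrained subspace
        delta = -Vi @ np.linalg.solve(Vi.T @ H @ Vi, Vi.T @ g)
    return delta, informative
```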

    Localizability Optimization for Multi Robot Systems and Applications to Ultra-Wide Band Positioning

    Multi-Robot Systems (MRS) are increasingly attractive for performing tasks efficiently and robustly thanks to their redundancy. However, since the robots are autonomous vehicles, they require accurate real-time positioning. Localization techniques that use relative measurements (RMs) between the robots, i.e., distances or angles, are particularly suitable because they can exploit cooperative schemes within the MRS to enhance the precision of its positioning. In this thesis, we propose strategies to improve the localizability of the MRS, which depends on two factors. First, the geometry of the MRS fundamentally influences the quality of its positioning under noisy RMs. Second, the measurement errors are strongly influenced by the technology chosen to gather the RMs. In our experiments, we focus on Ultra-Wide Band (UWB) technology, which is popular for indoor robot positioning because of its moderate cost and high accuracy. One part of our work is therefore dedicated to correcting UWB measurement errors in order to provide an operable navigation system. In particular, we propose a calibration method for systematic biases and a multipath mitigation algorithm for indoor distance measurements. Then, we propose Localizability Cost Functions (LCFs) to characterize the geometry of the MRS and its ability to localize itself, using the Cramér-Rao Lower Bound (CRLB) as a proxy to quantify the positioning uncertainties. Subsequently, we provide decentralized optimization schemes for the LCFs under the assumption of Gaussian or log-normal RMs. Indeed, since the MRS can move, some of its robots can be deployed so as to decrease the LCF; however, the optimization of the localizability must be decentralized to scale to large MRS. We also propose extensions of the LCFs to scenarios where robots carry multiple sensors, where the RMs deteriorate with distance, and where prior information on the robots' localization is available, allowing the use of the Bayesian CRLB. The latter result is applied to placing static anchors given the statistical distribution of the MRS and to maintaining the localizability of robots that localize themselves by Kalman filtering. The theoretical contributions of this work have been validated both through large-scale simulations and through experiments with ground MRS. This manuscript is written by publication; it contains four peer-reviewed articles and an additional chapter.
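
    As a worked illustration of a CRLB-based localizability measure (a generic textbook form, not necessarily the exact LCF proposed in the thesis), the Fisher Information Matrix of range-only measurements with constant Gaussian noise can be accumulated from unit bearing vectors to the ranging partners, and its inverse bounds the position error covariance:

```python
# Illustrative localizability cost for one robot under Gaussian range
# measurements: the trace of the Cramér-Rao Lower Bound computed from the
# Fisher Information Matrix. `sigma` is an assumed constant noise std-dev.
import numpy as np

def localizability_cost(p: np.ndarray, neighbors: np.ndarray, sigma: float = 0.1):
    """p: (2,) robot position, neighbors: (K, 2) positions of ranging partners.
    Returns trace(CRLB), a lower bound on the position error variance."""
    fim = np.zeros((2, 2))
    for q in neighbors:
        d = p - q
        u = d / np.linalg.norm(d)          # unit bearing vector to the neighbor
        fim += np.outer(u, u) / sigma**2   # each range adds rank-1 information
    return np.trace(np.linalg.inv(fim))    # singular FIM (too few/collinear neighbors) raises

# A robot placement that decreases this cost improves the MRS's localizability.
```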

    A Decentralized Architecture for Active Sensor Networks

    This thesis is concerned with the Distributed Information Gathering (DIG) problem, in which a sensor network is tasked with building a common representation of the environment. The problem is motivated by the advantages offered by distributed autonomous sensing systems and the challenges they present. The focus of this study is on Macro Sensor Networks, characterized by platform mobility, heterogeneous teams, and long mission duration. The system under consideration may consist of an arbitrary number of mobile autonomous robots, stationary sensor platforms, and human operators, all linked in a network. This work describes a comprehensive framework called Active Sensor Network (ASN), which addresses the tasks of information fusion, decision making, system configuration, and user interaction. The main design objectives are scalability with the number of robotic platforms, maximum flexibility in implementation and deployment, and robustness to component and communication failure. The framework is described from three complementary points of view: architecture, algorithms, and implementation. The main contribution of this thesis is the development of the ASN architecture. Its design follows three guiding principles: decentralization, modularity, and locality of interactions. These principles are applied to all aspects of the architecture and the framework in general. To achieve flexibility, the design approach emphasizes interactions between components rather than the definition of the components themselves. The architecture specifies a small set of interfaces sufficient to implement a wide range of information gathering systems. In the area of algorithms, this thesis builds on earlier work on Decentralized Data Fusion (DDF) and its extension to information-theoretic decision making. It presents the Bayesian Decentralized Data Fusion (BDDF) algorithm, formulated for environment features represented by a general probability density function. Several specific representations are also considered: Gaussian, discrete, and the Certainty Grid map. Well-known algorithms for these representations are shown to implement various aspects of the Bayesian framework. As part of the ASN implementation, a practical indoor sensor network has been developed and tested. Two series of experiments were conducted, utilizing two types of environment representation: 1) point features with Gaussian position uncertainty and 2) Certainty Grid maps. The network was operational for several days at a time, with individual platforms coming on- and off-line. On several occasions, the network consisted of 39 software components. The lessons learned during the system's development may be applicable to other heterogeneous distributed systems with data-intensive algorithms.
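
    The additive structure that makes decentralized data fusion attractive is easiest to see in the Gaussian case, where estimates are kept in information (canonical) form and fusion amounts to summing information contributions. The sketch below illustrates this flavour only; it omits the channel filters a full DDF/BDDF implementation needs to avoid double-counting common information.

```python
# Minimal sketch of decentralized data fusion for a Gaussian feature estimate
# kept in information (canonical) form, where fusion is additive.
import numpy as np

class GaussianInfoNode:
    """One sensing node holding a feature estimate in information form."""

    def __init__(self, dim: int):
        self.Y = np.zeros((dim, dim))   # information matrix, Sigma^{-1}
        self.y = np.zeros(dim)          # information vector, Sigma^{-1} mu

    def local_observation(self, H: np.ndarray, R: np.ndarray, z: np.ndarray):
        """Fuse a local measurement z = H x + v, with v ~ N(0, R)."""
        Ri = np.linalg.inv(R)
        self.Y += H.T @ Ri @ H
        self.y += H.T @ Ri @ z

    def fuse_from_neighbor(self, dY: np.ndarray, dy: np.ndarray):
        """Add the information increments a neighbor has communicated."""
        self.Y += dY
        self.y += dy

    def estimate(self):
        """Recover mean and covariance (requires an informative estimate)."""
        cov = np.linalg.inv(self.Y)
        return cov @ self.y, cov
```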

    Anchor Self-Calibrating Schemes for UWB based Indoor Localization

    Traditional indoor localization techniques that use Received Signal Strength or Inertial Measurement Units for dead reckoning suffer from signal attenuation and sensor drift, resulting in inaccurate position estimates. Newly available Ultra-Wideband radio modules can measure distances with centimeter-level accuracy while mitigating the effects of multipath propagation thanks to their very fine time resolution. Known locations of fixed anchor nodes are required to determine the position of tag nodes within an indoor environment. For a large system consisting of several anchor nodes spanning a wide area, physically mapping out the location of each anchor node is a tedious task, which makes such systems difficult to scale. It is therefore important to develop indoor localization systems in which the anchors can self-calibrate by determining their relative positions in Euclidean 3D space with respect to each other. In this thesis, we propose two novel anchor self-calibration algorithms, the Triangle Reconstruction Algorithm (TRA) and Channel Impulse Response Positioning (CIRPos), that improve upon existing range-based implementations and address problems such as flip ambiguity and node localization success rate. The localization accuracy and scalability of the self-calibrating anchor schemes are tested in a simulated environment based on the ranging accuracy of the Ultra-Wideband modules.
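
    For context, the relative geometry of a set of anchors can in principle be recovered from a complete inter-anchor distance matrix by classical multidimensional scaling, up to a rigid transform and reflection (the flip ambiguity mentioned above). The sketch below is that textbook baseline, not the TRA or CIRPos algorithms proposed in the thesis.

```python
# Generic illustration of range-based anchor self-calibration: classical MDS
# recovers relative anchor coordinates (up to rotation, translation, and
# reflection) from a complete matrix of pairwise UWB ranges.
import numpy as np

def relative_anchor_positions(D: np.ndarray, dim: int = 3):
    """D: (N, N) matrix of pairwise ranges between anchors. Returns (N, dim)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered squared distances
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]           # keep the `dim` largest components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```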

    FIT-SLAM -- Fisher Information and Traversability estimation-based Active SLAM for exploration in 3D environments

    Active visual SLAM finds a wide array of applications for ground robots in GNSS-denied subterranean and outdoor environments. To achieve robust localization and mapping accuracy, it is imperative to incorporate perception considerations into goal selection and path planning toward the goal during an exploration mission. In this work, we propose FIT-SLAM (Fisher Information and Traversability estimation-based Active SLAM), a new exploration method tailored for unmanned ground vehicles (UGVs) exploring 3D environments. The approach is devised with the dual objectives of sustaining an efficient exploration rate and optimizing SLAM accuracy. First, a global traversability map is estimated, which accounts for the environmental constraints on traversability. Subsequently, we propose a goal candidate selection approach, along with a path planning method toward the goal, that takes into account the information provided by the landmarks used by the SLAM backend to achieve robust localization and successful path execution. The entire algorithm is first tested and evaluated in a simulated 3D world, then in a real-world environment, and is compared to pre-existing exploration methods. The results obtained during this evaluation demonstrate a significant increase in the exploration rate while effectively minimizing the localization covariance.
    Comment: 6 pages, 6 figures, IEEE ICARA 202
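
    A hedged sketch of how a goal candidate might be scored by trading off expected landmark information against terrain traversability, in the spirit of the approach described above; the visibility model, the D-optimality information proxy, and the weights are illustrative assumptions rather than the FIT-SLAM formulation.

```python
# Illustrative goal scoring: combine a landmark-based information proxy with a
# traversability estimate. Higher scores indicate more attractive goals.
import numpy as np

def score_goal(goal, landmarks, traversability, sensor_range=10.0,
               w_info=1.0, w_trav=1.0):
    """goal: (2,) candidate position; landmarks: (M, 2) mapped landmark positions;
    traversability: callable returning a value in [0, 1] at a position."""
    fim = np.zeros((2, 2))
    for lm in landmarks:
        d = np.linalg.norm(lm - goal)
        if 1e-6 < d < sensor_range:
            u = (lm - goal) / d
            fim += np.outer(u, u)          # each visible landmark adds information
    # D-optimality proxy: log-determinant of the (regularized) information matrix
    info = np.log(np.linalg.det(fim + 1e-9 * np.eye(2)))
    return w_info * info + w_trav * traversability(goal)

# best_goal = max(candidates, key=lambda g: score_goal(g, landmarks, lookup_traversability))
```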