
    Perception-aware Tag Placement Planning for Robust Localization of UAVs in Indoor Construction Environments

    Full text link
    Tag-based visual-inertial localization is a lightweight method for enabling autonomous data collection missions of low-cost unmanned aerial vehicles (UAVs) in indoor construction environments. However, finding the optimal tag configuration (i.e., number, size, and location) on dynamic construction sites remains challenging. This paper proposes a perception-aware genetic algorithm-based tag placement planner (PGA-TaPP) to determine the optimal tag configuration using 4D-BIM, considering the project progress, safety requirements, and the UAV's localizability. The proposed method provides a 4D plan for tag placement by maximizing the localizability in user-specified regions of interest (ROIs) while limiting the installation costs. Localizability is quantified using the Fisher information matrix (FIM) and encapsulated in navigable grids. The experimental results show the effectiveness of our method in finding an optimal 4D tag placement plan for the robust localization of UAVs on under-construction indoor sites. Comment: [Final draft] This material may be downloaded for personal use only. Any other use requires prior permission of the American Society of Civil Engineers and the Journal of Computing in Civil Engineering
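    The following sketch illustrates how a tag configuration might be scored in the spirit described above: each grid cell accumulates Fisher information from the tags visible to it, the log-determinant of the resulting FIM serves as a localizability score, and a genetic-algorithm fitness trades ROI coverage against installation cost. The range-only noise model, visibility cutoff, and cost weighting are simplifying assumptions, not the paper's exact visual-inertial formulation.

```python
import numpy as np

def cell_fim(cell_xy, tag_positions, sigma=0.05, max_range=8.0):
    """Approximate 2-D Fisher information at a grid cell from visible tags.
    Each visible tag contributes a rank-1 term along the cell-to-tag direction
    (an assumption; the paper derives the FIM from its full measurement model)."""
    fim = np.zeros((2, 2))
    for tag in tag_positions:
        d = np.asarray(tag, float) - np.asarray(cell_xy, float)
        r = np.linalg.norm(d)
        if r < 1e-6 or r > max_range:   # tag not visible from this cell
            continue
        u = d / r                        # unit direction to the tag
        fim += np.outer(u, u) / sigma**2
    return fim

def localizability(cell_xy, tag_positions):
    """Scalar localizability score: log-determinant of the FIM (D-optimality)."""
    sign, logdet = np.linalg.slogdet(cell_fim(cell_xy, tag_positions) + 1e-9 * np.eye(2))
    return logdet if sign > 0 else -np.inf

def ga_fitness(tag_positions, roi_cells, cost_per_tag=1.0):
    """GA fitness: total localizability over the ROI minus an installation-cost penalty."""
    coverage = sum(localizability(c, tag_positions) for c in roi_cells)
    return coverage - cost_per_tag * len(tag_positions)

# Example: score two candidate tags over a small 3x3 ROI
roi = [(x, y) for x in range(3) for y in range(3)]
print(ga_fitness([(0.0, 4.0), (4.0, 0.0)], roi))
```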

    X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments

    Full text link
    Modern robotic systems are required to operate in challenging environments, which demand reliable localization under adverse conditions. LiDAR-based localization methods, such as the Iterative Closest Point (ICP) algorithm, can suffer in geometrically uninformative environments that are known to deteriorate point cloud registration performance and push the optimization toward divergence along weakly constrained directions. To overcome this issue, this work proposes i) a robust fine-grained localizability detection module, and ii) a localizability-aware constrained ICP optimization module, which is coupled with the localizability detection module in a unified manner. The proposed localizability detection is achieved by utilizing the correspondences between the scan and the map to analyze the alignment strength against the principal directions of the optimization as part of its fine-grained LiDAR localizability analysis. In the second part, this localizability analysis is then integrated into the scan-to-map point cloud registration to generate drift-free pose updates by enforcing controlled updates or leaving the degenerate directions of the optimization unchanged. The proposed method is thoroughly evaluated and compared to state-of-the-art methods in simulated and real-world experiments, demonstrating the performance and reliability improvements in LiDAR-challenging environments. In all experiments, the proposed framework demonstrates accurate and generalizable localizability detection and robust pose estimation without environment-specific parameter tuning. Comment: 20 pages, 20 figures. Submitted to IEEE Transactions on Robotics. Supplementary Video: https://youtu.be/SviLl7q69aA Project Website: https://sites.google.com/leggedrobotics.com/x-ic
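    A minimal sketch of the underlying idea, assuming a point-to-plane cost and translation-only analysis: an approximate Hessian built from correspondence normals is eigendecomposed, directions whose information falls below a threshold are flagged as degenerate, and the update is solved only along the well-constrained directions. The threshold and the omission of rotational terms are illustrative assumptions, not X-ICP's actual fine-grained analysis.

```python
import numpy as np

def translation_hessian(normals):
    """Approximate translational Hessian of a point-to-plane ICP problem:
    the sum of n n^T over correspondence normals (rotational terms omitted
    here for brevity)."""
    N = np.asarray(normals, float)
    return N.T @ N

def constrained_update(hessian, gradient, threshold=10.0):
    """Eigen-analysis of the Hessian: directions whose information falls below
    the threshold are treated as degenerate and left unchanged; the update is
    solved only along well-constrained eigen-directions (the threshold is an
    assumed tuning value)."""
    eigvals, eigvecs = np.linalg.eigh(hessian)
    update = np.zeros(hessian.shape[0])
    for lam, v in zip(eigvals, eigvecs.T):
        if lam > threshold:                      # well-constrained direction
            update += (v @ gradient) / lam * v   # solve H u = g restricted to this direction
    return update

# Corridor-like scene: normals mostly face +-y and +-z, almost none along x,
# so the x direction is weakly constrained and receives no update.
normals = np.array([[0, 1, 0]] * 40 + [[0, 0, 1]] * 40 + [[1, 0, 0]] * 2, float)
H = translation_hessian(normals)
g = np.array([0.3, -0.2, 0.1])                   # stand-in for the ICP gradient J^T r
print(constrained_update(H, g))
```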

    Where to Map? Iterative Rover-Copter Path Planning for Mars Exploration

    Full text link
    In addition to conventional ground rovers, the Mars 2020 mission will send a helicopter to Mars. The copter's high-resolution data helps the rover to identify small hazards such as steps and pointy rocks, as well as providing rich textural information useful to predict perception performance. In this paper, we consider a three-agent system composed of a Mars rover, copter, and orbiter. The objective is to provide good localization to the rover by selecting an optimal path that minimizes the localization uncertainty accumulated during the rover's traverse. To achieve this goal, we quantify localizability as a goodness measure associated with the map, and conduct a joint-space search over the rover's path and the copter's perceptual actions given prior information from the orbiter. We jointly address where to map by the copter and where to drive by the rover using the proposed iterative copter-rover path planner. We conducted numerical simulations using a map of the Mars 2020 landing site to demonstrate the effectiveness of the proposed planner. Comment: 8 pages, 7 figures
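    As a rough illustration of the rover-side planning step, the sketch below runs a Dijkstra search over a grid in which each step pays a traversal cost plus a penalty where localizability is poor; in an iterative scheme, the copter would next be tasked to map the weakest cells along the returned path and planning would repeat. The cost model and weighting are assumptions, not the paper's uncertainty-propagation formulation.

```python
import heapq
import numpy as np

def plan_rover_path(loc_map, start, goal, w_unc=5.0):
    """Dijkstra over a grid where each step pays a traversal cost plus an
    uncertainty penalty that grows where localizability is poor (a simplified
    stand-in for the paper's uncertainty-accumulation model)."""
    rows, cols = loc_map.shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, np.inf):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            nd = d + 1.0 + w_unc * (1.0 - loc_map[nr, nc])
            if nd < dist.get((nr, nc), np.inf):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = cell
                heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path; the copter would then image the lowest-localizability
    # cells along it, update loc_map, and planning would repeat.
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    return [start] + path[::-1]

loc_map = np.clip(np.random.rand(20, 20), 0.1, 0.9)  # toy localizability map in [0, 1]
print(plan_rover_path(loc_map, (0, 0), (19, 19)))
```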

    A Decentralized Architecture for Active Sensor Networks

    Get PDF
    This thesis is concerned with the Distributed Information Gathering (DIG) problem, in which a Sensor Network is tasked with building a common representation of the environment. The problem is motivated by the advantages offered by distributed autonomous sensing systems and the challenges they present. The focus of this study is on Macro Sensor Networks, characterized by platform mobility, heterogeneous teams, and long mission duration. The system under consideration may consist of an arbitrary number of mobile autonomous robots, stationary sensor platforms, and human operators, all linked in a network. This work describes a comprehensive framework called Active Sensor Network (ASN) which addresses the tasks of information fusion, decision making, system configuration, and user interaction. The main design objectives are scalability with the number of robotic platforms, maximum flexibility in implementation and deployment, and robustness to component and communication failure. The framework is described from three complementary points of view: architecture, algorithms, and implementation. The main contribution of this thesis is the development of the ASN architecture. Its design follows three guiding principles: decentralization, modularity, and locality of interactions. These principles are applied to all aspects of the architecture and the framework in general. To achieve flexibility, the design approach emphasizes interactions between components rather than the definition of the components themselves. The architecture specifies a small set of interfaces sufficient to implement a wide range of information gathering systems. In the area of algorithms, this thesis builds on the earlier work on Decentralized Data Fusion (DDF) and its extension to information-theoretic decision making. It presents the Bayesian Decentralized Data Fusion (BDDF) algorithm formulated for environment features represented by a general probability density function. Several specific representations are also considered: Gaussian, discrete, and the Certainty Grid map. Well-known algorithms for these representations are shown to implement various aspects of the Bayesian framework. As part of the ASN implementation, a practical indoor sensor network has been developed and tested. Two series of experiments were conducted, utilizing two types of environment representation: 1) point features with Gaussian position uncertainty and 2) Certainty Grid maps. The network was operational for several days at a time, with individual platforms coming on- and off-line. On several occasions, the network consisted of 39 software components. The lessons learned during the system's development may be applicable to other heterogeneous distributed systems with data-intensive algorithms
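    A compact sketch of the Gaussian special case of decentralized data fusion described above: each node maintains an information matrix and vector, fuses local observations and neighbour messages by addition, and tracks what it has already sent on each channel to avoid double counting. The channel bookkeeping is simplified relative to a full channel filter, and the class and variable names are illustrative only.

```python
import numpy as np

class InformationNode:
    """Gaussian decentralized data fusion node: keeps an information matrix
    Y = P^-1 and information vector y = P^-1 x, and fuses contributions additively."""

    def __init__(self, dim):
        self.Y = np.zeros((dim, dim))   # information matrix
        self.y = np.zeros(dim)          # information vector
        self.channel = {}               # information already exchanged per neighbour

    def observe(self, H, z, R):
        """Fuse a local observation z = H x + noise(R)."""
        Rinv = np.linalg.inv(R)
        self.Y += H.T @ Rinv @ H
        self.y += H.T @ Rinv @ z

    def message_to(self, nbr):
        """Send only information the neighbour has not already received."""
        Yc, yc = self.channel.get(nbr, (np.zeros_like(self.Y), np.zeros_like(self.y)))
        msg = (self.Y - Yc, self.y - yc)
        self.channel[nbr] = (self.Y.copy(), self.y.copy())
        return msg

    def fuse_message(self, nbr, msg):
        dY, dy = msg
        self.Y += dY
        self.y += dy
        self.channel[nbr] = (self.Y.copy(), self.y.copy())

    def estimate(self):
        P = np.linalg.inv(self.Y)
        return P @ self.y, P

# Two nodes observing the same 2-D feature along different axes
a, b = InformationNode(2), InformationNode(2)
a.observe(np.array([[1.0, 0.0]]), np.array([2.0]), np.array([[0.1]]))
b.observe(np.array([[0.0, 1.0]]), np.array([3.0]), np.array([[0.1]]))
b.fuse_message("a", a.message_to("b"))
print(b.estimate()[0])   # close to [2, 3]
```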

    Localizability Optimization for Multi Robot Systems and Applications to Ultra-Wide Band Positioning

    Get PDF
    Multi-Robot Systems (MRS) make it possible to perform missions efficiently and robustly thanks to their redundancy. However, since the robots are autonomous vehicles, they require accurate real-time positioning. Localization techniques that use relative measurements (RMs) between the robots, i.e., distances or angles, are particularly suitable because they can take advantage of cooperative schemes within the MRS in order to enhance the precision of its positioning. In this thesis, we propose strategies to improve the localizability of the MRS, which is a function of two factors. First, the geometry of the MRS fundamentally influences the quality of its positioning under noisy RMs. Second, the measurement errors are strongly influenced by the technology chosen to gather the RMs. In our experiments, we focus on the Ultra-Wide Band (UWB) technology, which is popular for indoor robot positioning because of its moderate cost and high accuracy. Therefore, one part of our work is dedicated to correcting the UWB measurement errors in order to provide an operable navigation system. In particular, we propose a calibration method for systematic biases and a multi-path mitigation algorithm for indoor distance measurements. Then, we propose Localizability Cost Functions (LCF) to characterize the MRS's geometry, using the Cramér-Rao Lower Bound (CRLB) as a proxy to quantify the positioning uncertainties. Subsequently, we provide decentralized optimization schemes for the LCF under an assumption of Gaussian or log-normal RMs. Indeed, since the MRS can move, some of its robots can be deployed in order to decrease the LCF. However, the optimization of the localizability must be decentralized to be suitable for large-scale MRS. We also propose extensions of the LCFs to scenarios where robots carry multiple sensors, where the RMs deteriorate with distance, and finally, where prior information on the robots' localization is available, allowing the use of the Bayesian CRLB. The latter result is applied to static anchor placement knowing the statistical distribution of the RMs, and to maintaining the localizability of robots that localize by Kalman filtering. The theoretical contributions of our work have been validated both through large-scale simulations and experiments using ground MRS. This manuscript is written by publication; it contains four peer-reviewed articles and an additional chapter
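    The sketch below shows one plausible form of such a localizability cost function for range-only measurements: each robot's Fisher information is assembled from unit direction vectors to the robots or anchors it ranges to, and the team cost is the sum of trace(FIM^-1), a CRLB-style lower bound on position error. Gaussian range noise and this specific aggregation are assumptions; the thesis studies several LCF variants and their decentralized optimization.

```python
import numpy as np

def range_fim(position, neighbours, sigma=0.1):
    """2-D Fisher information for a robot from Gaussian range measurements to
    neighbours/anchors at known positions (standard range-only FIM)."""
    fim = np.zeros((2, 2))
    for n in neighbours:
        d = np.asarray(n, float) - np.asarray(position, float)
        r = np.linalg.norm(d)
        if r < 1e-9:
            continue
        u = d / r
        fim += np.outer(u, u) / sigma**2
    return fim

def localizability_cost(positions, edges, sigma=0.1):
    """Team-level localizability cost: sum over robots of trace(FIM^-1), i.e. a
    CRLB lower bound on each robot's position MSE (one possible LCF)."""
    cost = 0.0
    for i, p in enumerate(positions):
        nbrs = [positions[j] for (a, j) in edges if a == i] + \
               [positions[a] for (a, j) in edges if j == i]
        fim = range_fim(p, nbrs, sigma)
        cost += np.trace(np.linalg.inv(fim + 1e-9 * np.eye(2)))
    return cost

# Square formation with all-to-all ranging: a well-conditioned geometry, low cost
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(localizability_cost(pts, edges))
```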

    Visual Place Recognition in Changing Environments

    Get PDF
    Localization is an essential capability of mobile robots, and place recognition is an important component of localization. Only with precise localization can robots reliably plan, navigate, and understand the environment around them. The main task of visual place recognition algorithms is to recognize, based on the visual input, whether the robot has previously seen a given place in the environment. Cameras are among the most popular sensors robots get information from. They are lightweight, affordable, and provide detailed descriptions of the environment in the form of images. Cameras have been shown to be useful for a vast variety of emerging applications, from virtual and augmented reality to autonomous cars or even fleets of autonomous cars. All these applications need precise localization. Nowadays, state-of-the-art methods are able to reliably estimate the position of the robots using image streams. One of the big remaining challenges is the ability to localize a camera given an image stream in the presence of drastic visual appearance changes in the environment. Visual appearance changes may be caused by a variety of different reasons, ranging from camera-related factors, such as changes in exposure time, to camera-position-related factors, e.g., the scene being observed from a different position or viewing angle, occlusions, as well as factors that stem from natural sources, for example seasonal changes, different weather conditions, illumination changes, etc. These effects change the way the same place in the environment appears in the image and can lead to situations where it becomes hard even for humans to recognize the places. Also, the performance of traditional visual localization approaches, such as FABMAP or DBow, decreases dramatically in the presence of strong visual appearance changes. The techniques presented in this thesis aim at improving visual place recognition capabilities for robotic systems in the presence of dramatic visual appearance changes. To reduce the effect of visual changes on image matching performance, we exploit sequences of images rather than individual images. This becomes possible as robotic systems collect data sequentially and not in random order. We formulate the visual place recognition problem under strong appearance changes as a problem of matching image sequences collected by a robotic system at different points in time. A key insight here is the fact that matching sequences reduces the ambiguities in the data associations. This allows us to establish image correspondences between different sequences and thus recognize if two images represent the same place in the environment. To perform a search for image correspondences, we construct a graph that encodes the potential matches between the sequences and at the same time preserves the sequentiality of the data. The shortest path through such a data association graph provides the valid image correspondences between the sequences. Robots operating reliably in an environment should be able to recognize a place in an online manner and not after having recorded all data beforehand. As opposed to collecting image sequences and then determining the associations between the sequences offline, a real-world system should be able to make a decision for every incoming image. In this thesis, we therefore propose an algorithm that is able to perform visual place recognition in changing environments in an online fashion between the query and the previously recorded reference sequences.
    Then, for every incoming query image, our algorithm checks if the robot is in the previously seen environment, i.e., whether there exists a matching image in the reference sequence, as well as whether the current measurement is consistent with previously obtained query images. Additionally, to be able to recognize places in an online manner, a robot needs to recognize the fact that it has left the previously mapped area as well as relocalize when it re-enters the environment covered by the reference sequence. Thus, we relax the assumption that the robot should always travel within the previously mapped area and propose an improved graph-based matching procedure that allows for visual place recognition in the case of partially overlapping image sequences. To achieve long-term autonomy, we further increase the robustness of our place recognition algorithm by incorporating information from multiple image sequences, collected along different overlapping and non-overlapping routes. This allows us to grow the coverage of the environment in terms of area as well as various scene appearances. The reference dataset then contains more images to match against, and this increases the probability of finding a matching image, which can lead to improved localization. However, to deploy a robot that performs localization in large-scale environments over extended periods of time, collecting a reference dataset may be a tedious, resource-consuming, and in some cases intractable task. Avoiding an explicit map collection stage fosters faster deployment of robotic systems in the real world, since no map has to be collected beforehand. By using our visual place recognition approach, the map collection stage can be skipped, as we are able to incorporate information from a publicly available source, e.g., Google Street View, into our framework due to its general formulation. This automatically enables us to perform place recognition on already existing publicly available data and thus avoid a costly mapping phase. In this thesis, we additionally show how to organize the images from the publicly available source into sequences to perform out-of-the-box visual place recognition at city scale, without previously collecting the otherwise required reference image sequences. All approaches described in this thesis have been published in peer-reviewed conference papers and journal articles. In addition to that, most of the presented contributions have been released publicly as open source software
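    A simplified sketch of the graph-based sequence matching idea: nodes are (query image, reference image) pairs, node costs come from descriptor distances, edges only move forward in both sequences so sequentiality is preserved, and the shortest path yields the image correspondences. The descriptors, move set, and cost function are placeholders for illustration, not the exact graph construction used in the thesis.

```python
import heapq
import numpy as np

def match_sequences(query_desc, ref_desc):
    """Sequence-to-sequence place recognition as a shortest path through a data
    association graph: nodes are (query image, reference image) pairs, node cost
    is the descriptor distance, and edges only advance forward in the sequences."""
    Q, R = len(query_desc), len(ref_desc)
    cost = np.array([[np.linalg.norm(q - r) for r in ref_desc] for q in query_desc])
    moves = [(1, 0), (0, 1), (1, 1)]           # advance in one sequence or in both
    best = {(0, 0): cost[0, 0]}
    prev = {}
    pq = [(cost[0, 0], (0, 0))]
    while pq:
        c, (i, j) = heapq.heappop(pq)
        if (i, j) == (Q - 1, R - 1):
            break
        if c > best.get((i, j), np.inf):
            continue
        for di, dj in moves:
            ni, nj = i + di, j + dj
            if ni < Q and nj < R:
                nc = c + cost[ni, nj]
                if nc < best.get((ni, nj), np.inf):
                    best[(ni, nj)] = nc
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(pq, (nc, (ni, nj)))
    # Backtrack: the path gives the query-to-reference image correspondences
    node, path = (Q - 1, R - 1), []
    while node != (0, 0):
        path.append(node)
        node = prev[node]
    return [(0, 0)] + path[::-1]

# Toy example with 1-D "descriptors"
query = [np.array([v]) for v in [0.0, 1.0, 2.0, 3.0]]
ref = [np.array([v]) for v in [0.1, 1.1, 1.9, 3.2]]
print(match_sequences(query, ref))
```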

    Localizability of unicycle mobile robots: an algebraic point of view.

    Get PDF
    A single-landmark-based localization algorithm for unicycle mobile robots was provided in [1]. It is based on the algebraic localizability notion and an efficient differentiation algorithm in noisy environments ([2], [3]). Let us stress that this localization algorithm does not need to know the linear and angular velocities, which are reconstructed by the algorithm using the kinematic model. In this paper, a sensitivity study leads to a new fusion algorithm in the multi-landmark case, using our posture-differentiation-based estimator as a basis. Some simulations and experimental results are presented in order to prove the effectiveness of the proposed method compared to the well-known EKF method

    INSTRUCTIONS FOR PREPARATION OF CAMERA-READY MANUSCRIPTS FOR BULLETIN OF GRADUATE SCIENCE AND ENGINEERING, ENGINEERING STUDIES

    Get PDF
    In the field of autonomous mobile robotics, reliable localization performance is essential. However, there are real environments in which localization fails. In this paper, we propose a method for estimating localizability based on occupancy grid maps. Localizability indicates the reliability of localization. There are several approaches to estimating localizability, and we propose a method using local map correlations. The covariance matrix of the Gaussian distribution obtained from the local map correlations is used to estimate localizability. In this way, we can estimate the magnitude of the localization error and the characteristics of the error. The experiments confirmed the characteristics of the distribution of correlations for each location on occupancy grid maps, and the localizability of the whole map was estimated using an occupancy grid map representing a vast and complex environment. The simulation results showed that the proposed method can estimate the localization error and the characteristics of the error on occupancy grid maps. The proposed method was confirmed to be effective in estimating localizability
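    A minimal sketch of the correlation-based idea, with assumed window sizes and a simple normalization: the local window around a map cell is correlated with the map over a range of shifts, the correlation surface is treated as a probability mass, and its 2-D covariance summarizes localizability; a covariance stretched along one axis (e.g., along a corridor) indicates a weak constraint in that direction.

```python
import numpy as np

def localizability_covariance(grid, center, half_win=10, half_shift=5):
    """Estimate localizability at a map location from local map correlations:
    correlate the local window with the map over a range of shifts, normalize
    the correlation surface into weights, and return its 2-D covariance
    (window size and normalization are assumed details)."""
    r, c = center
    local = grid[r - half_win:r + half_win, c - half_win:c + half_win]
    shifts = np.arange(-half_shift, half_shift + 1)
    corr = np.zeros((shifts.size, shifts.size))
    for i, dy in enumerate(shifts):
        for j, dx in enumerate(shifts):
            window = grid[r + dy - half_win:r + dy + half_win,
                          c + dx - half_win:c + dx + half_win]
            corr[i, j] = np.sum(local * window)
    weights = corr - corr.min()
    weights /= weights.sum() + 1e-12
    dy_grid, dx_grid = np.meshgrid(shifts, shifts, indexing="ij")
    mean = np.array([np.sum(weights * dy_grid), np.sum(weights * dx_grid)])
    cov = np.zeros((2, 2))
    cov[0, 0] = np.sum(weights * (dy_grid - mean[0]) ** 2)
    cov[1, 1] = np.sum(weights * (dx_grid - mean[1]) ** 2)
    cov[0, 1] = cov[1, 0] = np.sum(weights * (dy_grid - mean[0]) * (dx_grid - mean[1]))
    return cov

# Toy occupancy grid: a long horizontal wall gives a good constraint across the
# wall but a poor constraint along it, so the covariance is stretched along x.
grid = np.zeros((60, 60))
grid[30, 10:50] = 1.0
print(localizability_covariance(grid, (30, 30)))
```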

    Toward Certifying Maps for Safe Localization Under Adversarial Corruption

    Full text link
    In this paper, we propose a way to model the resilience of the Iterative Closest Point (ICP) algorithm in the presence of corrupted measurements. In the context of autonomous vehicles, certifying the safety of the localization process poses a significant challenge. As robots operate in a complex world, various types of noise can impact the measurements. Conventionally, this noise has been assumed to be distributed according to a zero-mean Gaussian distribution. However, this assumption does not hold in numerous scenarios, including adverse weather conditions, occlusions caused by dynamic obstacles, or long-term changes in the map. In these cases, the measurements are instead affected by a large, deterministic fault. This paper introduces a closed-form formula approximating the highest pose error caused by corrupted measurements using the ICP algorithm. Using this formula, we develop a metric to certify and pinpoint specific regions within the environment where the robot is more vulnerable to localization failures in the presence of faults in the measurements. Comment: 8 pages, 5 figures. Submitted to IEEE Robotics and Automation Letters (RA-L)
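    For intuition only, the sketch below shows a much simpler bound of the same flavour: if a bounded number of correspondences are each displaced by at most a given offset, the least-squares translation of a single point-to-point alignment step (the difference of centroids) can shift by at most n_corrupted * max_offset / n_points. This is an illustrative toy bound, not the paper's closed-form approximation of the full ICP pose error.

```python
import numpy as np

def translation_error_bound(n_points, n_corrupted, max_offset):
    """Worst-case shift of the least-squares translation in a single
    point-to-point alignment step when n_corrupted of n_points correspondences
    are each displaced by at most max_offset: the optimal translation is the
    difference of centroids, so it moves by at most n_corrupted*max_offset/n_points."""
    return n_corrupted * max_offset / n_points

# The bound is attained when all corrupted points are pushed in the same direction.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 2))
corrupted = pts.copy()
corrupted[:10] += np.array([2.0, 0.0])                # 10 faults, each of magnitude 2.0
shift = np.linalg.norm(corrupted.mean(axis=0) - pts.mean(axis=0))
print(shift, translation_error_bound(100, 10, 2.0))   # both equal 0.2
```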