Where to Map? Iterative Rover-Copter Path Planning for Mars Exploration
In addition to conventional ground rovers, the Mars 2020 mission will send a
helicopter to Mars. The copter's high-resolution data helps the rover
identify small hazards such as steps and pointy rocks, and provides
rich textural information useful for predicting perception performance. In this
paper, we consider a three-agent system composed of a Mars rover, copter, and
orbiter. The objective is to provide good localization to the rover by
selecting an optimal path that minimizes the localization uncertainty
accumulation during the rover's traverse. To achieve this goal, we quantify the
localizability as a goodness measure associated with the map, and conduct a
joint-space search over the rover's path and the copter's perceptual actions,
given prior information from the orbiter. We jointly address where the copter
should map and where the rover should drive using the proposed iterative
copter-rover path planner. We conducted numerical simulations using a map of
the Mars 2020 landing site to demonstrate the effectiveness of the proposed
planner.
Comment: 8 pages, 7 figures
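The rover-side half of the joint search, picking a path that minimizes accumulated localization uncertainty over a localizability map, can be illustrated with a toy grid planner (the function, the grid encoding, and the 1/localizability cost are illustrative assumptions, not the paper's formulation):

```python
import heapq

def plan_rover_path(localizability, start, goal):
    """Toy rover planner: Dijkstra over a grid map where entering a cell
    costs 1/localizability, so the returned path minimizes the
    localization uncertainty accumulated during the traverse.  In the
    paper, the map itself would be refined by the copter's imagery."""
    rows, cols = len(localizability), len(localizability[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                nd = d + 1.0 / localizability[nxt[0]][nxt[1]]
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, cell
                    heapq.heappush(pq, (nd, nxt))
    path, cell = [goal], goal
    while cell != start:  # walk back along predecessors
        cell = prev[cell]
        path.append(cell)
    return path[::-1], dist[goal]
```

On a map where the copter has revealed a low-localizability region, such a planner routes the rover around it rather than through it.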
Visual Place Recognition in Changing Environments
Localization is an essential capability of mobile robots, and place recognition is an important component of localization. Only with precise localization can robots reliably plan, navigate, and understand the environment around them. The main task of a visual place recognition algorithm is to recognize, based on visual input, whether the robot has previously seen a given place in the environment. Cameras are among the most popular sensors from which robots obtain information: they are lightweight, affordable, and provide detailed descriptions of the environment in the form of images. Cameras have proven useful for a vast variety of emerging applications, from virtual and augmented reality to autonomous cars or even fleets of autonomous cars. All of these applications need precise localization. Nowadays, state-of-the-art methods are able to reliably estimate the position of a robot using image streams. One of the remaining big challenges is the ability to localize a camera given an image stream in the presence of drastic visual appearance changes in the environment. Visual appearance changes may be caused by a variety of factors: camera-related factors, such as changes in exposure time; camera position-related factors, e.g., when the scene is observed from a different position or viewing angle, or under occlusions; as well as factors that stem from natural sources, for example seasonal changes, different weather conditions, and illumination changes. These effects change the way the same place appears in the image and can lead to situations where it becomes hard even for humans to recognize the place. Moreover, the performance of traditional visual localization approaches, such as FAB-MAP or DBoW, decreases dramatically in the presence of strong visual appearance changes.
The techniques presented in this thesis aim at improving visual place recognition capabilities for robotic systems in the presence of dramatic visual appearance changes. To reduce the effect of visual changes on image matching performance, we exploit sequences of images rather than individual images; this is possible because robotic systems collect data sequentially and not in random order. We formulate the visual place recognition problem under strong appearance changes as a problem of matching image sequences collected by a robotic system at different points in time. A key insight is that matching sequences reduces the ambiguities in the data associations. This allows us to establish image correspondences between different sequences and thus recognize whether two images represent the same place in the environment. To search for image correspondences, we construct a graph that encodes the potential matches between the sequences and at the same time preserves the sequentiality of the data. The shortest path through such a data association graph provides the valid image correspondences between the sequences. Robots operating reliably in an environment should be able to recognize a place in an online manner, not only after having recorded all the data beforehand. As opposed to collecting image sequences and then determining the associations between them offline, a real-world system should be able to make a decision for every incoming image. In this thesis, we therefore propose an algorithm that performs visual place recognition in changing environments in an online fashion between the query and the previously recorded reference sequences. For every incoming query image, our algorithm checks whether the robot is in a previously seen environment, i.e., whether there exists a matching image in the reference sequence, and whether the current measurement is consistent with the previously obtained query images.
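A minimal sketch of this graph-based sequence matching, assuming toy feature vectors, a squared-distance cost, and a dynamic-programming shortest path (the thesis's actual graph structure, descriptors, and costs differ):

```python
import math

def match_sequences(query, reference):
    """Data-association graph between two image sequences: node (i, j)
    hypothesizes that query image i shows the same place as reference
    image j; edges only move forward through both sequences, preserving
    the order in which the data was collected.  The cheapest path under
    the descriptor-distance cost yields the image correspondences
    (computed here by dynamic programming, a shortest path in this DAG)."""
    def cost(a, b):                      # toy descriptor distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    n, m = len(query), len(reference)
    d = [[math.inf] * m for _ in range(n)]
    back = [[None] * m for _ in range(n)]
    for j in range(m):                   # first query image may match anywhere
        d[0][j] = cost(query[0], reference[j])
    for i in range(1, n):
        for j in range(m):
            for pj in (j, j - 1):        # reference index stays or advances by one
                if pj >= 0 and d[i - 1][pj] + cost(query[i], reference[j]) < d[i][j]:
                    d[i][j] = d[i - 1][pj] + cost(query[i], reference[j])
                    back[i][j] = pj
    j = min(range(m), key=lambda jj: d[n - 1][jj])
    matches = []
    for i in range(n - 1, -1, -1):       # backtrack the cheapest path
        matches.append((i, j))
        if back[i][j] is not None:
            j = back[i][j]
    return matches[::-1]
```

Matching a short query sequence against a longer reference recovers the offset at which the two routes overlap, even when individual image matches are ambiguous.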
Additionally, to recognize places in an online manner, a robot needs to recognize when it has left the previously mapped area, as well as relocalize when it re-enters the environment covered by the reference sequence. Thus, we relax the assumption that the robot always travels within the previously mapped area and propose an improved graph-based matching procedure that allows for visual place recognition in the case of partially overlapping image sequences. To achieve long-term autonomy, we further increase the robustness of our place recognition algorithm by incorporating information from multiple image sequences collected along different overlapping and non-overlapping routes. This grows the coverage of the environment in terms of area as well as scene appearances. The reference dataset then contains more images to match against, which increases the probability of finding a matching image and can lead to improved localization. However, to deploy a robot that performs localization in large-scale environments over extended periods of time, collecting a reference dataset may be a tedious, resource-consuming, and in some cases intractable task. Avoiding an explicit map collection stage fosters faster deployment of robotic systems in the real world, since no map has to be collected beforehand. With our visual place recognition approach, the map collection stage can be skipped: thanks to its general formulation, we can incorporate information from a publicly available source, e.g., Google Street View, into our framework. This automatically enables us to perform place recognition on already existing, publicly available data and thus avoid a costly mapping phase.
In this thesis, we additionally show how to organize images from a publicly available source into sequences to perform out-of-the-box visual place recognition at city scale, without previously collecting the otherwise required reference image sequences. All approaches described in this thesis have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been released publicly as open source software.
A Decentralized Architecture for Active Sensor Networks
This thesis is concerned with the Distributed Information Gathering (DIG) problem, in which a Sensor Network is tasked with building a common representation of the environment. The problem is motivated by the advantages offered by distributed autonomous sensing systems and the challenges they present. The focus of this study is on Macro Sensor Networks, characterized by platform mobility, heterogeneous teams, and long mission duration. The system under consideration may consist of an arbitrary number of mobile autonomous robots, stationary sensor platforms, and human operators, all linked in a network. This work describes a comprehensive framework called Active Sensor Network (ASN), which addresses the tasks of information fusion, decision making, system configuration, and user interaction. The main design objectives are scalability with the number of robotic platforms, maximum flexibility in implementation and deployment, and robustness to component and communication failure. The framework is described from three complementary points of view: architecture, algorithms, and implementation. The main contribution of this thesis is the development of the ASN architecture. Its design follows three guiding principles: decentralization, modularity, and locality of interactions. These principles are applied to all aspects of the architecture and the framework in general. To achieve flexibility, the design approach emphasizes interactions between components rather than the definition of the components themselves. The architecture specifies a small set of interfaces sufficient to implement a wide range of information gathering systems. In the area of algorithms, this thesis builds on earlier work on Decentralized Data Fusion (DDF) and its extension to information-theoretic decision making. It presents the Bayesian Decentralized Data Fusion (BDDF) algorithm, formulated for environment features represented by a general probability density function.
Several specific representations are also considered: Gaussian, discrete, and the Certainty Grid map. Well-known algorithms for these representations are shown to implement various aspects of the Bayesian framework. As part of the ASN implementation, a practical indoor sensor network has been developed and tested. Two series of experiments were conducted, utilizing two types of environment representation: 1) point features with Gaussian position uncertainty and 2) Certainty Grid maps. The network was operational for several days at a time, with individual platforms coming on- and off-line. On several occasions, the network consisted of 39 software components. The lessons learned during the system's development may be applicable to other heterogeneous distributed systems with data-intensive algorithms.
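The Bayesian fusion step underlying DDF can be illustrated for a discrete feature state: each platform contributes an observation likelihood, and multiplying the likelihoods into the prior yields the same posterior regardless of the order in which they arrive over the network (names and distributions below are illustrative, not the thesis's BDDF algorithm):

```python
def normalize(p):
    """Scale a non-negative vector so its entries sum to one."""
    s = sum(p)
    return [x / s for x in p]

def fuse(prior, likelihoods):
    """Bayesian fusion over a discrete feature state: multiply each
    platform's observation likelihood into the prior, then normalize.
    Because multiplication is commutative, every node that receives
    the same set of likelihoods reaches the same posterior, which is
    the property a decentralized fusion scheme relies on."""
    post = list(prior)
    for lik in likelihoods:
        post = [p * l for p, l in zip(post, lik)]
    return normalize(post)
```

Two platforms observing the same binary feature ("occupied" vs. "free") thus agree on the fused belief without any central fusion node.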
Localizability Optimization for Multi Robot Systems and Applications to Ultra-Wide Band Positioning
Multi-Robot Systems (MRS) are increasingly attractive because they can perform tasks efficiently and robustly thanks to their redundancy. However, since the robots are autonomous vehicles, they require accurate real-time positioning. Localization techniques that use relative measurements (RMs) between the robots, i.e., distances or angles, are particularly suitable because they can take advantage of cooperative schemes within the MRS to enhance the precision of its positioning. In this thesis, we propose strategies to improve the localizability of the MRS, which is a function of two factors. First, the geometry of the MRS fundamentally influences the quality of its positioning under noisy RMs. Second, the measurement errors are strongly influenced by the technology chosen to gather the RMs. In our experiments, we focus on Ultra-Wide Band (UWB) technology, which is popular for indoor robot positioning because of its moderate cost and high accuracy. Therefore, one part of our work is dedicated to correcting UWB measurement errors in order to provide an operable navigation system.
In particular, we propose a calibration method for systematic biases and a multi-path mitigation algorithm for indoor distance measurements. Then, we propose Localizability Cost Functions (LCFs) to characterize the MRS's geometry, using the Cramér-Rao Lower Bound (CRLB) as a proxy to quantify the positioning uncertainties. Subsequently, we provide decentralized optimization schemes for the LCFs under an assumption of Gaussian or log-normal RMs. Indeed, since the MRS can move, some of its robots can be deployed in order to decrease the LCF. However, the optimization of the localizability must be decentralized to suit large-scale MRS. We also propose extensions of the LCFs to scenarios where robots carry multiple sensors, where the RMs deteriorate with distance, and finally, where prior information on the robots' localization is available, allowing the use of the Bayesian CRLB. The latter result is applied to static anchor placement knowing the statistical distribution of the RMs, and to maintaining the localizability of robots that localize via Kalman filtering. The theoretical contributions of our work have been validated both through large-scale simulations and experiments using ground MRS. This manuscript is written by publication; it contains four peer-reviewed articles and an additional chapter.
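The CRLB-based localizability idea can be sketched for a single 2-D robot ranging to known anchors: the Fisher information for range-only measurements with Gaussian noise is a sum of outer products of unit bearing vectors, and the trace of its inverse lower-bounds the positioning error variance. This is a simplified stand-in for the thesis's LCFs; the function name and noise model are illustrative:

```python
import math

def crlb_trace(robot, anchors, sigma=0.1):
    """Localizability cost for a 2-D robot with range measurements to
    known anchors: Fisher information J = (1/sigma^2) * sum_k u_k u_k^T,
    where u_k is the unit bearing vector to anchor k.  The CRLB is
    J^{-1}, and trace(J^{-1}) lower-bounds the total position variance:
    lower is better, and degenerate (collinear) geometries blow it up."""
    a = b = c = 0.0  # symmetric 2x2 matrix J = [[a, b], [b, c]]
    for ax, ay in anchors:
        dx, dy = ax - robot[0], ay - robot[1]
        r = math.hypot(dx, dy)
        ux, uy = dx / r, dy / r
        a += ux * ux / sigma**2
        b += ux * uy / sigma**2
        c += uy * uy / sigma**2
    det = a * c - b * b
    return (a + c) / det  # trace of the 2x2 inverse
```

Deploying an extra robot or anchor to improve the geometry (e.g., adding a third, well-spread anchor) strictly lowers this cost, which is the quantity the decentralized schemes then minimize.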
Optimal Path Generation for Monocular Simultaneous Localization and Mapping
Monocular Simultaneous Localization and Mapping (MonoSLAM), a derivative of Simultaneous Localization and Mapping (SLAM), is a navigation method for autonomous vehicles that uses only an inertial measurement unit and a camera to map the environment and localize the vehicle's position within it. Prior to this work, multiple attempts had been made to generate optimal paths for SLAM, but no optimal-path work had been performed specifically for MonoSLAM. The author details an optimal path generation (OPG) method designed specifically for MonoSLAM. In MonoSLAM, the vehicle gains useful data when it can detect a change in bearing to objects in the environment (also known as features). The OPG method in question maximizes parallax among all visible features in the environment, with the goal of optimizing fuel usage and estimation accuracy.
In simulations comparing paths from this OPG method with typical MonoSLAM paths, the OPG method produces extremely large fuel savings (up to 98%). These fuel savings come at the expense of estimation accuracy; however, the OPG method still produces estimation performance that is acceptable for many applications. Looking forward, this work shows that it is indeed possible to improve upon the paths typically used in MonoSLAM. This thesis examines a two-dimensional MonoSLAM simulation only; no hardware implementation is performed.
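The parallax-maximization idea can be sketched in 2-D: score each candidate path by the total bearing change it induces on the visible features, and prefer the highest-scoring path. The function names and scoring details are illustrative, not the author's exact OPG, which also trades parallax off against fuel:

```python
import math

def path_parallax(path, features):
    """Total parallax a 2-D path accrues: for each feature, sum the
    absolute change in bearing between consecutive path points.  In
    MonoSLAM, bearing change to features is what makes a path
    informative for the estimator."""
    total = 0.0
    for fx, fy in features:
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            b0 = math.atan2(fy - y0, fx - x0)
            b1 = math.atan2(fy - y1, fx - x1)
            d = abs(b1 - b0)
            total += min(d, 2 * math.pi - d)  # wrap angle difference
    return total

def choose_path(candidates, features):
    """Prefer the candidate path that maximizes accrued parallax."""
    return max(candidates, key=lambda p: path_parallax(p, features))
```

A path sweeping past a feature accrues large parallax, while a path driving straight toward it accrues none, which is why straight-at-the-goal paths are poor for MonoSLAM estimation.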
Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments
RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm of Prentice and Roy, a belief-space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
Funding: United States Office of Naval Research (MURI Grants N00014-07-1-0749 and N00014-09-1-1052; Science of Autonomy Program N00014-09-1-0641); United States Army Research Office (MAST CTA; Robotics Consortium Agreement W911NF-10-2-0016); National Science Foundation (Contract IIS-0812671; Division of Information, Robotics, and Intelligent Systems Grant 0546467).
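The belief-space planning idea can be sketched in one dimension: propagate a position variance along each candidate route, growing it with motion noise and shrinking it with a Kalman update wherever the camera sees enough structure to localize. All names and noise values below are illustrative, not the belief roadmap algorithm itself, which propagates full covariances along roadmap edges:

```python
def propagate_variance(v, edges):
    """1-D sketch of planning in belief space: along each edge the
    position variance grows by motion noise q and, where the sensor
    observes structure, shrinks via a scalar Kalman update with
    measurement noise r (r is None on featureless edges)."""
    for q, r in edges:
        v = v + q                # predict: motion inflates uncertainty
        if r is not None:
            k = v / (v + r)      # Kalman gain
            v = (1.0 - k) * v    # update: measurement shrinks uncertainty
    return v

def best_route(v0, routes):
    """Pick the route whose edge sequence ends with the least variance."""
    return min(routes, key=lambda edges: propagate_variance(v0, edges))
```

A longer route past feature-rich walls can thus beat a shorter route through a perceptually bare corridor, which is exactly the trade-off a position-only planner cannot see.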
Reinforcement learning-based autonomous robot navigation and tracking
Autonomous navigation requires determining a collision-free path for a mobile robot
using only partial observations of the environment. This capability is highly needed
for a wide range of applications, such as search and rescue operations, surveillance,
environmental monitoring, and domestic service robots. In many scenarios, an accurate global map is not available beforehand, posing significant challenges for a robot
planning its path. This type of navigation is often referred to as Mapless Navigation,
and such work is not limited to Unmanned Ground Vehicles (UGVs) but extends to
other vehicles, such as Unmanned Aerial Vehicles (UAVs). This research
aims to develop Reinforcement Learning (RL)-based methods for autonomous navigation for mobile robots, as well as effective tracking strategies for a UAV to follow
a moving target.
Mapless navigation usually assumes accurate localisation, which is unrealistic.
In the real world, localisation methods, such as simultaneous localisation and mapping (SLAM), are needed. However, the localisation performance could deteriorate
depending on the environment and observation quality. Therefore, to avoid
deteriorated localisation, this work introduces an RL-based navigation algorithm to
enable mobile robots to navigate in unknown environments, while incorporating
localisation performance in training the policy. Specifically, a localisation-related
penalty is introduced in the reward space, ensuring localisation safety is taken into
consideration during navigation. Different metrics are formulated to identify if the
localisation performance starts to deteriorate in order to penalise the robot. As such, the navigation policy will not only optimise its paths in terms of travel distance and
collision avoidance towards the goal but also avoid venturing into areas that pose
challenges for localisation algorithms.
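The localisation-related penalty described above might look like the following sketch; the threshold, weights, and names are illustrative assumptions, not the thesis's exact reward space:

```python
def navigation_reward(progress, collided, loc_error,
                      loc_threshold=0.5, collision_penalty=10.0,
                      loc_penalty=5.0):
    """Illustrative mapless-navigation reward with a localisation term:
    reward progress toward the goal, penalise collisions, and add an
    extra penalty whenever a localisation metric (e.g. estimated pose
    error) deteriorates past a threshold.  The last term is what
    steers the policy away from localisation-hostile areas."""
    r = progress
    if collided:
        r -= collision_penalty
    if loc_error > loc_threshold:
        r -= loc_penalty
    return r
```

Under such shaping, a shorter path through a localisation-hostile area can score worse than a longer path where the localisation metric stays healthy.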
The localisation-safe algorithm is further extended to UAV navigation, which
uses image-based observations. Instead of deploying an end-to-end control pipeline,
this work establishes a hierarchical control framework that leverages both the capabilities of neural networks for perception and the stability and safety guarantees of
conventional controllers. The high-level controller in this hierarchical framework is a
neural network policy with semantic image inputs, trained using RL algorithms with
localisation-related rewards. The efficacy of the trained policy is demonstrated in
real-world experiments for localisation-safe navigation, and, notably, it exhibits effectiveness without the need for retraining, thanks to the hierarchical control scheme
and semantic inputs. Last, a tracking policy is introduced to enable a UAV to track a moving target. This study designs a reward space, enabling a vision-based UAV, which utilises
depth images for perception, to follow a target within a safe and visible range. The
objective is to maintain the mobile target at the centre of the drone camera’s image
without being occluded by other objects and to avoid collisions with obstacles. It
is observed that training such a policy from scratch may lead to local minima. To
address this, a state-based teacher policy is trained to perform the tracking task,
with environmental perception relying on direct access to state information, including position coordinates of obstacles, instead of depth images. An RL algorithm is
then constructed to train the vision-based policy, incorporating behavioural guidance from the state-based teacher policy. This approach yields promising tracking
performance.
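The tracking objective, keeping the target centred, unoccluded, and within a safe and visible range, can be sketched as a simple shaped reward (weights, the distance band, and names are illustrative, not the thesis's reward space):

```python
def tracking_reward(offset, distance, occluded, d_min=1.0, d_max=4.0):
    """Illustrative reward for vision-based target following:
    - offset in [0, 1] is the target's normalized deviation from the
      image centre (0 = perfectly centred), rewarded when small;
    - distance must stay inside a safe-and-visible band [d_min, d_max];
    - any occlusion of the target is penalised outright."""
    if occluded:
        return -1.0
    r = 1.0 - offset                       # centring term
    if not (d_min <= distance <= d_max):   # distance-band term
        r -= 0.5
    return r
```

A state-based teacher trained on this kind of objective with privileged obstacle positions can then guide the depth-image policy past the local minima mentioned above.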
Formulation of control strategies for requirement definition of multi-agent surveillance systems
In a multi-agent system (MAS), the overall performance is greatly influenced by both the design and the control of the agents. The physical design determines the agent capabilities, and the control strategies drive the agents to pursue their objectives using the available capabilities. The objective of this thesis is to incorporate control strategies in the early conceptual design of an MAS. As such, this thesis proposes a methodology that mainly explores the interdependency between the design variables of the agents and the control strategies used by the agents. The output of the proposed methodology, i.e. the interdependency between the design variables and the control strategies, can be utilized in the requirement analysis as well as in the later design stages to optimize the overall system through some higher fidelity analyses.
In this thesis, the proposed methodology is applied to a persistent multi-UAV surveillance problem, whose objective is to increase the situational awareness of a base that receives instantaneous monitoring information from a group of UAVs. Each UAV has a limited energy capacity and a limited communication range. Accordingly, the connectivity of the communication network becomes essential for the information flow from the UAVs to the base. In long-duration missions, the UAVs need to return to the base for refueling at certain frequencies depending on their endurance. Whenever a UAV leaves the surveillance area, the remaining UAVs may need to relocate to mitigate the impact of its absence. In the control part of this thesis, a set of energy-aware control strategies is developed for efficient multi-UAV surveillance operations. To this end, this thesis first proposes a decentralized strategy to recover the connectivity of the communication network. Second, it presents two return policies for UAVs to achieve energy-aware persistent surveillance. In the design part of this thesis, a design space exploration is performed to investigate the overall performance by varying a set of design variables and the candidate control strategies. Overall, it is shown that the control strategy used by an MAS affects the influence of the design variables on the mission performance. Furthermore, the proposed methodology identifies the preferable pairs of design variables and control strategies through low-fidelity analysis in the early design stages.
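An energy-aware return policy of the kind described can be sketched as a simple threshold test: the UAV heads back once its remaining energy falls below what a straight-line flight to the base would consume, inflated by a safety margin. All parameters and the function name are illustrative, not the thesis's two policies:

```python
def must_return(pos, base, energy, speed=1.0, power=1.0, margin=1.2):
    """Toy energy-aware return test for one UAV: estimate the energy
    needed to fly straight back to the base (power draw * flight time)
    and trigger a return when remaining energy drops below that
    estimate times a safety margin."""
    dist = ((pos[0] - base[0]) ** 2 + (pos[1] - base[1]) ** 2) ** 0.5
    needed = power * dist / speed
    return energy < margin * needed
```

How aggressively this threshold is set interacts with the design variables (endurance, communication range), which is the interdependency the methodology explores.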
Multi-SLAM Systems for Fault-Tolerant Simultaneous Localization and Mapping
Mobile robots need accurate, high-fidelity models of their operating environments in order to complete their tasks safely and efficiently. Generating these models is most often done via Simultaneous Localization and Mapping (SLAM), a paradigm where the robot alternately estimates the most up-to-date model of the environment and its position relative to this model as it acquires new information from its sensors over time. Because robots operate in many different environments with different compute, memory, sensing, and form constraints, the nature and quality of information available to individual instances of different SLAM systems varies substantially. 'One-size-fits-all' solutions are thus exceedingly difficult to engineer, and highly specialized systems, which represent the state of the art for most types of deployments, are not robust to operating conditions in which their assumptions are not met. This thesis investigates an alternative approach to these robustness and universality problems by incorporating existing SLAM solutions within a larger framework supported by planning and learning. The central idea is to combine learned models that estimate SLAM algorithm performance under a variety of sensory conditions, in this case neural networks, with planners designed for planning under uncertainty and partial observability, in this case partially observable Markov decision processes (POMDPs). Models of existing SLAM algorithms can be learned, and these models can then be used online to estimate the performance of a range of solutions to the SLAM problem at hand. The POMDP policy then selects the appropriate algorithm, given the estimated performance, the cost of switching methods, and other information. This general approach may also be applicable to many other robotics problems that rely on data fusion, such as grasp planning, motion planning, or object identification.
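The selection step can be illustrated with a deliberately myopic sketch: given learned estimates of each SLAM algorithm's performance under the current sensory conditions, pick the best candidate after charging a cost for switching away from the currently running one. The thesis frames this as a POMDP policy; the names, numbers, and one-step rule below are illustrative only:

```python
def select_slam(current, predicted, switch_cost=0.1):
    """Myopic stand-in for the POMDP policy's selection step: each
    entry of `predicted` maps a SLAM algorithm name to its estimated
    performance (e.g. from a learned model of that algorithm under
    current sensing conditions).  Switching away from the currently
    running algorithm incurs a fixed cost, so a marginally better
    alternative is not worth the switch."""
    def utility(name):
        u = predicted[name]
        if name != current:
            u -= switch_cost  # penalise changing methods
        return u
    return max(predicted, key=utility)
```

With a high switching cost the system stays with a slightly inferior but already-running method; as the predicted gap grows, the policy switches, which is the trade-off the full POMDP reasons about over time.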