
    Cooperative localization for mobile agents: a recursive decentralized algorithm based on Kalman filter decoupling

    We consider a cooperative localization technique for mobile agents with communication and computation capabilities. We start by providing an overview of different decentralization strategies in the literature, with special focus on how these algorithms maintain an account of the intrinsic correlations between the state estimates of team members. Then, we present a novel decentralized cooperative localization algorithm that is a decentralized implementation of a centralized Extended Kalman Filter for cooperative localization. In this algorithm, instead of propagating cross-covariance terms, each agent propagates new intermediate local variables that can be used in an update stage to create the required propagated cross-covariance terms. Whenever there is a relative measurement in the network, the algorithm declares the agent making this measurement the interim master. By acquiring information from the interim landmark, the agent from which the relative measurement is taken, the interim master can calculate and broadcast a set of intermediate variables which each robot can then use to update its estimates to match those of a centralized Extended Kalman Filter for cooperative localization. Once an update is done, no further communication is needed until the next relative measurement.
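    To make the decoupling concrete, the following is a minimal Python/NumPy sketch of the centralized EKF relative-measurement update that the decentralized algorithm is designed to reproduce; the linear relative-position measurement model and all variable names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def relative_update(x, P, idx_a, idx_b, z, R):
    """Centralized EKF update for a relative-position measurement
    z = x_b - x_a + noise, taken by agent a (the interim master)
    of agent b (the interim landmark).

    x : stacked team state; P : full team covariance, including the
    cross-covariance blocks the decentralized algorithm avoids storing;
    idx_a / idx_b : slices selecting the two agents' position sub-states.
    """
    m = z.size
    H = np.zeros((m, x.size))
    H[:, idx_b] = np.eye(m)          # +x_b
    H[:, idx_a] = -np.eye(m)         # -x_a
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # gains for every agent in the team
    x = x + K @ (z - H @ x)
    P = P - K @ S @ K.T
    return x, P
```

    In the decentralized version, the interim master computes and broadcasts the intermediate quantities each agent needs to reproduce its own rows of this correction, so no agent ever has to store the full team covariance.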

    Towards Collaborative Simultaneous Localization and Mapping: a Survey of the Current Research Landscape

    Motivated by the tremendous progress we witnessed in recent years, this paper presents a survey of the scientific literature on the topic of Collaborative Simultaneous Localization and Mapping (C-SLAM), also known as multi-robot SLAM. With fleets of self-driving cars on the horizon and the rise of multi-robot systems in industrial applications, we believe that Collaborative SLAM will soon become a cornerstone of future robotic applications. In this survey, we introduce the basic concepts of C-SLAM and present a thorough literature review. We also outline the major challenges and limitations of C-SLAM in terms of robustness, communication, and resource management. We conclude by exploring the area's current trends and promising research avenues. Comment: 44 pages, 3 figures.

    On-manifold Decentralized State Estimation using Pseudomeasurements and Preintegration

    This paper addresses the problem of decentralized, collaborative state estimation in robotic teams. In particular, this paper considers problems where individual robots estimate similar physical quantities, such as each other's position relative to themselves. The use of pseudomeasurements is introduced as a means of modelling such relationships between robots' state estimates, and is shown to be a tractable way to approach the decentralized state estimation problem. Moreover, this formulation easily leads to a general-purpose observability test that simultaneously accounts for measurements that robots collect from their own sensors, as well as the communication structure within the team. Finally, input preintegration is proposed as a communication-efficient way of sharing odometry information between robots, and the entire theory is appropriate for both vector-space and Lie-group state definitions. The proposed framework is evaluated on three different simulated problems, and one experiment involving three quadcopters. Comment: 15 pages, 13 figures, submitted to IEE
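    As an illustration of the pseudomeasurement idea, here is a minimal vector-space sketch in Python/NumPy; the particular constraint, state layout, and noise values are assumptions for illustration, and the paper's Lie-group formulation is not covered here.

```python
import numpy as np

def pseudomeasurement_update(x, P, H, R):
    """Kalman update with a pseudomeasurement: a constraint among state
    components whose 'observed' value is always zero."""
    z = np.zeros(H.shape[0])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new

# Hypothetical stacked state: [p_12, p_21], where p_12 is robot 1's estimate
# of robot 2's position relative to itself and p_21 is robot 2's estimate of
# robot 1's position relative to itself, both resolved in a common frame.
# The constraint p_12 + p_21 = 0 becomes the pseudomeasurement below.
H = np.hstack([np.eye(2), np.eye(2)])
R = 1e-4 * np.eye(2)                  # small noise keeps the update well-conditioned
x0 = np.array([1.0, 0.5, -0.9, -0.6])
P0 = 0.1 * np.eye(4)
x1, P1 = pseudomeasurement_update(x0, P0, H, R)
```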

    Cooperative Visual-Inertial Sensor Fusion: Fundamental Equations

    This paper provides a new basic theoretical result in the framework of cooperative visual-inertial sensor fusion. Specifically, the case of two aerial vehicles is investigated. Each vehicle is equipped with inertial sensors (accelerometer and gyroscope) and with a monocular camera. By using the monocular camera, each vehicle can observe the other vehicle. No additional camera observations (e.g., of external point features in the environment) are considered. First, the entire observable state is analytically derived. This state includes the relative position between the two aerial vehicles (which includes the absolute scale), the relative velocity, and the three Euler angles that express the rotation between the two vehicle frames. Then, the basic equations that describe this system are analytically obtained. In other words, both the dynamics of the observable state and all the camera observations are expressed only in terms of the components of the observable state and of the inertial measurements. These are the fundamental equations that fully characterize the problem of fusing visual and inertial data in the cooperative case. The last part of the paper describes the use of these equations to achieve state estimation through an EKF. In particular, a simple way to limit communication between the vehicles is discussed. Results obtained through simulations show the performance of the proposed solution, and in particular how it is affected by limiting the communication between the two vehicles.
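    As a rough illustration of how the camera observations can be written purely in terms of the observable state, here is a Python/NumPy sketch under assumed conventions: relative position and velocity are resolved in vehicle 1's body frame, the Euler angles follow a Z-Y-X convention, and each camera observation is reduced to a unit bearing vector rather than a full image projection. None of this is taken verbatim from the paper; it mirrors the structure, not the exact equations.

```python
import numpy as np

def euler_to_R(roll, pitch, yaw):
    """Z-Y-X Euler angles to a rotation matrix (the convention is an assumption)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def camera_observations(x):
    """Both monocular observations written purely in terms of the observable state.

    x = [relative position (3), relative velocity (3), Euler angles (3)],
    with position/velocity of vehicle 2 w.r.t. vehicle 1 resolved in vehicle 1's
    body frame, and R rotating vectors from frame 2 to frame 1.
    """
    p = x[0:3]
    R = euler_to_R(*x[6:9])
    bearing_1 = p / np.linalg.norm(p)     # vehicle 1's camera sees vehicle 2
    p_in_2 = R.T @ (-p)                   # vehicle 1 expressed in vehicle 2's frame
    bearing_2 = p_in_2 / np.linalg.norm(p_in_2)
    return bearing_1, bearing_2
```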

    Vision Based Collaborative Localization and Path Planning for Micro Aerial Vehicles

    Autonomous micro aerial vehicles (MAVs) have gained immense popularity in both the commercial and research worlds over the last few years. Due to their small size and agility, MAVs are considered to have great potential for civil and industrial tasks such as photography, search and rescue, exploration, inspection and surveillance. Autonomy on MAVs usually involves solving the major problems of localization and path planning. While GPS is a popular choice for localization on many MAV platforms today, it suffers from issues such as inaccurate estimation around large structures and complete unavailability in remote areas or indoor scenarios. Among the alternative sensing mechanisms, cameras are an attractive choice of onboard sensor due to the richness of the information they capture, along with their small size and low cost. Another consideration that comes into the picture for micro aerial vehicles is that these small platforms cannot fly for long periods of time or carry heavy payloads, limitations that can be addressed by allocating a group, or swarm, of MAVs to a task rather than a single vehicle. Collaboration between multiple vehicles allows for better estimation accuracy, task distribution and mission efficiency. Combining these rationales, this dissertation presents collaborative vision-based localization and path-planning frameworks. Although these were created as two separate steps, the ideal application would contain both of them as a loosely coupled localization and planning algorithm. A forward-facing monocular camera onboard each MAV is considered as the sole sensor for computing pose estimates. With this minimal setup, the dissertation first investigates methods to perform feature-based localization, with the possibility of fusing two types of localization data: one computed onboard each MAV, and the other coming from relative measurements between the vehicles. Feature-based methods were preferred over direct vision methods because of the relative ease with which tangible data packets can be transferred between vehicles, and because feature data allows for minimal data transfer compared to full images. Inspired by techniques from multiple-view geometry and structure from motion, the localization algorithm presents a decentralized, full 6-degree-of-freedom pose estimation method, complete with a consistent fusion methodology, that obtains robust estimates only at discrete instants and thus does not require constant communication between vehicles. This method was validated on image data obtained from high-fidelity simulations as well as real-life MAV tests. These vision-based collaborative constraints were also applied to the problem of path planning, with a focus on uncertainty-aware planning, where the algorithm is responsible not only for generating a valid, collision-free path, but also for making sure that this path allows for successful localization throughout. As joint multi-robot planning can be computationally intractable, planning was divided into two steps from a vision-aware perspective. As the first step towards improving localization performance is having access to a better map of features, a next-best-multi-view algorithm was developed which can compute the best viewpoints for multiple vehicles to improve an existing sparse reconstruction.
    This algorithm uses a cost function built on vision-based heuristics that determines the quality of the images expected from any set of viewpoints; the cost is minimized through an efficient evolutionary strategy known as Covariance Matrix Adaptation (CMA-ES), which can handle very high-dimensional sample spaces. In the second step, a sampling-based planner called Vision-Aware RRT* (VA-RRT*) was developed which includes similar vision heuristics in an information-gain-based framework in order to drive individual vehicles towards areas that benefit feature tracking and thus localization. Both steps of the planning framework were tested and validated using results from simulation.
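    To illustrate the kind of CMA-ES loop the next-best-multi-view step relies on, here is a small Python sketch using the pycma package; the cost function below is a hypothetical stand-in for the dissertation's vision-based heuristics, and the two-MAV, 3-D viewpoint parameterization is an assumption.

```python
import numpy as np
import cma  # pycma: a widely used CMA-ES implementation (assumed, not necessarily the one used)

def viewpoint_cost(flat_viewpoints):
    """Hypothetical stand-in for the vision-based heuristics: it rewards
    viewpoints close to a region of interest while penalizing viewpoints
    that cluster together (poor baseline for reconstruction)."""
    vs = np.asarray(flat_viewpoints).reshape(-1, 3)   # two MAVs, 3-D positions
    roi = np.array([0.0, 0.0, 5.0])
    baseline = np.linalg.norm(vs[0] - vs[1])
    return np.sum(np.linalg.norm(vs - roi, axis=1)) + 1.0 / (baseline + 1e-3)

x0 = np.zeros(6)                                      # stacked viewpoints for 2 MAVs
es = cma.CMAEvolutionStrategy(x0, 1.0, {'maxiter': 200, 'verbose': -9})
while not es.stop():
    candidates = es.ask()                             # sample a population
    es.tell(candidates, [viewpoint_cost(c) for c in candidates])
best_viewpoints = es.result.xbest.reshape(-1, 3)      # next-best views for the team
```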

    Bibliographic Review on Distributed Kalman Filtering

    In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographical review on distributed Kalman filtering (DKF) is provided. The paper contains a classification of the different approaches and methods involved in DKF. The applications of DKF are also discussed and explained separately. A comparison of the different approaches is briefly carried out, and contemporary research is addressed with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.
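    For readers unfamiliar with the area, the following is a minimal Python/NumPy sketch of one common DKF building block, a consensus-on-information style fusion step; it assumes an idealized single averaging round over a fully connected network, whereas the review covers many other architectures.

```python
import numpy as np

def local_contribution(H, R, z):
    """Each node's measurement contribution in information form."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ H, H.T @ Rinv @ z

def consensus_average(contributions):
    """Idealized single consensus step: average the contributions across all
    nodes (a practical DKF iterates this over a communication graph)."""
    dIs, dis = zip(*contributions)
    return np.mean(dIs, axis=0), np.mean(dis, axis=0)

def node_update(x_pred, P_pred, dI_avg, di_avg, n_nodes):
    """Information-form update each node applies with the fused contributions."""
    Y_pred = np.linalg.inv(P_pred)
    Y = Y_pred + n_nodes * dI_avg            # fused information matrix
    y = Y_pred @ x_pred + n_nodes * di_avg   # fused information vector
    P = np.linalg.inv(Y)
    return P @ y, P
```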

    Leader-assisted localization approach for a heterogeneous multi-robot system

    This thesis presents the design, implementation, and validation of a novel leader-assisted localization framework for a heterogeneous multi-robot system (MRS) with sensing and communication range constraints. It is assumed that the given heterogeneous MRS has a more powerful robot (or group of robots) with accurate self-localization capabilities (leader robots), while the rest of the team (child robots), i.e. less powerful robots, is localized with the assistance of the leader robots and inter-robot observations between teammates. This imposes the condition that the child robots must operate within the sensing and communication range of the leader robots. The bounded navigation space may therefore require additional algorithms to avoid inter-robot collisions and limits the robots' maneuverability. To address this limitation, the thesis first introduces a novel distributed graph search and global pose composition algorithm to virtually extend the leader robots' sensing and communication range while avoiding possible double counting of common information. This allows child robots to navigate beyond the sensing and communication range of the leader robots, yet still receive localization services from them. A time-delayed measurement update algorithm and a memory optimization approach are then integrated into the proposed localization framework, improving the robustness of the algorithm against the unknown processing and communication time delays associated with the inter-robot data exchange network. Finally, a novel hierarchical sensor fusion architecture is introduced so that the proposed localization scheme can be implemented using inter-robot relative range and bearing measurements. The performance of the proposed localization framework is evaluated through a series of indoor experiments, a publicly available multi-robot localization and mapping dataset, and a set of numerical simulations. The results illustrate that the proposed leader-assisted localization framework is capable of establishing accurate and non-overconfident localization for the child robots even when they operate beyond the sensing and communication boundaries of the leader robots.
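    To show how a single inter-robot range/bearing observation can be folded into a child robot's estimate, here is a simplified 2-D EKF update sketch in Python/NumPy; it treats the leader's pose as perfectly known and uses made-up variable names, so it deliberately ignores the cross-correlation bookkeeping the thesis introduces to keep estimates non-overconfident.

```python
import numpy as np

def range_bearing_model(leader_pose, child_pos):
    """Predicted range and bearing from a leader robot to a child robot (2-D).
    leader_pose = [x, y, theta]; child_pos = [x, y]."""
    dx, dy = child_pos[0] - leader_pose[0], child_pos[1] - leader_pose[1]
    rng = np.hypot(dx, dy)
    brg = np.arctan2(dy, dx) - leader_pose[2]            # bearing in the leader's frame
    return np.array([rng, np.arctan2(np.sin(brg), np.cos(brg))])

def child_update(x_c, P_c, leader_pose, z, R):
    """EKF update of a child robot's 2-D position from one leader observation."""
    dx, dy = x_c[0] - leader_pose[0], x_c[1] - leader_pose[1]
    q = dx**2 + dy**2
    H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],    # d(range)/d(child pos)
                  [-dy / q,          dx / q]])           # d(bearing)/d(child pos)
    innov = z - range_bearing_model(leader_pose, x_c)
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap the angle
    S = H @ P_c @ H.T + R
    K = P_c @ H.T @ np.linalg.inv(S)
    return x_c + K @ innov, (np.eye(2) - K @ H) @ P_c
```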