
    Cooperative Localization under Limited Connectivity

    We report two decentralized multi-agent cooperative localization algorithms in which, to reduce the communication cost, inter-agent state estimate correlations are not maintained but are accounted for implicitly. In our first algorithm, to guarantee filter consistency, we account for unknown inter-agent correlations via an upper bound on the joint covariance matrix of the agents. In the second method, we use an optimization framework to estimate the unknown inter-agent cross-covariance matrix. In our algorithms, each agent localizes itself in a global coordinate frame using a local filter driven by local dead reckoning and occasional absolute measurement updates, and opportunistically corrects its pose estimate whenever it can obtain relative measurements with respect to other mobile agents. To process a relative measurement, only the agent taking the measurement and the agent the measurement is taken from need to communicate with each other. Consequently, our algorithms are decentralized and do not impose restrictive network-wide connectivity conditions. Moreover, we make no assumptions about the type of agents or relative measurements. We demonstrate our algorithms in simulation and in a robotic experiment. (9 pages, 5 figures)
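    The upper-bound approach in the first algorithm is in the spirit of covariance intersection, the standard way to fuse two estimates whose cross-correlation is unknown while guaranteeing a consistent (upper-bounding) fused covariance. A minimal sketch, not the paper's actual algorithm; the function name and the fixed weight `omega` are our own:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Fuse two estimates (x1, P1) and (x2, P2) of the same state
    whose cross-covariance is unknown. For any omega in [0, 1] the
    fused covariance upper-bounds the true one, keeping the filter
    consistent without tracking inter-agent correlations."""
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P = np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv)
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P

# two consistent estimates of the same 2-D state
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([0.8, 0.2]), np.diag([4.0, 1.0])
x, P = covariance_intersection(x1, P1, x2, P2)
```

    In practice `omega` is often chosen to minimize, e.g., the trace or determinant of the fused covariance rather than fixed at 0.5.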

    Cooperative localization for mobile agents: a recursive decentralized algorithm based on Kalman filter decoupling

    We consider cooperative localization techniques for mobile agents with communication and computation capabilities. We start by providing an overview of different decentralization strategies in the literature, with special focus on how these algorithms maintain an account of the intrinsic correlations between the state estimates of team members. Then, we present a novel decentralized cooperative localization algorithm that is a decentralized implementation of a centralized Extended Kalman Filter for cooperative localization. In this algorithm, instead of propagating cross-covariance terms, each agent propagates new intermediate local variables that can be used in an update stage to recreate the required propagated cross-covariance terms. Whenever there is a relative measurement in the network, the algorithm declares the agent making this measurement the interim master. By acquiring information from the interim landmark, the agent the relative measurement is taken from, the interim master can calculate and broadcast a set of intermediate variables, which each agent can then use to update its estimate to match that of a centralized Extended Kalman Filter for cooperative localization. Once an update is done, no further communication is needed until the next relative measurement.
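    To see what the decentralized algorithm must reproduce, it helps to write out the centralized EKF update for a single relative measurement between two agents. A minimal sketch with a linear relative-position measurement; the 2-D setup and the numbers are illustrative assumptions, not from the paper:

```python
import numpy as np

# Joint state of agents a and b (2-D positions), as a centralized
# EKF would keep it; the decentralized algorithm reconstructs this
# exact posterior from the interim master's broadcast variables.
x = np.array([0.0, 0.0, 5.0, 1.0])   # [xa, ya, xb, yb]
P = np.diag([1.0, 1.0, 2.0, 2.0])

# relative-position measurement z = x_b - x_a + noise, taken by
# the interim master (agent a) of the interim landmark (agent b)
H = np.hstack([-np.eye(2), np.eye(2)])
R = 0.1 * np.eye(2)
z = np.array([4.9, 1.2])

# standard Kalman update; every agent ends up holding its block of
# this posterior after the broadcast
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(4) - K @ H) @ P
```

    The point of the paper's scheme is that the off-diagonal blocks of `P` need not be stored between updates: the intermediate variables each agent propagates locally suffice to rebuild them at update time.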

    Cascaded Filtering Using the Sigma Point Transformation (Extended Version)

    It is often convenient to separate a state estimation task into smaller "local" tasks, where each local estimator estimates a subset of the overall system state. However, neglecting cross-covariance terms between state estimates can result in overconfident estimates, which can ultimately degrade the accuracy of the estimator. Common cascaded filtering techniques focus on the problem of modelling cross-covariances when the local estimators share a common state vector. This letter introduces a novel cascaded and decentralized filtering approach that approximates the cross-covariances when the local estimators consider distinct state vectors. The proposed estimator is validated in simulations and in experiments on a three-dimensional attitude and position estimation problem. The proposed approach is compared to a naive cascaded filtering approach that neglects cross-covariance terms, a sigma point-based Covariance Intersection filter, and a full-state filter. In both simulations and experiments, the proposed filter outperforms the naive and Covariance Intersection filters, while performing comparably to the full-state filter. (This is an extended version of the original letter, to be published in the IEEE Robotics and Automation Letters.)
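    The sigma point transformation in the title is the unscented transform: a Gaussian is represented by a small deterministic set of weighted points, which are pushed through a nonlinearity to recover the transformed mean and covariance. A minimal, self-contained sketch; the `kappa` parameterization and the example nonlinearity are our assumptions, not the letter's exact formulation:

```python
import numpy as np

def sigma_point_transform(mu, P, f, kappa=1.0):
    """Propagate a Gaussian (mu, P) through a nonlinearity f using
    the unscented (sigma point) transform: 2n+1 weighted points
    chosen so their sample mean and covariance match (mu, P)."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * P)  # columns are scaled sqrt directions
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    ys = np.array([f(p) for p in pts])
    mu_y = w @ ys
    P_y = sum(wi * np.outer(y - mu_y, y - mu_y) for wi, y in zip(w, ys))
    return mu_y, P_y

mu = np.array([1.0, 2.0])
P = np.array([[1.0, 0.2], [0.2, 2.0]])
f = lambda p: np.array([p[0] ** 2, p[0] + p[1]])  # example nonlinearity
mu_y, P_y = sigma_point_transform(mu, P, f)
```

    For a linear map the transform is exact, which makes it easy to sanity-check an implementation.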

    Cooperative Visual-Inertial Sensor Fusion: Fundamental Equations

    This paper provides a new, basic theoretical result in the framework of cooperative visual-inertial sensor fusion. Specifically, the case of two aerial vehicles is investigated. Each vehicle is equipped with inertial sensors (accelerometer and gyroscope) and with a monocular camera. Using the monocular camera, each vehicle can observe the other vehicle. No additional camera observations (e.g., of external point features in the environment) are considered. First, the entire observable state is analytically derived. This state includes the relative position between the two aerial vehicles (which includes the absolute scale), the relative velocity, and the three Euler angles that express the rotation between the two vehicle frames. Then, the basic equations that describe this system are analytically obtained. In other words, both the dynamics of the observable state and all the camera observations are expressed only in terms of the components of the observable state and the inertial measurements. These are the fundamental equations that fully characterize the problem of fusing visual and inertial data in the cooperative case. The last part of the paper describes the use of these equations for state estimation through an EKF. In particular, a simple way to limit communication between the vehicles is discussed. Results obtained through simulations show the performance of the proposed solution, and in particular how it is affected by limiting the communication between the two vehicles.
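    A monocular camera observes only the direction to the other vehicle, not the distance, which is why the absolute scale of the relative position must be recovered by fusing the inertial measurements. A tiny sketch of this bearing-only measurement model; the function name and vectors are illustrative, not the paper's notation:

```python
import numpy as np

def bearing_measurement(p_rel):
    """Monocular observation of the other vehicle: a unit vector
    along the relative position. Any positive scaling of p_rel
    yields the same measurement, so scale is unobservable from
    the camera alone."""
    return p_rel / np.linalg.norm(p_rel)

p = np.array([3.0, 4.0, 0.0])        # true relative position
b_near = bearing_measurement(p)       # same bearing...
b_far = bearing_measurement(2.0 * p)  # ...at twice the distance
```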