
    PEBO-SLAM: Observer design for visual inertial SLAM with convergence guarantees

    This paper introduces a new linear parameterization of the problem of visual inertial simultaneous localization and mapping (VI-SLAM) -- without any approximation -- for the case where only a single monocular camera and an inertial measurement unit are available. In this setting, the system state evolves on the nonlinear manifold $SE(3)\times \mathbb{R}^{3n}$, on which we carefully design dynamic extensions that generate invariant foliations, so that the problem can be reformulated as online \emph{constant-parameter} identification with linear regression models. This shows that VI-SLAM can be translated into a linear least-squares problem, in the deterministic sense, \emph{globally} and \emph{exactly}. Based on this observation, we propose a novel SLAM observer following the recently established parameter estimation-based observer (PEBO) methodology. A notable merit is that the proposed observer enjoys almost global asymptotic stability, requiring neither persistency of excitation nor uniform complete observability -- assumptions widely adopted in most existing works with provable stability, yet hardly assured in many practical scenarios.
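    The core claim -- that once the unknown is a constant parameter in a linear regression, estimation becomes a globally and exactly solvable least-squares problem -- can be illustrated with a minimal sketch. The regressor below is synthetic and stands in for the paper's invariant-foliation construction, which is not reproduced here.

    ```python
    import numpy as np

    # Toy illustration: identify a constant parameter theta from a linear
    # regression y = Phi @ theta. With noise-free data, ordinary least squares
    # recovers theta exactly, with no local minima.
    rng = np.random.default_rng(0)
    theta_true = np.array([1.0, -2.0, 0.5])   # constant parameter to identify
    Phi = rng.standard_normal((100, 3))       # stacked regressor rows over time
    y = Phi @ theta_true                      # noise-free measurements

    # Global, exact solution of the deterministic least-squares problem
    theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    ```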

    An Equivariant Observer Design for Visual Localisation and Mapping

    This paper builds on recent work on Simultaneous Localisation and Mapping (SLAM) in the non-linear observer community, framing the visual localisation and mapping problem as a continuous-time equivariant observer design problem on the symmetry group of a kinematic system. The state space is a quotient of the robot pose expressed on SE(3) and multiple copies of real projective space, used to represent both points in space and bearings in a single unified framework. An observer with decoupled Riccati gains for each landmark is derived, and we show that its error system is almost globally asymptotically stable and exponentially stable in-the-large. Comment: 12 pages, 2 figures, published in 2019 IEEE CD
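    The decoupled-gain idea can be sketched as follows: each landmark keeps its own small Riccati state, so gains are computed independently per landmark rather than from one large coupled equation. The dynamics and measurement models below are placeholder identities, not the paper's SE(3)/projective-space geometry.

    ```python
    import numpy as np

    def riccati_gain_step(P, H, Q, R):
        """One discrete-time Riccati update; returns (gain K, updated P)."""
        S = H @ P @ H.T + R                            # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                 # observer gain
        P_new = (np.eye(P.shape[0]) - K @ H) @ P + Q   # propagate Riccati state
        return K, P_new

    # Two landmarks, each with its own independent 3x3 Riccati state
    P = [np.eye(3), 2.0 * np.eye(3)]
    H, Q, R = np.eye(3), 0.01 * np.eye(3), 0.1 * np.eye(3)
    for i in range(2):
        K, P[i] = riccati_gain_step(P[i], H, Q, R)
    ```

    Decoupling the gains this way keeps the per-landmark cost constant, so the observer scales linearly with the number of landmarks instead of cubically.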

    MOMA: Visual Mobile Marker Odometry

    In this paper, we present a cooperative odometry scheme based on the detection of mobile markers, in line with the idea of cooperative positioning for multiple robots [1]. To this end, we introduce a simple optimization scheme that realizes visual mobile marker odometry via accurate fixed marker-based camera positioning, and we analyse the characteristics of the errors inherent to the method compared to classical fixed marker-based navigation and visual odometry. In addition, we provide a specific UAV-UGV configuration that allows for continuous movement of the UAV without stopping, and a minimal caterpillar-like configuration that works with a single UGV. Finally, we present a real-world implementation and evaluation of the proposed UAV-UGV configuration.
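    At its core, marker-based odometry accumulates relative poses measured between successive marker observations. The sketch below illustrates this composition in 2D with hypothetical measured steps; it is not the authors' optimization scheme, which also models the per-detection error characteristics.

    ```python
    import numpy as np

    def se2(x, y, theta):
        """Homogeneous 2D rigid-body transform (SE(2))."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

    # Relative motions measured between successive marker observations
    # (illustrative values): two straight steps with a 90-degree turn between.
    steps = [se2(1.0, 0.0, 0.0), se2(1.0, 0.0, np.pi / 2), se2(1.0, 0.0, 0.0)]

    pose = np.eye(3)
    for T in steps:
        pose = pose @ T          # odometry = composition of relative transforms
    # position after the three steps: (2, 1)
    ```

    Because each step's measurement error is composed into all later poses, odometric error grows with path length -- which is exactly the drift behaviour the paper analyses against fixed-marker navigation.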

    Attention and Anticipation in Fast Visual-Inertial Navigation

    We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate only limited resources to VIN, due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues to maximize the performance of visual-inertial navigation? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying the VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In the easy scenarios, our approach outperforms appearance-based feature selection in terms of localization errors. In the most challenging scenarios, it enables accurate visual-inertial navigation while appearance-based feature selection fails to track the robot's motion during aggressive maneuvers. Comment: 20 pages, 7 figures, 2 tables
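    The greedy-with-guarantees ingredient can be sketched with a toy monotone submodular utility (set coverage); the paper's actual metric quantifies predicted VIN performance over a horizon, which is not reproduced here. For such utilities, greedy selection is within a factor (1 - 1/e) of the optimal subset.

    ```python
    def greedy_select(candidates, utility, k):
        """Pick k items, each maximizing the marginal gain in utility."""
        chosen = []
        for _ in range(k):
            best = max((c for c in candidates if c not in chosen),
                       key=lambda c: utility(chosen + [c]) - utility(chosen))
            chosen.append(best)
        return chosen

    # Toy stand-in: visual cues "cover" image regions; coverage is submodular.
    cues = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}

    def coverage(selection):
        return len(set().union(*(cues[c] for c in selection)))

    selected = greedy_select(list(cues), coverage, 2)   # ['c', 'a']
    ```

    Greedy picks "c" first (largest gain, 4 regions), then "a" (marginal gain 3), illustrating how each step maximizes the marginal utility rather than re-solving the combinatorial problem.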

    Input uncertainty sensitivity enhanced non-singleton fuzzy logic controllers for long-term navigation of quadrotor UAVs

    Input uncertainty, e.g., noise on the on-board camera and inertial measurement unit, is an inevitable problem in vision-based control of unmanned aerial vehicles (UAVs). In order to handle input uncertainties, and to further analyze the interaction between the input and the antecedent fuzzy sets (FSs) of non-singleton fuzzy logic controllers (NSFLCs), an input uncertainty sensitivity enhanced NSFLC has been developed in the Robot Operating System (ROS) using the C++ programming language. Based on recent advances in non-singleton inference, the centroid of the intersection of the input and antecedent FSs (Cen-NSFLC) is utilized to calculate the firing strength of each rule, instead of the maximum of the intersection used in the traditional NSFLC (Tra-NSFLC). An 8-shaped trajectory, consisting of straight and curved lines, is used for real-time validation of the proposed controllers on a trajectory-following problem. An accurate monocular keyframe-based visual-inertial simultaneous localization and mapping (SLAM) approach is used to estimate the position of the quadrotor UAV in GPS-denied unknown environments. The performance of the Cen-NSFLC is compared with a conventional proportional integral derivative (PID) controller, a singleton FLC (SFLC) and a Tra-NSFLC. All controllers are evaluated at different flight speeds, thus introducing different levels of uncertainty into the control problem. Visual-inertial SLAM-based real-time quadrotor UAV flight tests demonstrate that the Cen-NSFLC not only achieves the best control performance among the four controllers, but the non-singleton controllers also outperform their singleton counterparts. Considering the bias towards model-based controllers, e.g., PID, for the control of UAVs, this paper advocates an alternative method, namely Cen-NSFLCs, for uncertain working environments.
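    The Cen-NSFLC vs Tra-NSFLC distinction described above can be sketched numerically. Both schemes intersect the non-singleton input set with the rule antecedent (pointwise minimum); the traditional scheme takes the maximum of that intersection as the firing strength, while one common reading of the centroid scheme evaluates the membership at the centroid of the intersection. The triangular membership functions below are illustrative, not the paper's controller design.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    x = np.linspace(0.0, 10.0, 1001)
    input_fs = tri(x, 2.0, 4.0, 6.0)     # noisy (non-singleton) input set
    antecedent = tri(x, 3.0, 7.0, 9.0)   # rule antecedent set
    inter = np.minimum(input_fs, antecedent)    # intersection of the two sets

    firing_tra = inter.max()                        # Tra-NSFLC: max of intersection
    centroid = (x * inter).sum() / inter.sum()      # centroid of the intersection
    firing_cen = np.interp(centroid, x, antecedent) # Cen-NSFLC-style firing (assumed form)
    ```

    Because the centroid reflects the whole shape of the overlap rather than a single peak, the resulting firing strength is more sensitive to how the input uncertainty is distributed -- the interaction the paper sets out to analyze.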