
    Invariant EKF Design for Scan Matching-aided Localization

    Indoor localization estimates a robot's pose by fusing data from onboard motion sensors with readings of the environment, in our case obtained by scan matching point clouds captured by a low-cost Kinect depth camera. We develop both an Invariant Extended Kalman Filter (IEKF)-based and a Multiplicative Extended Kalman Filter (MEKF)-based solution to this problem. Both designs are validated in experiments, which demonstrate the advantage of the IEKF design.
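The fusion scheme described above can be sketched as a minimal planar EKF: body-frame odometry drives the prediction, and a scan-matched pose acts as the measurement. This is a hypothetical illustration with assumed gains and noise values, not the paper's IEKF/MEKF design.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def predict(x, P, u, Q):
    """Propagate planar pose x = [px, py, th] with body-frame odometry u = [dx, dy, dth]."""
    px, py, th = x
    c, s = np.cos(th), np.sin(th)
    x_new = np.array([px + c * u[0] - s * u[1],
                      py + s * u[0] + c * u[1],
                      wrap(th + u[2])])
    # Jacobian of the motion model w.r.t. the state
    F = np.array([[1.0, 0.0, -s * u[0] - c * u[1]],
                  [0.0, 1.0,  c * u[0] - s * u[1]],
                  [0.0, 0.0, 1.0]])
    return x_new, F @ P @ F.T + Q

def update(x, P, z, R):
    """Fuse a direct pose measurement z (e.g. a scan-matching result)."""
    H = np.eye(3)
    y = z - x
    y[2] = wrap(y[2])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P

# one step: move 1 m forward, then fuse a scan-matched pose
x0, P0 = np.zeros(3), np.eye(3) * 0.1
x1, P1 = predict(x0, P0, np.array([1.0, 0.0, 0.0]), np.eye(3) * 0.01)
x2, P2 = update(x1, P1, np.array([1.1, 0.0, 0.05]), np.eye(3) * 0.1)
```

The update pulls the predicted pose toward the scan-match result and shrinks the covariance; an invariant design would instead define the error on the pose group itself.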

    Convergence and Consistency Analysis for A 3D Invariant-EKF SLAM

    In this paper, we investigate the convergence and consistency properties of an invariant Extended Kalman Filter (RI-EKF) based Simultaneous Localization and Mapping (SLAM) algorithm. Basic convergence properties of this algorithm are proven. These proofs do not require the restrictive assumption that the Jacobians of the motion and observation models be evaluated at the ground truth. It is also shown that the output of the RI-EKF is invariant under any stochastic rigid-body transformation, in contrast to the SO(3)-based EKF SLAM algorithm (SO(3)-EKF), which is only invariant under deterministic rigid-body transformations. Implications of these invariance properties for the consistency of the estimator are also discussed. Monte Carlo simulation results demonstrate that the RI-EKF outperforms the SO(3)-EKF, the Robocentric-EKF and the "First Estimates Jacobian" EKF for 3D point-feature based SLAM.
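The invariance discussed above rests on a simple geometric fact: a body-frame landmark observation is unchanged when the whole world (robot pose and landmarks together) is moved by a rigid-body transformation. A small numerical check of that fact, with an arbitrary assumed pose, landmark, and transform (not the paper's filter):

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def observe(R, p, f):
    """Body-frame observation of landmark f from pose (R, p)."""
    return R.T @ (f - p)

# assumed robot pose and one landmark
R, p = rot_z(0.3), np.array([1.0, 2.0, 0.0])
f = np.array([4.0, -1.0, 0.5])

# apply an arbitrary rigid-body transform (Rg, pg) to the whole world
Rg, pg = rot_z(-1.1), np.array([5.0, 0.3, -2.0])
R2, p2 = Rg @ R, Rg @ p + pg
f2 = Rg @ f + pg

z1 = observe(R, p, f)
z2 = observe(R2, p2, f2)
assert np.allclose(z1, z2)  # observation is invariant under the transform
```

The paper's point is that the RI-EKF preserves this invariance for the *estimator output*, including under stochastic transformations, whereas the SO(3)-EKF does not.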

    Self-motion and the perception of stationary objects

    One of the ways we perceive shape is through seeing motion. Visual motion may be actively generated (for example, in locomotion) or passively observed. In the study of how we perceive 3D structure from motion (SfM), the non-moving, passive observer in an environment of moving rigid objects has been used as a substitute for an active observer moving in an environment of stationary objects; the 'rigidity hypothesis' has played a central role in computational and experimental studies of SfM. Here we demonstrate that this substitution is not fully adequate, because active observers perceive 3D structure differently from passive observers despite experiencing the same visual stimulus: active observers' perception of 3D structure depends on extra-visual self-motion information. Moreover, the visual system, making use of this self-motion information, treats objects that are stationary (in an allocentric, earth-fixed reference frame) differently from objects that are merely rigid. These results show that action plays a central role in depth perception, and argue for a revision of the rigidity hypothesis to incorporate the special case of stationary objects.

    Observability/Identifiability of Rigid Motion under Perspective Projection

    The "visual motion" problem consists of estimating the motion of an object viewed under projection. In this paper we address the feasibility of such a problem. We will show that the model which defines the visual motion problem for feature points in the euclidean 3D space lacks of both linear and local (weak) observability. The locally observable manifold is covered with three levels of lie differentiations. Indeed, by imposing metric constraints on the state-space, it is possible to reduce the set of indistinguishable states. We will then analyze a model for visual motion estimation in terms of identification of an Exterior Differential System, with the parameters living on a topological manifold, called the "essential manifold", which includes explicitly in its definition the forementioned metric constraints. We will show that rigid motion is globally observable/identifiable under perspective projection with zero level of lie differentiation under some general position conditions. Such conditions hold when the viewer does not move on a quadric surface containing all the visible points

    A sliding mode approach to visual motion estimation

    The problem of estimating motion from a sequence of images has been a major research theme in machine vision for many years and remains one of the most challenging ones. In this work, we use sliding mode observers to estimate the motion of a moving body with the aid of a CCD camera. We consider a variety of dynamical systems which arise in machine vision applications and develop a novel identification procedure for the estimation of both constant and time-varying parameters. The basic procedure introduced for parameter estimation is to recast the image feature dynamics linearly in terms of the unknown parameters, construct a sliding mode observer to produce asymptotically correct estimates of the observed image features, and then use "equivalent control" to explicitly compute the parameters. Much of our analysis has been substantiated by computer simulations and real experiments.
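The equivalent-control idea above can be sketched on a toy scalar system: a switching observer forces the estimation error onto the sliding surface, and the low-pass-filtered injection term (the "equivalent control") recovers the unknown constant parameter. The system, gains, and regressor here are all assumed for illustration:

```python
# Toy identification: true system x' = theta * u with unknown constant theta
theta = 2.0        # parameter to be identified (unknown to the observer)
M = 5.0            # observer gain, must exceed |theta * u| for sliding to occur
dt, T = 1e-4, 1.0  # Euler step and simulation horizon
tau = 0.02         # time constant of the low-pass filter extracting v_eq

x, xh, v_f = 0.0, 0.0, 0.0
for _ in range(int(T / dt)):
    u = 1.0                              # known regressor (must stay nonzero)
    x += dt * theta * u                  # true system
    e = x - xh
    v = M * (1.0 if e > 0 else -1.0)     # switching injection
    xh += dt * v                         # observer: xh' = M * sign(x - xh)
    v_f += dt / tau * (v - v_f)          # low-pass filter of v -> equivalent control

# on the sliding surface, v_eq = theta * u, so:
theta_hat = v_f / u
```

Once sliding is reached, the sign term chatters with exactly the duty cycle needed to match theta * u on average, and the filter exposes that average; this is the same mechanism the abstract applies to image-feature dynamics.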

    Robust position control of a tilt-wing quadrotor

    This paper presents a robust position controller for a tilt-wing quadrotor to track desired trajectories under external wind and aerodynamic disturbances. Wind effects are modeled using the Dryden model and are included in the dynamic model of the vehicle. Robust position control is achieved by introducing a disturbance observer which estimates the total disturbance acting on the system. In the design of the disturbance observer, the nonlinear terms which appear in the dynamics of the aerial vehicle are also treated as disturbances and included in the total disturbance. Utilization of the disturbance observer yields a linear model with nominal parameters, so simple PID-type controllers suffice for position and attitude control. Simulation and experimental results show that the performance of the observer-based position control system is quite satisfactory.
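The disturbance-observer idea can be sketched on a unit-mass double integrator, a stand-in for one translational axis of the vehicle: a reduced-order observer estimates a constant lumped disturbance, the controller cancels it, and a simple PD law then suffices on the remaining nominal linear plant. The plant and all gains are assumptions for illustration, not the paper's vehicle model:

```python
# Plant: unit-mass double integrator with unknown constant disturbance d
#   x' = v,  v' = u + d
d, xref = 1.5, 1.0
kp, kd = 4.0, 4.0   # PD gains for the nominal (disturbance-cancelled) plant
L = 20.0            # disturbance-observer bandwidth
dt, T = 1e-3, 5.0

x, v, z = 0.0, 0.0, 0.0
for _ in range(int(T / dt)):
    dhat = z + L * v                       # disturbance estimate (reduced-order observer)
    u = -kp * (x - xref) - kd * v - dhat   # PD control + disturbance cancellation
    z += dt * (-L * z - L * (L * v + u))   # observer state; gives dhat' = -L*(dhat - d)
    x += dt * v
    v += dt * (u + d)
```

Because dhat converges to d, the constant disturbance is rejected with zero steady-state error even without an explicit integral term; the observer supplies the integral-like action, which is why only simple controllers are needed on top of it.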