7 research outputs found

    A Constant-Time Algorithm for Vector Field SLAM using an Exactly Sparse Extended Information Filter

    Abstract – Designing a localization system for a low-cost robotic consumer product poses a major challenge. In previous work, we introduced Vector Field SLAM [5], a system for simultaneously estimating robot pose and a vector field induced by stationary signal sources present in the environment. In this paper we show how this method can be realized on a low-cost embedded processing unit by applying the concepts of the Exactly Sparse Extended Information Filter [15]. By restricting the set of active features to the 4 nodes of the current cell, the size of the map becomes linear in the area explored by the robot, while the time for updating the state can be held constant under certain approximations. We report results from running our method on an ARM7 embedded board with 64 kByte of RAM controlling a Roomba 510 vacuum cleaner in a standard test environment.
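
    A minimal sketch of the central idea above, not the paper's ESEIF implementation: only the 4 grid nodes of the cell the robot currently occupies are active, so each signal-measurement update touches a constant-size block of state, and nodes are created lazily so map storage grows with the explored area. The cell size, the bilinear weighting, and all names (SparseVectorFieldMap, cell_nodes, ...) are illustrative assumptions, and the cross-information between nodes and robot pose that the ESEIF maintains within the active set is dropped here for brevity.

    import numpy as np

    CELL_SIZE = 1.0  # metres per grid cell (assumed)

    def cell_nodes(x, y):
        """Keys of the 4 grid nodes of the cell containing (x, y)."""
        i, j = int(np.floor(x / CELL_SIZE)), int(np.floor(y / CELL_SIZE))
        return [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]

    def bilinear_weights(x, y):
        """Interpolation weights of (x, y) with respect to its cell's 4 nodes."""
        fx = x / CELL_SIZE - np.floor(x / CELL_SIZE)
        fy = y / CELL_SIZE - np.floor(y / CELL_SIZE)
        return [(1 - fx) * (1 - fy), fx * (1 - fy), (1 - fx) * fy, fx * fy]

    class SparseVectorFieldMap:
        """Per-node signal estimates kept in information form."""

        def __init__(self, signal_dim=2):
            self.signal_dim = signal_dim
            self.nodes = {}  # created lazily: storage grows with explored area

        def _node(self, key):
            # unknown nodes start with a weak prior (mean 0, low information)
            return self.nodes.setdefault(
                key, {"info": 1e-3 * np.eye(self.signal_dim),
                      "vec": np.zeros(self.signal_dim)})

        def update(self, pose_xy, signal, meas_info):
            """Fold one signal measurement into the 4 active nodes only."""
            for key, w in zip(cell_nodes(*pose_xy), bilinear_weights(*pose_xy)):
                node = self._node(key)
                node["info"] += (w ** 2) * meas_info   # constant-size work per update
                node["vec"] += w * (meas_info @ signal)

        def estimate(self, key):
            """Recover the mean signal vector at one grid node."""
            node = self._node(key)
            return np.linalg.solve(node["info"], node["vec"])

    With, say, meas_info = np.eye(2) / sigma ** 2, each call to update costs the same regardless of how much of the environment has already been mapped, which is the property a small embedded board depends on.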

    Monocular graph SLAM with complexity reduction

    Abstract – We present a graph-based SLAM approach, using monocular vision and odometry, designed to operate on computationally constrained platforms. When computation and memory are limited, visual tracking becomes difficult or impossible, and map representation and update costs must remain low. Our system constructs a map of structured views using only weak temporal assumptions, and performs recognition and relative pose estimation over the set of views. Visual observations are fused with differential sensors in an incrementally optimized graph representation. Using variable elimination and constraint pruning, the graph complexity and storage are kept linear in explored space rather than in time. We evaluate performance on sequences with ground truth, and also compare to a standard graph SLAM approach.
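
    The complexity-reduction step lends itself to a structural sketch. The following is a minimal illustration, not the paper's system: redundant view nodes are eliminated by reconnecting their neighbours, and over-dense nodes have their weakest constraints pruned, so graph size tracks explored space rather than elapsed time. A scalar "strength" stands in for the relative-pose constraints with covariances that a real implementation would compose and marginalise; the class and method names are assumptions.

    from collections import defaultdict
    from itertools import combinations

    class PoseGraph:
        """Undirected constraint graph over view/keyframe nodes."""

        def __init__(self):
            self.edges = defaultdict(dict)   # edges[a][b] = constraint strength

        def add_edge(self, a, b, strength):
            # keep only the stronger of duplicate constraints between a and b
            if strength > self.edges[a].get(b, 0.0):
                self.edges[a][b] = strength
                self.edges[b][a] = strength

        def degree(self, n):
            return len(self.edges[n])

        def eliminate(self, n):
            """Variable elimination: remove node n, reconnecting its neighbours."""
            nbrs = self.edges.pop(n, {})
            for a in nbrs:
                self.edges[a].pop(n, None)
            for a, b in combinations(nbrs, 2):
                # the induced constraint is no stronger than the weaker link
                self.add_edge(a, b, min(nbrs[a], nbrs[b]))

        def prune(self, n, max_degree=5):
            """Constraint pruning: drop the weakest edges of an over-dense node."""
            while self.degree(n) > max_degree:
                b = min(self.edges[n], key=self.edges[n].get)
                del self.edges[n][b]
                del self.edges[b][n]

    Views that add no new coverage could be passed to eliminate() once their loop constraints are in place, and prune() then bounds the degree, and hence the optimization cost, of the nodes that remain.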

    Semi-automatic Annotations in Unknown Environments

    … of the note automatically and renders it correctly registered to the environment. Unknown environments pose a particular challenge for augmented reality applications because the 3D models required for tracking, rendering and interaction are not available ahead of time. Consequently, authoring of AR content must take place on-line. This work describes a set of techniques to simplify the online authoring of annotations in unknown environments using a simultaneous localisation and mapping (SLAM) system. The point-based SLAM system is extended to specifically track and estimate high-level features indicated by the user. The automatic estimation of these complex landmarks by the system relieves the user from the burden of manually specifying the full 3D pose of annotations while improving accuracy. These properties are especially interesting for remote collaboration applications where either user interfaces on handhelds or camera control by the remote expert are limited.
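
    One way an annotation could be anchored from a single user click, given a calibrated camera pose and the sparse point map a SLAM system already maintains, is to fit a local plane to map points projecting near the click and intersect the viewing ray with it. This is an illustrative sketch under those assumptions, not the estimator described in this work; the function names and parameters (annotation_anchor, radius_px, ...) are hypothetical.

    import numpy as np

    def backproject(click_uv, K):
        """Unit viewing ray in the camera frame through pixel (u, v)."""
        u, v = click_uv
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        return ray / np.linalg.norm(ray)

    def annotation_anchor(click_uv, K, cam_R, cam_t, map_points, radius_px=40.0):
        """Estimate a 3D anchor point and surface normal for an annotation.

        cam_R, cam_t: world-from-camera rotation and camera centre (assumed
        convention); map_points: Nx3 sparse SLAM landmarks in the world frame.
        """
        ray_w = cam_R @ backproject(click_uv, K)         # viewing ray in world frame
        origin = cam_t
        pts = np.asarray(map_points, dtype=float)
        pc = (pts - cam_t) @ cam_R                       # landmarks in camera frame
        in_front = pc[:, 2] > 0
        proj = (K @ pc[in_front].T).T
        proj = proj[:, :2] / proj[:, 2:3]                # pixel coordinates
        near = np.linalg.norm(proj - np.asarray(click_uv), axis=1) < radius_px
        local = pts[in_front][near]
        if len(local) < 3:
            return None                                  # not enough map support yet
        # fit a local plane: centroid plus smallest-variance direction as normal
        centroid = local.mean(axis=0)
        _, _, vt = np.linalg.svd(local - centroid)
        normal = vt[-1]
        denom = ray_w @ normal
        if abs(denom) < 1e-6:
            return None                                  # ray nearly parallel to plane
        depth = ((centroid - origin) @ normal) / denom
        return origin + depth * ray_w, normal

    The returned point and normal would give the annotation a full 3D pose from a single 2D gesture, which is the kind of burden reduction the abstract describes; a real system would refine the anchor as the map improves.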