
    Autonomous Navigation and Mapping using Monocular Low-Resolution Grayscale Vision

    Vision has been a powerful tool for the navigation of intelligent, man-made systems ever since the cybernetics revolution of the 1970s. There have been two basic approaches to navigating computer-controlled systems: the bottom-up, self-contained development of sensorimotor abilities (perception and mobility), and the top-down approach of artificial intelligence, reasoning, and knowledge-based methods. The three-fold goal of autonomous exploration, mapping, and localization for a mobile robot, however, needs to be addressed within a single framework. An algorithm is proposed to meet the challenges of autonomous corridor navigation and mapping by a mobile robot equipped with a single forward-facing camera. Using a combination of corridor ceiling lights, visual homing, and entropy, the robot performs straight-line navigation down the center of an unknown corridor. Turning at the end of a corridor is accomplished using Jeffrey divergence and time-to-collision, while deflection from dead ends and blank walls uses a scalar entropy measure of the entire image. Combined, these metrics allow the robot to navigate in both textured and untextured environments. The robot can autonomously explore an unknown indoor environment, recovering from difficult situations such as corners, blank walls, and an initial heading toward a wall. While exploring, the algorithm constructs a Voronoi-based topo-geometric map with nodes representing distinctive places such as doors, water fountains, and other corridors. Because the algorithm is based entirely on low-resolution (32 x 24) grayscale images, processing occurs at over 1000 frames per second.
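
    The entropy and divergence cues mentioned in the abstract can be made concrete with a short sketch. The snippet below is not the paper's code; it simply shows, in Python with NumPy, one common way to compute a scalar entropy measure over a 32 x 24 grayscale frame and a Jeffrey divergence between the histograms of two frames. The bin count and the random stand-in frames are assumptions for illustration.

    # Minimal sketch (not the authors' implementation) of two image metrics
    # named in the abstract: scalar entropy of a low-resolution grayscale
    # frame, and Jeffrey divergence between two frame histograms.
    import numpy as np

    def gray_histogram(img, bins=32):
        """Normalized intensity histogram of a grayscale image (values 0..255)."""
        hist, _ = np.histogram(img, bins=bins, range=(0, 256))
        hist = hist.astype(float)
        return hist / hist.sum()

    def scalar_entropy(img, bins=32):
        """Shannon entropy of the intensity distribution; low values suggest
        an untextured view such as a blank wall."""
        p = gray_histogram(img, bins)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def jeffrey_divergence(img_a, img_b, bins=32, eps=1e-12):
        """One common histogram-comparison form of the Jeffrey divergence,
        used here as a cue that the view ahead is changing."""
        p = gray_histogram(img_a, bins) + eps
        q = gray_histogram(img_b, bins) + eps
        m = 0.5 * (p + q)
        return np.sum(p * np.log2(p / m) + q * np.log2(q / m))

    # Example on two random 32 x 24 frames (stand-ins for camera images).
    frame_t0 = np.random.randint(0, 256, size=(24, 32))
    frame_t1 = np.random.randint(0, 256, size=(24, 32))
    print(scalar_entropy(frame_t0), jeffrey_divergence(frame_t0, frame_t1))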

    Characterization of image sets: the Galois Lattice approach

    This paper presents a new method for supervised image classification. One or several landmarks are attached to each class, with the intention of characterizing it and discriminating it from the other classes. The different features, deduced from image primitives, and their relationships with the sets of images are structured and organized into a hierarchy by an original method relying on a mathematical formalism called Galois (or Concept) Lattices. Such lattices allow us to select features as landmarks of specific classes. This paper details the feature selection process and illustrates it with a robotic example in a structured environment: the class of any image is the room from which the image is shot by the robot camera. In the discussion, we compare this approach with decision trees and outline some directions for future research.
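
    For readers unfamiliar with Galois (Concept) Lattices, the sketch below illustrates the underlying Galois connection on a toy, made-up context of images and features; it is not the authors' implementation. Each formal concept pairs a set of images with the maximal set of features they share, and an intent whose extent coincides with the images of one class behaves like a landmark in the sense of the abstract. The context data, names, and brute-force enumeration are assumptions for illustration (real lattices would use an algorithm such as Ganter's NextClosure).

    # Hypothetical sketch of a Galois connection over a binary context of
    # images x features; each formal concept is (extent, intent).
    from itertools import combinations

    # Toy context (assumed data): rows = images, columns = detected features.
    images = ["img1", "img2", "img3", "img4"]
    features = ["door_frame", "window", "poster", "tiled_floor"]
    context = {
        "img1": {"door_frame", "poster"},
        "img2": {"door_frame", "poster", "window"},
        "img3": {"window", "tiled_floor"},
        "img4": {"tiled_floor"},
    }

    def extent(feature_set):
        """Images possessing every feature in feature_set."""
        return {i for i in images if feature_set <= context[i]}

    def intent(image_set):
        """Features shared by every image in image_set."""
        shared = set(features)
        for i in image_set:
            shared &= context[i]
        return shared

    def concepts():
        """Enumerate formal concepts by closing every feature subset
        (adequate for toy contexts only)."""
        seen = set()
        for r in range(len(features) + 1):
            for fs in combinations(features, r):
                ext = extent(set(fs))
                if frozenset(ext) not in seen:
                    seen.add(frozenset(ext))
                    yield sorted(ext), sorted(intent(ext))

    for ext, inten in concepts():
        print(ext, inten)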