
    Omnidirectional Vision Based Topological Navigation

    Goedemé T., Van Gool L., "Omnidirectional vision based topological navigation", in Mobile Robots Navigation, Alejandra Barrera (ed.), pp. 172-196, InTech, March 2010.

    SIFTing the relevant from the irrelevant: Automatically detecting objects in training images

    Many state-of-the-art object recognition systems rely on identifying the location of objects in training images in order to better learn their visual attributes. In this paper, we propose four simple yet powerful hybrid ROI detection methods (combining both local and global features) based on frequently occurring keypoints. We show that our methods achieve competitive performance on two different types of datasets, the Caltech101 dataset and the GRAZ-02 dataset, with the pairs-of-keypoints bounding-box method achieving the best accuracies overall.
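
    The pairs-of-keypoints method itself is more involved, but the underlying idea of bounding an object by its most salient local features can be sketched in a few lines. The following is a minimal, hypothetical illustration (single image, response-based filtering), not the paper's method:

        # Hypothetical sketch: coarse ROI from the strongest SIFT keypoints
        # of one training image. The paper's four hybrid methods combine
        # local and global features and are more involved than this.
        import cv2
        import numpy as np

        def keypoint_roi(image_path, top_fraction=0.5):
            img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            keypoints = list(cv2.SIFT_create().detect(img, None))
            if not keypoints:
                return None
            # Keep the most salient keypoints and bound them with a box.
            keypoints.sort(key=lambda k: k.response, reverse=True)
            kept = keypoints[: max(1, int(len(keypoints) * top_fraction))]
            pts = np.array([k.pt for k in kept])
            (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
            return int(x0), int(y0), int(x1), int(y1)  # left, top, right, bottom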

    A decentralised neural model explaining optimal integration of navigational strategies in insects

    Insect navigation arises from the coordinated action of concurrent guidance systems, but the neural mechanisms through which each functions, and through which they are coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and to return directly from novel to familiar terrain (homing), using different aspects of frequency-encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Body regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring attractor inspired by neural recordings. The resulting unified model of insect navigation reproduces behavioural data from a series of cue-conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide, and coordinate their outputs to achieve the adaptive behaviours observed in the wild.
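
    As a rough illustration of the ring-attractor idea (a sketch under assumed dynamics, not the paper's model), a ring of heading neurons with cosine connectivity can merge two directional cues weighted by their gains:

        # Minimal ring-attractor sketch: N heading neurons, local excitation
        # and distal inhibition via a cosine kernel; two directional cues
        # inject bumps and the network settles on a weighted compromise.
        import numpy as np

        N = 64
        angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
        W = np.cos(angles[:, None] - angles[None, :])  # ring connectivity

        def cue(direction, gain):
            return gain * np.exp(np.cos(angles - direction) - 1.0)

        rate = np.zeros(N)
        for _ in range(200):  # leaky, rectified rate dynamics
            drive = W @ rate / N + cue(np.pi / 4, 1.0) + cue(np.pi / 2, 0.5)
            rate += 0.1 * (-rate + np.maximum(drive, 0.0))

        # Decode the bump position as the population vector angle.
        heading = np.angle(np.sum(rate * np.exp(1j * angles)))
        print(f"decoded heading: {np.degrees(heading):.1f} deg")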

    Ant homing ability is not diminished when traveling backwards

    Ants are known to be capable of homing to their nest after displacement to a novel location. This is widely assumed to involve some form of retinotopic matching between their current view and previously experienced views. One simple algorithm proposed to explain this behavior is continuous retinotopic alignment, in which the ant constantly adjusts its heading by rotating to minimize the pixel-wise difference between its current view and all views stored while facing the nest. However, ants with large prey items will often drag them home while facing backwards. We tested whether displaced ants (Myrmecia croslandi) dragging prey could still home despite experiencing an inverted view of their surroundings under these conditions. Ants moving backwards with food took paths to the nest as direct as those of ants moving forward without food, demonstrating that continuous retinotopic alignment is not a critical component of homing. It is possible that ants use initial or intermittent retinotopic alignment, coupled with some other direction-stabilizing cue that they can utilize when moving backward. However, though most ants dragging prey would occasionally look toward the nest, we observed that their heading direction was not noticeably improved afterwards. We assume ants must compare current and stored images to correct their path, but suggest that they are either able to choose the appropriate visual memory for comparison using an additional mechanism, or can make such comparisons without retinotopic alignment.
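
    The continuous retinotopic alignment hypothesis tested here corresponds to a rotational image difference function: rotate the current panoramic view and head in the direction that minimizes the pixel-wise difference to a nest-facing memory. A minimal sketch, assuming equal-sized panoramic grayscale arrays whose columns span 360 degrees of azimuth:

        import numpy as np

        def best_heading(current, memory):
            """Return the heading correction (degrees) that best aligns the
            current panoramic view with a stored nest-facing view."""
            current, memory = current.astype(float), memory.astype(float)
            _, w = current.shape
            diffs = [np.mean((np.roll(current, s, axis=1) - memory) ** 2)
                     for s in range(w)]  # one candidate rotation per column
            return 360.0 * int(np.argmin(diffs)) / w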

    Autonomous Navigation and Mapping using Monocular Low-Resolution Grayscale Vision

    Vision has been a powerful tool for the navigation of intelligent and man-made systems ever since the cybernetics revolution of the 1970s. There have been two basic approaches to the navigation of computer-controlled systems: the self-contained, bottom-up development of sensorimotor abilities, namely perception and mobility, and the top-down approach, namely artificial intelligence, reasoning, and knowledge-based methods. The three-fold goal of autonomous exploration, mapping, and localization of a mobile robot, however, needs to be developed within a single framework. An algorithm is proposed to answer the challenges of autonomous corridor navigation and mapping by a mobile robot equipped with a single forward-facing camera. Using a combination of corridor ceiling lights, visual homing, and entropy, the robot is able to perform straight-line navigation down the center of an unknown corridor. Turning at the end of a corridor is accomplished using Jeffrey divergence and time-to-collision, while deflection from dead ends and blank walls uses a scalar entropy measure of the entire image. When combined, these metrics allow the robot to navigate in both textured and untextured environments. The robot can autonomously explore an unknown indoor environment, recovering from difficult situations like corners, blank walls, and an initial heading toward a wall. While exploring, the algorithm constructs a Voronoi-based topo-geometric map with nodes representing distinctive places like doors, water fountains, and other corridors. Because the algorithm is based entirely upon low-resolution (32 x 24) grayscale images, processing occurs at over 1000 frames per second.
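
    The scalar entropy cue is straightforward to reproduce: a blank wall filling a low-resolution view concentrates the grayscale histogram in a few bins, giving low entropy. A sketch of one plausible formulation (plain Shannon entropy; the thesis's exact normalization is not reproduced here):

        import numpy as np

        def image_entropy(img):
            """img: 2-D uint8 grayscale array, e.g. 32 x 24 as above."""
            hist = np.bincount(img.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]                            # drop empty histogram bins
            return float(-np.sum(p * np.log2(p)))   # bits; low for blank walls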

    Incremental On-Line Topological Map Learning for A Visual Homing Application


    Characterization of image sets: the Galois Lattice approach

    This paper presents a new method for supervised image classification. One or several landmarks are attached to each class, with the intention of characterizing it and discriminating it from the other classes. The different features, deduced from image primitives, and their relationships with the sets of images are structured and organized into a hierarchy thanks to an original method relying on a mathematical formalism called Galois (or Concept) Lattices. Such lattices allow us to select features as landmarks of specific classes. This paper details the feature selection process and illustrates it through a robotic example in a structured environment. The class of any image is the room from which the image was shot by the robot's camera. In the discussion, we compare this approach with decision trees and give some directions for future research.
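
    To make the Galois connection concrete: over a binary image-by-feature context, one derivation operator maps a set of images to their shared features, and another maps a feature set back to the images containing it; pairs that are fixed under the composition are the formal concepts from which landmarks are selected. A toy sketch with invented data:

        context = {  # image -> detected features (hypothetical toy data)
            "img1": {"door", "poster"},
            "img2": {"door", "window"},
            "img3": {"window", "poster"},
        }

        def common_features(images):
            sets = [context[i] for i in images]
            return set.intersection(*sets) if sets else set()

        def images_with(features):
            return {i for i, feats in context.items() if features <= feats}

        # A formal concept is a pair (extent, intent) closed both ways.
        intent = common_features({"img1", "img2"})  # -> {'door'}
        extent = images_with(intent)                # -> {'img1', 'img2'}
        print(extent, intent)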