
    A comparative evaluation of interest point detectors and local descriptors for visual SLAM

    In this paper we compare the behavior of different interest point detectors and descriptors under the conditions required for their use as landmarks in vision-based simultaneous localization and mapping (SLAM). We evaluate the repeatability of the detectors, as well as the invariance and distinctiveness of the descriptors, under different perceptual conditions using sequences of images representing planar objects as well as 3D scenes. We believe that this information will be useful when selecting an appropriate detector and descriptor for vision-based SLAM.
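    The central measurement in this kind of study is detector repeatability: detect keypoints in a reference view and a transformed view, project the reference keypoints through the known transformation, and count how many land near a detection in the second view. A minimal OpenCV sketch is below; the file names, the homography file, the SIFT detector choice, and the 2-pixel tolerance are illustrative assumptions, not the paper's exact protocol.

    import cv2
    import numpy as np

    # Reference image, a transformed view of it, and the ground-truth homography
    # mapping reference coordinates into the transformed view (hypothetical files).
    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    warped = cv2.imread("warped.png", cv2.IMREAD_GRAYSCALE)
    H = np.loadtxt("H_ref_to_warped.txt")

    detector = cv2.SIFT_create()          # any cv2.Feature2D detector works here
    kp_ref = detector.detect(ref, None)
    kp_warp = detector.detect(warped, None)

    # Project reference keypoints into the transformed image.
    proj = cv2.perspectiveTransform(
        np.float32([k.pt for k in kp_ref]).reshape(-1, 1, 2), H).reshape(-1, 2)
    warp_pts = np.float32([k.pt for k in kp_warp])

    # A keypoint is "repeated" if some detection in the second image lies
    # within a small pixel tolerance of its projected location.
    tol = 2.0
    repeated = sum(np.min(np.linalg.norm(warp_pts - p, axis=1)) < tol for p in proj)
    print(f"repeatability: {repeated / max(len(kp_ref), 1):.2f}")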

    Mobile Robot Localization using Panoramic Vision and Combinations of Feature Region Detectors

    IEEE International Conference on Robotics and Automation (ICRA 2008), Pasadena, California, May 19-23, 2008, pp. 538-543. This paper presents a vision-based approach for mobile robot localization. The environmental model is topological. The new approach uses a constellation of different types of affine covariant regions to characterize a place; this type of representation permits reliable and distinctive environment modeling. The performance of the proposed approach is evaluated using a database of panoramic images from different rooms. Additionally, we compare different combinations of complementary feature region detectors to find the one that achieves the best results. Our experiments show promising results for this new localization method. Moreover, similarly to what happens with single detectors, different combinations exhibit different strengths and weaknesses depending on the situation, suggesting that a context-aware method to combine the different detectors would improve the localization results. This work was partially supported by USC Women in Science and Engineering (WiSE), the FI grant from the Generalitat de Catalunya, the European Social Fund, the MID-CBR project grant TIN2006-15140-C03-01, FEDER funds, and grant 2005-SGR-00093.
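    A rough sketch of this kind of constellation-based place matching is given below: descriptors are pooled from two complementary detectors and a query image is assigned to the database place with the most good matches. The detector/descriptor choices (SIFT and ORB keypoints, SIFT descriptors) and the ratio-test threshold are stand-in assumptions, not the combinations evaluated in the paper.

    import cv2
    import numpy as np

    def place_signature(img):
        # Pool keypoints from two complementary detectors, describe them all with SIFT.
        kps = list(cv2.SIFT_create().detect(img, None)) + list(cv2.ORB_create().detect(img, None))
        _, desc = cv2.SIFT_create().compute(img, kps)
        return desc

    def localize(query_desc, db_descs):
        # Vote for the database image with the most ratio-test matches.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        scores = []
        for desc in db_descs:
            pairs = matcher.knnMatch(query_desc, desc, k=2)
            good = [p[0] for p in pairs
                    if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
            scores.append(len(good))
        return int(np.argmax(scores))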

    Analysis of feature detector and descriptor combinations with a localization experiment for various performance metrics

    The purpose of this study is to provide a detailed performance comparison of feature detector/descriptor methods, particularly when their various combinations are used for image matching. The localization experiments of a mobile robot in an indoor environment are presented as a case study. In these experiments, 3090 query images and 127 dataset images were used. This study includes five methods for feature detection (features from accelerated segment test (FAST), oriented FAST and rotated binary robust independent elementary features (BRIEF) (ORB), speeded-up robust features (SURF), scale-invariant feature transform (SIFT), and binary robust invariant scalable keypoints (BRISK)) and five methods for feature description (BRIEF, BRISK, SIFT, SURF, and ORB). These methods were used in 23 different combinations, and it was possible to obtain meaningful and consistent comparison results using the performance criteria defined in this study. Each method was used independently as either a feature detector or a descriptor. The performance analysis shows the discriminative power of the various detector and descriptor combinations. The analysis uses five parameters: (i) accuracy, (ii) time, (iii) angle difference between keypoints, (iv) number of correct matches, and (v) distance between correctly matched keypoints. In a range of 60°, covering five rotational pose points for our system, the FAST-SURF combination had the lowest distance and angle-difference values and the highest number of matched keypoints. SIFT-SURF was the most accurate combination, with a 98.41% correct classification rate. The fastest algorithm was ORB-BRIEF, with a total running time of 21,303.30 s to match 560 images captured during motion with 127 dataset images. Comment: 11 pages, 3 figures, 1 table
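    Mixing a detector with a descriptor from a different family is straightforward in OpenCV: one class is used only to detect keypoints and another only to compute descriptors at those keypoints. The sketch below pairs FAST detection with ORB description (SURF is patented and lives in opencv-contrib, so ORB stands in); the file names and matcher settings are assumptions for illustration.

    import cv2

    detector = cv2.FastFeatureDetector_create()    # FAST: detection only
    descriptor = cv2.ORB_create()                  # ORB: used here only to describe

    img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("dataset.png", cv2.IMREAD_GRAYSCALE)

    kp1 = detector.detect(img1, None)
    kp2 = detector.detect(img2, None)
    kp1, des1 = descriptor.compute(img1, kp1)      # describe the FAST keypoints
    kp2, des2 = descriptor.compute(img2, kp2)

    # Hamming distance suits binary descriptors (BRIEF/ORB/BRISK); use NORM_L2 for SIFT/SURF.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} matches; best distance {matches[0].distance:.1f}")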

    Image features for visual teach-and-repeat navigation in changing environments

    We present an evaluation of standard image features in the context of long-term visual teach-and-repeat navigation of mobile robots, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that for long-term autonomous navigation, the viewpoint, scale, and rotation invariance of the standard feature extractors is less important than their robustness to mid- and long-term changes in environment appearance. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally occurring seasonal changes. We combine detection and description components of different feature extractors and evaluate their performance on five datasets collected by mobile vehicles in three different outdoor environments over the course of one year. Moreover, we propose a trainable feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the most promising results were achieved by the SpG/CNN and STAR/GRIEF features, the latter being slightly less robust but faster to compute.
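    GRIEF keeps the BRIEF recipe (a binary string of pairwise intensity comparisons inside a patch) but lets an evolutionary loop choose the comparison pairs so that descriptors of the same place stay close across seasons. The toy sketch below illustrates that idea; the patch size, descriptor length, mutation scheme, and fitness function are illustrative assumptions rather than the paper's exact training procedure.

    import numpy as np

    PATCH, NBITS = 32, 256
    rng = np.random.default_rng(0)

    def random_pairs(n=NBITS):
        # Each comparison is (x1, y1, x2, y2) inside a PATCH x PATCH window.
        return rng.integers(0, PATCH, size=(n, 4))

    def describe(patch, pairs):
        # BRIEF-style binary descriptor: one bit per intensity comparison.
        return patch[pairs[:, 1], pairs[:, 0]] > patch[pairs[:, 3], pairs[:, 2]]

    def fitness(pairs, summer_patches, winter_patches):
        # Negative mean Hamming distance between corresponding cross-season patches.
        # A real fitness would also penalize matches to non-corresponding patches
        # (distinctiveness), otherwise a degenerate descriptor could score well.
        d = [np.count_nonzero(describe(s, pairs) != describe(w, pairs))
             for s, w in zip(summer_patches, winter_patches)]
        return -float(np.mean(d))

    def evolve(summer_patches, winter_patches, generations=100, mutations=16):
        pairs = random_pairs()
        best = fitness(pairs, summer_patches, winter_patches)
        for _ in range(generations):
            cand = pairs.copy()
            cand[rng.integers(0, NBITS, size=mutations)] = random_pairs(mutations)  # mutate a few comparisons
            f = fitness(cand, summer_patches, winter_patches)
            if f > best:
                pairs, best = cand, f
        return pairs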

    Comparing Combinations of Feature Regions for Panoramic VSLAM

    Invariant (or covariant) image feature region detectors and descriptors are useful in visual robot navigation because they provide a fast and reliable way to extract relevant and discriminative information from an image while avoiding problems caused by changes in illumination or viewpoint. Furthermore, complementary types of image features can be used simultaneously to extract even more information. However, this advantage always entails the cost of more processing time and, if not used wisely, performance can even be worse. In this paper we present the results of a comparison between various combinations of region detectors and descriptors. The test consists of computing the essential matrix between panoramic images using correspondences established with these methods. Different combinations of region detectors and descriptors are evaluated and validated using ground-truth data. The results will help us find the best combination to use in an autonomous robot navigation system. This work has been partially supported by the FI grant from the Generalitat de Catalunya, the European Social Fund, the MID-CBR project grant TIN2006-15140-C03-01, and FEDER funds.
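    For conventional perspective images, the essential-matrix step of such a test can be sketched in a few lines of OpenCV: match descriptors between the two views, estimate E with RANSAC, and recover the relative pose for comparison against ground truth. Panoramic images additionally require mapping pixels through the omnidirectional camera model, which is omitted here; the SIFT features, file names, and intrinsic matrix K below are assumptions for illustration.

    import cv2
    import numpy as np

    sift = cv2.SIFT_create()
    img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Cross-checked brute-force matching to establish correspondences.
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed pinhole intrinsics
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    print("inliers:", int(inliers.sum()), "rotation:\n", R, "\ntranslation direction:", t.ravel())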

    Synthesizing Training Data for Object Detection in Indoor Scenes

    Detection of objects in cluttered indoor environments is one of the key enabling functionalities for service robots. The best-performing object detection approaches in computer vision exploit deep Convolutional Neural Networks (CNNs) to simultaneously detect and categorize the objects of interest in cluttered scenes. Training such models typically requires large amounts of annotated training data, which is time-consuming and costly to obtain. In this work we explore the use of synthetically generated composite images for training state-of-the-art object detectors, especially for object instance detection. We superimpose 2D images of textured object models onto images of real environments at a variety of locations and scales. Our experiments evaluate different superimposition strategies, ranging from purely image-based blending all the way to depth- and semantics-informed positioning of the object models in real scenes. We demonstrate the effectiveness of these object detector training strategies on two publicly available datasets, the GMU-Kitchens and the Washington RGB-D Scenes v2. As one observation, augmenting some hand-labeled training data with synthetic examples carefully composed onto scenes yields object detectors with performance comparable to using much more hand-labeled data. Broadly, this work charts new opportunities for training detectors for new objects by exploiting existing object model repositories, either in a purely automatic fashion or with only a very small number of human-annotated examples. Comment: Added more experiments and link to project webpage
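    The simplest, purely image-based variant of this composition can be sketched as pasting an alpha-masked object crop into a background scene at a random location and scale, with the paste region recorded as the bounding-box annotation. The file names and the plain alpha blending below are illustrative assumptions; the paper also studies depth- and semantics-informed placement and blending.

    import random
    import cv2
    import numpy as np

    def compose(scene, obj_rgba, min_scale=0.3, max_scale=1.0):
        # Randomly scale the RGBA object crop and choose a location inside the scene.
        s = random.uniform(min_scale, max_scale)
        obj = cv2.resize(obj_rgba, None, fx=s, fy=s)
        h, w = obj.shape[:2]
        H, W = scene.shape[:2]
        x, y = random.randint(0, W - w), random.randint(0, H - h)

        # Alpha-blend the object onto the scene and return the bounding-box label.
        alpha = obj[:, :, 3:4].astype(np.float32) / 255.0
        roi = scene[y:y + h, x:x + w].astype(np.float32)
        scene[y:y + h, x:x + w] = (alpha * obj[:, :, :3] + (1.0 - alpha) * roi).astype(np.uint8)
        return scene, (x, y, x + w, y + h)

    scene = cv2.imread("kitchen.jpg")                          # real background scene
    obj = cv2.imread("cereal_box.png", cv2.IMREAD_UNCHANGED)   # RGBA object crop
    train_img, bbox = compose(scene, obj)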