6,748 research outputs found

    Performance comparison of image feature detectors utilizing a large number of scenes

    Selecting the most suitable local invariant feature detector for a particular application has rendered the task of evaluating feature detectors a critical issue in vision research. No state-of-the-art image feature detector works satisfactorily under all types of image transformations. Although the literature offers a variety of comparison works focusing on the performance evaluation of image feature detectors under several types of image transformation, the influence of scene content on the performance of local feature detectors has received little attention so far. This paper aims to bridge this gap with a new framework for determining the types of scenes that maximize and minimize the performance of detectors in terms of repeatability rate. Several state-of-the-art feature detectors have been assessed utilizing a large database of 12,936 images generated by applying uniform light and blur changes to 539 scenes captured from the real world. The results obtained provide new insights into the behaviour of feature detectors.
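
    As a rough illustration of the repeatability measure used in evaluations of this kind (not the authors' exact protocol), the Python/OpenCV sketch below counts how many keypoints detected in a reference image re-occur, within a small pixel tolerance, in a blurred copy of the same scene; the file name, tolerance and normalization are assumptions.

    # Minimal sketch: repeatability of a feature detector under blur, assuming
    # the transformed image stays geometrically aligned with the reference
    # (so no homography is needed). Tolerance and normalization are illustrative.
    import cv2
    import numpy as np

    def repeatability(detector, ref_img, transformed_img, tol=1.5):
        """Fraction of keypoints re-detected within `tol` pixels."""
        kp_ref = detector.detect(ref_img, None)
        kp_trn = detector.detect(transformed_img, None)
        if not kp_ref or not kp_trn:
            return 0.0
        pts_ref = np.array([k.pt for k in kp_ref])           # (n_ref, 2)
        pts_trn = np.array([k.pt for k in kp_trn])           # (n_trn, 2)
        # Pairwise distances; a reference keypoint counts as "repeated" if any
        # detection in the transformed image lies within the tolerance radius.
        d = np.linalg.norm(pts_ref[:, None, :] - pts_trn[None, :, :], axis=2)
        repeated = int((d.min(axis=1) <= tol).sum())
        return repeated / min(len(kp_ref), len(kp_trn))

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
    blurred = cv2.GaussianBlur(img, (9, 9), 0)                # uniform blur change
    for name, det in [("FAST", cv2.FastFeatureDetector_create()),
                      ("ORB", cv2.ORB_create()),
                      ("SIFT", cv2.SIFT_create())]:           # SIFT needs OpenCV >= 4.4
        print(name, repeatability(det, img, blurred))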

    Performance and analysis of feature tracking approaches in laser speckle instrumentation

    This paper investigates the application of feature tracking algorithms as an alternative data processing method for laser speckle instrumentation. The approach is capable of determining both the speckle pattern translation and rotation and can therefore be used to detect the in-plane rotation and translation of an object simultaneously. A performance assessment of widely used feature detection and matching algorithms from the computer vision field, for both translation and rotation measurements from laser speckle patterns, is presented. The accuracy of translation measurements using the feature tracking approach was found to be similar to that of correlation-based processing, with accuracies of 0.025–0.04 pixels and a typical precision of 0.02–0.09 pixels depending upon the method and image size used. The performance for in-plane rotation measurements is also presented, with rotation measurement accuracies of <0.01° found to be achievable over an angle range of ±10° and of <0.1° over a range of ±25°, with a typical precision between 0.02° and 0.08° depending upon method and image size. The measurement range is found to be limited by the failure to match sufficient speckles at larger rotation angles. An analysis of each stage of the process was conducted to identify the most suitable approaches for use with laser speckle images and the areas requiring further improvement. A quantitative approach to assessing different feature tracking methods is described, and reference data sets of experimentally translated and rotated speckle patterns from a range of surface finishes and surface roughness are presented. As a result, three areas that lead to the failure of the matching process are identified for future investigation: the inability to detect the same features in partially decorrelated images, leading to unmatchable features; the variance of computed feature orientation between frames, leading to different descriptors being calculated for the same feature; and the failure of the matching process due to the inability to discriminate between different features in speckle images.
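
    As a generic illustration of the feature-tracking idea applied to speckle patterns (not the specific detector/descriptor combinations assessed in the paper), the sketch below matches local features between two frames and fits a similarity transform to recover in-plane translation and rotation; the detector choice and parameters are assumptions.

    # Hedged sketch: estimate in-plane translation and rotation between two
    # speckle frames via feature detection, matching and a RANSAC-fitted
    # partial affine (rotation + uniform scale + translation) transform.
    import cv2
    import numpy as np

    def speckle_shift_and_rotation(frame0, frame1):
        orb = cv2.ORB_create(nfeatures=2000)
        kp0, des0 = orb.detectAndCompute(frame0, None)
        kp1, des1 = orb.detectAndCompute(frame1, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des0, des1)
        src = np.float32([kp0[m.queryIdx].pt for m in matches])
        dst = np.float32([kp1[m.trainIdx].pt for m in matches])
        # RANSAC rejects mismatched speckles (the failure mode noted above,
        # where too few features match at large rotation angles).
        M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        if M is None:
            raise RuntimeError("too few matched speckles to fit a transform")
        angle_deg = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
        tx, ty = M[0, 2], M[1, 2]
        return tx, ty, angle_deg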

    A Generic Framework for Assessing the Performance Bounds of Image Feature Detectors

    Since local feature detection has been one of the most active research areas in computer vision during the last decade and has found a wide range of applications (such as matching and registration of remotely sensed image data), a large number of detectors have been proposed. The interest in feature-based applications continues to grow and has thus rendered the task of characterizing the performance of various feature detection methods an important issue in vision research. Inspired by the good practices of electronic system design, a generic framework based on the repeatability measure is presented in this paper that allows assessment of the upper and lower bounds of detector performance and, by introducing a new variant of McNemar's test, identifies statistically significant performance differences between detectors as a function of the amount of image transformation, in an effort to design more reliable and effective vision systems. The proposed framework is then employed to establish operating and guarantee regions for several state-of-the-art detectors and to identify their statistical performance differences for three specific image transformations: JPEG compression, uniform light changes and blurring. The results are obtained using a newly acquired, large image database (20,482 images) with 539 different scenes. These results provide new insights into the behavior of detectors and are also useful from the vision systems design perspective. Finally, results for some local feature detectors are presented for a set of remote sensing images to showcase the potential and utility of this framework for remote sensing applications in general.
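
    The statistical part of the framework rests on McNemar-style paired testing. The paper introduces its own variant; the sketch below shows only the standard form of the test on hypothetical per-scene success/failure outcomes (e.g. repeatability above a chosen threshold), to make the underlying idea concrete.

    # Standard McNemar's test (with continuity correction) for two detectors
    # evaluated on the same scenes; NOT the modified variant proposed in the
    # paper. |z| > 1.96 indicates a significant difference at roughly p < 0.05.
    import math

    def mcnemar_z(success_a, success_b):
        """success_a, success_b: equal-length lists of booleans, one per scene."""
        n01 = sum(1 for a, b in zip(success_a, success_b) if not a and b)
        n10 = sum(1 for a, b in zip(success_a, success_b) if a and not b)
        if n01 + n10 == 0:
            return 0.0  # the detectors never disagree
        return (abs(n01 - n10) - 1) / math.sqrt(n01 + n10)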

    Faster and better: a machine learning approach to corner detection

    The repeatability and efficiency of a corner detector determine how likely it is to be useful in a real-world application. The repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations [Schmid et al 2000]. The efficiency is important because this determines whether the detector combined with further processing can operate at frame rate. Three advances are described in this paper. First, we present a new heuristic for feature detection and, using machine learning, we derive from it a feature detector which can fully process live PAL video using less than 5% of the available processing time. By comparison, most other detectors cannot even operate at frame rate (Harris detector 115%, SIFT 195%). Second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency. Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes. We show that despite being principally constructed for speed, on these stringent tests our heuristic detector significantly outperforms existing feature detectors. Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and very high quality. Comment: 35 pages, 11 figures
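
    Detectors from this family are available off the shelf: OpenCV's FastFeatureDetector implements the learned segment-test approach from this line of work. A minimal usage sketch (file name and threshold are illustrative assumptions) is shown below.

    # Minimal usage sketch of the FAST corner detector with non-maximal suppression.
    import cv2

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    keypoints = fast.detect(img, None)
    print(f"{len(keypoints)} FAST corners detected")
    out = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
    cv2.imwrite("fast_corners.png", out)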

    Machine Analysis of Facial Expressions

    No abstract

    Keyframe-based monocular SLAM: design, survey, and future directions

    Extensive research in the field of monocular SLAM over the past fifteen years has yielded workable systems that have found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common for some time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM according to specific environmental constraints. Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific design choices made in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM; namely, the issues of illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.
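
    To make the keyframe-based design concrete, the sketch below shows a generic keyframe-selection heuristic of the kind such systems rely on; the criteria and thresholds are illustrative assumptions, not taken from any particular system surveyed.

    # Generic keyframe-insertion heuristic: add a keyframe when tracking is
    # healthy but the view has drifted far enough from the last keyframe
    # (weak feature overlap) or too many frames have passed since it.
    def should_insert_keyframe(n_tracked, n_tracked_in_last_keyframe,
                               frames_since_keyframe,
                               min_ratio=0.7, max_gap=20, min_tracked=50):
        if n_tracked < min_tracked:
            return False  # tracking too weak; attempt relocalization instead
        weak_overlap = n_tracked < min_ratio * n_tracked_in_last_keyframe
        stale = frames_since_keyframe > max_gap
        return weak_overlap or stale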