
    Performance Characterization of Image Feature Detectors in Relation to the Scene Content Utilizing a Large Image Database

    Selecting the most suitable local invariant feature detector for a particular application has rendered the task of evaluating feature detectors a critical issue in vision research. Although the literature offers a variety of comparison works focusing on performance evaluation of image feature detectors under several types of image transformations, the influence of the scene content on the performance of local feature detectors has received little attention so far. This paper aims to bridge this gap with a new framework for determining the types of scenes which maximize and minimize the performance of detectors in terms of repeatability rate. Results are presented for several state-of-the-art feature detectors, obtained using a large image database of 20,482 images under JPEG compression, uniform light and blur changes, covering 539 different scenes captured from real-world scenarios. These results provide new insights into the behavior of feature detectors.

    Performance comparison of image feature detectors utilizing a large number of scenes

    Selecting the most suitable local invariant feature detector for a particular application has rendered the task of evaluating feature detectors a critical issue in vision research. No state-of-the-art image feature detector works satisfactorily under all types of image transformations. Although the literature offers a variety of comparison works focusing on performance evaluation of image feature detectors under several types of image transformation, the influence of the scene content on the performance of local feature detectors has received little attention so far. This paper aims to bridge this gap with a new framework for determining the types of scenes which maximize and minimize the performance of detectors in terms of repeatability rate. Several state-of-the-art feature detectors have been assessed utilizing a large database of 12,936 images generated by applying uniform light and blur changes to 539 scenes captured from the real world. The results obtained provide new insights into the behaviour of feature detectors.
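
The repeatability rate used to score detectors in studies like this measures how many keypoints found in a reference image are re-detected in a transformed version of the same scene. A minimal sketch, assuming keypoints have already been mapped into a common frame and that a simple pixel-distance tolerance stands in for the region-overlap criterion used in full evaluations:

```python
def repeatability(ref_points, test_points, tol=2.0):
    """Fraction of reference keypoints re-detected within `tol` pixels.

    ref_points / test_points: iterables of (x, y) keypoint coordinates,
    assumed already warped into the same coordinate frame.
    """
    ref_points = list(ref_points)
    if not ref_points:
        return 0.0
    matched = 0
    for (x1, y1) in ref_points:
        # A reference keypoint counts as repeated if any detection in the
        # transformed image lies within the distance tolerance.
        if any((x1 - x2) ** 2 + (y1 - y2) ** 2 <= tol ** 2
               for (x2, y2) in test_points):
            matched += 1
    return matched / len(ref_points)
```

A detector that is robust to the applied change (blur, lighting, compression) yields a rate close to 1; scene content can push this in either direction, which is what the paper's framework isolates.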

    Automatic Selection of the Optimal Local Feature Detector

    A large number of different local feature detectors have been proposed in the last few years. However, each feature detector has its own strengths and weaknesses that limit its use to a specific range of applications. This paper presents a tool capable of quickly analysing input images to determine the type and amount of transformation applied to them, and of then selecting the feature detector that is expected to perform best. The results show that its performance and fast execution time render the proposed tool suitable for real-world vision applications.
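
Such a tool can be sketched as a lookup from the estimated (transformation type, amount) pair to the detector that scored highest in an off-line characterisation. The mapping below is purely illustrative; the actual detectors, transformation categories, and rankings come from the paper's measured data, which is not reproduced here:

```python
# Hypothetical off-line characterisation table: for each estimated
# (transformation, severity) pair, the detector with the best measured
# repeatability. Entries are placeholders, not the paper's findings.
BEST_DETECTOR = {
    ("blur", "low"): "SIFT",
    ("blur", "high"): "SURF",
    ("light", "low"): "HARRIS",
    ("light", "high"): "MSER",
}

def select_detector(transform_type, amount, default="SIFT"):
    """Pick the detector expected to perform best for the estimated
    image transformation; fall back to a default for unseen cases."""
    return BEST_DETECTOR.get((transform_type, amount), default)
```

Because selection reduces to estimating the transformation and one table lookup, the per-image overhead is small, which is what makes this kind of approach viable for real-time pipelines.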

    An Approach to Automatic Selection of the Optimal Local Feature Detector

    Feature matching techniques have significantly contributed to making vision applications more reliable by solving the image correspondence problem. The feature matching process requires an effective feature detection stage capable of providing high-quality interest points. The effort of the research community in this field has produced a large number of different approaches to the problem of feature detection. However, imaging conditions influence the performance of a feature detector, making it suitable only for a limited range of applications. This thesis aims to improve the reliability and effectiveness of feature detection by proposing an approach for the automatic selection of the optimal feature detector in relation to the input image characteristics. Having knowledge of how the imaging conditions will influence a feature detector's performance is fundamental to this research. Thus, the behaviour of feature detectors under varying image changes and in relation to the scene content is investigated. The results obtained through this analysis allowed a first but important step to be taken towards a fully adaptive method for selecting the optimal feature detector under any given operating condition.

    Computer Vision for Timber Harvesting


    PUGTIFs: Passively user-generated thermal invariant features

    Feature detection is a vital aspect of computer vision applications, but adverse environments, distance and illumination can affect the quality and repeatability of features or even prevent their identification. Invariance to these constraints would make an ideal feature attribute. Here we propose the first exploitation of consistently occurring thermal signatures generated by a moving platform, a paradigm we define as passively user-generated thermal invariant features (PUGTIFs). In this particular instance, the PUGTIF concept is applied through the use of thermal footprints that are passively and continuously user-generated by heat differences, so that features are no longer dependent on the changing scene structure (as in classical approaches) but instead maintain spatial coherency and remain invariant to changes in illumination. A framework suitable for any PUGTIF has been designed, consisting of three methods: first, the known footprint size is used to resolve the scale ambiguity of monocular localisation; second, the consistent spatial pattern allows us to determine heading orientation; and third, these principles are combined in our automated thermal footprint detector (ATFD) method to achieve segmentation/feature detection. We evaluated the detection of PUGTIFs in four laboratory environments (sand, grass, grass with foliage, and carpet) and compared ATFD to typical image segmentation methods. We found that ATFD is superior to the other methods while also solving for scaled monocular camera localisation and providing user heading in multiple environments.
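
The first of the three methods, recovering metric scale from the known footprint size, follows directly from the pinhole camera model: distance Z = f·W / w, where f is the focal length in pixels, W the real footprint width, and w its measured width in the image. A minimal sketch (the function name and units are illustrative, not from the paper):

```python
def depth_from_known_width(focal_px, real_width_m, pixel_width):
    """Pinhole-model range to an object of known physical width.

    focal_px:     camera focal length, in pixels
    real_width_m: true object (footprint) width, in metres
    pixel_width:  observed width of the object in the image, in pixels

    Returns the distance to the object in metres: Z = f * W / w.
    """
    if pixel_width <= 0:
        raise ValueError("pixel_width must be positive")
    return focal_px * real_width_m / pixel_width
```

Because the footprint's physical size is known a priori, a single monocular view suffices to anchor the scale of the whole localisation, sidestepping the classic monocular scale ambiguity.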

    A framework based on Gaussian mixture models and Kalman filters for the segmentation and tracking of anomalous events in shipboard video

    Anomalous indications in monitoring equipment on board U.S. Navy vessels must be handled in a timely manner to prevent catastrophic system failure. The development of sensor data analysis techniques to assist a ship's crew in monitoring machinery and summoning required ship-to-shore assistance is of considerable benefit to the Navy. In addition, the Navy has a large interest in the development of distance support technology in its ongoing efforts to reduce manning on ships. In this thesis, algorithms have been developed for the detection of anomalous events that can be identified from the analysis of monochromatic stationary ship surveillance video streams. The specific anomalies that we have focused on are the presence and growth of smoke and fire events inside the frames of the video stream. The algorithm consists of the following steps. First, a foreground segmentation algorithm based on adaptive Gaussian mixture models is employed to detect the presence of motion in a scene. The algorithm is adapted to emphasize gray-level characteristics related to smoke and fire events in the frame. Next, shape-discriminant features in the foreground are enhanced using morphological operations. Following this step, the anomalous indication is tracked between frames using Kalman filtering. Finally, gray-level shape and motion features corresponding to the anomaly are subjected to principal component analysis and classified using a multilayer perceptron neural network. The algorithm is exercised on 68 video streams that include the presence of anomalous events (such as fire and smoke) and benign/nuisance events (such as humans walking through the field of view). Initial results show that the algorithm is successful in detecting anomalies in video streams and is suitable for application in shipboard environments.
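
The inter-frame tracking step pairs the GMM foreground mask with a Kalman filter over the blob's position. A minimal sketch of a constant-velocity Kalman filter for one coordinate of a tracked centroid (a simplified stand-in for the thesis's tracker; noise parameters are illustrative):

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a blob centroid.

    State is (position, velocity); only position is measured each frame.
    q and r are the process and measurement noise variances (placeholders).
    """

    def __init__(self, x0, q=1e-2, r=1.0):
        self.x, self.v = x0, 0.0           # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r

    def step(self, z, dt=1.0):
        # Predict: x' = x + v*dt, P' = F P F^T + Q (Q added on diagonal).
        x = self.x + dt * self.v
        v = self.v
        P = self.P
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + self.q]]
        # Update with position measurement z (H = [1, 0]).
        S = P[0][0] + self.r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x
        self.x, self.v = x + K0 * y, v + K1 * y
        self.P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
                  [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        return self.x
```

Running one filter per axis smooths the noisy centroid of the segmented smoke/fire region and carries the track through frames where segmentation momentarily fails.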

    On Creating Benchmark Dataset for Aerial Image Interpretation: Reviews, Guidances and Million-AID

    The past years have witnessed great progress in remote sensing (RS) image interpretation and its wide applications. With RS images becoming more accessible than ever before, there is an increasing demand for the automatic interpretation of these images. In this context, benchmark datasets serve as essential prerequisites for developing and testing intelligent interpretation algorithms. After reviewing existing benchmark datasets in the research community of RS image interpretation, this article discusses the problem of how to efficiently prepare a suitable benchmark dataset for RS image interpretation. Specifically, we first analyze the current challenges of developing intelligent algorithms for RS image interpretation with bibliometric investigations. We then present general guidance on creating benchmark datasets in an efficient manner. Following this guidance, we also provide an example of building an RS image dataset, Million-AID, a new large-scale benchmark containing a million instances for RS image scene classification. Several challenges and perspectives in RS image annotation are finally discussed to facilitate research in benchmark dataset construction. We hope this paper will provide the RS community with an overall perspective on constructing large-scale and practical image datasets for further research, especially data-driven research.