
    A method to improve interest point detection and its GPU implementation

    Interest point detection is an important low-level image processing technique with a wide range of applications. Point detectors have to be robust under affine, scale and photometric changes. There are many scale- and affine-invariant point detectors, but they are not robust to large illumination changes. Many affine-invariant interest point detectors and region descriptors work on points detected using scale-invariant operators. Since the performance of those detectors depends on the performance of the scale-invariant detectors, it is important that the scale-invariant detectors in the initial stage are robust. It is therefore important to design a detector that is highly robust to illumination changes, because they are the most common. In this research the illumination problem is the main focus, and a scale-invariant detector with good robustness to illumination changes has been developed. In [6] it was shown that a contrast-stretching technique considerably improves the performance of the Harris operator under illumination variations. In this research the same contrast-stretching function has been incorporated into two different scale-invariant operators to make them illumination invariant. The performance of these algorithms is compared with the Harris-Laplace and Hessian-Laplace algorithms [15].
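
    The idea can be illustrated with a short sketch: a simple percentile-based contrast stretch is applied to the image before running OpenCV's Harris detector. This is only a hedged illustration of "contrast stretching before corner detection"; the stretch function, percentiles and response threshold below are assumptions, not the exact function used in the paper or in [6].

    import cv2
    import numpy as np

    def contrast_stretch(gray, low_pct=2, high_pct=98):
        """Linearly stretch intensities between two percentiles to the range [0, 255]."""
        lo, hi = np.percentile(gray, (low_pct, high_pct))
        stretched = (gray.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6)
        return np.clip(stretched, 0, 255).astype(np.uint8)

    def harris_points(gray, block_size=2, ksize=3, k=0.04, thresh_ratio=0.01):
        """Return (row, col) coordinates whose Harris response exceeds a relative threshold."""
        response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
        ys, xs = np.where(response > thresh_ratio * response.max())
        return list(zip(ys, xs))

    # Stretching the contrast first makes the detector less sensitive to global illumination changes.
    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
    points = harris_points(contrast_stretch(img))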

    Review of Person Re-identification Techniques

    Person re-identification across different surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in the area of intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all of the existing re-identification approaches, feature vectors are extracted from segmented still images or video frames. Different similarity or dissimilarity measures have been applied to these vectors. Some methods have used simple constant metrics, whereas others have utilised models to obtain optimised metrics. Some have created models based on local colour or texture information, and others have built models based on the gait of people. In general, the main objective of all these approaches is to achieve a higher accuracy rate and lower computational costs. This study summarises several developments in recent literature and discusses the various available methods used in person re-identification. Specifically, their advantages and disadvantages are mentioned and compared.
    Comment: Published 201
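
    As a concrete, hedged example of the generic pipeline described above (feature extraction followed by a similarity measure), the sketch below computes a simple HSV colour-histogram descriptor per person image and ranks a gallery with a constant Euclidean metric. The descriptor, bin counts and metric are illustrative assumptions and do not correspond to any particular method in the survey.

    import cv2
    import numpy as np

    def colour_descriptor(bgr_image, bins=(8, 8, 8)):
        """Normalised 3-D HSV colour histogram used as a simple appearance feature vector."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def rank_gallery(probe_vec, gallery_vecs):
        """Rank gallery entries by Euclidean distance to the probe (a simple constant metric)."""
        dists = [np.linalg.norm(probe_vec - g) for g in gallery_vecs]
        return np.argsort(dists)

    # Metric-learning approaches replace the plain Euclidean distance with a learned (optimised) metric.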

    D2-Net: A Trainable CNN for Joint Detection and Description of Local Features

    In this work, we address the problem of finding reliable pixel-level correspondences under difficult imaging conditions. We propose an approach where a single convolutional neural network plays a dual role: it is simultaneously a dense feature descriptor and a feature detector. By postponing the detection to a later stage, the obtained keypoints are more stable than their traditional counterparts based on early detection of low-level structures. We show that this model can be trained using pixel correspondences extracted from readily available large-scale SfM reconstructions, without any further annotations. The proposed method obtains state-of-the-art performance on both the difficult Aachen Day-Night localization dataset and the InLoc indoor localization benchmark, as well as competitive performance on other benchmarks for image matching and 3D reconstruction.
    Comment: Accepted at CVPR 201
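
    The describe-then-detect idea can be sketched as follows: one dense CNN feature map supplies per-pixel descriptors, and keypoints are taken where the response is both a channel-wise and a spatial local maximum. This is only a hedged illustration; the VGG16 backbone slice, scoring rule and top-k selection are assumptions made here, not the released D2-Net implementation.

    import torch
    import torch.nn.functional as F
    import torchvision

    # Dense feature extractor: a VGG16 slice up to the conv4_3 activation (an assumption).
    backbone = torchvision.models.vgg16(weights=None).features[:23].eval()

    def detect_and_describe(image_bchw, top_k=500):
        """Return (x, y) keypoints and unit-norm descriptors taken from a single dense feature map."""
        with torch.no_grad():
            feats = backbone(image_bchw)                     # (1, C, H, W) dense descriptors
            channel_max, _ = feats.max(dim=1, keepdim=True)  # strongest channel response per pixel
            local_max = F.max_pool2d(channel_max, 3, stride=1, padding=1)
            score = channel_max * (channel_max == local_max).float()   # keep spatial maxima only
            idx = score.flatten().topk(min(top_k, score.numel())).indices
            width = feats.shape[-1]
            ys, xs = idx // width, idx % width
            descriptors = F.normalize(feats[0, :, ys, xs].t(), dim=1)  # (top_k, C) unit descriptors
            return torch.stack([xs, ys], dim=1), descriptors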

    Large scale evaluation of local image feature detectors on homography datasets

    We present a large scale benchmark for the evaluation of local feature detectors. Our key innovation is the introduction of a new evaluation protocol which extends and improves the standard detection repeatability measure. The new protocol is better suited to assessment on a large number of images and reduces the dependency of the results on unwanted distractors such as the number of detected features and the feature magnification factor. Additionally, our protocol provides a comprehensive assessment of the expected performance of detectors under several practical scenarios. Using images from the recently introduced HPatches dataset, we evaluate a range of state-of-the-art local feature detectors on two main tasks: viewpoint-invariant and illumination-invariant detection. Contrary to previous detector evaluations, our study contains an order of magnitude more image sequences, resulting in a quantitative evaluation significantly more robust to over-fitting. We also show that traditional detectors are still very competitive when compared to recent deep-learning alternatives.
    Comment: Accepted to BMVC 201
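
    For context, the sketch below shows the standard repeatability measure that the proposed protocol extends: detections from one image are projected into the other with the ground-truth homography and counted as repeated when a detection lies within a pixel threshold. The threshold and the normalisation by the smaller detection count are assumptions for illustration, not the paper's exact protocol.

    import numpy as np

    def project(points_xy, H):
        """Apply a 3x3 homography to an (N, 2) array of (x, y) points."""
        pts_h = np.hstack([points_xy, np.ones((len(points_xy), 1))])
        proj = pts_h @ H.T
        return proj[:, :2] / proj[:, 2:3]

    def repeatability(kp_a, kp_b, H_ab, threshold=3.0):
        """Fraction of detections in image A re-detected in image B within `threshold` pixels."""
        proj_a = project(np.asarray(kp_a, dtype=float), H_ab)
        kp_b = np.asarray(kp_b, dtype=float)
        dists = np.linalg.norm(proj_a[:, None, :] - kp_b[None, :, :], axis=2)
        repeated = int((dists.min(axis=1) <= threshold).sum())
        return repeated / max(min(len(kp_a), len(kp_b)), 1)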