5 research outputs found

    Extracting scale and illuminant invariant regions through color

    Although color is a powerful cue in object recognition, the extraction of scale-invariant interest regions from color images frequently begins with a conversion of the image to grayscale. The isolation of interest points is then completely determined by luminance, and the use of color is deferred to the stage of descriptor formation. This seemingly innocuous conversion to grayscale is known to suppress saliency and can lead to representative regions going undetected by procedures based only on luminance. Furthermore, grayscaled images of the same scene under even slightly different illuminants can appear sufficiently different as to affect the repeatability of detections across images. We propose a method that combines information from the color channels to drive the detection of scale-invariant keypoints. By factoring out the local effect of the illuminant using an expressive linear model, we demonstrate robustness to a change in the illuminant without having to estimate its properties from the image. Results are shown on challenging images from two commonly used color constancy datasets.
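The abstract does not spell out the linear model it uses, so as a minimal sketch only: a diagonal (von Kries) illuminant model — the simplest special case of a linear model — can be factored out by normalizing each channel by its own mean, without estimating the illuminant itself. The function name and the diagonal simplification here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def normalize_illuminant(img):
    """Divide each channel by its mean: a von Kries-style diagonal
    correction that cancels a per-channel illuminant gain."""
    means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return img / means

# Two renderings of the same scene under different illuminants,
# modeled as per-channel gains (diagonal model).
rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3)) + 0.1
warm = scene * np.array([1.3, 1.0, 0.7])   # reddish light
cool = scene * np.array([0.8, 1.0, 1.2])   # bluish light

a = normalize_illuminant(warm)
b = normalize_illuminant(cool)
# Both normalizations recover the same illuminant-free image,
# so detections driven by this representation are repeatable.
```

Any per-channel gain cancels exactly under this normalization, which is why no explicit illuminant estimate is needed; the paper's "expressive linear model" generalizes beyond this diagonal case.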


    Colour local feature fusion for image matching and recognition

    This thesis investigates the use of colour information for local image feature extraction. The work is motivated by an inherent limitation of the most widely used state-of-the-art local feature techniques: their disregard of colour information. Colour carries important information that improves the description of the world around us; by discarding it, chromatic edges may be lost, reducing the saliency and distinctiveness of the resulting grayscale image. This thesis addresses the question of whether colour can improve the distinctive and descriptive capabilities of local features, and whether this leads to better performance in image feature matching and object recognition applications. To ensure that the developed local colour features are robust to general imaging conditions and suitable for real-world applications, this work utilises the most prominent photometric colour invariant gradients from the literature. The research addresses several limitations of previous studies that used colour invariants by implementing robust local colour features in the form of a Harris-Laplace interest region detector and a SIFT descriptor that characterises the detected image region. Additionally, a comprehensive and rigorous evaluation is performed that compares the largest number of colour invariants of any previous study. This research provides, for the first time, conclusive findings on the capability of the chosen colour invariants for practical real-world computer vision tasks. The last major aspect of the research is the proposal of a feature fusion extraction strategy that uses grayscale intensity and colour information conjointly. Two separate fusion approaches are implemented and evaluated: one for local feature matching tasks and another for object recognition. Results from the fusion analysis strongly indicate that the colour invariants contain unique and useful information that can enhance the performance of techniques based on grayscale-only features.
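The thesis builds its detector on Harris-Laplace with colour gradients. As a rough illustration of the core idea — summing the structure tensor over all colour channels so that chromatic edges contribute even where luminance is flat — here is a minimal colour Harris response, with a 3x3 box filter standing in for Gaussian tensor smoothing and the scale-selection (Laplace) step omitted. All names and simplifications are this sketch's assumptions, not the thesis's implementation.

```python
import numpy as np

def box3(a):
    """3x3 box filter (edge-padded); a stand-in for Gaussian smoothing
    of the structure tensor."""
    h, w = a.shape
    p = np.pad(a, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def color_harris(img, k=0.04):
    """Harris corner response with the structure tensor accumulated over
    all colour channels, so purely chromatic structure is not discarded."""
    h, w, _ = img.shape
    Ixx = np.zeros((h, w))
    Iyy = np.zeros((h, w))
    Ixy = np.zeros((h, w))
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[..., c])  # per-channel derivatives
        Ixx += gx * gx
        Iyy += gy * gy
        Ixy += gx * gy
    Ixx, Iyy, Ixy = box3(Ixx), box3(Iyy), box3(Ixy)
    return Ixx * Iyy - Ixy ** 2 - k * (Ixx + Iyy) ** 2

# A red square corner on a black background: the response is positive
# near the corner and exactly zero in flat regions.
img = np.zeros((10, 10, 3))
img[5:, 5:, 0] = 1.0
R = color_harris(img)
```

Accumulating the tensor across channels before computing the corner response is what lets differently oriented gradients in different channels reinforce each other, rather than cancelling as they can after a grayscale conversion.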