    No-reference blur estimation based on the average cone ratio in the wavelet domain

    We propose a wavelet-based metric of blurriness in digital images, named CogACR (Center of gravity of the Average Cone Ratio). The metric is highly robust to noise and able to distinguish a wide range of blurriness. To automate the CogACR estimation of blur in a no-reference scenario, we introduce a novel method for image classification based on edge-content similarity. Our results indicate high accuracy of the CogACR metric for a range of natural-scene images distorted with out-of-focus blur. Within the considered blur-radius range of 0 to 10 pixels, varied in steps of 0.25 pixels, the proposed metric estimates the blur radius with an absolute error of at most 1 pixel in 80 to 90% of the images.
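    The abstract does not specify the CogACR formula, but the general idea of wavelet-domain blur estimation can be sketched with a much simpler proxy: blur removes energy from the high-frequency (detail) subbands of a wavelet decomposition. Below is a minimal illustration using a one-level Haar transform; `high_freq_ratio` is a hypothetical stand-in, not the authors' metric.

```python
import numpy as np

def haar_decompose(img):
    """One level of an orthonormal 2-D Haar wavelet transform.

    Returns the four subbands (LL, LH, HL, HH); img must have even dims.
    """
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def high_freq_ratio(img):
    """Fraction of signal energy in the detail subbands; falls as blur grows."""
    ll, lh, hl, hh = haar_decompose(img.astype(float))
    detail = np.sum(lh**2) + np.sum(hl**2) + np.sum(hh**2)
    return detail / (detail + np.sum(ll**2))
```

    A real no-reference estimator would map such a statistic to a blur radius via calibration on known blurs, which is roughly the role the classification-by-edge-content step plays for CogACR.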

    Person re-identification via efficient inference in fully connected CRF

    In this paper, we address the problem of person re-identification, i.e., retrieving instances from a gallery that were generated by the same person as a given probe image. This is very challenging because a person's appearance usually undergoes significant variations due to changes in illumination, camera angle and view, background clutter, and occlusion over the camera network. We assume that the matched gallery images should not only be similar to the probe, but also be similar to each other under a suitable metric. We express this assumption with a fully connected CRF model in which each node corresponds to a gallery image and every pair of nodes is connected by an edge. A label variable is associated with each node to indicate whether the corresponding image is from the target person. We define the unary potential for each node using existing feature-extraction and matching techniques, which reflect the similarity between the probe and a gallery image, and define the pairwise potential for each edge as a weighted combination of Gaussian kernels, which encode the appearance similarity between pairs of gallery images. This specific form of pairwise potential allows us to exploit an efficient inference algorithm to calculate the marginal distribution of each label variable in this densely connected CRF. We show the superiority of our method by applying it to public datasets and comparing with the state of the art.
    Comment: 7 pages, 4 figures
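    The inference scheme described above can be sketched with naive mean-field updates for a binary, fully connected CRF with a Gaussian pairwise kernel. This is a toy O(N^2) version under assumed potentials (the paper's efficient inference and exact kernel weights are not reproduced here); all names are illustrative.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_field_dense_crf(unary, feats, weight=1.0, bandwidth=1.0, iters=10):
    """Mean-field marginals for a fully connected binary CRF.

    unary : (N, 2) unary potentials (costs) per gallery image.
    feats : (N, D) appearance features; a Gaussian kernel on feature
            distance plays the role of the pairwise affinity, so similar
            gallery images are pushed toward the same label (Potts model).
    Returns (N, 2) approximate marginal probabilities.
    """
    d2 = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    k = weight * np.exp(-d2 / (2 * bandwidth ** 2))
    np.fill_diagonal(k, 0.0)          # no self-messages
    q = softmax(-unary)               # initialize from unaries
    for _ in range(iters):
        msg = k @ q                   # kernel-weighted neighbor beliefs
        penalty = msg[:, ::-1]        # Potts: cost of disagreeing neighbors
        q = softmax(-unary - penalty)
    return q
```

    With strong unaries on a few anchor images, the pairwise term propagates their labels to visually similar but ambiguous gallery images, which is exactly the "matched images should be similar to each other" assumption.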

    Comparison between Structural Similarity Index Metric and Human Perception

    This thesis examines image quality assessment using the Structural Similarity Index Metric (SSIM). The performance of SSIM was evaluated by comparing Mean Structural Similarity Index (MSSIM) values with Probability of Identification (PID) values. Perception experiments were designed for letter images with blur, and for letter images with blur and noise, to obtain PID values from an ensemble of observers. The other set of images used in this study were tank images for which PID data already existed. All images in the experiment were produced with Gaussian and Exponential filter shapes at various blur levels. All images at a specific blur level and filter shape were compared and MSSIM was obtained. MSSIM and PID were each compared against blur level for both filter shapes to observe the correlation between SSIM and human perception. The results show no correlation between MSSIM and PID. The image quality differences between SSIM and human perception were obtained in this thesis. The results show that SSIM cannot detect the filter-shape difference, whereas humans perceived the difference for letter images with blur in our experiments. The Probability of Identification for the Gaussian filter shape is lower than for the Exponential filter shape, which is explained by an edge-energy analysis. For tank images, and for letter images with blur and noise, the results were similar: neither humans nor MSSIM could distinguish between filter shapes.
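    For reference, the SSIM statistic being compared against PID combines luminance, contrast, and structure terms. Below is a minimal single-window ("global") version in NumPy; the MSSIM used in the thesis averages this quantity over local sliding windows, which this sketch omits.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM between two images of equal shape.

    Standard stabilizing constants c1, c2 follow the usual
    (0.01 * L)^2 and (0.03 * L)^2 convention for dynamic range L.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

    Identical images score 1.0, and blurring lowers the score; what the thesis shows is that this score moves almost identically for Gaussian- and Exponential-shaped blurs, even when human identification performance differs.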

    Image Matching via Saliency Region Correspondences

    We introduce the notion of co-saliency for image matching. Our matching algorithm combines the discriminative power of feature correspondences with the descriptive power of matching segments. The co-saliency matching score favors correspondences that are consistent with 'soft' image segmentation as well as with local point-feature matching. We express the matching model via a joint image graph (JIG) whose edge weights represent intra- as well as inter-image relations. The dominant spectral components of this graph lead to simultaneous pixel-wise alignment of the images and saliency-based synchronization of 'soft' image segmentation. The co-saliency score function, which characterizes these spectral components, can be directly used as a similarity metric as well as positive feedback for updating and establishing new point correspondences. We present experiments showing the extraction of matching regions and pointwise correspondences, and the utility of the global image similarity in the context of place recognition.
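    The spectral machinery behind this can be illustrated in miniature: given a symmetric non-negative affinity matrix over graph nodes (here standing in for the JIG, whose actual construction the abstract does not detail), power iteration recovers the leading eigenvector, whose large entries mark the dominant co-salient cluster. This is a generic sketch, not the paper's algorithm.

```python
import numpy as np

def dominant_component(affinity, iters=100):
    """Leading eigenvector of a symmetric non-negative affinity matrix.

    Power iteration converges to the Perron vector, which is non-negative
    and concentrates on the most strongly connected node cluster.
    """
    v = np.ones(affinity.shape[0])
    for _ in range(iters):
        v = affinity @ v
        v /= np.linalg.norm(v)
    return v
```

    A scalar score such as v @ affinity @ v (the Rayleigh quotient of the dominant component) can then serve as a global similarity value, analogous in spirit to the co-saliency score used for place recognition.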