No-reference blur estimation based on the average cone ratio in the wavelet domain
We propose a wavelet-based metric of blurriness in digital images, named CogACR (Center of gravity of the Average Cone Ratio). The metric is highly robust to noise and can distinguish a wide range of blur levels. To automate CogACR blur estimation in a no-reference scenario, we introduce a novel method for image classification based on edge-content similarity. Our results indicate high accuracy of the CogACR metric for a range of natural-scene images distorted with out-of-focus blur. Within the considered blur-radius range of 0 to 10 pixels, varied in steps of 0.25 pixels, the proposed metric estimates the blur radius with an absolute error of at most 1 pixel in 80 to 90% of the images.
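The abstract does not define the CogACR computation itself, but the underlying idea of wavelet-domain blur estimation is that blur suppresses fine-scale detail coefficients more than coarse-scale ones. A minimal numpy sketch of that general idea follows; the Haar transform, the box blur, and the fine-to-coarse energy ratio are all illustrative assumptions, not the authors' metric:

```python
import numpy as np

def haar2(x):
    # One level of a 2-D Haar transform: returns the approximation band
    # and the three detail subbands (horizontal, vertical, diagonal).
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, (lh, hl, hh)

def blur_proxy(img):
    # Ratio of fine-scale to coarse-scale detail energy; blur lowers it.
    ll1, d1 = haar2(img.astype(float))
    ll2, d2 = haar2(ll1)
    e1 = sum(np.abs(b).mean() for b in d1)
    e2 = sum(np.abs(b).mean() for b in d2)
    return e1 / (e2 + 1e-12)

def box_blur(img, r):
    # Simple (wrap-around) box blur of radius r, for demonstration only.
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2
```

A sharp textured image should score higher than its blurred version, which is the monotone behavior a blur-radius estimator needs before any calibration step.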
Person re-identification via efficient inference in fully connected CRF
In this paper, we address the problem of person re-identification, i.e., retrieving from a gallery the instances generated by the same person as a given probe image. This is very challenging because a person's appearance usually undergoes significant variations due to changes in illumination, camera angle and view, background clutter, and occlusion across the camera network. We assume that the matched gallery images should not only be similar to the probe, but also be similar to each other under a suitable metric. We express this assumption with a fully connected CRF model in which each node corresponds to a gallery image and every pair of nodes is connected by an edge. A label variable is associated with each node to indicate whether the corresponding image is from the target person. We define the unary potential for each node using existing feature-extraction and matching techniques, which reflect the similarity between the probe and a gallery image, and define the pairwise potential for each edge as a weighted combination of Gaussian kernels, which encode the appearance similarity between pairs of gallery images. The specific form of the pairwise potential allows us to exploit an efficient inference algorithm to calculate the marginal distribution of each label variable in this densely connected CRF. We show the superiority of our method by applying it to public datasets and comparing with the state of the art.
Comment: 7 pages, 4 figures
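Fully connected CRFs with Gaussian pairwise kernels are typically solved with mean-field inference; the abstract does not spell out the authors' algorithm, so the following is only a naive O(n²) mean-field sketch for a binary ("target person" vs. not) fully connected CRF with a Potts pairwise term. The feature choice, kernel bandwidth, and all names are illustrative assumptions:

```python
import numpy as np

def mean_field_crf(unary, feats, bandwidth=1.0, iters=10):
    """Naive mean-field inference for a binary fully connected CRF.

    unary: (n, 2) label costs (negative log-likelihoods) per gallery node.
    feats: (n, d) appearance features; the pairwise weight k_ij is a
           Gaussian kernel on feature distance, as the abstract describes.
    Returns (n, 2) approximate label marginals.
    """
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)                 # no self-connections
    q = np.exp(-unary)
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(iters):
        msg = K @ q                          # expected agreeing neighbour mass
        # Potts pairwise term: pay k_ij for each neighbour expected to disagree
        cost = unary + (K.sum(axis=1, keepdims=True) - msg)
        q = np.exp(-cost)
        q /= q.sum(axis=1, keepdims=True)
    return q
```

The effect the paper relies on is visible even at toy scale: a node with a weakly wrong unary is pulled to the correct label by its appearance-similar neighbours.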
Comparison between Structural Similarity Index Metric and Human Perception
This thesis examines image quality assessment using the Structural Similarity Index Metric (SSIM). The performance of SSIM was evaluated by comparing Mean Structural Similarity Index (MSSIM) values with Probability of Identification (PID) values. Perception experiments were designed for letter images with blur, and letter images with blur and noise, to obtain PID values from an ensemble of observers. The other set of images used in this study were tank images for which PID data already existed. All images were generated with Gaussian and Exponential filter shapes at various blur levels. All images at a specific blur level and filter shape were compared and MSSIM was obtained. MSSIM and PID were each compared against blur level for both filter shapes to examine the correlation between SSIM and human perception. The results show no correlation between MSSIM and PID. The image quality differences between SSIM and human perception were characterized in this thesis. The results show that SSIM cannot detect the filter-shape difference, whereas humans perceived the difference for letter images with blur in our experiments. The Probability of Identification for the Gaussian filter shape is lower than for the Exponential filter shape, which is explained by an edge-energy analysis. The results for tank images and for letter images with blur and noise were similar: neither humans nor MSSIM could distinguish between filter shapes.
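For reference, the SSIM the thesis evaluates has a standard closed form. A single-window numpy sketch is below; the MSSIM used in such studies averages this quantity over local (typically 11x11 Gaussian-weighted) windows, a refinement this simplification omits:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    # SSIM computed over one window covering the whole image, using the
    # standard stabilizing constants C1 = (0.01*L)^2, C2 = (0.03*L)^2.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

The metric is 1 for identical images and decreases as luminance, contrast, or structure diverge, which is the quantity being correlated against PID above.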
Image Matching via Saliency Region Correspondences
We introduce the notion of co-saliency for image matching. Our matching algorithm combines the discriminative power of feature correspondences with the descriptive power of matching segments. The co-saliency matching score favors correspondences that are consistent with 'soft' image segmentation as well as with local point-feature matching. We express the matching model via a joint image graph (JIG) whose edge weights represent intra- as well as inter-image relations. The dominant spectral components of this graph lead to simultaneous pixel-wise alignment of the images and saliency-based synchronization of 'soft' image segmentation. The co-saliency score function, which characterizes these spectral components, can be used directly as a similarity metric as well as positive feedback for updating and establishing new point correspondences. We present experiments showing the extraction of matching regions and pointwise correspondences, and the utility of the global image similarity in the context of place recognition.
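The "dominant spectral components" of a weighted graph are the leading eigenvectors of its affinity matrix, which can be found with power iteration. The sketch below is only an illustration of that generic building block, not the JIG construction or co-saliency score from the abstract:

```python
import numpy as np

def dominant_component(W, iters=200):
    # Power iteration for the leading eigenvector of a symmetric,
    # positive semi-definite affinity matrix W (graph edge weights).
    v = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    for _ in range(iters):
        v = W @ v
        v /= np.linalg.norm(v)
    return v
```

In a joint-image-graph setting, the entries of this vector over the two images' nodes indicate which pixels or segments participate in the dominant shared (co-salient) structure.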
An automated method for comparing motion artifacts in cine four-dimensional computed tomography images.
The aim of this study is to develop an automated method to objectively compare motion artifacts in two four-dimensional computed tomography (4D CT) image sets and identify the one that would appear to human observers to have fewer or smaller artifacts. Our proposed method is based on the difference of the normalized correlation coefficients between edge slices at couch transitions, which we hypothesize is a suitable metric for identifying motion artifacts. We evaluated the method using ten pairs of 4D CT image sets that showed subtle differences in artifacts between the images in each pair, identifiable by human observers. One set of 4D CT images was sorted using breathing traces in which our clinically implemented 4D CT sorting software miscalculated the respiratory phase, which expectedly led to artifacts in the images. The other set consisted of the same images, sorted using the same breathing traces but with corrected phases. We then calculated the normalized correlation coefficients between edge slices at all couch transitions for all respiratory phases in both image sets to evaluate motion artifacts. For nine image-set pairs, our method identified the 4D CT sets sorted using the breathing traces with corrected respiratory phase as the images with fewer or smaller artifacts, whereas for one pair no difference was noted. Two observers independently assessed the accuracy of our method; both identified the same nine image sets, sorted using the corrected-phase breathing traces, as having fewer or smaller artifacts. In summary, using ten pairs of 4D CT image sets, we have demonstrated proof of principle that our method can replicate the judgment of two human observers in identifying the image set with fewer or smaller artifacts.
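The normalized correlation coefficient at the heart of the method is the standard Pearson correlation between two slices; a minimal numpy sketch is below (the pairing of edge slices at couch transitions is our own illustrative framing, not the study's exact pipeline):

```python
import numpy as np

def ncc(a, b):
    # Normalized (Pearson) correlation between two CT slices.
    # 1.0 means identical up to brightness/contrast scaling; lower
    # values suggest a mismatch, e.g. a motion artifact where two
    # couch positions meet.
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Comparing `ncc` across the two candidate sortings of the same couch transition, the sorting with the higher coefficient would be flagged as having the smaller artifact.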