
    Image blur estimation based on the average cone of ratio in the wavelet domain

    In this paper, we propose a new algorithm for objective blur estimation using wavelet decomposition. The central idea of our method is to estimate blur as a function of the center of gravity of the average cone ratio (ACR) histogram. The key properties of ACR are twofold: it is a strong indicator of local edge regularity, and it is nearly insensitive to noise. We use these properties to estimate the blurriness of an image irrespective of its noise level; in particular, the center of gravity of the ACR histogram serves as a blur metric. The method is applicable both when a reference image is available and when there is no reference. The results demonstrate consistent performance of the proposed metric across a wide class of natural images and a wide range of out-of-focus blur. Moreover, the proposed method shows remarkable insensitivity to noise compared with other wavelet-domain methods.
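    A minimal, illustrative sketch of the idea, assuming PyWavelets and NumPy are available: scale-to-scale ratios of wavelet detail coefficients stand in for the full average cone ratio, and the center of gravity of their histogram is returned as the blur score. The function name, the edge threshold, and the two-level decomposition are assumptions, not the paper's exact procedure.

```python
# Simplified ACR-style blur metric sketch (not the paper's exact algorithm).
import numpy as np
import pywt

def acr_blur_metric(image, wavelet="db2", eps=1e-6, edge_thresh=None):
    # Two-level 2D wavelet decomposition: [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)].
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=2)
    _, (h2, v2, d2), (h1, v1, d1) = coeffs

    fine = np.abs(h1) + np.abs(v1) + np.abs(d1)     # fine-scale edge energy
    coarse = np.abs(h2) + np.abs(v2) + np.abs(d2)   # coarse-scale edge energy

    # Downsample the fine map and crop both so the scales align spatially.
    fine_ds = fine[::2, ::2]
    h = min(fine_ds.shape[0], coarse.shape[0])
    w = min(fine_ds.shape[1], coarse.shape[1])
    fine_ds, coarse = fine_ds[:h, :w], coarse[:h, :w]

    # Keep only locations with significant edge activity.
    if edge_thresh is None:
        edge_thresh = 0.1 * coarse.max()
    mask = coarse > edge_thresh
    if not np.any(mask):
        return 0.0

    # Coarse-to-fine coefficient ratio: larger values correspond to
    # smoother (more regular, i.e. more blurred) edges.
    ratios = coarse[mask] / (fine_ds[mask] + eps)

    # Center of gravity of the ratio histogram serves as the blur score.
    hist, edges = np.histogram(ratios, bins=64)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(np.sum(centers * hist) / (np.sum(hist) + eps))
```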

    Fast bilateral-space stereo for synthetic defocus


    Image quality assessment for iris biometric

    Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is the most reliable biometric in terms of recognition and identification performance. However, the performance of these systems is affected by poor-quality imaging. In this work, we extend previous research on iris quality assessment by analyzing the effect of seven quality factors on the performance of a traditional iris recognition system: defocus blur, motion blur, off-angle, occlusion, specular reflection, lighting, and pixel count. We conclude that defocus blur, motion blur, and off-angle are the factors that affect recognition performance the most. We further design a fully automated iris image quality evaluation block that operates in two steps: first, each factor is estimated individually; second, the estimated factors are fused using the Dempster-Shafer theory of evidential reasoning. The designed block is tested on two datasets, CASIA 1.0 and a dataset collected at WVU. (Abstract shortened by UMI.)
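    A minimal sketch of the fusion step, assuming each quality factor has already been mapped to a score in [0, 1]: the scores are converted to basic mass assignments over the frame {good, poor} and combined with Dempster's rule. The score-to-mass mapping and the confidence constant are illustrative assumptions, not the paper's procedure.

```python
# Dempster-Shafer combination of per-factor quality evidence (illustrative).
from functools import reduce

FRAME = ("good", "poor", "uncertain")   # 'uncertain' = the whole frame {good, poor}

def score_to_mass(score, confidence=0.8):
    """Map a quality score in [0, 1] to a basic mass assignment (assumed mapping)."""
    return {"good": confidence * score,
            "poor": confidence * (1.0 - score),
            "uncertain": 1.0 - confidence}

def dempster_combine(m1, m2):
    """Combine two mass functions over {good, poor} with Dempster's rule."""
    combined = {k: 0.0 for k in FRAME}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == "uncertain":
                combined[b] += ma * mb          # whole frame intersected with b is b
            elif b == "uncertain" or a == b:
                combined[a] += ma * mb
            else:                               # good vs. poor: conflicting evidence
                conflict += ma * mb
    norm = 1.0 - conflict                       # renormalize by the non-conflicting mass
    return {k: v / norm for k, v in combined.items()}

# Example: fuse hypothetical defocus-blur, motion-blur and off-angle scores.
factors = [0.9, 0.6, 0.75]
masses = [score_to_mass(s) for s in factors]
fused = reduce(dempster_combine, masses)
print(fused)                                    # overall belief that the image is usable
```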

    Structural similarity loss for learning to fuse multi-focus images

    © 2020 by the authors. Licensee MDPI, Basel, Switzerland. Convolutional neural networks have recently been used for multi-focus image fusion. However, some existing methods resort to adding Gaussian blur to focused images to simulate defocus, thereby generating data (with ground truth) for supervised learning. Moreover, they classify pixels as ‘focused’ or ‘defocused’ and use the classification results to construct the fusion weight maps, which necessitates a series of post-processing steps. In this paper, we present an end-to-end learning approach for directly predicting the fully focused output image from multi-focus input image pairs. The proposed approach uses a CNN architecture trained to perform fusion without the need for ground-truth fused images. The CNN exploits structural similarity (SSIM), a metric widely accepted for fused-image quality evaluation, to compute the loss. Moreover, we use the standard deviation of a local image window to automatically estimate the importance of each source image in the final fused image when designing the loss function. Our network accepts images of variable sizes, so we can train on real benchmark datasets instead of simulated ones. The model is a feed-forward, fully convolutional neural network that can process images of variable sizes at test time. Extensive evaluation on benchmark datasets shows that our method outperforms, or is comparable with, existing state-of-the-art techniques on both objective and subjective benchmarks.
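    A rough NumPy/SciPy/scikit-image sketch of the loss idea described above: each local window of the fused output is compared (via SSIM) against the source images, with the local standard deviation acting as the importance weight. The window size, data range, and exact weighting scheme are assumptions; this is not the paper's training code.

```python
# Sketch of an SSIM-based, ground-truth-free fusion loss (illustrative).
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import structural_similarity

def local_std(img, size=7):
    # Local standard deviation via running mean and mean-of-squares.
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fusion_ssim_loss(fused, src_a, src_b, size=7):
    # Per-pixel SSIM maps of the fused result against each source image
    # (images assumed to be floats in [0, 1]).
    _, ssim_a = structural_similarity(fused, src_a, win_size=size,
                                      data_range=1.0, full=True)
    _, ssim_b = structural_similarity(fused, src_b, win_size=size,
                                      data_range=1.0, full=True)
    # Local standard deviation as a focus/importance cue: the sharper
    # (higher-std) source dominates each window.
    std_a, std_b = local_std(src_a, size), local_std(src_b, size)
    w_a = std_a / (std_a + std_b + 1e-8)
    ssim_mix = w_a * ssim_a + (1.0 - w_a) * ssim_b
    return 1.0 - float(ssim_mix.mean())          # minimize to train the fusion CNN
```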

    2D Iterative MAP Detection: Principles and Applications in Image Restoration

    The paper provides a theoretical framework for two-dimensional iterative maximum a posteriori detection. The generalization builds on the concepts of the BCJR and SOVA algorithms, i.e., the classical (one-dimensional) iterative detectors used in telecommunication applications. We generalize the one-dimensional detection problem by treating the spatial ISI kernel as a two-dimensional finite state machine (2D FSM) representing a network of spatially concatenated elements. The cellular structure topology defines the design of the 2D iterative decoding network, where each cell is a general combination-marginalization statistical element (SISO module) exchanging discrete probability density functions (information metrics) with neighboring cells. In this paper, we statistically analyse the performance of various topologies with respect to their application in image restoration. The iterative detection algorithm was applied to the task of binarizing images taken from a CCD camera. The reconstruction includes suppression of lens defocus, suppression of CCD sensor noise, and interpolation (demosaicing). The simulations show that the algorithm provides satisfactory results even when the input image is under-sampled due to the Bayer mask.
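    The full method builds a network of SISO cells exchanging probability metrics; as a much smaller illustration of the underlying MAP criterion only, here is a sketch using iterated conditional modes (ICM) for binarization under a known symmetric blur kernel, a Gaussian noise model, and an Ising-style smoothness prior. The kernel, noise level, prior weight, and interior-only sweep are assumptions, and this is explicitly not the paper's SISO algorithm.

```python
# ICM-style MAP binarization sketch (simplification of the iterative detection idea).
import numpy as np
from scipy.ndimage import convolve

def icm_binarize(y, kernel, beta=0.5, sigma=0.1, sweeps=5):
    """Binarize a blurred, noisy image y given a known symmetric blur kernel."""
    x = (y > y.mean()).astype(float)                 # crude initial estimate
    pred = convolve(x, kernel, mode="constant")      # current blurred prediction h * x
    kh, kw = kernel.shape
    ch, cw = kh // 2, kw // 2
    H, W = y.shape

    for _ in range(sweeps):
        # Interior pixels only, so the local update window always fits (simplification).
        for i in range(ch, H - ch):
            for j in range(cw, W - cw):
                win = np.s_[i - ch:i + ch + 1, j - cw:j + cw + 1]
                nbrs = [x[i - 1, j], x[i + 1, j], x[i, j - 1], x[i, j + 1]]
                best_val, best_energy = x[i, j], None
                for cand in (0.0, 1.0):
                    # Flipping x[i, j] only changes pred inside the kernel window
                    # (valid for a symmetric kernel and interior pixels).
                    delta = cand - x[i, j]
                    pred_win = pred[win] + delta * kernel
                    data = np.sum((y[win] - pred_win) ** 2) / (2 * sigma ** 2)
                    prior = beta * sum(n != cand for n in nbrs)   # Ising smoothness
                    energy = data + prior
                    if best_energy is None or energy < best_energy:
                        best_energy, best_val = energy, cand
                if best_val != x[i, j]:
                    pred[win] += (best_val - x[i, j]) * kernel
                    x[i, j] = best_val
    return x
```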

    Spatial Stimuli Gradient Based Multifocus Image Fusion Using Multiple Sized Kernels

    Multi-focus image fusion extracts the focused areas from all source images and combines them into a new image that contains all focused objects. This paper proposes a spatial-domain fusion scheme for multi-focus images using kernels of multiple sizes. First, the source images are pre-processed with a contrast enhancement step; then soft and hard decision maps are generated by applying a sliding-window technique with multiple sized kernels to the gradient images. The hard decision map selects the accurate focus information from the source images, whereas the soft decision map selects the basic focus information and contains the fewest falsely detected focused/unfocused regions. These decision maps are further processed to compute the final focus map. The gradient images are constructed with a state-of-the-art edge detection technique, the spatial stimuli gradient sketch model, which computes the local stimuli from perceived brightness and thus enhances the essential structural and edge information. Detailed experimental results demonstrate that the proposed multi-focus image fusion algorithm performs better than other well-known state-of-the-art multi-focus image fusion methods in terms of both subjective visual perception and objective quality evaluation metrics.
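    A simplified sketch of the multi-kernel decision-map idea: local gradient activity is aggregated with several window sizes, the hard map keeps pixels where every window size agrees on which source is sharper, and the soft map takes a majority vote. The Sobel gradient, the kernel sizes, and the per-pixel selection at the end are assumptions; the paper uses the spatial stimuli gradient sketch model and further post-processing.

```python
# Multi-kernel focus decision maps and a naive per-pixel fusion (illustrative).
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def gradient_energy(img):
    # Gradient magnitude as a stand-in for the paper's stimuli gradient model.
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    return np.hypot(gx, gy)

def decision_maps(src_a, src_b, kernel_sizes=(3, 7, 11)):
    ga, gb = gradient_energy(src_a), gradient_energy(src_b)
    votes = np.zeros(src_a.shape)
    for k in kernel_sizes:
        # Local focus activity measured with a k x k sliding window.
        votes += uniform_filter(ga, k) > uniform_filter(gb, k)
    hard = votes == len(kernel_sizes)            # all kernel sizes agree: A is focused
    soft = votes >= len(kernel_sizes) / 2.0      # majority vote across kernel sizes
    return hard, soft

def fuse(src_a, src_b):
    _, soft = decision_maps(src_a, src_b)
    return np.where(soft, src_a, src_b)          # pick the sharper source per pixel
```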

    Estimation of image quality factors for face recognition

    Over the past few years, verification and identification of humans using biometrics has gained the attention of researchers and of the public in general. Face recognition systems are used by the public and the government and are applied in different facets of life, including security and the identification of criminals and terrorists. Because of the importance of these applications, face recognition systems must be as accurate as possible. Research has shown that poor image quality degrades the performance of face recognition systems. Most previous research has focused on designing face recognition algorithms that deal with or compensate for a single effect such as blur, lighting conditions, pose, or expression. In this thesis we identify a number of factors influencing recognition performance, conduct an extensive study of the effects of image quality factors on recognition performance, and discuss methods to estimate these quality factors.