
    Multisensor image fusion approach utilizing hybrid pre-enhancement and double nonsubsampled contourlet transform

    A multisensor image fusion approach based on hybrid-domain image enhancement and a double nonsubsampled contourlet transform (NSCT) is proposed. The hybrid-domain pre-enhancement algorithm improves the contrast of the visible color image. Different fusion rules are then selected and applied to obtain the fusion results. The double NSCT framework is introduced to achieve better fusion performance than the conventional single-NSCT framework. Experimental results on fused images and objective performance metrics demonstrate the advantages of the proposed approach.
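The abstract does not specify its fusion rules, but coefficient-level fusion in a multiscale transform domain typically averages the low-frequency subbands and keeps the larger-magnitude coefficient in the high-frequency subbands. A minimal sketch of that generic rule (on plain NumPy arrays standing in for NSCT subbands — the actual double-NSCT decomposition is not reproduced here):

```python
import numpy as np

def fuse_subbands(low_a, low_b, high_a, high_b):
    """Illustrative fusion rules: average the low-frequency subbands,
    keep the larger-magnitude coefficient in the high-frequency subbands.
    The arrays stand in for subbands of a multiscale transform such as NSCT."""
    fused_low = 0.5 * (low_a + low_b)
    mask = np.abs(high_a) >= np.abs(high_b)   # pick the stronger detail coefficient
    fused_high = np.where(mask, high_a, high_b)
    return fused_low, fused_high
```

In a full pipeline these fused subbands would be fed back through the inverse transform to reconstruct the fused image.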

    Color space analysis for iris recognition

    This thesis investigates issues related to the processing of multispectral and color infrared images of the iris. When utilizing the color bands of the electromagnetic spectrum, the eye color and the components of texture (luminosity and chromaticity) must be considered. This work examines the effect of eye color on texture-based iris recognition in both the near-IR and visible bands. A novel score-level fusion algorithm for multispectral iris recognition is presented in this regard. The fusion algorithm, based on evidence that the matching performance of a texture-based encoding scheme is affected by the quality of texture within the original image, ranks the spectral bands of the image by texture quality and designs a fusion rule based on these rankings. Color space analysis, to determine an optimal representation scheme, is also examined in this thesis. Color images are transformed from the sRGB color space to the CIE Lab, YCbCr, CMYK, and HSV color spaces prior to encoding and matching. Also, enhancement methods to increase the contrast of the texture within the iris, without altering the chromaticity of the image, are discussed. Finally, cross-band matching is performed to illustrate the correlation between eye color and specific bands of the color image.
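One of the transforms mentioned above, sRGB to YCbCr, is a fixed linear map and can be sketched directly. The matrix below is the standard full-range BT.601 convention; the thesis may use a different variant (offsets and clipping conventions differ between standards):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an 8-bit RGB image of shape (H, W, 3) to full-range YCbCr
    using the BT.601 coefficients (one common convention; others exist)."""
    m = np.array([[ 0.299   ,  0.587   ,  0.114   ],   # luma Y
                  [-0.168736, -0.331264,  0.5     ],   # blue-difference Cb
                  [ 0.5     , -0.418688, -0.081312]])  # red-difference Cr
    ycc = rgb.astype(np.float64) @ m.T
    ycc[..., 1:] += 128.0   # center the chroma channels at 128
    return ycc
```

A neutral gray maps to Y = Cb = Cr, so chromatic texture ends up isolated in the Cb/Cr planes while luminosity stays in Y — the separation the thesis exploits.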

    Perceiving Unknown in Dark from Perspective of Cell Vibration

    Low light often degrades image quality and can even cause visual tasks to fail. Existing image enhancement technologies are prone to over-enhancement or color distortion, and their adaptability is fairly limited. To address these problems, we use the mechanism of biological cell vibration to interpret the formation of color images. In particular, we propose a simple yet effective cell vibration energy (CVE) mapping method for image enhancement. Based on a hypothetical color-formation mechanism, the method first uses cell vibration and photoreceptor correction to determine the photon flow energy for each color channel, and then reconstructs the color image under the maximum-energy constraint of the visual system. Photoreceptor cells adaptively adjust their feedback to the light intensity of the perceived environment. Based on this understanding, we propose a new Gamma auto-adjustment method that modifies the Gamma value for each individual image. Finally, a fusion method combining CVE and Gamma auto-adjustment (CVE-G) is proposed to reconstruct the color image under a lightness constraint. Experimental results show that the proposed algorithm is superior to six state-of-the-art methods in avoiding over-enhancement and color distortion, restoring the textures of dark areas, and reproducing natural colors. The source code will be released at https://github.com/leixiaozhou/CVE-G-Resource-Base.
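The abstract does not give the paper's Gamma auto-adjustment formula. A common image-dependent heuristic — shown here purely as an illustration of the idea, not as the paper's rule — picks the gamma that maps the image's mean brightness to mid-gray:

```python
import numpy as np

def auto_gamma(image):
    """Pick a gamma from the image's mean brightness: a dark image
    (normalized mean < 0.5) gets gamma < 1, which brightens it.
    This is a generic heuristic, not the CVE-G paper's exact rule."""
    x = np.clip(image.astype(np.float64) / 255.0, 1e-6, 1.0)
    # solve mean**gamma == 0.5 for gamma
    gamma = np.log(0.5) / np.log(max(x.mean(), 1e-6))
    return np.clip(255.0 * x ** gamma, 0.0, 255.0).astype(np.uint8)
```

Because gamma is derived from each image's own statistics, the correction adapts automatically: a mid-toned image gets gamma near 1 (almost no change), while a dark one is lifted.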

    Multi-Modal Enhancement Techniques for Visibility Improvement of Digital Images

    Image enhancement techniques for visibility improvement of 8-bit color digital images based on spatial-domain, wavelet-transform-domain, and multiple-image fusion approaches are investigated in this dissertation research. In the spatial-domain category, two enhancement algorithms are developed to deal with problems associated with images captured from scenes with high dynamic ranges. The first technique is based on an illuminance-reflectance (I-R) model of the scene irradiance. The dynamic range compression of the input image is achieved by a nonlinear transformation of the estimated illuminance based on a windowed inverse sigmoid transfer function. A single-scale, neighborhood-dependent contrast enhancement process is proposed to enhance the high-frequency components of the illuminance, which compensates for the contrast degradation of the mid-tone frequency components caused by dynamic range compression. The intensity image obtained by combining the enhanced illuminance and the extracted reflectance is then converted to an RGB color image through linear color restoration utilizing the color components of the original image. The second technique, named AINDANE, is a two-step approach comprising adaptive luminance enhancement and adaptive contrast enhancement. An image-dependent nonlinear transfer function is designed for dynamic range compression, and a multiscale, image-dependent neighborhood approach is developed for contrast enhancement. Real-time processing of video streams is realized with the I-R-model-based technique due to its high processing speed, while AINDANE produces higher-quality enhanced images due to its multiscale contrast enhancement. Both algorithms exhibit balanced luminance and contrast enhancement, higher robustness, and better color consistency when compared with conventional techniques.
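The illuminance-reflectance pipeline described above can be sketched in a few lines. This is a simplified illustration: the illuminance estimate here is a plain Gaussian low-pass, and the dissertation's windowed inverse sigmoid is replaced by an ordinary logistic curve, since the abstract does not give its parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ir_enhance(intensity, sigma=15.0):
    """Illuminance-reflectance sketch: estimate illuminance L with a Gaussian
    low-pass, take reflectance R = I / L, compress L's dynamic range with a
    sigmoid-shaped transfer (a stand-in for the windowed inverse sigmoid),
    then recombine. `intensity` is an 8-bit grayscale image."""
    img = np.clip(intensity.astype(np.float64) / 255.0, 1e-4, 1.0)
    L = np.clip(gaussian_filter(img, sigma), 1e-4, 1.0)   # illuminance estimate
    R = img / L                                           # reflectance
    L_enh = 1.0 / (1.0 + np.exp(-8.0 * (L - 0.35)))       # lift dark illuminance
    return np.clip(255.0 * L_enh * R, 0.0, 255.0).astype(np.uint8)
```

Because only the smooth illuminance is remapped while the reflectance (local detail) passes through untouched, dark regions are lifted without flattening texture — the core idea of the I-R approach.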
In the transform-domain approach, wavelet-transform-based image denoising and contrast enhancement algorithms are developed. Denoising is treated as a maximum a posteriori (MAP) estimation problem; a bivariate probability density function model is introduced to exploit the inter-level dependency among the wavelet coefficients. In addition, an approximate solution to the MAP estimation problem is proposed to avoid the complex iterative computations needed for a numerical solution. This relatively low-complexity image denoising algorithm, implemented with the dual-tree complex wavelet transform (DT-CWT), produces high-quality denoised images.
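A well-known closed-form approximation of this kind is the Şendur-Selesnick bivariate shrinkage rule, which shrinks a wavelet coefficient jointly with its parent at the next coarser level. Whether the dissertation uses exactly this rule is an assumption; the sketch is shown as a representative instance of an approximate bivariate MAP estimator:

```python
import numpy as np

def bivariate_shrink(w, w_parent, sigma_n, sigma):
    """Shrink wavelet coefficient w given its parent w_parent (next coarser
    level), noise std sigma_n, and signal std sigma. This is the closed-form
    Sendur-Selesnick bivariate shrinkage under a bivariate Laplacian prior."""
    r = np.sqrt(w ** 2 + w_parent ** 2)               # joint magnitude
    thresh = np.sqrt(3.0) * sigma_n ** 2 / sigma      # data-dependent threshold
    gain = np.maximum(r - thresh, 0.0) / np.maximum(r, 1e-12)
    return w * gain                                   # soft-shrunk coefficient
```

Small coefficients with small parents are zeroed (likely noise), while a coefficient backed by a strong parent survives almost intact — this is the inter-level dependency the bivariate model captures.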

    Investigation on advanced image search techniques

    Content-based image search, the retrieval of images based on the similarity of their visual contents (such as color, texture, and shape) to a query image, is an active research area due to its broad applications. Color, for example, provides powerful information for image search and classification. This dissertation investigates advanced image search techniques and presents new color descriptors for image search and classification, as well as robust image enhancement and segmentation methods for iris recognition. First, several new color descriptors have been developed for color image search. Specifically, a new oRGB-SIFT descriptor, which integrates the oRGB color space and the Scale-Invariant Feature Transform (SIFT), is proposed for image search and classification. The oRGB-SIFT descriptor is further integrated with other color SIFT features to produce the novel Color SIFT Fusion (CSF), Color Grayscale SIFT Fusion (CGSF), and CGSF+PHOG descriptors for image category search with applications to biometrics. Image classification is implemented using a novel EFM-KNN classifier, which combines the Enhanced Fisher Model (EFM) and the K Nearest Neighbor (KNN) decision rule. Experimental results on four large-scale, grand-challenge datasets show that the proposed oRGB-SIFT descriptor improves recognition performance over other color SIFT descriptors, and that the CSF, CGSF, and CGSF+PHOG descriptors perform better still. The fusion of the Color SIFT descriptors (CSF) and the Color Grayscale SIFT descriptor (CGSF) shows significant improvement in classification performance, which indicates that the various color-SIFT descriptors and the grayscale-SIFT descriptor are not redundant for image search. Second, four novel color Local Binary Pattern (LBP) descriptors are presented for scene image and image texture classification. Specifically, the oRGB-LBP descriptor is derived in the oRGB color space.
The other three color LBP descriptors, namely the Color LBP Fusion (CLF), the Color Grayscale LBP Fusion (CGLF), and the CGLF+PHOG descriptors, are obtained by integrating the oRGB-LBP descriptor with additional image features. Experimental results on three large-scale, grand-challenge datasets show that the proposed descriptors improve scene image and image texture classification performance. Finally, a new iris recognition method based on a robust iris segmentation approach is presented for improving iris recognition performance. The proposed segmentation approach applies power-law transformations for more accurate detection of the pupil region, which significantly reduces the candidate limbic boundary search space and thereby increases detection accuracy and efficiency. Because the limbic circle, whose center lies within a close range of the pupil center, is selectively detected, the eyelid detection approach leads to improved iris recognition performance. Experiments on the Iris Challenge Evaluation (ICE) database show the effectiveness of the proposed method.
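The basic LBP operator underlying all four descriptors is standard and compact enough to sketch. Each pixel is encoded by comparing its eight 3x3 neighbours against the centre and packing the comparison bits into a byte (the color variants above apply this per channel in the chosen color space):

```python
import numpy as np

def lbp_8(image):
    """Basic 3x3 local binary pattern: compare each pixel's 8 neighbours
    against the centre and pack the results into an 8-bit code.
    Returns an array two pixels smaller in each dimension (borders dropped)."""
    img = image.astype(np.int32)
    c = img[1:-1, 1:-1]                       # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbours
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

A histogram of these codes over the image (or over image cells) is what serves as the texture descriptor fed to the classifier.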

    ANALYSIS OF IMAGE ENHANCEMENT ALGORITHMS FOR HYPERSPECTRAL IMAGES

    This thesis presents an application of image enhancement techniques for color and panchromatic imagery to hyperspectral imagery. A combination of previously used algorithms for multi-channel images is applied in a novel way to incorporate multiple bands within a single hyperspectral image. The steps of the image enhancement are image degradation, image correlation grouping, low-resolution image fusion, and fused-image interpolation. Image degradation is accomplished by adding Gaussian noise to each band along with image down-sampling. Image grouping uses two-dimensional correlation coefficients to match bands within the hyperspectral image. For image fusion, a discrete wavelet frame transform (DWFT) is used. For the interpolation, three methods are used to increase the resolution of the image: linear minimum mean squared error (LMMSE), a maximum-entropy algorithm, and a regularized algorithm. These algorithms are then used in combination with principal component analysis (PCA), which provides data compression. This saves time at the expense of increasing the error between the true image and the estimated hyperspectral image after PCA. Finally, a cost function is used to find the optimal level of compression that minimizes the error while also decreasing computational time.
    Lieutenant Junior Grade, United States Navy. Approved for public release; distribution is unlimited.
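The correlation-grouping step can be sketched directly: flatten each band, build the band-by-band correlation matrix, and greedily group bands that correlate strongly with a group's seed band. The greedy strategy and the 0.9 threshold are illustrative assumptions, since the abstract specifies only that 2-D correlation coefficients are used for matching:

```python
import numpy as np

def group_bands(cube, threshold=0.9):
    """Greedy grouping of hyperspectral bands by 2-D correlation.
    cube has shape (bands, H, W); bands whose correlation with a group's
    seed band meets the threshold join that group. Threshold and greedy
    assignment are illustrative choices, not the thesis's exact scheme."""
    flat = cube.reshape(cube.shape[0], -1).astype(np.float64)
    corr = np.corrcoef(flat)          # band-by-band correlation matrix
    groups, assigned = [], set()
    for b in range(cube.shape[0]):
        if b in assigned:
            continue
        group = [b] + [j for j in range(b + 1, cube.shape[0])
                       if j not in assigned and corr[b, j] >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups
```

Each resulting group of mutually similar bands can then be fused with the DWFT and interpolated as a unit, which is what makes the single-image fusion machinery applicable to a many-band cube.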