
    Enhancement of Single and Composite Images Based on Contourlet Transform Approach

    Image enhancement is an imperative step in almost every image processing algorithm. Numerous image enhancement algorithms have been developed for grayscale images, even though grayscale images are rarely used in recent applications. This thesis proposes new image enhancement techniques for 8-bit single and composite digital color images. Recently, it has become evident that wavelet transforms are not necessarily best suited for images. Therefore, the enhancement approaches are based on a new 'true' two-dimensional transform called the contourlet transform. The proposed enhancement techniques discussed in this thesis are developed from an understanding of the multiresolution property of the contourlet transform. This research also investigates the effects of using different color space representations for color image enhancement applications. Based on this investigation, an optimal color space is selected for both the single-image and the composite-image enhancement approaches. The objective evaluation shows that the new enhancement method is superior not only to commonly used transform methods (e.g. the wavelet transform) but also to various spatial-domain models (e.g. histogram equalization). The results are encouraging, and the enhancement algorithms have proved to be robust and reliable.
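    The contourlet stage itself is not reproduced here (there is no standard-library implementation), but the color-space question the abstract raises can be sketched. The pure-Python sketch below moves RGB pixels into YCbCr (one plausible luminance-chrominance space; the thesis's chosen space is not named in the abstract), applies the baseline histogram equalization the abstract compares against to the luma channel only, and converts back. All function names are ours.

```python
# Sketch: luminance-only enhancement in a decorrelated color space.
# Histogram equalization stands in for the enhancement step; it is
# applied only to luma (Y) so chrominance is left untouched.

def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr for 8-bit values."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

def equalize(values, levels=256):
    """Plain histogram equalization of a flat list of 8-bit intensities."""
    hist = [0] * levels
    for v in values:
        hist[int(round(v))] += 1
    cdf, total, acc = [], len(values), 0
    for h in hist:
        acc += h
        cdf.append(acc / total)
    return [(levels - 1) * cdf[int(round(v))] for v in values]

def enhance_rgb(pixels):
    """pixels: list of (r, g, b) tuples; returns enhanced copies."""
    ycc = [rgb_to_ycbcr(r, g, b) for r, g, b in pixels]
    y_eq = equalize([y for y, _, _ in ycc])
    out = []
    for (_, cb, cr), y in zip(ycc, y_eq):
        r, g, b = ycbcr_to_rgb(y, cb, cr)
        out.append(tuple(max(0, min(255, round(c))) for c in (r, g, b)))
    return out
```

    Working on luma alone is what makes a decorrelated space attractive for enhancement: the chroma planes carry the color and are passed through unchanged.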

    Toward reduction of artifacts in fused images

    Most pixel-level satellite image fusion methodologies introduce false spatial details, i.e. artifacts, in the resulting fused images. In many cases these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the images increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (the panchromatic image and each band of the multispectral image) with the box-counting algorithm and a windowing process. The average of the source-image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT), using the à trous algorithm (WAT). Two different scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach using the WAT method allows the fusion process to adapt to the roughness and shape of the regions present in the image to be fused. This improves the quality of the fused images and their classification results when compared with the original WAT method.
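    The box-counting FD estimate at the heart of this approach can be sketched in a few lines: count the boxes of side s that touch foreground, then fit the slope of log N(s) against log(1/s). The per-window FDM generation, the indexing to [0, 1], and the WAT fusion itself are not reproduced here, and the particular box sizes below are an assumption.

```python
import math

def box_count(grid, s):
    """Count s x s boxes of a square binary grid that contain a 1."""
    n = len(grid)
    count = 0
    for i in range(0, n, s):
        for j in range(0, n, s):
            if any(grid[ii][jj]
                   for ii in range(i, min(i + s, n))
                   for jj in range(j, min(j + s, n))):
                count += 1
    return count

def fractal_dimension(grid, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) against log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(grid, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

    The windowing step in the paper amounts to running this estimator over local windows of each source image, producing one FD value per window.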

    An efficient adaptive fusion scheme for multifocus images in wavelet domain using statistical properties of neighborhood

    In this paper we present a novel fusion rule that efficiently fuses multifocus images in the wavelet domain by taking a weighted average of pixels. The weights are decided adaptively using the statistical properties of the neighborhood. The main idea is that the eigenvalue of the unbiased estimate of the covariance matrix of an image block depends on the strength of edges in the block, and thus makes a good choice of weight for the pixel, giving more weight to a pixel with a sharper neighborhood. The performance of the proposed method has been extensively tested on several pairs of multifocus images and compared quantitatively with various existing methods using well-known parameters, including the Petrovic and Xydeas image fusion metric. Experimental results show that performance evaluation based on entropy, gradient, contrast, or deviation, the criteria widely used for fusion analysis, may not be enough. This work demonstrates that in some cases these evaluation criteria are not consistent with the ground truth. It also demonstrates that the Petrovic and Xydeas image fusion metric is a more appropriate criterion, as it correlates with the ground truth as well as with visual quality in all the tested fused images. The proposed fusion rule significantly improves contrast information while preserving edge information. The major achievement of the work is that it significantly increases the quality of the fused image, both visually and in terms of quantitative parameters, especially sharpness, with minimal fusion artifacts.
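    The block-level weight described above, the dominant eigenvalue of an unbiased covariance estimate, can be sketched as follows. One plausible reading (an assumption, since the abstract does not fix it) treats each row of the neighborhood block as one observation; power iteration then recovers the largest eigenvalue, which is valid here because a covariance matrix is positive semidefinite. The wavelet decomposition itself is not reproduced.

```python
def covariance(block):
    """Unbiased covariance of a block (>= 2 rows), rows as observations."""
    rows, cols = len(block), len(block[0])
    mean = [sum(block[i][j] for i in range(rows)) / rows for j in range(cols)]
    return [[sum((block[i][p] - mean[p]) * (block[i][q] - mean[q])
                 for i in range(rows)) / (rows - 1)
             for q in range(cols)] for p in range(cols)]

def largest_eigenvalue(m, iters=100):
    """Power iteration; adequate for small symmetric PSD matrices."""
    n = len(m)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0.0:          # flat block: zero covariance, zero weight
            return 0.0
        v = [x / lam for x in w]
    return lam

def fuse_pixels(a, b, wa, wb):
    """Weighted average of two coefficients; sharper neighborhood wins more."""
    if wa + wb == 0:
        return (a + b) / 2
    return (wa * a + wb * b) / (wa + wb)
```

    A flat (defocused) neighborhood yields a zero weight, so the averaging rule naturally favors the in-focus source at each pixel.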

    Blending of Images Using Discrete Wavelet Transform

    The project presents multifocus image fusion using the discrete wavelet transform with the local directional pattern (LDP) and spatial frequency analysis. Multifocus image fusion in wireless visual sensor networks is a process of blending two or more images to get a new one that describes the scene more accurately than any individual source image. In this project, the proposed model uses the multiscale decomposition of the discrete wavelet transform to fuse the images in the frequency domain. It decomposes an image into two components, structural and textural information, and it does not downsample the image while transforming into the frequency domain, so edge and texture details are preserved when the image is reconstructed. This reduces problems such as the blocking and ringing artifacts that occur with DCT- and DWT-based methods. The low-frequency sub-band coefficients are fused by selecting the coefficient with the maximum spatial frequency, which indicates the overall activity level of an image. The high-frequency sub-band coefficients are fused by selecting the coefficients with the maximum LDP code value. LDP computes the edge response values in all eight directions at each pixel position and generates a code from the relative strength of their magnitudes. Finally, the two fused frequency sub-bands are inverse transformed to reconstruct the fused image. The system performance is evaluated using parameters such as peak signal-to-noise ratio, correlation, and entropy.
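    The low-frequency selection rule can be illustrated with the usual definition of spatial frequency: the root of the summed mean-squared row and column differences. The 1/(MN) normalization below is an assumption, as the abstract does not give the formula; the LDP high-frequency rule and the DWT stage are not reproduced.

```python
import math

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): RMS of horizontal and vertical differences."""
    m, n = len(block), len(block[0])
    rf = sum((block[i][j] - block[i][j - 1]) ** 2
             for i in range(m) for j in range(1, n)) / (m * n)
    cf = sum((block[i][j] - block[i - 1][j]) ** 2
             for i in range(1, m) for j in range(n)) / (m * n)
    return math.sqrt(rf + cf)

def fuse_by_sf(block_a, block_b):
    """Per block, keep the source whose neighborhood is more active."""
    if spatial_frequency(block_a) >= spatial_frequency(block_b):
        return block_a
    return block_b
```

    A defocused region has small local differences and hence a low SF, so the in-focus source wins the selection for that block.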

    The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is computed by the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands of different frequencies. That is, the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, while the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices.
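    The two low-frequency activity measures named above, regional standard deviation and Shannon entropy, can be sketched as below. How the paper combines the two measures is not stated in the abstract, so simply summing them is an illustrative assumption; the GGD fitting and the Jensen-Shannon similarity are not reproduced.

```python
import math

def region_std(block):
    """Population standard deviation of all values in a region."""
    vals = [v for row in block for v in row]
    mu = sum(vals) / len(vals)
    return math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))

def region_entropy(block, levels=256):
    """Shannon entropy (bits) of the region's intensity histogram."""
    vals = [int(v) for row in block for v in row]
    hist = [0] * levels
    for v in vals:
        hist[v] += 1
    total = len(vals)
    return -sum((h / total) * math.log2(h / total) for h in hist if h)

def fuse_low(block_a, block_b):
    """Keep the low-frequency block with the higher combined activity
    (summing the two measures is our assumption, not the paper's rule)."""
    act_a = region_std(block_a) + region_entropy(block_a)
    act_b = region_std(block_b) + region_entropy(block_b)
    return block_a if act_a >= act_b else block_b
```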