
    A New Technique for Multispectral and Panchromatic Image Fusion

    In this paper, a technique is presented for the fusion of panchromatic (PAN) and low spatial resolution multispectral (MS) images to raise the spatial resolution of the latter. In this technique, we apply a PCA transformation to the MS image to obtain the principal component (PC) images, and an NSCT transformation to the PAN image and to each PC image for N levels of decomposition. We use the FOCC as the criterion to select a PC, and then use relative entropy as the criterion to reconstruct the high-frequency detail images. Finally, we apply the inverse NSCT to the selected PC's low-frequency approximation image and the reconstructed high-frequency detail images to obtain the high spatial resolution MS image. The experimental results obtained by applying the proposed image fusion method indicate improvements in fusion performance.
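    As a rough illustration of the PCA stage described in this abstract, the sketch below (NumPy, with illustrative function names, and plain correlation standing in for the paper's FOCC selection criterion) transforms an MS cube into principal components and picks the PC most related to the PAN image; the NSCT decomposition and reconstruction are not reproduced here.

```python
import numpy as np

def pca_forward(ms):
    """PCA over the bands of an MS cube (H, W, B): returns PC cube, basis, mean."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = np.cov(xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]          # principal components first
    eigvec = eigvec[:, order]
    pcs = xc @ eigvec
    return pcs.reshape(h, w, b), eigvec, mean

def select_pc(pcs, pan):
    """Pick the PC most correlated with the PAN image (stand-in for the FOCC rule)."""
    scores = [abs(np.corrcoef(pcs[..., k].ravel(), pan.ravel())[0, 1])
              for k in range(pcs.shape[-1])]
    return int(np.argmax(scores))

# Toy usage with random data in place of real MS/PAN imagery.
ms = np.random.rand(64, 64, 4)
pan = np.random.rand(64, 64)
pcs, basis, mean = pca_forward(ms)
print("selected PC:", select_pc(pcs, pan))
```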

    Toward reduction of artifacts in fused images

    Most pixel-level satellite image fusion methods introduce false spatial details, i.e. artifacts, into the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the image increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (the panchromatic image and each band of the multispectral image) with the box-counting algorithm and a windowing process. The average of the source image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in the satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT), computed with the à trous algorithm (WAT). Two scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach, using the WAT method, adapts the fusion process to the roughness and shape of the regions present in the image to be fused, improving the quality of the fused images and their classification results when compared with the original WAT method.
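    The following sketch illustrates the kind of local box-counting estimate that FDM generation relies on; it is a simplified differential box-counting routine in NumPy, not the authors' exact implementation, and the box sizes and indexing are assumptions.

```python
import numpy as np

def box_counting_fd(window, sizes=(2, 4, 8, 16)):
    """Differential box-counting estimate of the fractal dimension of a square
    grayscale window - a simplified stand-in for the FDM generation step."""
    m = window.shape[0]
    g = window.max() + 1e-9                     # gray-level range
    log_n, log_inv_r = [], []
    for s in sizes:
        h = g * s / m                           # box height at this scale
        count = 0
        for i in range(0, m - s + 1, s):
            for j in range(0, m - s + 1, s):
                cell = window[i:i + s, j:j + s]
                count += int(np.floor(cell.max() / h) - np.floor(cell.min() / h)) + 1
        log_n.append(np.log(count))
        log_inv_r.append(np.log(m / s))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)  # FD = slope of the log-log fit
    return slope

# Example: FD of a 32x32 patch of synthetic texture.
patch = np.random.rand(32, 32) * 255
print("estimated FD:", round(box_counting_fd(patch), 2))
```

    Sliding this estimator over each source image with a small window, then averaging and rescaling the resulting maps to [0, 1], yields the land-cover discrimination layer the abstract describes.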

    A parallel fusion method of remote sensing image based on NSCT

    Remote sensing image fusion is very important for exploiting the advantages of multiple kinds of remote sensing data. However, it is computationally intensive and time consuming. In this paper, in order to fuse remote sensing images accurately and quickly, a parallel fusion algorithm based on the NSCT (nonsubsampled contourlet transform) is proposed. The method uses two important kinds of remote sensing image, multispectral and panchromatic, and combines the advantages of parallel computing in high performance computing with the advantages of the NSCT in information processing. The processes with a large amount of calculation, including the IHS (Intensity, Hue, Saturation) transform, the NSCT, the inverse NSCT, and the inverse IHS transform, are executed in parallel. In the method, the multispectral image is processed with the IHS transform to obtain the three components I, H, and S. The component I and the panchromatic image are decomposed with the NSCT. The resulting low-frequency components are fused with a rule based on neighbourhood energy feature matching, and the high-frequency components are fused with a rule based on subregion variance. The fused low- and high-frequency components are then processed with the inverse NSCT to obtain the fused component. Finally, the fused component and the components H and S are processed with the inverse IHS transform to obtain the fusion image. The experimental results show that the proposed method achieves better fusion results and faster computing speed for multispectral and panchromatic images. The work was supported in part by (1) the Fund Project of the National Natural Science Foundation of China (U1204402), (2) the Foundation Project (21AT-2016-13) supported by the Twenty-First Century Aerospace Technology Co., Ltd., China, and (3) the Natural Science Research Program Project (18A520001) supported by the Department of Education in Henan Province, China.
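    The subregion-variance rule for the high-frequency subbands can be illustrated with a short NumPy/SciPy sketch; the inputs below are random stand-ins for NSCT detail planes of the I component and the PAN image, and the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(x, size=5):
    """Per-pixel variance over a size x size neighbourhood."""
    mean = uniform_filter(x, size)
    mean_sq = uniform_filter(x * x, size)
    return mean_sq - mean * mean

def fuse_highpass(hp_i, hp_pan, size=5):
    """Choose, pixel by pixel, the high-frequency coefficient whose local
    (subregion) variance is larger - a stand-in for the rule applied to the
    NSCT detail subbands of the I component and the PAN image."""
    mask = local_variance(hp_pan, size) > local_variance(hp_i, size)
    return np.where(mask, hp_pan, hp_i)

# Toy usage with random "subbands"; real inputs would be NSCT detail planes.
hp_i = np.random.randn(128, 128)
hp_pan = np.random.randn(128, 128)
print(fuse_highpass(hp_i, hp_pan).shape)
```

    In a parallel setting, each subband (or image tile) can be fused independently, which is what makes the NSCT stages natural candidates for the data-parallel execution described in the abstract.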

    An Efficient Algorithm For Satellite Images Fusion Based On Contourlet Transform

    This paper proposes a new fusion method for multispectral (MULTI) and panchromatic (PAN) images that uses a highly anisotropic and redundant representation of the images. The methodology joins the simplicity of the wavelet transform, calculated with the à trous algorithm, with the benefits of multidirectional transforms such as the contourlet transform. This permits an adequate extraction of information from the source images, in order to obtain fused images with high spatial and spectral quality simultaneously. The new method has been implemented through a directional low-pass filter bank with low computational complexity. The source images correspond to those captured by the IKONOS satellite (panchromatic and multispectral). The influence of the filter bank parameters on the global quality of the fused images has been investigated. The results indicate that the proposed methodology provides objective control of the spatial and spectral quality trade-off of the fused images through the determination of an appropriate set of filter bank parameters.
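    The wavelet half of the method, the undecimated à trous ("with holes") decomposition, can be sketched as follows in NumPy/SciPy; the B3-spline kernel and level count are the common defaults rather than the paper's settings, and the directional filter bank is not shown.

```python
import numpy as np
from scipy.ndimage import convolve

# B3-spline scaling kernel commonly used by the "a trous" wavelet algorithm.
_B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
_KERNEL = np.outer(_B3, _B3)

def atrous_planes(img, levels=3):
    """Undecimated a trous decomposition: returns the detail planes and the
    final smooth approximation (img == sum(planes) + approx)."""
    planes, approx = [], img.astype(float)
    for j in range(levels):
        # Insert 2**j - 1 zeros between kernel taps (the "holes") at each level.
        k = np.zeros((4 * 2**j + 1, 4 * 2**j + 1))
        k[::2**j, ::2**j] = _KERNEL
        smooth = convolve(approx, k, mode='mirror')
        planes.append(approx - smooth)          # detail plane at scale j
        approx = smooth
    return planes, approx

img = np.random.rand(64, 64)
details, residual = atrous_planes(img)
print(len(details), residual.shape)
```

    The contourlet-style directionality described in the abstract would then come from filtering each detail plane with a directional filter bank before injecting PAN detail into the MULTI bands.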

    Fusion of Infrared and Visible Images Based on Non-subsample Contourlet Transform

    Because a single-spectrum image cannot fully express the feature information of a target, this paper proposes a multispectral image fusion method based on the non-subsampled contourlet transform (NSCT). For the decomposed low-frequency coefficients, a fourth-order correlation coefficient is used to measure the correlation between the low-frequency coefficients of the sources: averaging fusion is applied where the correlation is high, and weighted phase congruency fusion where it is low. For the high-frequency coefficients, a Gaussian-weighted sum-modified-Laplacian method is used for fusion, to retain more local structural details. Simulation results show that the method effectively retains the image structure information and more local details, and increases the image contrast.
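    A minimal sketch of a sum-modified-Laplacian activity measure, the core of the high-frequency rule named above; the Gaussian weighting is omitted and the step size is an assumption, so this is only a plain-SML approximation of the paper's rule.

```python
import numpy as np

def sum_modified_laplacian(img, step=1):
    """Modified-Laplacian activity map: |2f(x,y)-f(x-s,y)-f(x+s,y)| +
    |2f(x,y)-f(x,y-s)-f(x,y+s)|, a common sharpness measure for
    high-frequency fusion rules."""
    f = np.pad(img.astype(float), step, mode='edge')
    c = f[step:-step, step:-step]
    return (np.abs(2 * c - f[:-2*step, step:-step] - f[2*step:, step:-step]) +
            np.abs(2 * c - f[step:-step, :-2*step] - f[step:-step, 2*step:]))

def fuse_by_sml(hp_a, hp_b):
    """Pick, per pixel, the coefficient from the source with larger SML activity."""
    return np.where(sum_modified_laplacian(hp_a) >= sum_modified_laplacian(hp_b),
                    hp_a, hp_b)

# Toy usage with random stand-ins for NSCT high-frequency subbands.
a, b = np.random.randn(64, 64), np.random.randn(64, 64)
print(fuse_by_sml(a, b).shape)
```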

    FUSION OF LANDSAT- 8 THERMAL INFRARED AND VISIBLE BANDS WITH MULTI-RESOLUTION ANALYSIS CONTOURLET METHODS

    The land surface temperature image is an important product in many lithosphere and atmosphere applications. This image is retrieved from the thermal infrared bands, which have lower spatial resolution than the visible and near-infrared data. Therefore, the details of temperature variation cannot be clearly identified in land surface temperature images. The aim of this study is to enhance the spatial information in the thermal infrared bands. Image fusion is one of the efficient methods employed to enhance the spatial resolution of the thermal bands by fusing them with high spatial resolution visible bands. Multi-resolution analysis is an effective pixel-level image fusion approach. In this paper, we use the contourlet, non-subsampled contourlet and sharp frequency localization contourlet transforms in the fusion because of their advantages: high directionality and anisotropy. The absolute average difference and RMSE values show that, with only small distortion of the thermal content, the spatial information of the thermal infrared and land surface temperature images is enhanced.
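    The two reported quality measures are straightforward to compute; the sketch below shows plausible NumPy implementations of the absolute average difference and RMSE between a reference band and the corresponding fused band.

```python
import numpy as np

def absolute_average_difference(reference, fused):
    """Absolute average difference between a reference band and the fused band."""
    return float(np.mean(np.abs(reference.astype(float) - fused.astype(float))))

def rmse(reference, fused):
    """Root-mean-square error between a reference band and the fused band."""
    diff = reference.astype(float) - fused.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy usage; a real check would compare the fused thermal band or LST product
# against the original thermal data resampled to the same grid.
ref = np.random.rand(100, 100) * 300   # e.g. a temperature band in kelvin
fus = ref + np.random.randn(100, 100)
print(absolute_average_difference(ref, fus), rmse(ref, fus))
```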

    Evaluation of Pan-Sharpening Techniques Using Lagrange Optimization

    Earth observation satellites, such as IKONOS, simultaneously provide multispectral and panchromatic images. A multispectral image has lower spatial and higher spectral resolution, in contrast to a panchromatic image, which usually has high spatial and low spectral resolution. Pan-sharpening is the fusion of these two complementary images to produce an output image with both high spatial and high spectral resolution. The objective of this paper is to propose a new pan-sharpening method based on pixel-level image manipulation and to compare it with several state-of-the-art pan-sharpening methods using different evaluation criteria. The paper presents an image fusion method based on pixel-level optimization using the Lagrange multiplier. Two cases are discussed: (a) the maximization of spectral consistency and (b) the minimization of the variance difference between the original data and the computed data. The performance of the pan-sharpening methods is evaluated qualitatively and quantitatively using evaluation criteria such as the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE. Overall, the proposed method is shown to outperform the existing methods.
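    Of the criteria listed, ERGAS is representative of the band-wise spectral fidelity measures; below is a minimal NumPy sketch, assuming an (H, W, B) array layout and a PAN/MS resolution ratio of 4 as for IKONOS.

```python
import numpy as np

def ergas(reference, fused, ratio=4):
    """ERGAS quality index for a pan-sharpened product.

    reference, fused : arrays of shape (H, W, B)
    ratio            : MS/PAN ground-sample-distance ratio (4 for IKONOS)
    Lower values indicate better spectral fidelity."""
    ref = reference.astype(float)
    fus = fused.astype(float)
    bands = ref.shape[-1]
    acc = 0.0
    for k in range(bands):
        rmse_k = np.sqrt(np.mean((ref[..., k] - fus[..., k]) ** 2))
        acc += (rmse_k / ref[..., k].mean()) ** 2
    return 100.0 / ratio * np.sqrt(acc / bands)

# Toy usage with a synthetic reference and a slightly perturbed "fused" cube.
ref = np.random.rand(64, 64, 4) + 0.5
fus = ref + 0.01 * np.random.randn(64, 64, 4)
print("ERGAS:", round(ergas(ref, fus), 3))
```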