11 research outputs found

    Fusion of Infrared and Visible Images Based on Non-subsample Contourlet Transform

    Because a single-spectrum image cannot fully express a target's feature information, this paper proposes a multispectral image fusion method based on the non-subsampled contourlet transform (NSCT). For the decomposed low-frequency coefficients, a fourth-order correlation coefficient measures the correlation between the low-frequency subbands: highly correlated coefficients are fused by averaging, while weakly correlated coefficients are fused by weighted phase congruency. High-frequency coefficients are fused with a Gaussian-weighted sum-modified-Laplacian method to retain more local structural detail. Simulation results show that the method effectively retains image structure information and local details while increasing image contrast.
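    The correlation-driven low-frequency rule and the saliency-driven high-frequency rule can be sketched as follows. This is a simplified illustration, not the paper's implementation: the NSCT decomposition itself is omitted (the functions operate on already-decomposed subbands), an ordinary Pearson correlation stands in for the fourth-order correlation coefficient, and a local-energy weight stands in for the phase-congruency weight.

```python
import numpy as np

def sum_modified_laplacian(band):
    """Per-pixel sum-modified-Laplacian focus measure (simplified, wrap-around borders)."""
    dx = np.abs(2 * band - np.roll(band, 1, axis=1) - np.roll(band, -1, axis=1))
    dy = np.abs(2 * band - np.roll(band, 1, axis=0) - np.roll(band, -1, axis=0))
    return dx + dy

def fuse_low(low_a, low_b, threshold=0.7):
    """Average when the two low-frequency subbands correlate strongly;
    otherwise weight by local energy (a stand-in for phase congruency)."""
    corr = np.corrcoef(low_a.ravel(), low_b.ravel())[0, 1]
    if corr >= threshold:
        return 0.5 * (low_a + low_b)
    wa = low_a ** 2 / (low_a ** 2 + low_b ** 2 + 1e-12)
    return wa * low_a + (1 - wa) * low_b

def fuse_high(high_a, high_b):
    """Pick, per pixel, the coefficient with the larger modified-Laplacian response."""
    sal_a = sum_modified_laplacian(high_a)
    sal_b = sum_modified_laplacian(high_b)
    return np.where(sal_a >= sal_b, high_a, high_b)
```

    The same pattern (averaging or weighting low frequencies, saliency selection for high frequencies) recurs in most transform-domain fusion schemes.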

    A parallel fusion method of remote sensing image based on NSCT

    Remote sensing image fusion is important for exploiting the complementary advantages of different remote sensing data, but it is computationally intensive and time-consuming. To fuse remote sensing images accurately and quickly, this paper proposes a parallel fusion algorithm based on the nonsubsampled contourlet transform (NSCT). The method uses two important kinds of remote sensing image, multispectral and panchromatic, and combines the strengths of parallel computing with the information-processing advantages of NSCT. The computationally heavy steps, including the IHS (Intensity, Hue, Saturation) transform, NSCT, inverse NSCT, and inverse IHS transform, are executed in parallel. The multispectral image is first processed with the IHS transform to obtain the I, H, and S components. The I component and the panchromatic image are decomposed with NSCT. The resulting low-frequency components are fused with a rule based on neighborhood-energy feature matching, and the high-frequency components with a rule based on subregion variance. The fused components are then processed with the inverse NSCT to obtain the fused intensity component. Finally, the fused intensity component and the H and S components are processed with the inverse IHS transform to produce the fusion image.
    Experimental results show that the proposed method achieves better fusion results and faster computing speed for multispectral and panchromatic images. The work was supported in part by (1) the National Natural Science Foundation of China (U1204402), (2) Foundation Project 21AT-2016-13 of the Twenty-First Century Aerospace Technology Co., Ltd., China, and (3) Natural Science Research Program Project 18A520001 of the Department of Education of Henan Province, China.
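    The IHS stage of the pipeline can be illustrated with the common linear IHS transform. This is a hedged sketch of one widely used variant (the abstract does not specify which IHS formulation is used), and the NSCT fusion that produces the new intensity plane is omitted; `pansharpen` simply takes the fused intensity as given.

```python
import numpy as np

# Forward linear IHS transform: RGB -> (I, v1, v2).
M = np.array([[1 / 3, 1 / 3, 1 / 3],
              [-np.sqrt(2) / 6, -np.sqrt(2) / 6, 2 * np.sqrt(2) / 6],
              [1 / np.sqrt(2), -1 / np.sqrt(2), 0.0]])

# Exact inverse of M: (I, v1, v2) -> RGB.
M_INV = np.array([[1.0, -1 / np.sqrt(2), 1 / np.sqrt(2)],
                  [1.0, -1 / np.sqrt(2), -1 / np.sqrt(2)],
                  [1.0, np.sqrt(2), 0.0]])

def rgb_to_ihs(rgb):
    """rgb: (H, W, 3) array -> (H, W, 3) array of (I, v1, v2) planes."""
    return rgb @ M.T

def ihs_to_rgb(ihs):
    return ihs @ M_INV.T

def pansharpen(ms_rgb, fused_intensity):
    """Substitute the intensity plane with the fused component
    (in the paper this comes from the NSCT fusion of I and PAN)."""
    ihs = rgb_to_ihs(ms_rgb)
    ihs[..., 0] = fused_intensity
    return ihs_to_rgb(ihs)
```

    Because the two matrices are exact inverses, substituting the original intensity plane reproduces the input image, which is a useful sanity check before plugging in the NSCT-fused intensity.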

    A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms

    Image registration is a fundamental image-processing task that matches two or more images taken at different times, from different sensors, or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time. Exhaustive search is computationally expensive, and its cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm is adopted to reduce the search space, within a hybrid scheme combining fitness sharing and elitism. Two NSCT-based registration methods are proposed and compared with a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its faster search. Simulation results show that both proposed techniques are promising for image registration compared with the wavelet approach, with the second technique performing best overall. Moreover, these registration techniques have been successfully applied to register SPOT, IKONOS, and Synthetic Aperture Radar (SAR) images. The algorithm also works well for multi-temporal satellite images, even in the presence of noise.
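    The genetic search with elitism can be sketched for the simplest rigid case, integer translation only. This is a hedged toy illustration, not the paper's algorithm, which also handles rotation, applies fitness sharing, and searches within the NSCT multi-resolution pyramid rather than on raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(ref, mov, params):
    """Negative mean-squared error after shifting `mov` by (tx, ty); higher is better."""
    tx, ty = params
    shifted = np.roll(np.roll(mov, int(tx), axis=1), int(ty), axis=0)
    return -float(np.mean((ref - shifted) ** 2))

def ga_register(ref, mov, pop_size=20, gens=40, span=5):
    """Evolve integer (tx, ty) shifts; the top half survives each generation (elitism)."""
    pop = rng.integers(-span, span + 1, size=(pop_size, 2))
    for _ in range(gens):
        scores = np.array([fitness(ref, mov, p) for p in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Offspring: copy random elites, then mutate individual coordinates by +-1.
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))].copy()
        mutate = rng.random(children.shape) < 0.3
        children[mutate] += rng.integers(-1, 2, size=int(mutate.sum()))
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ref, mov, p) for p in pop])
    return pop[int(np.argmax(scores))]
```

    Elitism guarantees the best candidate found so far is never lost, which is what makes the coarse-to-fine NSCT framework attractive: a solution found on a coarse level can seed the finer levels.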

    Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse Coupled Neural Networks in Nonsubsampled Contourlet Transform Domain

    The nonsubsampled contourlet transform (NSCT) provides flexible multiresolution, anisotropic, and directional expansion for images. Compared with the original contourlet transform, it is shift-invariant and can overcome the pseudo-Gibbs phenomena around singularities. The Pulse Coupled Neural Network (PCNN) is a visual-cortex-inspired neural network characterized by global coupling and pulse synchronization of neurons; it has proven suitable for image processing and has been successfully employed in image fusion. In this paper, NSCT is combined with PCNN for image fusion to make full use of their respective characteristics. Spatial frequency in the NSCT domain is used as input to motivate the PCNN, and NSCT-domain coefficients with large firing times are selected as coefficients of the fused image. Experimental results demonstrate that the proposed algorithm outperforms typical wavelet-based, contourlet-based, PCNN-based, and contourlet-PCNN-based fusion algorithms in terms of objective criteria and visual appearance. Supported by the Navigation Science Foundation of P. R. China (05F07001) and the National Natural Science Foundation of P. R. China (60472081).
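    The firing-time selection rule can be sketched with a heavily simplified PCNN-like neuron. This is a hedged stand-in for the paper's model: there is no linking field or feeding synapse, and the stimulus here is the raw coefficient magnitude rather than the windowed spatial frequency the paper feeds into the network.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of a block: sqrt(row-frequency^2 + column-frequency^2)."""
    rf2 = np.mean(np.diff(block, axis=1) ** 2)
    cf2 = np.mean(np.diff(block, axis=0) ** 2)
    return float(np.sqrt(rf2 + cf2))

def firing_times(stimulus, iters=20, decay=0.8, v_theta=5.0):
    """Count how often each neuron fires: a neuron fires when its stimulus
    exceeds a threshold that jumps after each fire and decays otherwise."""
    theta = np.ones_like(stimulus)
    fired = np.zeros_like(stimulus)
    for _ in range(iters):
        y = (stimulus > theta).astype(float)
        fired += y
        theta = decay * theta + v_theta * y
    return fired

def fuse_subbands(band_a, band_b):
    """Keep, per position, the coefficient whose stimulus fired more often."""
    fa = firing_times(np.abs(band_a))
    fb = firing_times(np.abs(band_b))
    return np.where(fa >= fb, band_a, band_b)
```

    Stronger stimuli recover from the post-fire threshold jump sooner, so they accumulate more firings over the iterations, which is the property the fusion rule exploits.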

    Multisource Remote Sensing Imagery Fusion Scheme Based on Bidimensional Empirical Mode Decomposition (BEMD) and Its Application to the Extraction of Bamboo Forest

    Most bamboo forests grow in humid climates in low-latitude tropical or subtropical monsoon areas, and they are generally located in hilly terrain. Bamboo trunks are very straight and smooth, which means that bamboo forests have low structural diversity. These features are beneficial to synthetic aperture radar (SAR) microwave penetration and provide special information in SAR imagery. However, some factors (e.g., foreshortening) can compromise the interpretation of SAR imagery. The fusion of SAR and optical imagery is considered an effective way to obtain information on ground objects, but most relevant research has used only two types of remote sensing image. This paper proposes a new fusion scheme that combines three types of image simultaneously, based on two fusion methods: bidimensional empirical mode decomposition (BEMD) and the Gram-Schmidt transform. The fusion of panchromatic and multispectral images based on the Gram-Schmidt transform enhances spatial resolution while retaining multispectral information. BEMD is an adaptive decomposition method that has been widely applied to nonlinear signals and to the nonstationary signal of SAR. The fusion of SAR imagery with the fused panchromatic-multispectral imagery using BEMD is based on the frequency information of the images. The proposed scheme proved to be an effective remote sensing image interpretation method: the entropy and spatial frequency of the fused images improved in comparison with techniques such as the discrete wavelet, à-trous, and non-subsampled contourlet transform methods. Compared with the original image, the information entropy of the BEMD-based fusion image improves by about 0.13–0.38; compared with the other three methods, it improves by about 0.06–0.12. The average gradient of BEMD is 4%–6% greater than that of the other methods, and its spatial frequency is 3.2–4.0 higher. The experimental results also show that the proposed fusion scheme improves the accuracy of bamboo forest classification: accuracy increased by 12.1%, and error was reduced by 11.0%.
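    The quality indexes reported above (information entropy, average gradient, spatial frequency) are standard fusion metrics and can be computed as follows. This is a generic sketch; bin counts and normalisation conventions vary between papers.

```python
import numpy as np

def information_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram; higher means richer content."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean magnitude of the local intensity gradient; higher means sharper detail."""
    f = img.astype(float)
    gx = np.diff(f, axis=1)[:-1, :]
    gy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

def spatial_frequency(img):
    """Overall activity level: sqrt(row-frequency^2 + column-frequency^2)."""
    f = img.astype(float)
    rf2 = np.mean(np.diff(f, axis=1) ** 2)
    cf2 = np.mean(np.diff(f, axis=0) ** 2)
    return float(np.sqrt(rf2 + cf2))
```

    All three are no-reference metrics, which is why fusion papers typically report them alongside reference-based measures.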

    Image Simulation in Remote Sensing

    Remote sensing is being actively researched in the fields of environment, military, and urban planning, through technologies such as monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, atmospheric and environmental conditions in the observed scene degrade image quality or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to stand in for an unobtained image at a required time. The proposed methodologies offer economical utility in the generation of image learning materials and time-series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development on simulating high spatial and spectral resolution imagery, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    A Novel Metric Approach Evaluation For The Spatial Enhancement Of Pan-Sharpened Images

    Various methods can be used to produce high-resolution multispectral images from a high-resolution panchromatic image (PAN) and low-resolution multispectral images (MS), mostly at the pixel level. The quality of image fusion is an essential determinant of its value for many applications, and spatial and spectral quality are the two important indexes used to evaluate any fused image. However, the jury is still out on the benefits of a fused image compared with its original images, and there is a lack of measures for objectively assessing the spatial resolution delivered by fusion methods. An objective assessment of spatial resolution for fused images is therefore required. This paper describes a new approach, the High Pass Division Index (HPDI), proposed to estimate the spatial-resolution improvement by calculating the spatial frequency of the edge regions of the image. It also compares various analytical techniques for evaluating spatial quality and for estimating the colour distortion added by image fusion, including MG, SG, FCC, SD, En, SNR, CC, and NRMSE. In addition, the paper compares various image fusion techniques based on pixel-level and feature-level fusion.
    Comment: arXiv admin note: substantial text overlap with arXiv:1110.497
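    Two of the listed measures, the correlation coefficient (CC) and the normalised root-mean-square error (NRMSE), can be sketched as follows. This is a generic illustration, not the paper's exact formulation; NRMSE normalisation conventions vary, and the reference dynamic range is used here.

```python
import numpy as np

def correlation_coefficient(ref, fused):
    """Pearson correlation (CC) between reference and fused bands; 1.0 is a perfect match."""
    return float(np.corrcoef(ref.ravel().astype(float),
                             fused.ravel().astype(float))[0, 1])

def nrmse(ref, fused):
    """RMSE normalised by the reference dynamic range (one common convention);
    0.0 means the fused band equals the reference."""
    r = ref.astype(float)
    rmse = np.sqrt(np.mean((r - fused.astype(float)) ** 2))
    return float(rmse / (r.max() - r.min()))
```

    Note that CC is invariant to a constant intensity offset while NRMSE is not, which is why spectral-quality studies report both.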