Fusion of Infrared and Visible Images Based on Non-subsample Contourlet Transform
Because a single-spectrum image cannot fully express the target feature information, this paper proposes a multispectral image fusion method based on the non-subsampled contourlet transform (NSCT). For the decomposed low-frequency coefficients, the fourth-order correlation coefficient is used to measure the correlation between the low-frequency coefficients; averaging fusion is applied where the correlation is high, and weighted phase-congruency fusion where it is low. The high-frequency coefficients are fused with a Gaussian-weighted sum-modified Laplacian method to retain more local structural detail. Simulation results show that the method effectively retains the image structure information and more local details, and increases the image contrast.
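The abstract does not give the exact form of the Gaussian-weighted sum-modified Laplacian rule; the sketch below shows one common reading, in which each high-frequency NSCT coefficient is scored by a sum-modified Laplacian focus measure smoothed by a Gaussian window and the larger-scoring coefficient is kept. The window sigma and function names are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only: one common reading of a Gaussian-weighted
    # sum-modified-Laplacian (SML) high-frequency fusion rule. Not the
    # paper's exact method; sigma and padding are assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sum_modified_laplacian(band):
        """Pixel-wise modified Laplacian of one high-frequency subband."""
        p = np.pad(band, 1, mode="edge")
        return (np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
                + np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]))

    def fuse_high_frequency(band_a, band_b, sigma=1.0):
        """Keep, per pixel, the coefficient with the larger Gaussian-smoothed SML score."""
        score_a = gaussian_filter(sum_modified_laplacian(band_a), sigma)
        score_b = gaussian_filter(sum_modified_laplacian(band_b), sigma)
        return np.where(score_a >= score_b, band_a, band_b)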
A parallel fusion method of remote sensing image based on NSCT
Remote sensing image fusion is important for exploiting the complementary advantages of different remote sensing data, but it is computationally heavy and time consuming. To fuse remote sensing images accurately and quickly, this paper proposes a parallel fusion algorithm based on the nonsubsampled contourlet transform (NSCT). The method uses two important kinds of remote sensing image, multispectral and panchromatic, and combines the strengths of parallel computing in high-performance computing with the strengths of NSCT in information processing. The computationally expensive steps, including the IHS (Intensity, Hue, Saturation) transform, NSCT, inverse NSCT, and inverse IHS transform, are executed in parallel. The multispectral image is first processed with the IHS transform to obtain the I, H, and S components. The I component and the panchromatic image are decomposed with NSCT. The resulting low-frequency components are fused with a rule based on neighborhood energy feature matching, and the high-frequency components are fused with a rule based on subregion variance. The fused low- and high-frequency components are then processed with the inverse NSCT to obtain the fused component, which is finally combined with the H and S components through the inverse IHS transform to produce the fusion image. Experimental results show that the proposed method achieves better fusion results and faster computation for multispectral and panchromatic images.
The work was supported in part by (1) the Fund Project of the National Natural Science Foundation of China (U1204402), (2) the Foundation Project (21AT-2016-13) supported by Twenty-First Century Aerospace Technology Co., Ltd., China, and (3) the Natural Science Research Program Project (18A520001) supported by the Department of Education in Henan Province, China.
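The two subband fusion rules named in the abstract (neighborhood energy feature matching for the low-frequency components, subregion variance for the high-frequency components) are generic enough to sketch. The code below shows one plausible reading in Python, assuming the NSCT subbands have already been computed; the 3x3 window, the 0.75 matching threshold, and the Burt-Kolczynski-style weighting are assumptions rather than the paper's exact rules.

    # Sketch of the two subband fusion rules, assuming NSCT decomposition
    # has already produced the subbands. Window size, threshold and the
    # energy-matching weighting scheme are illustrative choices.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_energy(band, size=3):
        return uniform_filter(band * band, size=size)

    def fuse_low_frequency(low_a, low_b, size=3, threshold=0.75):
        """Neighborhood-energy feature matching: select the higher-energy coefficient
        where the neighborhoods disagree, otherwise blend with energy-dependent weights."""
        ea, eb = local_energy(low_a, size), local_energy(low_b, size)
        match = 2.0 * uniform_filter(low_a * low_b, size=size) / (ea + eb + 1e-12)
        w_min = 0.5 - 0.5 * (1.0 - match) / (1.0 - threshold)
        w_max = 1.0 - w_min
        a_stronger = ea >= eb
        blended = np.where(a_stronger, w_max * low_a + w_min * low_b,
                                       w_min * low_a + w_max * low_b)
        selected = np.where(a_stronger, low_a, low_b)
        return np.where(match >= threshold, blended, selected)

    def fuse_high_frequency(high_a, high_b, size=3):
        """Subregion variance: keep the coefficient whose neighborhood varies more."""
        va = uniform_filter(high_a**2, size) - uniform_filter(high_a, size)**2
        vb = uniform_filter(high_b**2, size) - uniform_filter(high_b, size)**2
        return np.where(va >= vb, high_a, high_b)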
A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms
Image registration is a fundamental task in image processing used to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time that yields well-registered images. Exhaustive search is computationally expensive, and the cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to reduce the search space. This approach is used within a hybrid scheme applying two techniques: fitness sharing and elitism. Two NSCT-based methods are proposed for registration, and a comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its ability to speed up the search. Simulation results clearly show that both proposed techniques are promising methods for image registration compared with the wavelet approach, with the second technique yielding the best performance of all. Moreover, to demonstrate their effectiveness, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has also been shown to work well for multi-temporal satellite images, even in the presence of noise.
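The abstract does not specify the fitness function optimized by the genetic algorithm. As a minimal sketch of the kind of objective such a search would evaluate, the code below applies a candidate rigid transform (angle, tx, ty) and scores normalized cross-correlation against the reference image; in the paper's setting this would be evaluated on coarse NSCT subbands, which is an assumption here, not the authors' exact objective.

    # Minimal sketch of a rigid-registration fitness that a GA could optimize.
    # The similarity measure (normalized cross-correlation) is an assumption.
    import numpy as np
    from scipy.ndimage import rotate, shift

    def rigid_transform(image, angle_deg, tx, ty):
        moved = rotate(image, angle_deg, reshape=False, order=1, mode="nearest")
        return shift(moved, (ty, tx), order=1, mode="nearest")

    def fitness(params, reference, moving):
        """Higher is better: correlation between reference and transformed moving image."""
        angle, tx, ty = params
        warped = rigid_transform(moving, angle, tx, ty)
        r = (reference - reference.mean()) / (reference.std() + 1e-12)
        m = (warped - warped.mean()) / (warped.std() + 1e-12)
        return float((r * m).mean())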
Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse Coupled Neural Networks in Nonsubsampled Contourlet Transform Domain
The nonsubsampled contourlet transform (NSCT) provides flexible multiresolution, anisotropy and directional expansion for images. Compared with the original contourlet transform, it is shift-invariant and can overcome the pseudo-Gibbs phenomena around singularities. The pulse coupled neural network (PCNN) is a visual-cortex-inspired neural network characterized by the global coupling and pulse synchronization of its neurons; it has been proven suitable for image processing and successfully employed in image fusion. In this paper, NSCT is combined with PCNN for image fusion to make full use of the characteristics of both. Spatial frequency in the NSCT domain is input to motivate the PCNN, and coefficients in the NSCT domain with large firing times are selected as coefficients of the fused image. Experimental results demonstrate that the proposed algorithm outperforms typical wavelet-based, contourlet-based, PCNN-based and contourlet-PCNN-based fusion algorithms in terms of objective criteria and visual appearance.
Supported by the Navigation Science Foundation of P. R. China (05F07001) and the National Natural Science Foundation of P. R. China (60472081).
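Spatial frequency, used here to motivate the PCNN neurons, has a standard definition as the root of the mean squared row and column differences. The sketch below computes it per pixel over a local window, which is one common way to supply it per NSCT coefficient; the 3x3 window size is an assumption, as the abstract does not state it.

    # Standard spatial-frequency measure, computed per pixel over a local
    # window, as a plausible input for motivating PCNN neurons.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatial_frequency(band, size=3):
        row_diff = np.zeros_like(band, dtype=np.float64)
        col_diff = np.zeros_like(band, dtype=np.float64)
        row_diff[:, 1:] = band[:, 1:] - band[:, :-1]
        col_diff[1:, :] = band[1:, :] - band[:-1, :]
        rf2 = uniform_filter(row_diff**2, size=size)   # local mean of squared row differences
        cf2 = uniform_filter(col_diff**2, size=size)   # local mean of squared column differences
        return np.sqrt(rf2 + cf2)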
Multisource Remote Sensing Imagery Fusion Scheme Based on Bidimensional Empirical Mode Decomposition (BEMD) and Its Application to the Extraction of Bamboo Forest
Most bamboo forests grow in humid climates in low-latitude tropical or subtropical monsoon areas, and they are generally located in hilly areas. Bamboo trunks are very straight and smooth, which means that bamboo forests have low structural diversity. These features are beneficial to synthetic aperture radar (SAR) microwave penetration and they provide special information in SAR imagery. However, some factors (e.g., foreshortening) can compromise the interpretation of SAR imagery. The fusion of SAR and optical imagery is considered an effective method with which to obtain information on ground objects; however, most relevant research has been based on only two types of remote sensing image. This paper proposes a new fusion scheme, which combines three types of image simultaneously, based on two fusion methods: bidimensional empirical mode decomposition (BEMD) and the Gram-Schmidt transform. The fusion of panchromatic and multispectral images based on the Gram-Schmidt transform can enhance spatial resolution while retaining multispectral information. BEMD is an adaptive decomposition method that has been applied widely to the analysis of nonlinear signals and to the nonstationary signal of SAR. The fusion of SAR imagery with the fused panchromatic and multispectral imagery using BEMD is based on the frequency information of the images. It was established that the proposed fusion scheme is an effective remote sensing image interpretation method, and that the entropy and spatial frequency of the fused images were improved in comparison with other techniques such as the discrete wavelet, à-trous, and non-subsampled contourlet transform methods. Compared with the original image, the information entropy of the BEMD-based fusion image improves by about 0.13–0.38, and compared with the other three methods it improves by about 0.06–0.12. The average gradient of the BEMD result is 4%–6% greater than that of the other methods, and its spatial frequency is 3.2–4.0 higher. The experimental results showed that the proposed fusion scheme could improve the accuracy of bamboo forest classification: accuracy increased by 12.1%, and inaccuracy was reduced by 11.0%.
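Information entropy and average gradient, the quantitative measures cited in the comparison above, have standard definitions; the sketch below shows how they are commonly computed. The 256-bin histogram and 8-bit value range are assumptions, as the abstract does not state them.

    # Standard definitions of two of the quality measures cited in the abstract.
    # The 256-bin histogram and 0-255 range assume 8-bit imagery.
    import numpy as np

    def information_entropy(image, bins=256):
        hist, _ = np.histogram(image, bins=bins, range=(0, 255))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def average_gradient(image):
        img = image.astype(np.float64)
        gx = img[:-1, 1:] - img[:-1, :-1]   # horizontal differences
        gy = img[1:, :-1] - img[:-1, :-1]   # vertical differences
        return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))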
Image Simulation in Remote Sensing
Remote sensing is being actively researched in fields such as the environment, the military and urban planning, through technologies such as the monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of images or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to create a simulated image in place of the unobtained image at a required time. The proposed methodologies provide economical utility in the generation of image learning materials and time-series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development related to image simulation at high spatial and spectral resolution, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.
A Novel Metric Approach Evaluation For The Spatial Enhancement Of Pan-Sharpened Images
Various methods can be used to produce high-resolution multispectral images from a high-resolution panchromatic image (PAN) and low-resolution multispectral images (MS), mostly at the pixel level. The quality of image fusion is an essential determinant of its value for many applications. Spatial and spectral quality are the two important indexes used to evaluate the quality of any fused image. However, the jury is still out on the benefits of a fused image compared with its original images, and there is a lack of measures for objectively assessing the spatial resolution delivered by fusion methods, so an objective assessment of the spatial resolution of fused images is required. Therefore, this paper describes a new approach proposed to estimate the spatial resolution improvement, the High Pass Division Index (HPDI), based on calculating the spatial frequency of the edge regions of the image, and it compares various analytical techniques for evaluating spatial quality and estimating the colour distortion added by image fusion, including MG, SG, FCC, SD, En, SNR, CC and NRMSE. In addition, the paper compares various image fusion techniques based on pixel-level and feature-level fusion.
Comment: arXiv admin note: substantial text overlap with arXiv:1110.497
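The abstract describes HPDI only as being based on the spatial frequency of the edge regions of the image, without giving its formula. The sketch below therefore illustrates just those ingredients: an edge mask (a Sobel-magnitude threshold is an assumption) and spatial frequency restricted to the masked pixels, compared between the fused and PAN images. The detector, threshold and final ratio are illustrative choices, not the paper's metric.

    # Illustration of the ingredients attributed to HPDI: spatial frequency
    # over edge regions. Edge detector, threshold and ratio are assumptions.
    import numpy as np
    from scipy.ndimage import sobel

    def edge_mask(image, quantile=0.9):
        mag = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
        return mag >= np.quantile(mag, quantile)

    def edge_spatial_frequency(image, mask):
        rd = np.diff(image, axis=1)[mask[:, 1:]]     # row-wise differences on edge pixels
        cd = np.diff(image, axis=0)[mask[1:, :]]     # column-wise differences on edge pixels
        return float(np.sqrt(np.mean(rd**2) + np.mean(cd**2)))

    def spatial_enhancement_ratio(fused, pan):
        """Compare edge-region spatial frequency of the fused image against the PAN image."""
        mask = edge_mask(pan)
        return edge_spatial_frequency(fused, mask) / edge_spatial_frequency(pan, mask)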
Deep learning assisted MRI guided attenuation correction in PET
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Positron emission tomography (PET) is a unique imaging modality that provides physiological and functional details of tissue at the molecular level. However, the acquired PET images have limitations, such as attenuation. PET attenuation correction is an essential step towards obtaining the full potential of PET quantification. With the wide use of hybrid PET/MR scanners, magnetic resonance (MR) images are used to address the problem of PET attenuation correction. Segmentation of the MR images is a simple and robust approach to creating pseudo computed tomography (CT) images, which are used to generate attenuation coefficient maps to correct the PET attenuation. Recently, deep learning has been proposed as a promising technique to efficiently perform segmentation of MR and other medical images.
In this research work, deep learning guided segmentation approaches are proposed to enhance the bone class segmentation of MR brain images in order to generate accurate pseudo-CT images. The first approach introduces the combination of handcrafted features with deep learning features to enrich the feature set. Multiresolution analysis techniques that generate multiscale and multidirectional coefficients of an image, such as the contourlet and shearlet transforms, are applied and combined with deep convolutional neural network (CNN) features. Different experiments were conducted to investigate the number of selected coefficients and the insertion location of the handcrafted features.
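The exact insertion point of the handcrafted features is the subject of the experiments, so the sketch below only shows the generic pattern being studied: concatenating handcrafted multiresolution subbands with a CNN feature map along the channel dimension. The layer sizes and shapes are illustrative assumptions, not the thesis' architecture.

    # Generic pattern of fusing handcrafted multiresolution coefficients with
    # CNN features by channel-wise concatenation. Shapes and sizes are illustrative.
    import torch
    import torch.nn as nn

    class HybridFeatureBlock(nn.Module):
        def __init__(self, cnn_channels=64, handcrafted_channels=8, out_channels=64):
            super().__init__()
            self.fuse = nn.Conv2d(cnn_channels + handcrafted_channels, out_channels,
                                  kernel_size=3, padding=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, cnn_features, handcrafted):
            # cnn_features: (N, cnn_channels, H, W); handcrafted: (N, handcrafted_channels, H, W),
            # e.g. selected contourlet/shearlet subbands resampled to the feature-map size.
            x = torch.cat([cnn_features, handcrafted], dim=1)
            return self.act(self.fuse(x))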
The second approach aims at reducing the complexity of the segmentation algorithm while maintaining the segmentation performance. An attention-based convolutional encoder-decoder network is proposed to adaptively recalibrate the deep network features. This attention-based network consists of two different squeeze-and-excitation blocks that excite the features spatially and channel-wise. The two blocks are combined sequentially to decrease the number of network parameters and reduce the model complexity.
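The channel-wise and spatial squeeze-and-excitation blocks mentioned above follow a well-known pattern; the sketch below shows that standard pattern in PyTorch and applies the two blocks sequentially, as described. The reduction ratio and layer sizes are illustrative assumptions, not the thesis' exact configuration.

    # Standard channel and spatial squeeze-and-excitation blocks, applied one
    # after the other. The reduction ratio of 8 is an assumption.
    import torch
    import torch.nn as nn

    class ChannelSE(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)               # squeeze: global average per channel
            self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                    nn.ReLU(inplace=True),
                                    nn.Linear(channels // reduction, channels),
                                    nn.Sigmoid())

        def forward(self, x):
            n, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)  # excitation: per-channel weights
            return x * w

    class SpatialSE(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Conv2d(channels, 1, kernel_size=1)  # squeeze channels to one spatial map

        def forward(self, x):
            return x * torch.sigmoid(self.conv(x))             # excitation: per-pixel weights

    class SequentialSE(nn.Module):
        """Channel-wise then spatial recalibration, combined sequentially."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.cse = ChannelSE(channels, reduction)
            self.sse = SpatialSE(channels)

        def forward(self, x):
            return self.sse(self.cse(x))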
The third approach focuses on the application of transfer learning between different MR sequences, such as T1-weighted (T1-w) and T2-weighted (T2-w) images. A model pretrained on T1-w MR sequences is fine-tuned to perform the segmentation of T2-w images. Multiple fine-tuning approaches and experiments were conducted to study the best fine-tuning mechanism for building an efficient segmentation model for both T1-w and T2-w segmentation.
Clinical datasets of fifty patients with different conditions and diagnoses were used to carry out an objective evaluation of the segmentation performance of the three proposed methods. The first and second approaches were validated against other studies in the literature that applied deep-network-based segmentation to perform MR-based attenuation correction for PET images. The proposed methods show an enhancement in bone segmentation, with the Dice similarity coefficient (DSC) increasing from 0.6179 to 0.6567 using an ensemble of CNNs, an improvement of 6.3%. The proposed excitation-based CNN reduces the model complexity by decreasing the number of trainable parameters by more than 46%, so that fewer computing resources are required to train the model. The proposed hybrid transfer learning method shows its superiority in building a multi-sequence (T1-w and T2-w) segmentation approach compared with other applied transfer learning methods, especially for the bone class, where the DSC increases from 0.3841 to 0.5393. Moreover, the hybrid transfer learning approach requires less computing time than transfer learning using open or conservative fine-tuning.
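The Dice similarity coefficient used for these comparisons has a standard definition; the short sketch below computes it for binary masks (for example, a predicted and a reference bone mask), which is sufficient to reproduce scores of the kind quoted above.

    # Standard Dice similarity coefficient for binary segmentation masks.
    import numpy as np

    def dice_similarity(pred, target, eps=1e-8):
        pred = np.asarray(pred, dtype=bool)
        target = np.asarray(target, dtype=bool)
        intersection = np.logical_and(pred, target).sum()
        return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))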