32 research outputs found

    The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted by a generalized Gaussian density (GGD) and the similarity of two subbands is computed as the Jensen-Shannon divergence between their GGDs. To preserve more useful information from the source images, new fusion rules are developed for subbands of different frequencies: the low-frequency subbands are fused using two activity measures based on regional standard deviation and Shannon entropy, while the high-frequency subbands are merged via weight maps determined by pixel saliency values. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices.
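    The subband similarity measure at the heart of this scheme can be illustrated with a short numerical sketch. The snippet below, assuming the standard two-parameter GGD form with scale alpha and shape beta, evaluates the Jensen-Shannon divergence between two fitted GGDs on a finite grid; the grid limits, the example parameter values, and the way GGD parameters would be estimated from subband coefficients are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np
from scipy.special import gamma

def ggd_pdf(x, alpha, beta):
    """Zero-mean generalized Gaussian density with scale alpha and shape beta."""
    return beta / (2.0 * alpha * gamma(1.0 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def js_divergence_ggd(alpha1, beta1, alpha2, beta2, half_width=50.0, n=20001):
    """Numerical Jensen-Shannon divergence between two GGDs on a common grid."""
    x = np.linspace(-half_width, half_width, n)
    dx = x[1] - x[0]
    p = ggd_pdf(x, alpha1, beta1)
    q = ggd_pdf(x, alpha2, beta2)
    # Renormalize to compensate for tail mass lost to the finite grid.
    p /= p.sum() * dx
    q /= q.sum() * dx
    m = 0.5 * (p + q)
    eps = 1e-12
    kl_pm = np.sum(p * np.log((p + eps) / (m + eps))) * dx
    kl_qm = np.sum(q * np.log((q + eps) / (m + eps))) * dx
    return 0.5 * (kl_pm + kl_qm)

# Hypothetical GGD fits of two corresponding NSCT subbands.
print(js_divergence_ggd(alpha1=1.0, beta1=1.5, alpha2=2.0, beta2=0.8))
```

    A small divergence indicates that the two subbands carry similar statistics, which a fusion rule can exploit when deciding how to weight them.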

    Structural Similarity based Anatomical and Functional Brain Imaging Fusion

    Multimodal medical image fusion combines contrasting features from two or more input imaging modalities to represent the fused information in a single image. One of the pivotal clinical applications of medical image fusion is the merging of anatomical and functional modalities for fast diagnosis of malignant tissues. In this paper, we present a novel end-to-end unsupervised learning-based Convolutional Neural Network (CNN) for fusing the high- and low-frequency components of MRI-PET grayscale image pairs, publicly available at ADNI, by exploiting the Structural Similarity Index (SSIM) as the loss function during training. We then apply color coding for the visualization of the fused image by quantifying the contribution of each input image in terms of the partial derivatives of the fused image. We find that our fusion and visualization approach results in better visual perception of the fused image, while also comparing favorably to previous methods on various quantitative assessment metrics. Comment: Accepted at MICCAI-MBIA 201
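    A minimal version of an SSIM-driven fusion objective can be sketched as follows. The uniform averaging window, the equal weighting of the two modalities, and the NumPy/SciPy formulation (rather than the CNN training framework used in the paper) are assumptions made purely for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim(x, y, win=7, data_range=1.0, k1=0.01, k2=0.03):
    """Mean structural similarity between two grayscale images scaled to [0, data_range]."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ssim_map.mean()

def fusion_loss(fused, mri, pet):
    """SSIM-based loss: penalize dissimilarity of the fused image to both inputs equally."""
    return (1.0 - ssim(fused, mri)) + (1.0 - ssim(fused, pet))
```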

    An Efficient Algorithm for Multimodal Medical Image Fusion based on Feature Selection and PCA Using DTCWT (FSPCA-DTCWT)

    Background: During the past two decades, medical image fusion has become an essential part of modern medicine due to the availability of numerous imaging modalities (e.g., MRI, CT, SPECT, etc.). This paper presents a new medical image fusion algorithm based on PCA and DTCWT, which uses different fusion rules to obtain a new image containing more information than any of the input images.
    Methods: The new image fusion algorithm improves the visual quality of the fused image based on feature selection and Principal Component Analysis (PCA) in the Dual-Tree Complex Wavelet Transform (DTCWT) domain. It is called Feature Selection with Principal Component Analysis and Dual-Tree Complex Wavelet Transform (FSPCA-DTCWT). Using different fusion rules in a single algorithm results in a correctly reconstructed (fused) image; this combination produces a new technique that exploits the advantages of each method. The DTCWT offers good directionality, since it considers edge information in six directions, and provides approximate shift invariance. The main goal of PCA is to extract the most significant characteristics (represented by the wavelet coefficients) in order to improve the spatial resolution. The proposed algorithm fuses the detailed wavelet coefficients of the input images using a feature selection rule.
    Results: Several experiments have been conducted over different sets of multimodal medical images, such as CT/MRI and MRA/T1-MRI; due to the page limit, only results for three sets are presented. The FSPCA-DTCWT algorithm is compared to eight recent fusion methods from the literature in terms of visual quality and quantitatively using five well-known fusion performance metrics. Results show that the proposed algorithm outperforms the existing ones in both visual and quantitative evaluations.
    Conclusion: This paper focuses on medical image fusion of different modalities. A novel image fusion algorithm based on DTCWT to merge multimodal medical images has been proposed. Experiments have been performed using two different sets of multimodal medical images. The results show that the proposed fusion method significantly outperforms the recent fusion techniques reported in the literature.
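    The PCA part of the fusion rule can be illustrated with a short sketch. The snippet below, assuming corresponding DTCWT subbands from the two source images are available as 2-D arrays, derives fusion weights from the dominant eigenvector of the coefficients' covariance matrix and combines the subbands by a weighted sum; the abstract does not detail the FSPCA-DTCWT feature-selection rule, so this covariance-based weighting is a generic stand-in rather than the paper's exact procedure.

```python
import numpy as np

def pca_fusion_weights(band_a, band_b):
    """Fusion weights from the dominant principal component of two coefficient subbands."""
    data = np.stack([band_a.ravel(), band_b.ravel()])
    cov = np.cov(data)                      # 2x2 covariance of the two subbands
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])
    return principal / principal.sum()      # normalize so the weights sum to 1

def fuse_subbands(band_a, band_b):
    """Weighted combination of two corresponding subbands."""
    w_a, w_b = pca_fusion_weights(band_a, band_b)
    return w_a * band_a + w_b * band_b
```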

    Generative Adversarial Network (GAN) for Medical Image Synthesis and Augmentation

    Medical image processing aided by artificial intelligence (AI) and machine learning (ML) significantly improves medical diagnosis and decision making. However, the difficulty of accessing well-annotated medical images has become one of the main constraints on further improving this technology. The generative adversarial network (GAN) is a deep neural network (DNN) framework for data synthesis, which provides a practical solution for medical image augmentation and translation. In this study, we first perform a quantitative survey of the published studies on GANs for medical image processing since 2017. Then a novel adaptive cycle-consistent adversarial network (Ad CycleGAN) is proposed. We use a malaria blood cell dataset (19,578 images) and a COVID-19 chest X-ray dataset (2,347 images), respectively, to test the new Ad CycleGAN. The quantitative metrics include mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), universal image quality index (UIQI), spatial correlation coefficient (SCC), spectral angle mapper (SAM), visual information fidelity (VIF), Fréchet inception distance (FID), and the classification accuracy of the synthetic images. The CycleGAN and variational autoencoder (VAE) are also implemented and evaluated for comparison. The experimental results on malaria blood cell images indicate that the Ad CycleGAN generates more valid images than CycleGAN or VAE, and the synthetic images produced by Ad CycleGAN or CycleGAN have better quality than those produced by VAE. The synthetic images by Ad CycleGAN have the highest classification accuracy of 99.61%. In the experiment on COVID-19 chest X-rays, the synthetic images by Ad CycleGAN or CycleGAN again have higher quality than those generated by the VAE; however, the synthetic images generated through the homogeneous image augmentation process have better quality than those synthesized through the image translation process. The synthetic images by Ad CycleGAN achieve a higher classification accuracy of 95.31%, compared with 93.75% for the images by CycleGAN. In conclusion, the proposed Ad CycleGAN provides a new path to synthesize medical images with desired diagnostic or pathological patterns. It can be considered a new conditional GAN approach with effective control over the synthetic image domain. The findings offer a new path to improve deep neural network performance in medical image processing.
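    Several of the listed quality metrics are straightforward to reproduce. The sketch below shows minimal MSE, RMSE, and PSNR computations, assuming 8-bit grayscale images; metrics such as UIQI, SAM, VIF, and FID require considerably more involved implementations and are not shown.

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a synthetic image."""
    return np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)

def rmse(ref, img):
    """Root mean squared error."""
    return np.sqrt(mse(ref, img))

def psnr(ref, img, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    err = mse(ref, img)
    return np.inf if err == 0 else 10.0 * np.log10(data_range ** 2 / err)
```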