
    DTCWTASODCNN: DTCWT based Weighted Fusion Model for Multimodal Medical Image Quality Improvement with ASO Technique & DCNN

    Medical image fusion approaches are sub-categorized into single-mode and multimodal fusion strategies. The limitations of single-mode fusion can be overcome by a multimodal approach, which integrates two or more medical images of the same or different modalities to enhance image quality and preserve image information. Hence, this paper introduces a new way to merge multimodal medical images using a weighted fusion model based on the Dual Tree Complex Wavelet Transform (DTCWT). Two medical images are considered for the fusion process, and DTCWT is applied to each to partition the source images into four sub-bands. A Renyi-entropy-based weighted fusion model then combines the weighted DTCWT coefficients of the images. The final fusion is carried out by a Deep Convolutional Neural Network (DCNN) trained with the Atom Search Sine Cosine Algorithm (ASSCA). Simulation results show that the developed fusion model achieves superior outcomes on the key indicators of Mutual Information (MI), Peak Signal to Noise Ratio (PSNR), and Root Mean Square Error (RMSE), with values of 1.554, 40.45 dB, and 5.554, respectively.
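The Renyi-entropy-based weighting step described above can be sketched as follows. This is a minimal illustration with NumPy only: the DTCWT decomposition itself (e.g. via the `dtcwt` package) is not shown, and the entropy order `alpha` and histogram bin count are assumed parameters, not values from the paper.

```python
import numpy as np

def renyi_entropy(band, alpha=2.0, bins=64):
    """Renyi entropy of a coefficient sub-band, estimated from its
    normalized histogram (alpha and bins are illustrative choices)."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def entropy_weighted_fuse(band_a, band_b, alpha=2.0):
    """Fuse two corresponding sub-bands; the band carrying more
    information (higher Renyi entropy) receives the larger weight."""
    ea = renyi_entropy(band_a, alpha)
    eb = renyi_entropy(band_b, alpha)
    wa = ea / (ea + eb)  # weights sum to 1 by construction
    return wa * band_a + (1.0 - wa) * band_b
```

In the full pipeline this rule would be applied per sub-band pair produced by the DTCWT before the DCNN performs the final fusion.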

    WeAbDeepCNN: Weighted Average Model and ASSCA based Two Level Fusion Scheme For Multi-Focus Images

    Image fusion is a strategy that merges several partially focused or unfocused images of a single scene into one fully focused, clear, and sharp image. The goal of this research is to discover the focused regions of different source images and combine them into a single image. However, image fusion faces several issues, including contrast reduction, block artifacts, and artificial edges. To address these, a two-level fusion scheme has been devised that combines a weighted average model with an Atom Search Sine Cosine Algorithm-based Deep Convolutional Neural Network (ASSCA-based Deep CNN), abbreviated "WeAbDeepCNN". In this study, two images are fed to the initial fusion module, which uses the weighted average model; the fusion scores it generates are determined in an optimal manner. Final fusion is then performed by the proposed ASSCA-based Deep CNN, whose training is carried out with ASSCA, devised by combining the Sine Cosine Algorithm (SCA) with Atom Search Optimization (ASO). The proposed ASSCA-based Deep CNN offers improved performance over current state-of-the-art techniques, with a highest Mutual Information (MI) of 1.52, a highest Peak Signal to Noise Ratio (PSNR) of 32.55 dB, and a lowest Root Mean Square Error (RMSE) of 7.59.
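An initial weighted-average fusion for multi-focus images can be sketched as below. The paper does not specify its weighting rule, so this sketch assumes a common focus measure: per-block local variance (sharp, in-focus blocks have higher variance), with the block size `bs` as an illustrative parameter.

```python
import numpy as np

def block_variance(img, bs):
    """Variance of each non-overlapping bs x bs block."""
    h, w = img.shape
    v = img[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
    return v.var(axis=(1, 3))

def weighted_average_fuse(a, b, bs=8, eps=1e-12):
    """Initial fusion: per-block weights proportional to local variance,
    so the better-focused source dominates each region."""
    va = block_variance(a, bs)
    vb = block_variance(b, bs)
    wa = (va + eps) / (va + vb + 2 * eps)   # block weights in (0, 1)
    wa = np.kron(wa, np.ones((bs, bs)))     # upsample weights to pixel grid
    h, w = wa.shape
    return wa * a[:h, :w] + (1 - wa) * b[:h, :w]
```

In the two-level scheme, the output of this stage would then be refined by the ASSCA-trained Deep CNN.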

    Haar Adaptive Taylor-ASSCA-DCNN: A Novel Fusion Model for Image Quality Enhancement

    In medical imaging, image fusion plays a prominent role in extracting complementary information from different medical image modalities, and using multiple modalities markedly improves the information available for treatment, since each modality captures specific data about the subject being imaged. Various techniques have been devised to address the fusion problem, but their major issue is the loss of key features in the fused image, which also leads to unwanted artefacts. This paper devises an adaptive optimization-driven deep fusion model for medical images to obtain the essential information for diagnosis and research. The proposed fusion scheme, based on the Haar wavelet and an Adaptive Taylor ASSCA Deep CNN, defines fusion rules to amalgamate pairs of Magnetic Resonance Imaging (MRI) scans such as T1 and T2. Experimental analysis shows that the proposed method preserves edge- and component-related information, and tumour detection efficiency is also increased. Two MRI images are taken as input, and the Haar wavelet is applied to both to transform them into low- and high-frequency sub-bands. Fusion is then performed with a correlation-based weighted model, and the resulting output is passed to a final fusion stage executed by a Deep Convolutional Neural Network (DCNN). The Deep CNN is trained using the Adaptive Taylor Atom Search Sine Cosine Algorithm (Adaptive Taylor ASSCA), obtained by integrating an adaptive concept into Taylor ASSCA. The proposed Adaptive Taylor ASO + SCA-based Deep CNN attained the highest MI of 1.672532 using the db2 wavelet for image pair 1, the highest PSNR of 42.20993 dB using the db2 wavelet for image pair 5, and the lowest RMSE of 5.204896 using the sym2 wavelet for image pair 5.
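The Haar decomposition and correlation-based weighting described above can be sketched as follows. This is a single-level 2-D Haar transform in plain NumPy, and since the paper's exact correlation rule is not given, the sketch assumes a common correlation-thresholded scheme (threshold `thr` is an illustrative value): highly correlated sub-bands are averaged with energy-based weights, while weakly correlated ones keep the more energetic band.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform -> (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages (low-pass)
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences (high-pass)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def corr_weighted_fuse(sa, sb, thr=0.7):
    """Fuse two corresponding sub-bands from the T1 and T2 inputs using
    a correlation-thresholded weighted rule (assumed, not from the paper)."""
    r = np.corrcoef(sa.ravel(), sb.ravel())[0, 1]
    ea, eb = (sa ** 2).sum(), (sb ** 2).sum()
    if r >= thr:                       # similar content: energy-weighted mean
        wa = ea / (ea + eb)
        return wa * sa + (1 - wa) * sb
    return sa if ea >= eb else sb      # dissimilar: keep the stronger band
```

In the full pipeline, the sub-bands fused this way would then feed the final DCNN stage trained with Adaptive Taylor ASSCA.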