
    Multi-Modal Medical Image Fusion using Multi-Resolution Discrete Sine Transform

    With rapid advances in technology and modern medical instrumentation, medical imaging has become a fundamental part of many applications such as diagnosis, research, and treatment. Images from multimodal imaging devices usually provide complementary and sometimes conflicting information, and information from one image alone may not meet the clinical requirements of the specialist or doctor. Multimodal medical image fusion has therefore become an active research area, and many theories and techniques have been developed to fuse multimodal images. This paper introduces a new algorithm, the Multi-Resolution Discrete Sine Transform (MDST), for multimodal image fusion in medical applications, and presents its performance evaluation. The main intention of this paper is to apply the DST, a simple and well-understood transform, to image fusion. The performance of the proposed MDST-based fusion algorithm is compared with that of the well-known wavelet-based image fusion algorithm. The results show that fusion using MDST performs almost as well as wavelet-based fusion, while the proposed MDST-based algorithms are computationally very simple and therefore suitable for real-time medical diagnosis applications
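The abstract above does not spell out the MDST construction. As a rough illustration of transform-domain fusion with a discrete sine transform, the following sketch uses the DST-I basis and a max-magnitude coefficient-selection rule; both of these choices are assumptions for illustration, not the authors' method:

```python
import numpy as np

def dst1_matrix(n):
    """DST-I basis matrix. It satisfies S @ S = ((n + 1) / 2) * I,
    so it is its own inverse up to the scale factor 2 / (n + 1)."""
    k = np.arange(1, n + 1)
    return np.sin(np.pi * np.outer(k, k) / (n + 1))

def fuse_dst(img_a, img_b):
    """Fuse two same-sized square grayscale images in the sine-transform
    domain, keeping whichever coefficient has the larger magnitude
    (assumed fusion rule)."""
    n = img_a.shape[0]
    S = dst1_matrix(n)
    # 2-D forward transform: S is symmetric, so rows and columns
    # use the same matrix.
    Ta, Tb = S @ img_a @ S, S @ img_b @ S
    fused = np.where(np.abs(Ta) >= np.abs(Tb), Ta, Tb)
    # Inverse transform, applying the DST-I scale factor on each axis.
    inv = (2.0 / (n + 1)) * S
    return inv @ fused @ inv
```

Because fusion happens entirely in the coefficient domain, the round trip with a single input image reconstructs that image exactly, which is a quick sanity check for the transform pair.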

    Wavelet-based medical image fusion via a non-linear operator

    Medical image fusion has been extensively used to aid medical diagnosis by combining images of various modalities, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), into a single output image that contains salient features from both inputs. This paper proposes a novel fusion algorithm based on a non-linear fusion operator applied to the low sub-band coefficients of the Discrete Wavelet Transform (DWT). Rather than employing the conventional mean rule for approximation sub-bands, a modified approach introduces a non-linear fusion rule that exploits the multimodal nature of the inputs by prioritizing the stronger coefficients. Performance evaluation on CT-MRI fusion datasets across a range of wavelet filter banks shows that the algorithm achieves improved scores of up to 92% compared with established methods. Overall, the non-linear fusion rule holds strong potential to improve image fusion applications in medicine and indeed other fields
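The abstract does not give the exact non-linear operator. One plausible reading, replacing the mean rule on the approximation band with a select-the-stronger-coefficient rule, can be sketched with a single-level 2-D Haar DWT; the Haar choice and the max-magnitude rule are assumptions for illustration:

```python
import numpy as np

def haar_step(x, axis):
    """One orthonormal Haar analysis step along the given axis."""
    a = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)  # even samples
    b = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)  # odd samples
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def haar_unstep(lo, hi, axis):
    """Inverse of haar_step: recover and re-interleave the samples."""
    a, b = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    shape = list(lo.shape)
    shape[axis] *= 2
    out = np.empty(shape)
    idx_a, idx_b = [slice(None)] * lo.ndim, [slice(None)] * lo.ndim
    idx_a[axis], idx_b[axis] = slice(0, None, 2), slice(1, None, 2)
    out[tuple(idx_a)], out[tuple(idx_b)] = a, b
    return out

def haar2(img):
    """Single-level 2-D Haar DWT -> (LL, LH, HL, HH) sub-bands."""
    lo, hi = haar_step(img, axis=1)
    ll, lh = haar_step(lo, axis=0)
    hl, hh = haar_step(hi, axis=0)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse single-level 2-D Haar DWT."""
    lo = haar_unstep(ll, lh, axis=0)
    hi = haar_unstep(hl, hh, axis=0)
    return haar_unstep(lo, hi, axis=1)

def fuse_dwt(img_a, img_b):
    """Fuse two images band-by-band, applying the non-linear
    max-magnitude rule to ALL bands -- including the LL approximation
    band, where the conventional choice would be the mean."""
    bands = [np.where(np.abs(ca) >= np.abs(cb), ca, cb)
             for ca, cb in zip(haar2(img_a), haar2(img_b))]
    return ihaar2(*bands)
```

The point of the sketch is the rule on the LL band: averaging blends the two approximations, while selection keeps the dominant modality's coefficient intact at each position.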

    DTCWTASODCNN: DTCWT based Weighted Fusion Model for Multimodal Medical Image Quality Improvement with ASO Technique & DCNN

    Medical image fusion approaches are sub-categorized into single-mode and multimodal fusion strategies. The limitations of single-mode fusion can be overcome by a multimodal approach, which integrates two or more medical images of similar or dissimilar modalities with the aim of enhancing image quality while preserving image information. This paper introduces a new way to fuse multimodal medical images using a weighted fusion model based on the Dual-Tree Complex Wavelet Transform (DTCWT). Two medical images are considered for the fusion process; the DTCWT is applied to each source image to generate four sub-band partitions. A Rényi-entropy-based weighted fusion model is then used to combine the weighted DTCWT coefficients of the images. The final fusion step is carried out using an Atom Search Sine Cosine Algorithm (ASSCA)-based Deep Convolutional Neural Network (DCNN). Simulation results show that the developed fusion model achieves superior outcomes on key indicators, namely Mutual Information (MI), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE), with values of 1.554, 40.45 dB, and 5.554, respectively
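The Rényi-entropy weighting can be illustrated independently of the DTCWT machinery: each source contributes in proportion to the Rényi entropy of its intensity histogram. In the sketch below the weights are applied directly in the pixel domain, and the order α = 2 and bin count are assumptions; the paper applies such weights to DTCWT coefficients:

```python
import numpy as np

def renyi_entropy(img, alpha=2.0, bins=64):
    """Rényi entropy (base 2) of the image's intensity histogram.
    At alpha -> 1 this reduces to Shannon entropy."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    if alpha == 1.0:
        return float(-np.sum(p * np.log2(p)))      # Shannon limit
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

def entropy_weighted_fuse(img_a, img_b, alpha=2.0):
    """Weighted fusion: each image is weighted by its share of the
    total Rényi entropy (pixel-domain stand-in for the paper's
    coefficient-domain weighting)."""
    ha = renyi_entropy(img_a, alpha)
    hb = renyi_entropy(img_b, alpha)
    wa = ha / (ha + hb)
    return wa * img_a + (1.0 - wa) * img_b
```

The design intuition is that a higher-entropy source carries more information, so it receives a proportionally larger weight in the combination.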

    A new framework for the integrative analytics of intravascular ultrasound and optical coherence tomography images

    Abstract: The integrative analysis of multimodal medical images plays an important role in the diagnosis of coronary artery disease by providing additional comprehensive information that cannot be found in an individual source image. Intravascular ultrasound (IVUS) and intravascular optical coherence tomography (IV-OCT) are two imaging modalities widely used in medical practice for the assessment of arterial health and the detection of vascular lumen lesions. IV-OCT has high resolution but poor penetration, while IVUS has low resolution but high detection depth. This paper proposes a new approach for fusing IVUS and IV-OCT pullbacks to significantly improve the use of these two types of medical images. It presents a two-phase multimodal fusion framework using coarse-to-fine registration followed by wavelet fusion. In the coarse-registration step, a set of new feature points is defined to match the IVUS image and the IV-OCT image; the registration is then refined based on the mutual information of the two images. Finally, the registered images are fused with an approach based on the newly proposed wavelet algorithm. Experimental results demonstrate that the proposed approach significantly enhances both precision and computational stability, and it shows promise for providing additional information to enhance diagnosis and enable a deeper understanding of atherosclerosis
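Mutual-information-driven refinement, as used in registration pipelines like the one above, scores a candidate alignment by the MI of the joint intensity histogram of the two images. A minimal estimator might look like the following sketch; the bin count and histogram-based estimation are generic assumptions, not details from the paper:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in bits) between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    mask = pxy > 0
    # MI = sum over cells of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))
```

A registration loop would then search over candidate transforms (shifts, rotations) of one image and keep the transform that maximizes this score; MI peaks when the two modalities' intensities are most predictable from each other.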

    Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes

    Image analysis using more than one modality (i.e. multi-modal analysis) has been increasingly applied in the field of biomedical imaging. One of the challenges in multimodal analysis is that multiple schemes exist for fusing the information from different modalities; such schemes are application-dependent and lack a unified framework to guide their design. In this work we first propose a conceptual architecture for image fusion schemes in supervised biomedical image analysis: fusing at the feature level, fusing at the classifier level, and fusing at the decision-making level. Further, motivated by the recent success of deep learning in natural image analysis, we implement the three image fusion schemes based on Convolutional Neural Networks (CNNs) with varied structures, combined into a single framework. The proposed image segmentation framework is capable of analyzing multi-modality images using different fusion schemes simultaneously. The framework is applied to detect the presence of soft tissue sarcoma from a combination of Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Positron Emission Tomography (PET) images. The results show that while all fusion schemes outperform the single-modality schemes, fusing at the feature level generally achieves the best performance in terms of both accuracy and computational cost, but also suffers from decreased robustness when any image modality contains large errors. Comment: Zhe Guo and Xiang Li contribute equally to this work
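The three fusion levels named in this abstract can be sketched schematically. In the toy code below, tiny linear models stand in for the CNNs, and all names, features, and weights are illustrative assumptions rather than the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def features(img):
    """Toy per-modality feature extractor: a 2-D summary vector."""
    return np.array([img.mean(), img.std()])

def classify(feat, w):
    """Toy linear classifier producing class probabilities."""
    return softmax(w @ feat)

n_classes = 3
w_fused = rng.normal(size=(n_classes, 4))   # sees concatenated features
w_mri = rng.normal(size=(n_classes, 2))     # per-modality classifiers
w_ct = rng.normal(size=(n_classes, 2))

def feature_level(mri, ct):
    """Fuse early: concatenate features, run one classifier on the joint vector."""
    joint = np.concatenate([features(mri), features(ct)])
    return classify(joint, w_fused)

def classifier_level(mri, ct):
    """Fuse mid-way: average the per-modality class probabilities."""
    return (classify(features(mri), w_mri) + classify(features(ct), w_ct)) / 2

def decision_level(mri, ct):
    """Fuse late: each modality casts a hard argmax vote."""
    votes = [np.argmax(classify(features(mri), w_mri)),
             np.argmax(classify(features(ct), w_ct))]
    return np.bincount(votes, minlength=n_classes) / len(votes)
```

The trade-off reported in the abstract follows the same shape: feature-level fusion lets one model exploit cross-modality interactions, but a corrupted modality pollutes the shared feature vector, whereas decision-level fusion isolates each modality at the cost of discarding soft information.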