An improved approach for medical image fusion using sparse representation and Siamese convolutional neural network

Abstract

Multimodal image fusion is a contemporary branch of medical imaging that aims to increase the accuracy of clinical diagnosis at each stage of disease development. Fusing different image modalities is a viable medical imaging approach: it combines the best features of the source images to produce a composite image of higher quality than any of them, and can significantly improve medical diagnosis. Recently, sparse representation (SR) and Siamese convolutional neural network (SCNN) methods have been introduced independently for image fusion. However, results from these approaches can exhibit defects such as edge blur, reduced visibility, and blocking artifacts. To remedy these deficiencies, this paper introduces a blending approach that combines SR and SCNN for image fusion in three steps. First, the source images are fed into the classical orthogonal matching pursuit (OMP) algorithm, and an SR-fused image is obtained using the max-rule, which improves pixel localization. Second, a novel SCNN-based K-SVD dictionary learning scheme is applied to each source image; its non-linear behavior increases the sparsity of the fused output and yields better extraction and transfer of image details into the fused image. Last, the fusion rule linearly combines the outputs of steps 1 and 2 to obtain the final fused image. The results show that the proposed method outperforms previous methods, notably by suppressing the artifacts produced by the traditional SR and SCNN models.
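The SR branch of the pipeline (step 1) can be sketched as follows under toy assumptions: the functions `omp` and `fuse_patches`, the dictionary size, and the random column-normalized dictionary are illustrative placeholders (the paper learns the dictionary via K-SVD), and single vectorized patches stand in for whole source images.

```python
import numpy as np

def omp(D, x, k):
    """Greedy Orthogonal Matching Pursuit: sparse-code x over dictionary D
    with at most k nonzero coefficients."""
    residual = x.copy()
    idx = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # least-squares fit of x on the atoms selected so far
        sol, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ sol
    coef[idx] = sol
    return coef

def fuse_patches(p1, p2, D, k=4):
    """Fuse two vectorized patches with the max-absolute-value rule on
    their sparse coefficients, then reconstruct via the dictionary."""
    a1, a2 = omp(D, p1, k), omp(D, p2, k)
    fused = np.where(np.abs(a1) >= np.abs(a2), a1, a2)  # max-rule
    return D @ fused

rng = np.random.default_rng(0)
# toy dictionary: random and column-normalized, purely illustrative;
# the paper would use a K-SVD-learned dictionary here
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
p1, p2 = rng.standard_normal(64), rng.standard_normal(64)
out = fuse_patches(p1, p2, D)
print(out.shape)  # fused patch has the same dimension as the inputs
```

In a full implementation, both source images would be decomposed into overlapping patches, each patch pair fused this way, and the results averaged back into the image grid; step 3 would then linearly combine this SR result with the SCNN output.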
