An improved approach for medical image fusion using sparse representation and Siamese convolutional neural network
Multimodal image fusion is a contemporary branch of medical imaging that aims to increase the accuracy of clinical diagnosis of disease stage and development. Fusing different image modalities is a viable medical imaging approach: it combines the best features of the source images to produce a composite image of higher quality than any of them, which can significantly improve medical diagnosis. Recently, sparse representation (SR) and Siamese convolutional neural network (SCNN) methods have been introduced independently for image fusion. However, results from these approaches exhibit defects such as edge blur, reduced visibility, and blocking artifacts. To remedy these deficiencies, this paper introduces a smart blending approach that combines SR and SCNN for image fusion, comprising three steps. First, the source images are fed into classical orthogonal matching pursuit (OMP), and the SR-fused image is obtained using the max rule, which improves pixel localization. Second, a novel scheme of SCNN-based K-SVD dictionary learning is applied to each source image; the method shows good non-linear behavior, increasing the sparsity of the fused output and better extracting and transferring image details to the fused image. Lastly, the fusion rule step takes a linear combination of the outputs of steps 1 and 2 to obtain the final fused image. The results show that the proposed method is advantageous compared to previous methods, notably in suppressing the artifacts produced by the traditional SR and SCNN models.
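The SR step of the abstract — sparse-coding each source patch with OMP and fusing by the max rule — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dictionary is random rather than learned, the OMP routine is a bare greedy version, and the two vectors stand in for vectorized 8x8 image patches.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with k atoms of D."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D @ coeffs
    return coeffs

def fuse_patches(D, patch_a, patch_b, k=4):
    """Max rule: keep, per atom, the coefficient with the larger magnitude."""
    ca = omp(D, patch_a, k)
    cb = omp(D, patch_b, k)
    fused = np.where(np.abs(ca) >= np.abs(cb), ca, cb)
    return D @ fused

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)      # unit-norm dictionary atoms
a = rng.standard_normal(64)         # stand-ins for two 8x8 source patches
b = rng.standard_normal(64)
fused = fuse_patches(D, a, b)
print(fused.shape)  # → (64,)
```

In a full pipeline this rule would be applied patch by patch over both source images, and the fused patches re-assembled into the composite image.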
DESIGN OF COMPACT AND DISCRIMINATIVE DICTIONARIES
The objective of this research work is to design compact and discriminative dictionaries
for effective classification. The motivation stems from the fact that dictionaries
inherently contain redundant dictionary atoms, because the aim of dictionary
learning is reconstruction, not classification. In this thesis, we propose methods to obtain
the minimum number of discriminative dictionary atoms for effective classification
with reduced computational time.
First, we propose a classification scheme where an example is assigned to a class
based on a weighted combination of maximum projection and minimum reconstruction
error. Here, the input data is learned by K-SVD dictionary learning, which alternates
between sparse coding and dictionary update: orthogonal matching pursuit (OMP) is
used for sparse coding, and singular value decomposition for the dictionary update.
Though effective, this classification scheme leaves scope to improve dictionary
learning by removing redundant atoms, because our goal is not reconstruction.
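The weighted projection/reconstruction-error rule can be sketched as below. This is an illustrative reading under stated assumptions, not the thesis's exact formulation: it assumes one dictionary per class, uses a plain least-squares code in place of OMP, and the weight `w` and the toy dictionaries are invented for the example.

```python
import numpy as np

def class_score(Dc, x, w=0.5):
    """Score of sample x under class dictionary Dc:
    weighted projection energy minus weighted reconstruction error."""
    # least-squares code of x over the class dictionary (OMP in the thesis)
    c, *_ = np.linalg.lstsq(Dc, x, rcond=None)
    projection = np.linalg.norm(Dc @ c)       # energy captured by the class
    recon_error = np.linalg.norm(x - Dc @ c)  # energy left unexplained
    return w * projection - (1.0 - w) * recon_error

def classify(dicts, x, w=0.5):
    """Assign x to the class whose dictionary gives the highest score."""
    scores = [class_score(Dc, x, w) for Dc in dicts]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
dicts = [rng.standard_normal((20, 10)) for _ in range(3)]  # toy per-class dictionaries
x = dicts[2] @ rng.standard_normal(10)                     # sample from class 2's span
print(classify(dicts, x))  # → 2
```

The rule favors the class whose atoms both capture much of the sample's energy (large projection) and leave little residual (small reconstruction error).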
In order to remove such redundant atoms, we propose two approaches
based on information theory to obtain compact discriminative dictionaries. In the
first approach, we remove redundant atoms from the dictionary while maintaining
discriminative information. Specifically, we propose a constrained optimization problem
that minimizes the mutual information between the optimized dictionary and the initial
dictionary while maximizing the mutual information between class labels and the optimized
dictionary. This quantifies the information lost between the dictionary before and after
optimization. To compute this information loss, we use the Jensen-Shannon divergence
with adaptive weights to compare the class distributions of each dictionary atom.
The advantage of the Jensen-Shannon divergence is its computational efficiency,
compared to calculating the information loss directly from mutual information.
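The weighted Jensen-Shannon divergence used to compare per-atom class distributions can be sketched as follows. This is a minimal sketch, assuming the standard definition JS_w(p, q) = w·KL(p‖m) + (1−w)·KL(q‖m) with mixture m = w·p + (1−w)·q; the toy distributions and the fixed weight stand in for the thesis's adaptive weights.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence (bits) between discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def weighted_js(p, q, w=0.5):
    """Weighted Jensen-Shannon divergence: w*KL(p||m) + (1-w)*KL(q||m),
    where m is the weighted mixture of p and q."""
    m = w * p + (1.0 - w) * q
    return w * kl(p, m) + (1.0 - w) * kl(q, m)

# toy class distributions of two dictionary atoms
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
print(weighted_js(p, q))   # large → the atoms are discriminatively different
print(weighted_js(p, p))   # → 0.0 (identical class distributions, redundant atom)
```

Unlike the KL divergence, the mixture `m` is nonzero wherever `p` or `q` is, so the quantity is always finite and symmetric (for w = 0.5) — which is part of why it is cheaper and more robust than estimating the loss from mutual information directly.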