4 research outputs found

    Multi-modal Medical Neurological Image Fusion using Wavelet Pooled Edge Preserving Autoencoder

    Full text link
    Medical image fusion integrates the complementary diagnostic information of the source image modalities for improved visualization and analysis of underlying anomalies. Recently, deep learning-based models have surpassed conventional fusion methods by performing feature extraction, feature selection, and feature fusion simultaneously. However, most existing convolutional neural network (CNN) architectures use conventional pooling or strided convolution strategies to downsample the feature maps. This causes blurring or loss of important diagnostic information and edge details present in the source images and dilutes the efficacy of the feature extraction process. Therefore, this paper presents an end-to-end unsupervised fusion model for multimodal medical images based on an edge-preserving dense autoencoder network. In the proposed model, feature extraction is improved by using wavelet decomposition-based attention pooling of the feature maps. This helps preserve the fine edge detail present in both source images and enhances the visual perception of the fused images. Further, the proposed model is trained on a variety of medical image pairs, which helps in capturing the intensity distributions of the source images and preserving the diagnostic information effectively. Substantial experiments demonstrate that the proposed method provides improved visual and quantitative results compared to other state-of-the-art fusion methods. Comment: 8 pages, 5 figures, 6 tables
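    To make the pooling idea concrete, below is a minimal PyTorch sketch of wavelet decomposition-based attention pooling: a single-level Haar DWT replaces strided downsampling, and a small channel-attention block re-weights the four subbands so edge detail is not discarded. The Haar wavelet, the squeeze-and-excitation style attention, and the class name HaarWaveletAttentionPool are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HaarWaveletAttentionPool(nn.Module):
    """Minimal sketch of wavelet-based attention pooling: a single-level Haar
    DWT halves the spatial size, and learned channel attention re-weights the
    four subbands (LL, LH, HL, HH) so high-frequency edge detail can survive
    the downsampling step.  Not the authors' exact layer."""

    def __init__(self, channels: int):
        super().__init__()
        self.channels = channels
        # Squeeze-and-excitation style attention over the 4 stacked subbands.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(4 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 4 * channels, kernel_size=1),
            nn.Sigmoid(),
        )

    @staticmethod
    def haar_dwt(x: torch.Tensor):
        # Single-level 2D Haar DWT via 2x2 strided slicing (even-sized inputs).
        a = x[:, :, 0::2, 0::2]  # top-left
        b = x[:, :, 0::2, 1::2]  # top-right
        c = x[:, :, 1::2, 0::2]  # bottom-left
        d = x[:, :, 1::2, 1::2]  # bottom-right
        ll = (a + b + c + d) / 2
        lh = (a + b - c - d) / 2
        hl = (a - b + c - d) / 2
        hh = (a - b - c + d) / 2
        return ll, lh, hl, hh

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ll, lh, hl, hh = self.haar_dwt(x)
        subbands = torch.cat([ll, lh, hl, hh], dim=1)   # (B, 4C, H/2, W/2)
        weighted = subbands * self.attn(subbands)       # channel attention
        # Collapse the re-weighted subbands back to C channels at half size.
        b, _, h, w = weighted.shape
        return weighted.view(b, 4, self.channels, h, w).sum(dim=1)


if __name__ == "__main__":
    pool = HaarWaveletAttentionPool(channels=16)
    feats = torch.randn(1, 16, 64, 64)
    print(pool(feats).shape)  # torch.Size([1, 16, 32, 32])
```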

    A New Multimodal Medical Image Fusion based on Laplacian Autoencoder with Channel Attention

    Full text link
    Medical image fusion combines the complementary information of multimodal medical images to assist medical professionals in the clinical diagnosis of patients' disorders and to provide guidance during preoperative and intra-operative procedures. Deep learning (DL) models have achieved end-to-end image fusion with highly robust and accurate fusion performance. However, most DL-based fusion models perform down-sampling on the input images to minimize the number of learnable parameters and computations. During this process, salient features of the source images become irretrievable, leading to the loss of crucial diagnostic edge details and of the contrast between various brain tissues. In this paper, we propose a new multimodal medical image fusion model based on integrated Laplacian-Gaussian concatenation with attention pooling (LGCA). We demonstrate that our model effectively preserves complementary information and important tissue structures. Comment: 10 pages, 6 figures, % table
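    As a rough illustration of the Laplacian-Gaussian concatenation idea, the sketch below splits a feature map into a Gaussian-smoothed component and a Laplacian edge residual, downsamples both, concatenates them, and applies channel attention so the edge branch is weighted rather than discarded. The 5x5 Gaussian kernel, the average-pooling downsampling, and the class name LaplacianGaussianDownsample are assumptions made for the example; the paper's LGCA module may be structured differently.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Normalized 2D Gaussian kernel used for the low-pass branch."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()


class LaplacianGaussianDownsample(nn.Module):
    """Sketch of a Laplacian-Gaussian concatenation step: the feature map is
    split into a Gaussian (smooth) component and a Laplacian (edge) residual,
    both are downsampled and concatenated, and channel attention re-weights
    the result.  This stands in for the LGCA idea; it is not the paper's code."""

    def __init__(self, channels: int):
        super().__init__()
        self.channels = channels
        k = gaussian_kernel()
        # One depthwise Gaussian filter per channel.
        self.register_buffer("kernel", k.expand(channels, 1, 5, 5).clone())
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        smooth = F.conv2d(x, self.kernel, padding=2, groups=self.channels)
        lap = x - smooth                      # Laplacian residual keeps edges
        low = F.avg_pool2d(smooth, 2)         # downsampled Gaussian branch
        high = F.avg_pool2d(lap, 2)           # downsampled Laplacian branch
        cat = torch.cat([low, high], dim=1)   # Gaussian + Laplacian concatenation
        return cat * self.attn(cat)           # channel attention pooling


if __name__ == "__main__":
    m = LaplacianGaussianDownsample(channels=8)
    print(m(torch.randn(2, 8, 64, 64)).shape)  # torch.Size([2, 16, 32, 32])
```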

    Optimized CT-MR neurological image fusion framework using biologically inspired spiking neural model in hybrid ℓ1−ℓ0 layer decomposition domain

    Full text link
    Medical image fusion plays an important role in the clinical diagnosis of several critical neurological diseases by merging the complementary information available in multimodal images. In this paper, a novel CT-MR neurological image fusion framework is proposed using an optimized biologically inspired feedforward neural model in a two-scale hybrid ℓ1−ℓ0 decomposition domain, with gray wolf optimization employed to preserve both the structural and the texture information present in the source CT and MR images. Initially, the source images are subjected to a two-scale ℓ1−ℓ0 decomposition with optimized parameters, giving a scale-1 detail layer, a scale-2 detail layer, and a scale-2 base layer. The detail layers at scales 1 and 2 are fused using the optimized biologically inspired neural model and a weighted-average scheme based on local energy and modified spatial frequency to maximize the preservation of edges and local textures, respectively, while the scale-2 base layer is fused using the choose-max rule to preserve the background information. To optimize the hyper-parameters of the hybrid ℓ1−ℓ0 decomposition and the biologically inspired neural model, a fitness function is evaluated based on the spatial frequency and edge index of the resultant fused image obtained by adding all the fused components. The fusion performance is analyzed by conducting extensive experiments on different CT-MR neurological images. Experimental results indicate that the proposed method provides better-fused images and outperforms the other state-of-the-art fusion methods in both visual and quantitative assessments.
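    The overall pipeline can be sketched in a few lines of NumPy/SciPy. In the example below, the hybrid ℓ1−ℓ0 decomposition is stood in for by two Gaussian smoothings at increasing scales, and the optimized spiking neural model is replaced by a simple local-energy weighted average, so only the structure (two detail layers fused by weighting, a base layer fused by the choose-max rule, and a final image formed by summing the fused components) follows the abstract; every function name and parameter here is a hypothetical stand-in.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter


def two_scale_decompose(img, sigma1=2.0, sigma2=8.0):
    """Stand-in for the hybrid l1-l0 decomposition: two Gaussian smoothings
    at increasing scales illustrate the pipeline structure only
    (scale-1 detail, scale-2 detail, scale-2 base)."""
    base1 = gaussian_filter(img, sigma1)
    base2 = gaussian_filter(base1, sigma2)
    return img - base1, base1 - base2, base2


def local_energy(layer, size=7):
    # Local energy map used to weight the detail layers.
    return uniform_filter(layer ** 2, size)


def fuse(ct, mr):
    """Fusion sketch: detail layers are combined with a local-energy weighted
    average (standing in for the optimized spiking neural model and the
    modified spatial frequency rule), base layers with the choose-max rule."""
    d1c, d2c, bc = two_scale_decompose(ct)
    d1m, d2m, bm = two_scale_decompose(mr)

    # Scale-1 detail fusion: local-energy weighted average.
    ec, em = local_energy(d1c), local_energy(d1m)
    w = ec / (ec + em + 1e-12)
    fused_d1 = w * d1c + (1 - w) * d1m

    # Scale-2 detail fusion: same weighting rule.
    ec, em = local_energy(d2c), local_energy(d2m)
    w = ec / (ec + em + 1e-12)
    fused_d2 = w * d2c + (1 - w) * d2m

    # Base fusion: choose-max rule preserves the background information.
    fused_base = np.maximum(bc, bm)

    # Final fused image is the sum of all fused components.
    return fused_d1 + fused_d2 + fused_base


if __name__ == "__main__":
    ct = np.random.rand(128, 128)
    mr = np.random.rand(128, 128)
    print(fuse(ct, mr).shape)  # (128, 128)
```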

    Large Multicentric Synchronous Extra-Abdominal Fibromatosis of the Leg and Foot: A Case Report

    No full text
    Extra-abdominal fibromatosis is an uncommon, benign, locally aggressive fibrous soft-tissue tumor that usually occurs in the shoulders, chest wall, back, thigh, and head and neck, affecting the young adult population. It is commonly located in the subcutaneous tissue and may infiltrate the adjacent skeletal muscles. We hereby report a rare case of a large extra-abdominal fibromatosis of the leg and foot in a 38-year-old woman. The patient presented with a large, voluminous lesion that was difficult to diagnose clinically and on imaging. Magnetic resonance imaging (MRI) was very helpful in diagnosing the lesion. It revealed a large, relatively well-defined, lobulated hypointense mass in the posterior compartment of the leg with extension into the lower thigh and foot and local infiltration into the gastrocnemius and soleus muscles. An incisional biopsy was performed, and the mass was diagnosed on pathological examination as a spindle-shaped fibroblast proliferation suggesting extra-abdominal fibromatosis.