6,350 research outputs found
Multi-modal Medical Neurological Image Fusion using Wavelet Pooled Edge Preserving Autoencoder
Medical image fusion integrates the complementary diagnostic information of
the source image modalities for improved visualization and analysis of
underlying anomalies. Recently, deep learning-based models have outperformed
conventional fusion methods by performing feature extraction, feature
selection, and feature fusion simultaneously. However, most existing
convolutional neural network (CNN) architectures use conventional pooling or
strided convolution to downsample the feature maps. This causes blurring or
loss of important diagnostic information and edge details in the source images
and dilutes the efficacy of feature extraction. Therefore, this paper presents
an end-to-end unsupervised fusion model for multimodal medical images based on
an edge-preserving dense autoencoder network. In the proposed model, feature
extraction is improved by wavelet decomposition-based attention pooling of
feature maps, which preserves the fine edge details present in both source
images and enhances the visual perception of the fused images. Further, the
model is trained on a variety of medical image pairs, which helps capture the
intensity distributions of the source images and preserve the diagnostic
information effectively. Extensive experiments demonstrate that the proposed
method provides improved visual and quantitative results compared with other
state-of-the-art fusion methods.
Comment: 8 pages, 5 figures, 6 tables
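The wavelet-pooling idea in this abstract can be sketched in a few lines: a one-level 2-D Haar decomposition halves spatial resolution like 2x2 pooling while also exposing the detail bands, which can then drive an edge-attention weight on the approximation band. The `haar_wavelet_pool` helper and its attention formula below are hypothetical illustrations of the concept, not the authors' exact formulation.

```python
import numpy as np

def haar_wavelet_pool(x):
    """One-level 2-D Haar decomposition used as a pooling step.

    Returns the approximation band reweighted by an edge-attention map
    derived from the detail bands (a hypothetical sketch of the
    wavelet-pooling idea, not the paper's exact design).
    """
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    cA = (a + b + c + d) / 2.0   # approximation (low-pass)
    cH = (a - b + c - d) / 2.0   # horizontal detail
    cV = (a + b - c - d) / 2.0   # vertical detail
    cD = (a - b - c + d) / 2.0   # diagonal detail
    edge_energy = cH**2 + cV**2 + cD**2
    # emphasize locations with strong detail (edge) response
    attention = 1.0 + edge_energy / (edge_energy.max() + 1e-8)
    return cA * attention        # edge-aware downsampled feature map

feat = np.arange(16, dtype=float).reshape(4, 4)
pooled = haar_wavelet_pool(feat)
print(pooled.shape)  # (2, 2): spatially halved like 2x2 pooling
```

Unlike max or average pooling, the detail bands discarded by a plain downsampling step are used here to decide which approximation coefficients deserve extra weight.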
Medical Diagnosis with Multimodal Image Fusion Techniques
Image fusion is an effective approach for drawing out all the significant information from the source images, which supports experts in evaluation and quick decision making. Multimodal medical image fusion produces a composite fused image from multiple sources to improve quality and extract complementary information. It is extremely challenging to gather every piece of information needed using just one imaging method; therefore, images obtained from different modalities are fused, and additional clinical information can be gleaned through the fusion of several types of medical image pairings. This study's main aim is to present a thorough review of medical image fusion techniques, covering the steps in the fusion process, the levels of fusion, various imaging modalities with their pros and cons, and the major scientific difficulties encountered in the area of medical image fusion. This paper also summarizes the quality-assessment fusion metrics. The approaches used by image fusion algorithms presently available in the literature are classified into four broad categories: (i) spatial fusion methods, (ii) multiscale decomposition-based methods, (iii) neural network-based methods, and (iv) fuzzy logic-based methods. The benefits and pitfalls of the existing literature are explored and future insights are suggested. Moreover, this study is anticipated to create a solid platform for the development of better fusion techniques in medical applications.
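As a toy illustration of the review's first category (spatial fusion methods), pixel-wise rules combine two registered, same-size images directly in the intensity domain. The `spatial_fuse` helper below is a hypothetical sketch, not any specific method surveyed in the paper.

```python
import numpy as np

def spatial_fuse(img_a, img_b, rule="max"):
    """Minimal spatial-domain fusion of two registered, same-size
    grayscale images, pixel by pixel. A toy illustration of the
    spatial-fusion category, not a clinical-grade method."""
    if rule == "max":    # keep the brighter (often more salient) pixel
        return np.maximum(img_a, img_b)
    if rule == "mean":   # average intensities; smooths but may blur edges
        return (img_a + img_b) / 2.0
    raise ValueError(f"unknown rule: {rule}")

a = np.array([[0.1, 0.9], [0.4, 0.2]])
b = np.array([[0.8, 0.3], [0.5, 0.1]])
fused = spatial_fuse(a, b)  # element-wise maximum of a and b
```

The simplicity of such rules is also their pitfall: operating on raw intensities ignores multiscale structure, which is what the decomposition-based categories address.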
A Novel Fusion Framework Based on Adaptive PCNN in NSCT Domain for Whole-Body PET and CT Images
Fused PET and CT images, which combine anatomical and functional information, have important clinical value. This paper proposes a novel fusion framework based on adaptive pulse-coupled neural networks (PCNNs) in the nonsubsampled contourlet transform (NSCT) domain for fusing whole-body PET and CT images. Firstly, the gradient average of each pixel is chosen as the linking strength of the PCNN model to make it self-adaptive. Secondly, to improve fusion performance, the novel sum-modified Laplacian (NSML) and energy of edge (EOE) are extracted as the external inputs of the PCNN models for the low- and high-pass subbands, respectively. Lastly, the rule of max region energy is adopted as the fusion rule, with different energy templates employed in the low- and high-pass subbands. The experimental results on whole-body PET and CT data (239 slices per modality) show that the proposed framework outperforms the other six methods in terms of the seven commonly used fusion performance metrics.
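The max-region-energy rule described above can be sketched for a single subband: each coefficient is taken from whichever source has the larger local energy in a small window. The window size (3x3), edge padding, and helper names below are assumptions for illustration, not the paper's exact low- and high-pass energy templates.

```python
import numpy as np

def local_energy(c, k=3):
    """Sum of squared coefficients over a k x k window (edge-padded)."""
    p = k // 2
    cp = np.pad(c ** 2, p, mode="edge")
    h, w = c.shape
    return sum(cp[i:i + h, j:j + w] for i in range(k) for j in range(k))

def fuse_subband(ca, cb):
    """Max-region-energy rule on one subband: pick each coefficient
    from the source with larger local energy (a sketch of the fusion
    rule, not the paper's PCNN-driven formulation)."""
    mask = local_energy(ca) >= local_energy(cb)
    return np.where(mask, ca, cb)

ca = np.zeros((4, 4)); ca[1, 1] = 2.0  # one strong edge coefficient
cb = np.full((4, 4), 0.5)              # uniformly weak coefficients
fused = fuse_subband(ca, cb)
print(fused[1, 1], fused[3, 3])  # 2.0 0.5
```

Coefficients near the strong edge in `ca` win the energy comparison, while flat regions fall back to `cb`; in the paper this selection is additionally modulated by the PCNN firing maps.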
Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection for visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail through inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.