22 research outputs found

    New Technique for Image Fusion Using DDWT and PSO In Medical field

    Image fusion is a process in which multiple images of the same object, from the same or different modalities, are combined into a single fused image. The fused image is more informative, descriptive, and qualitative than any of the individual input images. In medical imaging, fusion supports efficient disease diagnosis for doctors of varying experience. This paper presents a multimodality medical image fusion algorithm based on the dual-tree discrete wavelet transform (DDWT) and particle swarm optimization (PSO), with results evaluated and analyzed using six quantitative metrics specified in the last section of the paper. First, the source images are decomposed by the DDWT into low-frequency and high-frequency coefficients as separate parallel sub-bands. PSO is then used to obtain a proper fusion weight parameter from the high-frequency coefficients of the DDWT-decomposed images; PSO is also used to determine a scalar weight parameter. Finally, the fused image is reconstructed by the inverse DDWT. The quality of the fused image is measured with PSNR, SNR, OCE, MI, and entropy. DOI: 10.17762/ijritcc2321-8169.160410
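The PSO step described above can be sketched as a minimal scalar optimization. This is an illustrative stand-in, not the paper's implementation: the toy objective below substitutes for the paper's fusion-quality measure, and all names and parameter values are assumptions.

```python
import numpy as np

def pso_weight(objective, n_particles=20, iters=50, lo=0.0, hi=1.0, seed=0):
    """Minimal PSO search for a scalar fusion weight in [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)        # particle positions
    v = np.zeros(n_particles)                   # particle velocities
    pbest = x.copy()                            # per-particle best positions
    pbest_val = np.array([objective(xi) for xi in x])
    gbest = pbest[pbest_val.argmax()]           # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social terms
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(xi) for xi in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()]
    return gbest

# Toy objective: pretend fused-image quality peaks at weight 0.6.
best = pso_weight(lambda a: -(a - 0.6) ** 2)
```

In the paper's setting, the objective would score the fused image (e.g. via entropy or MI) for a candidate weight applied to the DDWT high-frequency coefficients.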

    Survey on wavelet based image fusion techniques

    Image fusion is the process of combining multiple images into a single image without distortion or loss of information. Image fusion techniques are broadly classified into spatial-domain and transform-domain methods. Among the latter, wavelet-based fusion techniques are widely used in fields such as medicine, space, and the military for fusing multimodality or multi-focus images. This paper presents and analyses an overview of different wavelet-transform-based methods and their applications to image fusion.
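The transform-domain fusion scheme this survey classifies can be sketched with a one-level Haar transform standing in for the wavelet families it covers. This is a minimal sketch under assumed conventions (even-sized inputs, average rule for approximations, maximum-magnitude rule for details); all names are illustrative.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition (even-sized image assumed)."""
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    ll = (p00 + p01 + p10 + p11) / 4    # approximation
    lh = (p00 - p01 + p10 - p11) / 4    # horizontal detail
    hl = (p00 + p01 - p10 - p11) / 4    # vertical detail
    hh = (p00 - p01 - p10 + p11) / 4    # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img_a, img_b):
    """Average the approximation bands, keep the larger-magnitude detail coefficient."""
    ca, cb = haar2d(img_a), haar2d(img_b)
    ll = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(ca[1:], cb[1:])]
    return ihaar2d(ll, *details)
```

A spatial-domain method, by contrast, would combine the pixel values directly (e.g. a pixel-wise average) without any decomposition step.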

    Comparative Analysis and Fusion of MRI and PET Images based on Wavelets for Clinical Diagnosis

    Nowadays, medical imaging modalities such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Computed Tomography (CT) play a crucial role in clinical diagnosis and treatment planning. The images obtained from each of these modalities contain complementary information about the imaged organ. Image fusion algorithms bring this disparate information together into a single image, allowing doctors to diagnose disorders quickly. This paper proposes a novel technique for the fusion of MRI and PET images based on the YUV color space and the wavelet transform. Quality assessment based on entropy showed that the method achieves promising results for medical image fusion. The paper presents a comparative analysis of the fusion of MRI and PET images using different wavelet families at various decomposition levels for the detection of brain tumors as well as Alzheimer's disease. The quality assessment and visual analysis showed that the Dmey wavelet at decomposition level 3 is optimal for the fusion of MRI and PET images. The paper also compares several fusion rules (average, maximum, and minimum), finding that the maximum rule outperformed the other two.
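The entropy-based quality assessment and the three fusion rules compared above can be sketched as follows. This is an illustrative sketch, not the paper's pipeline: random arrays stand in for the MRI and PET slices, and all names are assumptions.

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit grayscale image, in bits per pixel."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_rule(a, b, rule):
    """Pixel-wise fusion rules compared in the paper: average, maximum, minimum."""
    if rule == "average":
        return ((a.astype(int) + b.astype(int)) // 2).astype(np.uint8)
    if rule == "maximum":
        return np.maximum(a, b)
    return np.minimum(a, b)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in for an MRI slice
b = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in for a PET slice
scores = {r: entropy(fuse_rule(a, b, r)) for r in ("average", "maximum", "minimum")}
```

In the paper these rules would be applied to wavelet coefficients of the Y channel rather than raw pixels, and the rule with the highest entropy score would be preferred.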

    A Review on Data Fusion of Multidimensional Medical and Biomedical Data

    Data fusion aims to provide a more accurate description of a sample than any single source of data alone, while minimizing the uncertainty of the results by combining data from multiple sources. Both goals improve the characterization of samples and may improve clinical diagnosis and prognosis. This paper presents an overview of the advances achieved over the last decades in data fusion approaches in the medical and biomedical fields. We collected approaches for interpreting multiple sources of data in different combinations: image to image, image to biomarker, spectra to image, spectra to spectra, spectra to biomarker, and others. We found that image-to-image fusion is the most prevalent combination and that most data fusion approaches are applied together with deep learning or machine learning methods.

    A Novel Multimodal Image Fusion Method Using Hybrid Wavelet-based Contourlet Transform

    Various image fusion techniques have been studied to meet the requirements of different applications such as concealed weapon detection, remote sensing, urban mapping, surveillance, and medical imaging. Combining two or more images of the same scene or object produces an image better suited to the application. The conventional wavelet transform (WT) has been widely used in image fusion owing to its multi-scale framework and its ability to isolate discontinuities at object edges. More recently, the contourlet transform (CT) has been applied to image fusion to overcome the drawbacks of the WT. The experimental studies in this dissertation show that the contourlet transform is more suitable than the conventional wavelet transform for image fusion. However, the contourlet transform also has major drawbacks. First, its framework provides neither the shift-invariance nor the structural information of the source images that are necessary to enhance fusion performance. Second, unwanted artifacts are produced during image decomposition, caused by setting some transform coefficients to zero for nonlinear approximation. This dissertation proposes a novel fusion method using a hybrid wavelet-based contourlet transform (HWCT) to overcome the drawbacks of both the conventional wavelet and contourlet transforms and to enhance fusion performance. In the proposed method, the Daubechies Complex Wavelet Transform (DCxWT) provides both shift-invariance and structural information, and a Hybrid Directional Filter Bank (HDFB) is used to achieve fewer artifacts and more directional information. The shift-invariance provided by DCxWT is desired during fusion to avoid mis-registration problems:
without shift-invariance, the source images are mis-registered and misaligned, and the fusion results are significantly degraded. DCxWT also provides structural information through the imaginary part of its wavelet coefficients, so more relevant information can be preserved during fusion, giving a better representation of the fused image. Moreover, the HDFB is applied to the fusion framework where the source images are decomposed, providing abundant directional information, lower complexity, and reduced artifacts. The proposed method is applied to five categories of multimodal image fusion, and an experimental study evaluates its performance in each category using suitable quality metrics. Various datasets, fusion algorithms, pre-processing techniques, and quality metrics are used for each category. In every experimental study and analysis, the proposed method produced better fusion results than the conventional wavelet and contourlet transforms; its usefulness as a fusion method has therefore been validated and its high performance verified.
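The shift-variance drawback that motivates DCxWT can be illustrated with the simplest decimated transform, a one-level Haar detail band: shifting the input by a single sample completely changes the detail coefficients of a step edge. This is a minimal sketch, not the dissertation's transforms; all names are illustrative.

```python
import numpy as np

def haar_detail(x):
    """One-level decimated Haar detail coefficients of a 1-D signal (even length)."""
    return (x[0::2] - x[1::2]) / 2.0

# A step edge, then the same edge shifted by one sample.
x = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
x_shift = np.roll(x, 1)

d = haar_detail(x)              # all zeros: the edge falls between analysis pairs
d_shift = haar_detail(x_shift)  # nonzero entries: the same edge now produces detail energy
```

A shift-invariant transform would respond to the edge identically (up to a shift) in both cases, which is why DCxWT is preferred in the proposed fusion framework.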

    Automated Segmentation of Cerebral Aneurysm Using a Novel Statistical Multiresolution Approach

    Cerebral Aneurysm (CA) is a vascular disease that threatens the lives of many adults, affecting almost 1.5-5% of the general population. Sub-Arachnoid Hemorrhage (SAH), caused by a ruptured CA, has high rates of morbidity and mortality. Therefore, radiologists aim to detect and diagnose it at an early stage by analyzing medical images, to prevent or reduce its damage. The analysis process is traditionally done manually. However, with emerging technology, Computer-Aided Diagnosis (CAD) algorithms are adopted in clinics to overcome the disadvantages of the traditional process: the dependency on the radiologist's experience, inter- and intra-observer variability, the probability of error, which grows with the number of medical images to be analyzed, and the artifacts added by the image acquisition methods (i.e., MRA, CTA, PET, RA, etc.), which impede the radiologist's work. For these reasons, many research works propose different segmentation approaches to automate the analysis process of detecting a CA using complementary segmentation techniques; but because developing a robust, reproducible, and reliable algorithm to detect a CA regardless of its shape, size, and location across a variety of acquisition methods is challenging, a diversity of proposed and developed approaches exist, which still suffer from some limitations. This thesis aims to contribute to this research area by adopting two promising techniques based on multiresolution and statistical approaches in the Two-Dimensional (2D) domain. The first technique is the Contourlet Transform (CT), which empowers the segmentation by extracting features not apparent at the normal image scale. The second technique is the Hidden Markov Random Field model with Expectation Maximization (HMRF-EM), which segments the image based on the relationship of neighboring pixels in the contourlet domain.
The developed algorithm reveals promising results on the four tested Three-Dimensional Rotational Angiography (3D RA) datasets, where an objective and a subjective evaluation are carried out. For the objective evaluation, six performance metrics are adopted: accuracy, Dice Similarity Index (DSI), False Positive Ratio (FPR), False Negative Ratio (FNR), specificity, and sensitivity. For the subjective evaluation, one expert and four observers with some medical background assess the segmentation visually. Both evaluations compare the segmented volumes against the ground-truth data.
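The objective metrics listed above can be computed directly from a predicted binary mask and a ground-truth mask. A minimal sketch with toy masks follows; the masks and names are illustrative, not from the thesis's datasets.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Binary-mask metrics: accuracy, Dice, sensitivity, specificity, FPR, FNR."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # true positives
    tn = np.sum(~pred & ~truth)     # true negatives
    fp = np.sum(pred & ~truth)      # false positives
    fn = np.sum(~pred & truth)      # false negatives
    return {
        "accuracy": (tp + tn) / pred.size,
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "fpr": fp / (fp + tn),
        "fnr": fn / (fn + tp),
    }

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True   # toy ground-truth mask
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True     # imperfect segmentation
m = segmentation_metrics(pred, truth)                          # m["dice"] -> 0.5625
```

Note that sensitivity and FNR are complementary (they sum to 1), as are specificity and FPR, so the six metrics contain two redundant pairs; reporting all six matches the thesis's evaluation protocol.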