6 research outputs found

    A Novel Fusion Framework Based on Adaptive PCNN in NSCT Domain for Whole-Body PET and CT Images

    Fused PET and CT images, which combine anatomical and functional information, have important clinical value. This paper proposes a novel fusion framework based on adaptive pulse-coupled neural networks (PCNNs) in the nonsubsampled contourlet transform (NSCT) domain for fusing whole-body PET and CT images. Firstly, the gradient average of each pixel is chosen as the linking strength of the PCNN model to implement self-adaptability. Secondly, to improve fusion performance, the novel sum-modified Laplacian (NSML) and energy of edge (EOE) features are extracted as the external inputs of the PCNN models for the low- and high-pass subbands, respectively. Lastly, the rule of maximum region energy is adopted as the fusion rule, with different energy templates employed in the low- and high-pass subbands. Experimental results on whole-body PET and CT data (239 slices per modality) show that the proposed framework outperforms six other methods in terms of seven commonly used fusion performance metrics.
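
    As a rough illustration of the selection mechanism described above, the sketch below (Python with NumPy/SciPy; all function names are my own, and the subbands are assumed to come from an NSCT decomposition computed elsewhere) runs a simplified PCNN whose linking strength is the local gradient average and whose external input is the NSML feature, then picks, per pixel, the coefficient whose neuron fired more often. This is a minimal reading of the abstract, not the authors' implementation.

        import numpy as np
        from scipy.ndimage import convolve, uniform_filter

        def gradient_average(band, win=3):
            # per-pixel linking strength: local mean of the gradient magnitude
            gy, gx = np.gradient(band.astype(float))
            return uniform_filter(np.hypot(gx, gy), win)

        def nsml(band, win=3, step=1):
            # sum-modified Laplacian, accumulated over a small window
            f = band.astype(float)
            ml = (np.abs(2 * f - np.roll(f, step, 0) - np.roll(f, -step, 0))
                  + np.abs(2 * f - np.roll(f, step, 1) - np.roll(f, -step, 1)))
            return uniform_filter(ml, win)

        def pcnn_fire_counts(stimulus, beta, n_iter=30):
            # simplified PCNN: a neuron fires when its internal activity
            # exceeds a threshold that decays over time and recharges on firing
            link_kernel = np.array([[0.5, 1.0, 0.5],
                                    [1.0, 0.0, 1.0],
                                    [0.5, 1.0, 0.5]])
            y = np.zeros_like(stimulus)
            theta = np.full_like(stimulus, stimulus.max())
            fired = np.zeros_like(stimulus)
            for _ in range(n_iter):
                link = convolve(y, link_kernel, mode='nearest')  # neighbour linking input
                u = stimulus * (1.0 + beta * link)               # internal activity
                y = (u > theta).astype(float)                    # binary firing map
                theta = 0.7 * theta + 20.0 * y                   # decay, recharge where fired
                fired += y
            return fired

        def fuse_lowpass(band_a, band_b):
            # keep, per pixel, the coefficient whose neuron fired more often
            fa = pcnn_fire_counts(nsml(band_a), gradient_average(band_a))
            fb = pcnn_fire_counts(nsml(band_b), gradient_average(band_b))
            return np.where(fa >= fb, band_a, band_b)

    The high-pass subbands would be handled the same way with the EOE feature in place of NSML, and the paper's max-region-energy rule with per-subband energy templates would replace the simple pointwise comparison used here.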

    An Efficient Algorithm for Multimodal Medical Image Fusion based on Feature Selection and PCA Using DTCWT (FSPCA-DTCWT)

    Background: During the past two decades, medical image fusion has become an essential part of modern medicine due to the availability of numerous imaging modalities (e.g., MRI, CT, SPECT). This paper presents a new medical image fusion algorithm based on PCA and DTCWT, which uses different fusion rules to obtain a new image containing more information than any of the input images.

    Methods: The new algorithm improves the visual quality of the fused image based on feature selection and Principal Component Analysis (PCA) in the Dual-Tree Complex Wavelet Transform (DTCWT) domain; it is called Feature Selection with Principal Component Analysis and Dual-Tree Complex Wavelet Transform (FSPCA-DTCWT). Using different fusion rules within a single algorithm results in a correctly reconstructed fused image, combining the advantages of each method. The DTCWT offers good directionality, since it captures edge information in six directions, and provides approximate shift invariance. PCA extracts the most significant characteristics (represented by the wavelet coefficients) in order to improve spatial resolution. The proposed algorithm fuses the detail wavelet coefficients of the input images using a feature-selection rule.

    Results: Several experiments were conducted over different sets of multimodal medical images, such as CT/MRI and MRA/T1-MRI; due to page limits, only the results of three sets are presented. The FSPCA-DTCWT algorithm is compared to eight recent fusion methods from the literature, both in terms of visual quality and quantitatively using five well-known fusion performance metrics. The results show that the proposed algorithm outperforms the existing methods in both visual and quantitative evaluations.

    Conclusion: This paper focuses on the fusion of medical images from different modalities. A novel image fusion algorithm based on DTCWT for merging multimodal medical images has been proposed. Experiments performed on different sets of multimodal medical images show that the proposed fusion method significantly outperforms recent fusion techniques reported in the literature.
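
    A compact sketch of such a pipeline, assuming the open-source dtcwt Python package (not necessarily what the authors used): the low-pass bands are combined with PCA-derived weights, while a magnitude-maximum rule stands in for the paper's feature-selection rule on the six directional high-pass bands, which the abstract does not spell out. All names below are illustrative.

        import numpy as np
        import dtcwt  # pip install dtcwt

        def pca_weights(band_a, band_b):
            # principal eigenvector of the 2x2 covariance of the two bands
            # gives the fusion weights (normalised to sum to one)
            cov = np.cov(np.stack([band_a.ravel(), band_b.ravel()]))
            principal = np.abs(np.linalg.eigh(cov)[1][:, -1])  # eigh sorts ascending
            return principal / principal.sum()

        def fuse_fspca_dtcwt(img_a, img_b, nlevels=4):
            transform = dtcwt.Transform2d()
            pa = transform.forward(img_a.astype(float), nlevels=nlevels)
            pb = transform.forward(img_b.astype(float), nlevels=nlevels)
            # low-pass: PCA-weighted average keeps the dominant content
            wa, wb = pca_weights(pa.lowpass, pb.lowpass)
            lowpass = wa * pa.lowpass + wb * pb.lowpass
            # high-pass (6 directions per level): keep the stronger coefficient
            highpasses = tuple(
                np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                for ha, hb in zip(pa.highpasses, pb.highpasses)
            )
            return transform.inverse(dtcwt.Pyramid(lowpass, highpasses))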

    A novel multispectral and 2.5D/3D image fusion camera system for enhanced face recognition

    The fusion of images from the visible and long-wave infrared (thermal) portions of the spectrum produces images with improved face recognition performance under varying lighting conditions. This is because long-wave infrared images are the result of emitted, rather than reflected, light and are therefore less sensitive to changes in ambient light. Similarly, 3D and 2.5D images have also improved face recognition under varying pose and lighting. The opacity of glass to long-wave infrared light, however, means that the presence of eyeglasses in a face image reduces recognition performance. This thesis presents the design and performance evaluation of a novel camera system capable of capturing spatially registered visible, near-infrared, long-wave infrared and 2.5D depth video images via a common optical path, requiring no spatial registration between sensors beyond scaling for differences in sensor sizes. Experiments using a range of established face recognition methods and multi-class SVM classifiers show that the fused output from the camera system not only outperforms the single-modality images for face recognition, but that the adaptive fusion methods used produce consistent increases in recognition accuracy under varying pose and lighting and in the presence of eyeglasses.
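
    The registration claim is easy to picture in code: with all sensors behind one optical path, aligning a frame from one sensor to another reduces to resampling for the difference in sensor resolution. The snippet below is illustrative only; the function names and the fixed-weight blend are my own, not the thesis's adaptive fusion method.

        import numpy as np
        from scipy.ndimage import zoom

        def register_by_scaling(frame, target_shape):
            # common optical path: alignment reduces to scaling for sensor size
            factors = (target_shape[0] / frame.shape[0],
                       target_shape[1] / frame.shape[1])
            return zoom(frame.astype(float), factors, order=1)  # bilinear resampling

        def blend(visible, thermal, weight=0.5):
            # fixed-weight blend; the thesis adapts the weights to conditions instead
            thermal_aligned = register_by_scaling(thermal, visible.shape)
            return weight * visible.astype(float) + (1.0 - weight) * thermal_aligned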