16 research outputs found

    Deep Learning Network with Spatial Attention Module for Detecting Acute Bilirubin Encephalopathy in Newborns Based on Multimodal MRI

    Background: Acute bilirubin encephalopathy (ABE) is a significant cause of neonatal mortality and disability. Early detection and treatment of ABE can prevent its further development and long-term complications. Due to the limited classification ability of single-modal magnetic resonance imaging (MRI), this study aimed to validate the classification performance of a new deep learning model based on multimodal MRI images. Additionally, the study evaluated the effect of a spatial attention module (SAM) on improving the model’s diagnostic performance in distinguishing ABE. Methods: This study enrolled a total of 97 neonates diagnosed with ABE and 80 neonates diagnosed with hyperbilirubinemia (HB, non-ABE). Each patient underwent three types of multimodal imaging: T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and an apparent diffusion coefficient (ADC) map. A multimodal MRI classification model based on the ResNet18 network with spatial attention modules was built to distinguish ABE from non-ABE. All combinations of the three image types were used as inputs to test the model’s classification performance, and the predictive effect of the SAMs was analyzed through comparative experiments. Results: The diagnostic performance of the multimodal image combinations was better than that of any single-modal image, and the combination of T1WI and T2WI achieved the best classification performance (accuracy = 0.808 ± 0.069, area under the curve = 0.808 ± 0.057). The ADC maps performed the worst of the three modalities. Adding spatial attention modules significantly improved the model’s classification performance. Conclusion: Our experiment showed that a multimodal image classification network with spatial attention modules significantly improved the accuracy of ABE classification.
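    The spatial attention idea above can be sketched in a few lines: pool the feature map across channels, turn the pooled maps into a per-location gate in (0, 1), and reweight every channel by that gate. This is a minimal numpy sketch, not the paper's implementation — the learned 7×7 convolution of a typical spatial attention module is replaced here by a scalar weight `w` and bias `b` (both hypothetical) for brevity.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def spatial_attention(features, w=1.0, b=0.0):
        """Simplified spatial attention over a (C, H, W) feature map.

        Channel-wise average and max pooling give two (H, W) maps; their
        gated sum becomes a spatial mask that reweights every channel at
        each location. The real module would learn a small conv instead
        of the scalar (w, b) used here.
        """
        avg_pool = features.mean(axis=0)               # (H, W)
        max_pool = features.max(axis=0)                # (H, W)
        attn = sigmoid(w * (avg_pool + max_pool) + b)  # (H, W), values in (0, 1)
        return features * attn[None, :, :]             # broadcast over channels

    # Toy multimodal input: T1WI, T2WI and ADC stacked as three channels.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((3, 8, 8))
    y = spatial_attention(x)
    print(y.shape)  # (3, 8, 8)
    ```

    Because the mask lies strictly in (0, 1), the module can only attenuate features; locations the mask scores highly are passed through nearly unchanged, which is the mechanism by which attention emphasizes lesion-relevant regions.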

    Image Fusion of CT and MR with Sparse Representation in NSST Domain

    Multimodal image fusion techniques can integrate the information from different medical images into a single informative image that is better suited for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. First, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR)-based approach, and a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of both subjective quality and objective evaluation.
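    The two merging rules above can be illustrated with a small sketch. The NSST decomposition itself (typically done with a shearlet toolbox) is outside its scope; only the coefficient-merging step is shown, and a plain average stands in for the paper's SR/DGSR low-frequency fusion, which is considerably more involved.

    ```python
    import numpy as np

    def fuse_high_freq(hf_a, hf_b):
        """Absolute-maximum rule for high-frequency subbands:
        at each coefficient, keep whichever source has the larger
        magnitude, i.e. the stronger detail response."""
        return np.where(np.abs(hf_a) >= np.abs(hf_b), hf_a, hf_b)

    def fuse_low_freq(lf_a, lf_b):
        """Placeholder for the paper's SR/DGSR low-frequency fusion:
        here simply the average of the two approximation subbands."""
        return 0.5 * (lf_a + lf_b)

    # Toy high-frequency subbands from a CT and an MR decomposition.
    hf_ct = np.array([[1.0, -5.0], [0.2, 3.0]])
    hf_mr = np.array([[-3.0, 2.0], [0.1, -4.0]])
    print(fuse_high_freq(hf_ct, hf_mr))  # [[-3. -5.] [ 0.2 -4. ]]
    ```

    The absolute-maximum rule suits high-frequency bands because large-magnitude coefficients correspond to edges and textures, which is exactly the detail each modality contributes; the low-frequency band carries overall intensity, where a sparse-coding fusion avoids the contrast loss a plain average would cause.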

    A Novel Methodology for Extracting Colon's Lumen from Colonoscopic Images

    Recently, computer-assisted diagnosis on colonoscopic images has been attracting increasing attention from researchers worldwide, and the colon's lumen is the most important feature in this process. In this paper, a novel methodology for extracting the colon's lumen from colonoscopic images is presented. First, to eliminate the background outside the colonoscopic image, an effective and simple method, similar to the Hough transform, is used to detect the preliminary region of interest (pROI). The original image is then segmented in two steps: a relaxation process and a tightening process. The relaxation process finds all the valleys in the histogram of a defined homogeneity function to produce as many homogeneous regions as possible, while the tightening process subsequently merges unnecessary regions according to the color difference between them in the CIE L*a*b* color space. After a series of postprocessing procedures, the lumen is successfully extracted. An extensive set of endoscopic images was tested to demonstrate the effectiveness of the proposed approach.
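    Two of the building blocks above are easy to sketch: finding valleys (local minima) in the homogeneity histogram to obtain split thresholds, and measuring color difference between regions in L*a*b* space to decide merges. This is a minimal sketch under assumptions: the valley test and the CIE76 Delta-E formula are standard, but the paper's actual homogeneity function, merge threshold (`5.0` below is hypothetical), and postprocessing are not reproduced.

    ```python
    import numpy as np

    def find_valleys(hist):
        """Indices of local minima in a 1-D histogram: candidate
        thresholds that separate homogeneous regions (relaxation step)."""
        valleys = []
        for i in range(1, len(hist) - 1):
            if hist[i] < hist[i - 1] and hist[i] <= hist[i + 1]:
                valleys.append(i)
        return valleys

    def delta_e(lab1, lab2):
        """CIE76 color difference: Euclidean distance in L*a*b* space,
        used to judge whether two adjacent regions should be merged
        (tightening step)."""
        return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

    hist = np.array([5, 9, 3, 8, 12, 2, 7])
    print(find_valleys(hist))                        # [2, 5]
    print(delta_e((50, 10, 4), (52, 7, 4)) < 5.0)    # True: similar enough to merge
    ```

    Splitting at every valley deliberately over-segments the image; the subsequent merge pass in L*a*b* space (a perceptually more uniform space than RGB) collapses regions whose color difference falls below the threshold, leaving the lumen as a distinct region.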