Research on medical image segmentation with deep learning
Medical image segmentation, which identifies the pixels of homogeneous regions such as organs and lesions and provides important information about their shapes and volumes, plays a critical role in computer-aided diagnosis, image quantification, and surgical planning. However, it is one of the most difficult and tedious tasks for humans to perform consistently. A substantial body of research has therefore proposed semi-automatic and automatic segmentation methods, most of which rely on conventional image processing and machine learning. These methods, however, can be vulnerable to variations in image acquisition, anatomy, and disease, so many researchers continue to seek more robust medical image segmentation methods.
In recent years, deep learning models have been widely applied and popularized in computer vision, and this success has quickly carried over to medical imaging. In particular, deep learning has achieved a leap in precision and robustness with respect to variations in anatomy and disease. Several deep convolutional neural network (CNN) models have been proposed, such as the residual network (ResNet), the Visual Geometry Group network (VGG), the fully convolutional network (FCN), and U-Net. These models provide not only state-of-the-art performance for image classification, segmentation, object detection, and tracking, but also a new perspective on image processing. Deep learning can therefore now assist radiologists and surgeons by segmenting anatomic structures as a reference, as well as multiple abnormalities, in computed tomography (CT) and magnetic resonance imaging (MRI) images.
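For readers unfamiliar with the encoder-decoder design that U-Net popularized, the following is a minimal, illustrative PyTorch sketch. The TinyUNet name, layer sizes, and single down/up level are our own simplifications, not the models evaluated in this research.

```python
# Minimal U-Net-style encoder-decoder sketch (illustrative only).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: downsample, bottleneck, upsample with a skip connection."""
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc = conv_block(in_ch, base)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec = conv_block(base * 2, base)   # skip concatenation doubles channels
        self.head = nn.Conv2d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                          # full-resolution encoder features
        b = self.bottleneck(self.pool(e))        # coarse context at half resolution
        d = self.up(b)                           # back to input resolution
        d = self.dec(torch.cat([d, e], dim=1))   # skip connection restores detail
        return self.head(d)                      # per-pixel class logits

# Usage: logits for one single-channel 128x128 slice, e.g. an MRI slice.
logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # shape (1, 2, 128, 128)
```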
In this research, we conducted various experiments to identify and evaluate deep learning based semantic segmentation models for medical images from the viewpoint of their accuracy in the clinical context. The aim of this study was two-fold: 1) identifying and/or developing a deep learning based semantic segmentation model, and the properties of an imaging modality, that are adequate for the clinical context; and 2) solving specific tasks, including smart labeling with humans in the loop, fine-tuning the models with different label levels on imbalanced datasets, and comparing deep learning with human segmentation where these models are developed and applied. To meet these objectives, we proposed a fully automatic segmentation network built on various CNN models, accounting for organ-, image modality-, and image reconstruction-specific variations. Toward this, segmentation of glioblastoma and acute stroke infarct in brain MRI, the mandible and maxillary sinus in cone-beam computed tomography (CBCT), breast and other tissues in MRI, and pancreatic cancer in contrast-enhanced CT were all performed in actual clinical settings. For images with thicker slices, 2D semantic segmentation shows better performance. In addition, robust segmentation is sensitive to pre-processing, which requires image normalization and various augmentations. Because modern graphics processing units (GPUs) lack the memory for full-volume 3D semantic segmentation, cascaded or patch-based semantic segmentation gives better results. Anatomic variation can be learned easily by semantic segmentation, but disease variation in cancer is hard to learn, and size-invariant semantic segmentation remains an important open issue in medical image segmentation. Variation in contrast agent uptake may degrade the overall performance of semantic segmentation. For multi-center evaluation, subtle variations, including differences in vendors' image protocols and high noise levels at different centers, may hinder the training of robust semantic segmentation. Furthermore, because labeling for semantic segmentation is very tedious and time-consuming, deep learning based smart labeling is needed.
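The normalization and patch-based strategies mentioned above can be sketched as follows. This is an illustrative NumPy-only example: the function names, the 64x64x64 patch, and the half-patch stride are assumptions for the sketch, not the pipeline used in the study.

```python
# Sketch of two issues the abstract raises: intensity normalization for
# robustness, and patch extraction so a 3D volume fits in GPU memory.
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Standardize voxel intensities (a common MRI pre-processing step)."""
    nonzero = volume[volume > 0]                 # ignore background voxels
    return (volume - nonzero.mean()) / (nonzero.std() + 1e-8)

def extract_patches(volume: np.ndarray, patch=(64, 64, 64), stride=(32, 32, 32)):
    """Yield overlapping 3D patches with their origins, for patch-based inference."""
    D, H, W = volume.shape
    for z in range(0, max(D - patch[0], 0) + 1, stride[0]):
        for y in range(0, max(H - patch[1], 0) + 1, stride[1]):
            for x in range(0, max(W - patch[2], 0) + 1, stride[2]):
                yield (z, y, x), volume[z:z+patch[0], y:y+patch[1], x:x+patch[2]]

# Usage: a synthetic 128x256x256 volume yields 3*7*7 = 147 overlapping patches.
vol = zscore_normalize(np.random.rand(128, 256, 256).astype(np.float32))
patches = list(extract_patches(vol))
print(len(patches), patches[0][1].shape)         # 147 (64, 64, 64)
```

Overlapping strides let predictions be averaged where patches meet, which reduces stitching artifacts at patch borders.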
Based on these issues, we developed and evaluated various applications of semantic segmentation in medical images, including smart labeling, robust radiomics analysis, disease pattern segmentation, and automated segmentation.
We concluded that adequate semantic segmentation with deep learning in medical images can improve segmentation quality, which can be helpful for computer-aided diagnosis (CAD), image quantification, and surgical planning in actual clinical settings. Medical image segmentation and its applications may provide practical utility to many physicians and patients, even those without training in sectional anatomy.
Influences of packing materials, applied voltage, gas composition and voltage polarity on the decomposition of toluene and power delivery in a dielectric barrier plasma reactor
Machine learning approach for differentiating cytomegalovirus esophagitis from herpes simplex virus esophagitis
Diffusion and perfusion MRI radiomics obtained from deep learning segmentation provides reproducible and comparable diagnostic model to human in post-treatment glioblastoma
Objectives: Deep learning-based automatic segmentation (DLAS) helps the reproducibility of radiomics features, but its effect on radiomics modeling is unknown. We therefore evaluated whether DLAS can robustly extract anatomical and physiological MRI features, thereby assisting in the accurate assessment of treatment response in glioblastoma patients.
Methods: A DLAS model was trained on 238 glioblastomas and validated on an independent set of 98 pre- and 86 post-treatment glioblastomas from two tertiary hospitals. A total of 1618 radiomics features from contrast-enhanced T1-weighted images (CE-T1w) and histogram features from apparent diffusion coefficient (ADC) and cerebral blood volume (CBV) mapping were extracted. The diagnostic performance of radiomics features and ADC and CBV parameters for identifying treatment response was tested using the area under the curve (AUC) from receiver operating characteristic analysis. Feature reproducibility was tested using a 0.80 cutoff for concordance correlation coefficients.
Results: Reproducibility was excellent for ADC and CBV features (ICC, 0.82-0.99) and first-order features (pre- and post-treatment, 100% and 94.1% remained), but lower for texture (79.0% and 69.1% remained) and wavelet-transformed (81.8% and 74.9% remained) features of CE-T1w. DLAS-based radiomics showed performance similar to human-performed segmentations in internal validation (AUC, 0.81 [95% CI, 0.64-0.99] vs. AUC, 0.81 [0.60-1.00], p = 0.80), but slightly lower performance in external validation (AUC, 0.78 [0.61-0.95] vs. AUC, 0.65 [0.46-0.84], p = 0.23).
Conclusion: DLAS-based feature extraction showed high reproducibility for first-order features from anatomical and physiological MRI, and comparable diagnostic performance to human manual segmentation in the identification of pseudoprogression, supporting the utility of DLAS in quantitative MRI analysis.
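The two evaluation steps described in Methods can be sketched as follows. The data here are synthetic, the helper name concordance_ccc is ours, and Lin's concordance correlation coefficient is assumed for the reproducibility cutoff; scikit-learn supplies the ROC AUC.

```python
# Sketch of the abstract's feature-reproducibility test (CCC >= 0.80) and
# diagnostic-performance test (AUC from ROC analysis). Synthetic data only.
import numpy as np
from sklearn.metrics import roc_auc_score

def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                    # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(0)
manual = rng.normal(size=100)                    # feature from human segmentation (synthetic)
auto = manual + rng.normal(scale=0.2, size=100)  # same feature from DLAS (synthetic)
reproducible = concordance_ccc(manual, auto) >= 0.80  # the abstract's 0.80 cutoff

labels = rng.integers(0, 2, size=100)            # treatment response labels (synthetic)
scores = rng.normal(size=100) + labels           # model output scores (synthetic)
print(reproducible, roc_auc_score(labels, scores))
```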
