
    Leveraging Longitudinal Data in Chest Radiography Pathology Detection


    Weakly Supervised Learning with Automated Labels from Radiology Reports for Glioma Change Detection

    Gliomas are the most frequent primary brain tumors in adults. Glioma change detection aims at finding the relevant parts of the image that change over time. Although Deep Learning (DL) shows promising performance in similar change detection tasks, the creation of large annotated datasets represents a major bottleneck for supervised DL applications in radiology. To overcome this, we propose a combined use of weak labels (imprecise, but fast-to-create annotations) and Transfer Learning (TL). Specifically, we explore inductive TL, where source and target domains are identical, but tasks are different due to a label shift: our target labels are created manually by three radiologists, whereas our source weak labels are generated automatically from radiology reports via NLP. We frame knowledge transfer as hyperparameter optimization, thus avoiding the heuristic choices that are frequent in related works. We investigate the relationship between model size and TL, comparing a low-capacity VGG with a higher-capacity ResNeXt model. We evaluate our models on 1693 T2-weighted magnetic resonance imaging difference maps created from 183 patients, classifying each map as stable or unstable according to tumor evolution. The weak labels extracted from radiology reports allowed us to increase the dataset size more than 3-fold and to improve VGG classification results from 75% to 82% AUC. Mixed training from scratch led to higher performance than fine-tuning or feature extraction. To assess generalizability, we ran inference on an open dataset (BraTS-2015: 15 patients, 51 difference maps), reaching up to 76% AUC. Overall, the results suggest that medical imaging problems may benefit from smaller models and different TL strategies than those common for computer vision datasets, and that report-generated weak labels are effective in improving model performance. Code, the in-house dataset, and BraTS labels are released. Comment: This work has been submitted as an Original Paper to a Journal.
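
    A minimal sketch of the pipeline this abstract describes, under stated assumptions: 2D difference-map slices, a toy low-capacity VGG-style network, and pooled weak/strong labels. Function and class names (`make_difference_map`, `SmallVGG`) are illustrative placeholders, not the authors' released code.

```python
# Hypothetical sketch (not the authors' released code): build a T2-weighted
# difference map from two co-registered follow-ups and classify it as
# stable vs. unstable with a small VGG-style CNN.
import numpy as np
import torch
import torch.nn as nn

def make_difference_map(t2_prior: np.ndarray, t2_current: np.ndarray) -> np.ndarray:
    """Voxel-wise signed difference after per-image intensity normalisation."""
    norm = lambda v: (v - v.mean()) / (v.std() + 1e-8)
    return norm(t2_current) - norm(t2_prior)

class SmallVGG(nn.Module):
    """Low-capacity VGG-style classifier: difference map -> stable/unstable."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One training step on a toy example; weak (report-derived) and strong
# (radiologist) labels would simply be pooled into the same label tensor.
prior, current = np.random.rand(2, 128, 128).astype(np.float32)
x = torch.from_numpy(make_difference_map(prior, current))[None, None]  # (1, 1, 128, 128)
y = torch.tensor([1])                         # 0 = stable, 1 = unstable
model, loss_fn = SmallVGG(), nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = loss_fn(model(x), y)
loss.backward()
optimiser.step()
```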

    Development and validation of a deep learning algorithm for longitudinal change detection in sequential chest X-ray images

    With recent advances in graphics processing units and big data, research applying artificial-intelligence algorithms to medical image processing for the diagnosis and detection of major diseases has been actively pursued. However, the AI-based classification models proposed to date derive their results from a single given image in isolation. That is, although the current image is potentially related to the patient's earlier records, these models perform a cross-sectional analysis that predicts only predefined abnormality categories. Their classification performance approaches the level of radiologists, yet they cannot respond to specific changes in a lesion: even if a patient's previous image is also classified, the mere presence or absence of a disease is not enough to characterize how the lesion has changed. In particular, for some major diseases, not only do patterns vary widely within the same disease, but the mode of change (long-term or acute) also differs greatly depending on the patient's clinical history. Because cross-sectional analysis alone cannot detect specific changes over time, longitudinal analysis is also required. This study proposes a new artificial-intelligence algorithm that detects specific lesion changes between two given images (prior and current). Its core technique is to compute a geometric correlation map between the two unregistered images, identify the patterns in this map that accompany the presence or absence of change, and perform binary classification of change versus no change. Because no reference-standard machine-learning database for longitudinal analysis has been publicly released, this study established change criteria for lesions by analyzing radiology reports, built a data-classification scheme by disease type, elapsed time, and pattern of change, and thereby constructed an in-house reference-standard database of sequential chest X-ray images. To evaluate the algorithm objectively, the area under the receiver operating characteristic curve (AUC) was computed and compared quantitatively with previously developed algorithms and related studies. The proposed algorithm based on the geometric correlation map achieved the best performance, with AUC = 0.89 (95% confidence interval 0.86-0.92), and sensitivity = 0.83 and specificity = 0.82 at Youden's index. In addition, by qualitatively analyzing how the geometric correlation map responds to specific lesion changes in the two images, the study showed the potential to trace back and explain where the change actually occurred.
    The diagnostic decision for a chest X-ray image generally considers a probable change in a lesion compared to the previous examination. We propose a novel algorithm to detect change in longitudinal chest X-ray images. We extract feature maps from a pair of input images through two streams of convolutional neural networks. Next, we generate the geometric correlation map by computing matching scores for every possible match of local descriptors in the two feature maps. This correlation map is fed into a binary classifier to detect specific patterns of the map representing the change in the lesion. Since no public dataset offers proper information to train the proposed network, we also build our own dataset by analyzing reports of examinations at a tertiary hospital. Experimental results show our approach outperforms previous methods in quantitative comparison. We also provide various case examples visualizing the effect of the proposed geometric correlation map.
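
    A minimal sketch of the geometric correlation map described above, assuming cosine matching scores between all pairs of local descriptors from two CNN feature maps and a small assumed classifier head; tensor shapes and names are illustrative, not the thesis code. The construction is similar in spirit to correlation layers used in geometric matching networks.

```python
# Sketch: correlation map between two unregistered images' feature maps,
# then a binary "change / no change" classifier over that map.
import torch
import torch.nn as nn
import torch.nn.functional as F

def geometric_correlation_map(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """feat_*: (B, C, H, W) feature maps from the two input X-rays.
    Returns a (B, H*W, H, W) map of cosine matching scores."""
    b, c, h, w = feat_a.shape
    fa = F.normalize(feat_a.flatten(2), dim=1)       # (B, C, H*W)
    fb = F.normalize(feat_b.flatten(2), dim=1)       # (B, C, H*W)
    corr = torch.bmm(fa.transpose(1, 2), fb)          # (B, H*W, H*W)
    return corr.view(b, h * w, h, w)

class ChangeClassifier(nn.Module):
    """Binary classifier over the correlation map (assumed architecture)."""
    def __init__(self, hw: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(hw, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, corr):
        return self.net(corr)          # logit: change vs. no change

# Example with dummy 16x16 feature maps from a shared two-stream backbone.
feat_prev, feat_curr = torch.randn(2, 256, 16, 16), torch.randn(2, 256, 16, 16)
corr = geometric_correlation_map(feat_prev, feat_curr)
logits = ChangeClassifier(hw=16 * 16)(corr)
```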

    Quantitative Evaluation of Pulmonary Emphysema Using Magnetic Resonance Imaging and x-ray Computed Tomography

    Chronic obstructive pulmonary disease (COPD) is a leading cause of morbidity and mortality affecting at least 600 million people worldwide. The most widely used clinical measurements of lung function, such as spirometry and plethysmography, are generally accepted for diagnosis and monitoring of the disease. However, these tests provide only global measures of lung function and are insensitive to early disease changes. Currently available imaging tools have the potential to provide regional information about lung structure and function, but at present they are mainly used for qualitative assessment of disease and disease progression. In this thesis, we focused on the application of quantitative measurements of lung structure derived from 1H magnetic resonance imaging (MRI) and high-resolution computed tomography (CT) in subjects diagnosed with COPD by a physician. Our results showed that a significant and moderately strong relationship exists between 1H signal intensity (SI) and 3He apparent diffusion coefficient (ADC), as well as between 1H SI and CT measurements of emphysema. This suggests that these imaging methods may be quantifying the same tissue changes in COPD, and that pulmonary 1H SI may be used effectively to monitor emphysema as a complement to CT and noble-gas MRI. Additionally, our results showed that objective multi-threshold analysis of CT images for emphysema scoring, which takes into account the frequency distribution of each Hounsfield unit (HU) threshold, was effective in correctly classifying patients into COPD and healthy subgroups. Finally, we found a significant correlation between whole-lung average subjective and objective emphysema scores, with high inter-observer agreement. It is concluded that 1H MRI and high-resolution CT can be used to quantitatively evaluate lung tissue alterations in COPD subjects.
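
    An illustrative sketch only (the thesis' exact scoring rule is not given in the abstract): an objective multi-threshold emphysema score computed from the fraction of lung voxels below each Hounsfield-unit cutoff, i.e. from the cumulative frequency distribution of lung attenuation. The threshold set and aggregation are assumptions.

```python
import numpy as np

def multi_threshold_emphysema_score(lung_hu: np.ndarray,
                                    thresholds=(-970, -960, -950, -940, -930)) -> dict:
    """lung_hu: 1D array of HU values for voxels inside the lung mask.
    Returns the relative area (fraction of voxels) below each threshold."""
    n = lung_hu.size
    return {t: float(np.count_nonzero(lung_hu < t)) / n for t in thresholds}

# Toy example: simulated lung attenuation values.
rng = np.random.default_rng(0)
lung_hu = rng.normal(loc=-860, scale=60, size=100_000)
scores = multi_threshold_emphysema_score(lung_hu)
overall = float(np.mean(list(scores.values())))   # simple aggregate score (assumed)
print(scores, overall)
```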

    Multiscale quantification of damage in composite structures


    BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys

    Rapid progress has been made in instruction-learning for image editing with natural-language instruction, as exemplified by InstructPix2Pix. In biomedicine, such methods can be applied to counterfactual image generation, which helps differentiate causal structure from spurious correlation and facilitate robust image interpretation for disease progression modeling. However, generic image-editing models are ill-suited for the biomedical domain, and counterfactual biomedical image generation is largely underexplored. In this paper, we present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning from multimodal patient journeys. Given a patient with two biomedical images taken at different time points, we use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression. The resulting triples (prior image, progression description, new image) are then used to train a latent diffusion model for counterfactual biomedical image generation. Given the relative scarcity of image time series data, we introduce a two-stage curriculum that first pretrains the denoising network using the much more abundant single image-report pairs (with dummy prior image), and then continues training using the counterfactual triples. Experiments using the standard MIMIC-CXR dataset demonstrate the promise of our method. In a comprehensive battery of tests on counterfactual medical image generation, BiomedJourney substantially outperforms prior state-of-the-art methods in instruction image editing and medical image generation such as InstructPix2Pix and RoentGen. To facilitate future study in counterfactual medical generation, we plan to release our instruction-learning code and pretrained models. Comment: Project page & demo: https://aka.ms/biomedjourne
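
    A schematic sketch of the data flow the abstract describes (not the released BiomedJourney code): counterfactual training triples of (prior image, progression description, new image), plus the two-stage curriculum that first uses single image-report pairs with a dummy prior. File names and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProgressionTriple:
    prior_image_path: str      # earlier study of the same patient (or a dummy blank)
    progression_text: str      # LLM-generated description of the interval change
    new_image_path: str        # later study (generation target)

def curriculum(single_pairs: List[ProgressionTriple],
               counterfactual_triples: List[ProgressionTriple]):
    """Stage 1: pretrain on abundant single image-report pairs with a dummy prior.
    Stage 2: continue training on the scarcer true counterfactual triples."""
    yield "stage1_pretrain", single_pairs
    yield "stage2_finetune", counterfactual_triples

# Hypothetical example items for each stage.
stage1 = [ProgressionTriple("dummy_blank.png", "No prior study; findings described in report.", "cxr_0001.png")]
stage2 = [ProgressionTriple("cxr_0001.png", "Interval worsening of left pleural effusion.", "cxr_0002.png")]
for stage_name, items in curriculum(stage1, stage2):
    print(stage_name, len(items))
```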

    Pulmonary Image Segmentation and Registration Algorithms: Towards Regional Evaluation of Obstructive Lung Disease

    Pulmonary imaging, including pulmonary magnetic resonance imaging (MRI) and computed tomography (CT), provides a way to sensitively and regionally measure spatially heterogeneous lung structural-functional abnormalities. These unique imaging biomarkers offer the potential for better understanding pulmonary disease mechanisms, monitoring disease progression and response to therapy, and developing novel treatments for improved patient care. To generate these regional lung structure-function measurements and enable broad clinical applications of quantitative pulmonary MRI and CT biomarkers, accurate, reproducible and rapid lung segmentation and registration methods are required as a first step. In this regard, we first developed a 1H MRI lung segmentation algorithm that employs complementary hyperpolarized 3He MRI functional information for improved lung segmentation. The 1H-3He MRI joint segmentation algorithm was formulated as a coupled continuous min-cut model and solved through convex relaxation, for which a dual coupled continuous max-flow model was proposed and a max-flow-based efficient numerical solver was developed. Experimental results on a clinical dataset of 25 chronic obstructive pulmonary disease (COPD) patients ranging in disease severity demonstrated that the algorithm provided rapid lung segmentation with high accuracy, reproducibility and diminished user interaction. We then developed a general 1H MRI left-right lung segmentation approach by exploiting a left-to-right lung volume proportion prior. The challenging volume-proportion-constrained multi-region segmentation problem was approximated through convex relaxation and equivalently represented by a max-flow model with bounded flow conservation conditions. This gave rise to a multiplier-based high-performance numerical implementation based on convex optimization theories. In 20 patients with mild-to-moderate and severe asthma, the approach demonstrated high agreement with manual segmentation, excellent reproducibility and computational efficiency. Finally, we developed a CT-3He MRI deformable registration approach that coupled the complementary CT-1H MRI registration. The joint registration problem was solved by exploring optical-flow techniques, primal-dual analyses and convex optimization theories. In a diverse group of patients with asthma and COPD, the registration approach demonstrated lower target registration error than single registration and provided fast regional lung structure-function measurements that were strongly correlated with a reference method. Collectively, these lung segmentation and registration algorithms demonstrated accuracy, reproducibility and workflow efficiency that may all be clinically acceptable. All of this is consistent with the need for broad and large-scale clinical applications of pulmonary MRI and CT.
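
    A small sketch of an assumed evaluation helper (not the thesis code): target registration error between corresponding anatomical landmarks, the metric used above to compare joint against single-modality registration. Landmark coordinates and voxel spacing are placeholder values.

```python
import numpy as np

def target_registration_error(fixed_pts: np.ndarray, moved_pts: np.ndarray,
                              spacing=(1.0, 1.0, 1.0)) -> float:
    """Mean Euclidean distance (in mm) between landmark pairs.
    fixed_pts, moved_pts: (N, 3) voxel coordinates; spacing: voxel size in mm."""
    diff_mm = (fixed_pts - moved_pts) * np.asarray(spacing)
    return float(np.mean(np.linalg.norm(diff_mm, axis=1)))

# Toy example: 5 landmark pairs on a 1 x 1 x 2.5 mm grid.
fixed = np.array([[100, 120, 30], [90, 80, 25], [150, 140, 40],
                  [60, 70, 20], [110, 95, 35]], dtype=float)
moved = fixed + np.random.default_rng(1).normal(scale=1.5, size=fixed.shape)
print(f"TRE = {target_registration_error(fixed, moved, spacing=(1.0, 1.0, 2.5)):.2f} mm")
```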

    Correlation of Acute Radiation Dermatitis to Tissue Oxygenation in Radiation Therapy treated Breast Cancer Subjects

    Over 95% of radiation therapy (RT)-treated breast cancer subjects experience an adverse skin reaction known as radiation dermatitis (RD). Assessment of RD severity, or grading, is clinically visual and hence subjective. Our objective is to determine sub-clinical tissue oxygenation (StO2) changes in response to RT treatment in breast cancer subjects using near-infrared spectroscopic imaging and to correlate these changes with RD grading. A WIRB-approved 6-8 week longitudinal pilot study was carried out on 10 RT-treated subjects at Miami Cancer Institute. Significant changes (p < 0.05) in StO2 of the irradiated and contralateral chest wall and axilla regions were observed over the weeks of treatment. The overall drop in StO2 was larger in irradiated regions than in the contralateral regions, and this drop was negatively correlated with the RD grade. Pre-RT StO2 assessment was also related to RD severity. The long-term goal is physiology-based prediction of RD severity via tissue oxygenation measurements.
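
    An illustrative sketch only (the study's exact statistical test is not stated in the abstract): correlating the per-subject drop in StO2 of the irradiated region with the ordinal RD grade using a rank correlation, a natural choice for an ordinal grading scale. The numbers below are made-up toy values, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy per-subject values: StO2 change (percentage points, irradiated region,
# end of RT minus baseline) and clinician-assigned RD grade (0-3).
sto2_change = np.array([-2.0, -4.5, -7.1, -3.2, -9.8, -5.5, -1.1, -8.4, -6.0, -2.8])
rd_grade    = np.array([ 0,    1,    2,    1,    3,    2,    0,    3,    2,    1 ])

rho, p_value = spearmanr(sto2_change, rd_grade)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")   # larger drops pair with higher grades, so rho < 0
```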

    A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images

    Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning, leveraging large amounts of unlabelled data. This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging, concentrating on studies that compare self-supervised pretraining to fully supervised learning for diagnostic tasks such as classification and segmentation. The most pertinent finding is that self-supervised pretraining generally improves downstream task performance compared to full supervision, most prominently when unlabelled examples greatly outnumber labelled examples. Based on the aggregate evidence, recommendations are provided for practitioners considering using self-supervised learning. Motivated by limitations identified in current research, directions and practices for future study are suggested, such as integrating clinical knowledge with theoretically justified self-supervised learning methods, evaluating on public datasets, growing the modest body of evidence for ultrasound, and characterizing the impact of self-supervised pretraining on generalization. Comment: 32 pages, 6 figures, a literature survey submitted to BMC Medical Imaging.
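
    A minimal sketch of the comparison this survey centres on (illustrative; the tiny backbone, checkpoint path, and input sizes are placeholders): fine-tune the same encoder either from self-supervised pretrained weights or from random initialisation, then compare downstream classification performance on the labelled set.

```python
from typing import Optional
import torch
import torch.nn as nn

def make_encoder() -> nn.Module:
    """Tiny stand-in backbone producing a 128-d feature vector per image;
    in practice this would be a ResNet or ViT trained with e.g. SimCLR/MoCo."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128), nn.ReLU(),
    )

def build_classifier(n_classes: int, pretrained_path: Optional[str]) -> nn.Module:
    """Attach a linear head; optionally initialise the encoder from a
    self-supervised checkpoint instead of random weights."""
    encoder = make_encoder()
    if pretrained_path is not None:                      # hypothetical checkpoint
        encoder.load_state_dict(torch.load(pretrained_path, map_location="cpu"))
    return nn.Sequential(encoder, nn.Linear(128, n_classes))

scratch_model = build_classifier(n_classes=2, pretrained_path=None)
# pretrained_model = build_classifier(2, "ssl_encoder.pt")  # assumed checkpoint file
x = torch.randn(4, 1, 224, 224)
print(scratch_model(x).shape)    # torch.Size([4, 2])
```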