
    PadChest: A large chest x-ray image dataset with multi-label annotated reports

    We present a large-scale, high-resolution labeled chest x-ray dataset for the automated exploration of medical images along with their associated reports. This dataset includes more than 160,000 images obtained from 67,000 patients, interpreted and reported by radiologists at Hospital San Juan (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demographics. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses, and 104 anatomic locations organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. Of these reports, 27% were manually annotated by trained physicians, and the remaining set was labeled using a supervised method based on a recurrent neural network with attention mechanisms. The generated labels were then validated on an independent test set, achieving a 0.93 Micro-F1 score. To the best of our knowledge, this is one of the largest public chest x-ray databases suitable for training supervised models on radiographs, and the first to contain radiographic reports in Spanish. The PadChest dataset can be downloaded from http://bimcv.cipf.es/bimcv-projects/padchest/
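
    For context, the Micro-F1 score used to validate the generated labels pools true positives, false positives, and false negatives across all labels before computing F1, so frequent findings dominate the score. A minimal sketch in Python (synthetic label matrices and scikit-learn's f1_score; not the PadChest evaluation code):

        # Micro-averaged F1 for multi-label annotations: TP/FP/FN are pooled
        # over every (report, label) pair. The matrices below are synthetic
        # stand-ins, not PadChest data.
        import numpy as np
        from sklearn.metrics import f1_score

        rng = np.random.default_rng(0)
        y_true = rng.integers(0, 2, size=(1000, 174))   # gold binary labels
        noise = rng.random((1000, 174)) < 0.05          # flip ~5% of entries
        y_pred = (y_true ^ noise).astype(int)

        print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))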

    Novel autosegmentation spatial similarity metrics capture the time required to correct segmentations better than traditional metrics in a thoracic cavity segmentation workflow

    Automated segmentation templates can save clinicians time compared to de novo segmentation but may still take substantial time to review and correct. It has not been thoroughly investigated which similarity metrics between automated and corrected segmentations best predict clinician correction time. Bilateral thoracic cavity volumes in 329 CT scans were segmented by a UNet-inspired deep learning segmentation tool and subsequently corrected by a fourth-year medical student. Eight spatial similarity metrics were calculated between the automated and corrected segmentations and associated with correction times using Spearman's rank correlation coefficients. Nine clinical variables were also associated with the metrics and correction times using Spearman's rank correlation coefficients or Mann-Whitney U tests. The added path length, false negative path length, and surface Dice similarity coefficient correlated better with correction time than traditional metrics, including the popular volumetric Dice similarity coefficient (ρ = 0.69, ρ = 0.65, and ρ = -0.48, respectively, versus ρ = -0.25; correlation p values < 0.001). Clinical variables poorly represented in the autosegmentation tool's training data were often associated with decreased accuracy but not necessarily with prolonged correction time. Metrics used to develop and evaluate autosegmentation tools should correlate with clinical time saved. To our knowledge, this is only the second investigation of which metrics correlate with time saved. Validation of our findings is indicated in other anatomic sites and clinical workflows. Novel spatial similarity metrics may be preferable to traditional metrics for developing and evaluating autosegmentation tools that are intended to save clinicians time.
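
    As a point of reference, the traditional volumetric Dice similarity coefficient and its Spearman correlation with correction times can be sketched as follows (synthetic arrays and SciPy's spearmanr; not the study's code):

        # Volumetric Dice between automated and corrected binary masks,
        # correlated with correction times via Spearman's rho.
        # All data below are synthetic stand-ins.
        import numpy as np
        from scipy.stats import spearmanr

        def volumetric_dice(auto_mask, corrected_mask):
            """Dice = 2 * |A & B| / (|A| + |B|) over binary voxel masks."""
            intersection = np.logical_and(auto_mask, corrected_mask).sum()
            return 2.0 * intersection / (auto_mask.sum() + corrected_mask.sum())

        rng = np.random.default_rng(1)
        dice_scores = rng.uniform(0.5, 1.0, size=329)   # one score per scan
        correction_minutes = 60 * (1 - dice_scores) + rng.normal(0, 3, size=329)

        rho, p = spearmanr(dice_scores, correction_minutes)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")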

    The lung cancers: staging and response, CT, 18F-FDG PET/CT, MRI, DWI: review and new perspectives.

    Lung cancer is the most commonly diagnosed cancer and the leading cause of cancer deaths in both sexes combined. Recent years have seen major advances in the diagnostic and treatment options for patients with non-small-cell lung cancer (NSCLC), including the routine use of 2-deoxy-2-[18F]-fluoro-D-glucose positron emission tomography/computed tomography (18F-FDG PET/CT) in staging and response evaluation, minimally invasive endoscopic biopsy, targeted radiotherapy, minimally invasive surgery, and molecular and immunotherapies. In this review, the central roles of CT and 18F-FDG PET/CT in staging and response assessment in both NSCLC and malignant pleural mesothelioma (MPM) are critically assessed. The Tumour, Node, Metastasis (TNM-8) staging systems for NSCLC and MPM are presented with critical appraisal of the strengths and pitfalls of imaging. Overviews of the Response Evaluation Criteria in Solid Tumours (RECIST 1.1) for NSCLC and the modified RECIST criteria for MPM are provided, together with discussion of the benefits and limitations of these anatomically based tools. Metabolic response assessment (not evaluated by RECIST 1.1) is explored. We introduce the Positron Emission Tomography Response Criteria in Solid Tumours (PERCIST 1.0), including its advantages and challenges. The limitations of both anatomical and metabolic assessment criteria when applied to NSCLC treated with immunotherapy, and the important concept of pseudoprogression, are addressed with reference to immune RECIST (iRECIST). Separate consideration is given to the diagnosis and follow-up of solitary pulmonary nodules with reference to the British Thoracic Society and Fleischner Society guidelines and the use of the Brock (CT-based) and Herder (with the addition of 18F-FDG PET/CT) models for assessing malignant potential. We discuss how these models inform decisions by the multidisciplinary team, including referral of suspicious nodules for non-surgical management in patients unsuitable for surgery. We briefly outline current lung screening programmes in use in the UK, Europe and North America. Emerging roles for MRI in lung cancer imaging are reviewed. The use of whole-body MRI in diagnosing and staging NSCLC is discussed with reference to the recent multicentre Streamline L trial. The potential use of diffusion-weighted MRI to distinguish tumour from radiotherapy-induced lung toxicity is discussed. We briefly summarise new PET/CT radiotracers being developed to evaluate specific aspects of cancer biology other than glucose uptake. Finally, we describe how CT, MRI and 18F-FDG PET/CT are moving from primarily diagnostic tools for lung cancer towards having utility in prognostication and personalised medicine with the aid of artificial intelligence.
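
    To make the anatomical response thresholds concrete, the RECIST 1.1 target-lesion categories can be sketched as a simple rule set. This is a deliberately simplified illustration: real RECIST 1.1 assessment also considers new lesions and non-target disease, which this sketch ignores.

        # Simplified sketch of RECIST 1.1 target-lesion response categories,
        # based only on the sum of target-lesion diameters. New lesions and
        # non-target disease, which also drive progression, are ignored here.

        def recist_response(baseline_sum_mm, nadir_sum_mm, current_sum_mm):
            if current_sum_mm == 0:
                return "CR"  # complete response: all target lesions resolved
            # PD: >= 20% increase over the nadir AND >= 5 mm absolute increase
            if current_sum_mm - nadir_sum_mm >= max(0.2 * nadir_sum_mm, 5.0):
                return "PD"
            # PR: >= 30% decrease from the baseline sum
            if current_sum_mm <= 0.7 * baseline_sum_mm:
                return "PR"
            return "SD"  # stable disease otherwise

        print(recist_response(baseline_sum_mm=100, nadir_sum_mm=60,
                              current_sum_mm=75))  # -> PD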

    Controllable Chest X-Ray Report Generation from Longitudinal Representations

    Radiology reports are detailed text descriptions of the content of medical scans. Each report describes the presence/absence and location of relevant clinical findings, commonly including comparison with prior exams of the same patient to describe how they evolved. Radiology reporting is a time-consuming process, and scan results are often subject to delays. One strategy to speed up reporting is to integrate automated reporting systems; however, clinical deployment requires high accuracy and interpretability. Previous approaches to automated radiology reporting generally do not provide the prior study as input, precluding the comparison that is required for clinical accuracy in some types of scans, and offer only unreliable methods of interpretability. Therefore, leveraging an existing visual input format of anatomical tokens, we introduce two novel aspects: (1) longitudinal representation learning -- we input the prior scan as an additional input, proposing a method to align, concatenate and fuse the current and prior visual information into a joint longitudinal representation which can be provided to the multimodal report generation model; (2) sentence-anatomy dropout -- a training strategy for controllability in which the report generator model is trained to predict only sentences from the original report which correspond to the subset of anatomical regions given as input. We show through in-depth experiments on the MIMIC-CXR dataset how the proposed approach achieves state-of-the-art results while enabling anatomy-wise controllable report generation.

    Comment: Accepted to the Findings of EMNLP 202
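
    A minimal PyTorch-style sketch of the longitudinal fusion idea follows: per-anatomy tokens from the current and prior scans, already aligned by region index, are concatenated and projected into a joint representation. Module names and dimensions are illustrative assumptions, not the paper's actual architecture.

        # Illustrative fusion of current and prior anatomical tokens into a
        # joint longitudinal representation. Names and dims are assumptions.
        import torch
        import torch.nn as nn

        class LongitudinalFusion(nn.Module):
            def __init__(self, dim=256):
                super().__init__()
                # After aligning regions so index i is the same anatomy in
                # both scans, concatenate and project back to `dim`.
                self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

            def forward(self, current_tokens, prior_tokens):
                # Both inputs: (batch, n_regions, dim), region-aligned.
                joint = torch.cat([current_tokens, prior_tokens], dim=-1)
                return self.fuse(joint)  # (batch, n_regions, dim)

        fusion = LongitudinalFusion(dim=256)
        current = torch.randn(2, 36, 256)    # e.g. 36 anatomical regions
        prior = torch.randn(2, 36, 256)
        print(fusion(current, prior).shape)  # torch.Size([2, 36, 256])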

    Chest X-Ray Image Classification on Common Thorax Diseases using GLCM and AlexNet Deep Features

    Image processing has progressed far in medicine, as it is one of the main techniques used in the development of medical imaging diagnosis systems. Medical imaging modalities include Magnetic Resonance Imaging (MRI), Computed Tomography (CT), X-ray and ultrasound. The output from these modalities is later reviewed by an expert for an accurate result. Ensemble methods in machine learning can provide automatic detection for use in computer-aided diagnosis systems, which can aid experts in making their diagnoses. This paper presents an investigation into the classification of fourteen thorax diseases using chest x-ray images from the ChestX-Ray8 database with Grey Level Co-occurrence Matrix (GLCM) and AlexNet feature extraction, processed using supervised classifiers: ZeroR, k-NN, Naïve Bayes, PART, and J48 tree. The classification results indicate that the k-NN classifier gave the highest accuracy compared to the other classifiers, with 47.51% for the GLCM feature extraction method and 47.18% for the AlexNet feature extraction method. The results also show that the number of samples per class and the presence of multi-labelled data influence the classification method. Data using the GLCM feature extraction method achieved higher classification accuracy than AlexNet and required fewer processing steps.
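
    For reference, GLCM texture features of the kind used here can be extracted with scikit-image (graycomatrix/graycoprops, the spelling used since scikit-image 0.19); a synthetic image stands in for a chest x-ray:

        # Sketch of GLCM texture-feature extraction with scikit-image.
        # The image below is synthetic; a real pipeline would load a
        # chest x-ray instead.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(2)
        image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

        # Co-occurrence matrix at distance 1 for four angles.
        glcm = graycomatrix(image, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)

        # One scalar per texture property, averaged over the four angles.
        features = {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}
        print(features)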

    A Multiclass Radiomics Method-Based WHO Severity Scale for Improving COVID-19 Patient Assessment and Disease Characterization From CT Scans.

    OBJECTIVES: The aim of this study was to evaluate the severity of COVID-19 patients' disease by comparing a multiclass lung lesion model to a single-class lung lesion model and radiologists' assessments in chest computed tomography scans. MATERIALS AND METHODS: The proposed method, AssessNet-19, was developed in 2 stages in this retrospective study. Four COVID-19-induced tissue lesions were manually segmented to train a 2D U-Net network for a multiclass segmentation task, followed by extensive extraction of radiomic features from the lung lesions. LASSO regression was used to reduce the feature set, and the XGBoost algorithm was trained to classify disease severity based on the World Health Organization Clinical Progression Scale. The model was evaluated using 2 multicenter cohorts: a development cohort of 145 COVID-19-positive patients from 3 centers, used to train and test the severity prediction model on manually segmented lung lesions, and an evaluation set of 90 COVID-19-positive patients from 2 centers, used to evaluate AssessNet-19 in a fully automated fashion. RESULTS: AssessNet-19 achieved an F1-score of 0.76 ± 0.02 for severity classification in the evaluation set, which was superior to the 3 expert thoracic radiologists (F1 = 0.63 ± 0.02) and the single-class lesion segmentation model (F1 = 0.64 ± 0.02). In addition, AssessNet-19's automated multiclass lesion segmentation obtained a mean Dice score of 0.70 for ground-glass opacity, 0.68 for consolidation, 0.65 for pleural effusion, and 0.30 for band-like structures compared with ground truth. Moreover, it achieved high agreement with radiologists for quantifying disease extent, with Cohen κ values of 0.94, 0.92, and 0.95. CONCLUSIONS: A novel artificial intelligence multiclass radiomics model including 4 lung lesions to assess disease severity based on the World Health Organization Clinical Progression Scale determines the severity of COVID-19 patients more accurately than a single-class model and radiologists' assessment.
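
    The LASSO-select-then-boost pattern described in the methods can be sketched with scikit-learn and xgboost; the features, class labels, and hyperparameters below are synthetic assumptions, not AssessNet-19's actual configuration:

        # Sketch of LASSO feature reduction followed by XGBoost severity
        # classification, on synthetic "radiomic" features.
        import numpy as np
        from sklearn.feature_selection import SelectFromModel
        from sklearn.linear_model import Lasso
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split
        from xgboost import XGBClassifier

        rng = np.random.default_rng(3)
        X = rng.normal(size=(235, 400))           # 400 radiomic features
        score = X[:, :5] @ np.ones(5)             # only 5 features carry signal
        y = np.digitize(score, bins=[-2.0, 2.0])  # 3 hypothetical severity classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

        # LASSO shrinks uninformative feature weights to zero; keep the rest.
        selector = SelectFromModel(Lasso(alpha=0.05)).fit(X_tr, y_tr)
        X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

        clf = XGBClassifier(n_estimators=200, max_depth=3)
        clf.fit(X_tr_sel, y_tr)
        print("F1 (macro):", f1_score(y_te, clf.predict(X_te_sel), average="macro"))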