105 research outputs found

    A Computationally Efficient U-Net Architecture for Lung Segmentation in Chest Radiographs

    Get PDF
    Lung segmentation plays a crucial role in computer-aided diagnosis using Chest Radiographs (CRs). We implement a U-Net architecture for lung segmentation in CRs across multiple publicly available datasets. We use a private dataset of 160 CRs provided by the Riverain Medical Group for training, and a publicly available dataset from the Japanese Society of Radiological Technology (JSRT) for testing. Active shape model-based results serve as the ground truth for both of these datasets. In addition, we study the performance of our algorithm on the publicly available Shenzhen dataset, which contains 566 CRs with manually segmented lungs (ground truth). Our overall pixel-based classification accuracy is about 98.3% on a set of 100 CRs from the Shenzhen dataset and 95.6% on 140 CRs from the JSRT dataset. We also achieve an intersection over union of 0.95 at a computation time of 8 seconds for the entire suite of Shenzhen test cases.
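
    For orientation, a minimal U-Net sketch in PyTorch is shown below. The depth, channel widths, and input size are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal U-Net sketch for single-class lung segmentation (illustrative only).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)  # per-pixel lung logit
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # apply sigmoid + threshold at inference

model = UNet()
logits = model(torch.randn(1, 1, 256, 256))  # e.g. a 256x256 grayscale CR
```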

    Lung Segmentation from Chest X-rays using Variational Data Imputation

    Full text link
    Pulmonary opacification is inflammation in the lungs caused by many respiratory ailments, including the novel coronavirus disease 2019 (COVID-19). Chest X-rays (CXRs) with such opacifications render regions of the lungs imperceptible, making it difficult to perform automated image analysis on them. In this work, we focus on segmenting lungs from such abnormal CXRs as part of a pipeline aimed at automated risk scoring of COVID-19 from CXRs. We treat the high-opacity regions as missing data and present a modified CNN-based image segmentation network that utilizes a deep generative model for data imputation. We train this model on normal CXRs with extensive data augmentation and demonstrate that it generalizes to cases with extreme abnormalities.
    Comment: Accepted for presentation at the first Workshop on the Art of Learning with Missing Values (Artemiss) hosted by the 37th International Conference on Machine Learning (ICML). Source code, training data, and the trained models are available at: https://github.com/raghavian/lungVAE
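
    The sketch below illustrates the variational idea in generic form: encode the (possibly opacified) CXR into a latent distribution, sample, and decode a lung mask, with a KL term regularizing the latent space. Layer sizes, the 128x128 input, and the loss weighting `beta` are assumptions; the authors' actual architecture is in the lungVAE repository linked above.

```python
# Generic VAE-style segmenter sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAESeg(nn.Module):
    def __init__(self, latent=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 32 * 32, latent)
        self.logvar = nn.Linear(32 * 32 * 32, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent, 32 * 32 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 32, 32)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),    # mask logits
        )

    def forward(self, x):                     # x: (B, 1, 128, 128)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def loss_fn(logits, target, mu, logvar, beta=1e-3):
    # Segmentation term plus a KL term; the regularized latent space is what
    # allows plausible lung shape to be "imputed" under heavy opacity.
    seg = F.binary_cross_entropy_with_logits(logits, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return seg + beta * kl
```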

    CheXmask: a large-scale dataset of anatomical segmentation masks for multi-center chest x-ray images

    Full text link
    The development of successful artificial intelligence models for chest X-ray analysis relies on large, diverse datasets with high-quality annotations. While several databases of chest X-ray images have been released, most include disease diagnosis labels but lack detailed pixel-level anatomical segmentation labels. To address this gap, we introduce an extensive multi-center chest X-ray segmentation dataset with uniform and fine-grained anatomical annotations for images from six well-known publicly available databases: CANDID-PTX, ChestX-ray8, CheXpert, MIMIC-CXR-JPG, PadChest, and VinDr-CXR, resulting in 676,803 segmentation masks. Our methodology utilizes the HybridGNet model to ensure consistent and high-quality segmentations across all datasets. Rigorous validation, including expert physician evaluation and automatic quality control, was conducted to validate the resulting masks. Additionally, we provide individualized quality indices per mask and an overall quality estimate per dataset. This dataset serves as a valuable resource for the broader scientific community, streamlining the development and assessment of innovative methodologies in chest X-ray analysis. The CheXmask dataset is publicly available at https://physionet.org/content/chexmask-cxr-segmentation-data/
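
    Large mask collections like this are commonly distributed as run-length-encoded (RLE) strings rather than image files. A decoder sketch is shown below; whether CheXmask uses exactly this encoding, and its column schema, should be checked against the PhysioNet documentation, so treat this purely as a generic illustration.

```python
# Generic RLE decoder for "start length" pair encodings over a flattened image.
import numpy as np

def rle_decode(rle: str, shape: tuple) -> np.ndarray:
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    tokens = np.asarray(rle.split(), dtype=int)
    starts, lengths = tokens[0::2] - 1, tokens[1::2]  # starts are 1-indexed
    for s, l in zip(starts, lengths):
        mask[s:s + l] = 1
    return mask.reshape(shape)

demo = "3 4 10 2"  # toy string: pixels 3-6 and 10-11 set (1-indexed, flat order)
print(rle_decode(demo, (4, 4)))
```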

    Leveraging Anatomical Constraints with Uncertainty for Pneumothorax Segmentation

    Full text link
    Pneumothorax is a medical emergency caused by abnormal accumulation of air in the pleural space, the potential space between the lungs and chest wall. On 2D chest radiographs, pneumothorax occurs within the thoracic cavity and outside of the mediastinum; we refer to this area as the "lung+ space". While deep learning (DL) has increasingly been utilized to segment pneumothorax lesions in chest radiographs, many existing DL models employ an end-to-end approach, directly mapping chest radiographs to clinician-annotated lesion areas and often neglecting the vital domain knowledge that pneumothorax is inherently location-sensitive. We propose a novel approach that incorporates the lung+ space as a constraint during DL model training for pneumothorax segmentation on 2D chest radiographs. To circumvent the need for additional annotations and to prevent potential label leakage on the target task, our method utilizes external datasets and an auxiliary lung segmentation task, generating a specific lung+ space constraint for each chest radiograph. Furthermore, we incorporate a discriminator to eliminate unreliable constraints caused by the domain shift between the auxiliary and target datasets. Our results demonstrated significant improvements, with average performance gains of 4.6%, 3.6%, and 3.3% in Intersection over Union (IoU), Dice Similarity Coefficient (DSC), and Hausdorff Distance (HD), respectively. Our research underscores the significance of incorporating medical domain knowledge about the location-specific nature of pneumothorax to enhance DL-based lesion segmentation.
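
    One simple way such a spatial constraint can enter training is as a loss penalty on predicted probability mass outside the lung+ space, sketched below. The penalty form and the weight `lam` are assumptions for illustration, not the paper's formulation.

```python
# Sketch: segmentation loss plus a penalty on predictions outside the
# lung+ space mask (illustrative, assumed formulation).
import torch
import torch.nn.functional as F

def constrained_loss(logits, target, lung_plus_mask, lam=0.5):
    """logits/target: (B,1,H,W); lung_plus_mask: (B,1,H,W) binary, 1 = allowed."""
    seg = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    outside = prob * (1 - lung_plus_mask)  # probability mass in forbidden area
    penalty = outside.sum() / (1 - lung_plus_mask).sum().clamp(min=1)
    return seg + lam * penalty
```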

    A multi-stage GAN for multi-organ chest X-ray image generation and segmentation

    Full text link
    Multi-organ segmentation of X-ray images is of fundamental importance for computer-aided diagnosis systems. However, the most advanced semantic segmentation methods rely on deep learning and require huge amounts of labeled images, which are rarely available due to both the high cost of human resources and the time required for labeling. In this paper, we present a novel multi-stage generation algorithm based on Generative Adversarial Networks (GANs) that can produce synthetic images along with their semantic labels, and can therefore be used for data augmentation. The main feature of the method is that, unlike other approaches, generation occurs in several stages, which simplifies the procedure and allows it to be used on very small datasets. The method has been evaluated on the segmentation of chest radiographic images, showing promising results. The multi-stage approach achieves state-of-the-art performance and, when very few images are available to train the GANs, outperforms the corresponding single-stage approach.
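
    The staged-generation idea can be sketched as a two-generator pipeline: one generator produces a multi-organ label map from noise, and a second, conditioned on that map, produces the corresponding X-ray, so each synthetic image comes with its mask for free. The tiny architectures, the class list, and the sizes below are placeholders; the paper's networks and per-stage adversarial training are not reproduced here.

```python
# Conceptual two-stage generation sketch (placeholder architectures).
import torch
import torch.nn as nn

N_CLASSES = 4  # e.g. background, left lung, right lung, heart (assumed)

label_gen = nn.Sequential(          # stage 1: noise -> label-map logits
    nn.ConvTranspose2d(64, 32, 4, stride=4), nn.ReLU(),
    nn.ConvTranspose2d(32, N_CLASSES, 4, stride=4),
)
image_gen = nn.Sequential(          # stage 2: one-hot label map -> image
    nn.Conv2d(N_CLASSES, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
)

z = torch.randn(1, 64, 8, 8)
labels = label_gen(z).argmax(dim=1)                           # (1, 128, 128)
onehot = nn.functional.one_hot(labels, N_CLASSES).permute(0, 3, 1, 2).float()
fake_xray = image_gen(onehot)           # paired synthetic image + label map
```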

    Automatic volumetry on MR brain images can support diagnostic decision making.

    Get PDF
    Background: Diagnostic decisions in clinical imaging currently rely almost exclusively on visual image interpretation. This can lead to uncertainty, for example in dementia, where some of the changes resemble those of normal ageing. We hypothesized that extracting volumetric data from patients' MR brain images, relating them to reference data, and presenting the results as a colour overlay on the grey-scale data would aid diagnostic readers in distinguishing dementia from normal ageing. Methods: A proof-of-concept forced-choice reader study was designed using MR brain images from 36 subjects. Images were segmented into 43 regions using an automatic atlas registration-based label propagation procedure. Seven subjects had clinically probable Alzheimer's disease (AD); the remaining 29, of a similar age range, were used as controls. Seven of the control subject datasets were selected at random to be presented along with the seven AD datasets to two readers, who were blinded to all clinical and demographic information except age and gender. Readers were asked to review the grey-scale MR images and to record their choice of diagnosis (AD or non-AD) along with their confidence in this decision. Afterwards, readers were given the option to switch on a false-colour overlay representing the relative size of the segmented structures. Colorization was based on the size rank of the test subject when compared with a reference group consisting of the 22 control subjects who were not used as review subjects. The readers were then asked to record whether and how the additional information had an impact on their diagnostic confidence. Results: The size-rank colour overlays were useful in 18 of 28 diagnoses, as determined by their impact on the readers' diagnostic confidence, and not useful in 6 of 28 cases. The impact of the additional information on diagnostic confidence was significant (p < 0.02). Conclusion: Volumetric anatomical information extracted from brain images by automatic segmentation and presented as colour overlays can support diagnostic decision making. © 2008 Heckemann et al; licensee BioMed Central Ltd. Published version
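
    The size-rank colorization step can be illustrated as below: rank the test subject's structure volume against the reference group and map the rank fraction to a colour. The colour scale and the synthetic volumes are purely illustrative assumptions, not the paper's exact scheme.

```python
# Sketch of size-rank colour mapping for one segmented structure.
import numpy as np
from matplotlib import cm

def size_rank_colour(test_volume, reference_volumes):
    rank = np.sum(np.asarray(reference_volumes) < test_volume)
    frac = rank / len(reference_volumes)   # 0 = smallest, 1 = largest
    return cm.coolwarm(1.0 - frac)         # RGBA: small -> red, large -> blue

# e.g. a test subject's hippocampal volume vs. 22 reference controls
reference = np.random.normal(3500, 300, size=22)  # mm^3, synthetic example
print(size_rank_colour(3000.0, reference))
```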