Deep learning model for thorax diseases detection
Despite the availability of radiology devices in some health care centers, thorax diseases remain among the most common health problems, especially in rural areas. By exploiting the power of the Internet of Things and dedicated platforms for analyzing large volumes of medical data, a patient's condition could be detected earlier. In this paper, the proposed model is based on a pre-trained ResNet-50 for diagnosing thorax diseases. Chest X-ray images are cropped to extract the rib-cage region from the radiographs. ResNet-50 was re-trained on the ChestX-ray14 dataset, where chest radiographs are fed into the model to determine whether the person is healthy. For an unhealthy patient, the model classifies the disease into one of fourteen chest diseases. The results show that ResNet-50 achieves impressive performance in classifying thorax diseases.
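The abstract above describes a two-stage pipeline: crop the rib-cage region, then screen healthy vs. unhealthy and, if unhealthy, pick one of fourteen diseases. A minimal sketch of that decision logic follows; the bounding box, threshold, and sigmoid-style per-class scores are assumptions, since the abstract does not specify them.

```python
import numpy as np

def crop_rib_cage(image, box):
    """Crop the rib-cage region from a chest radiograph.

    `box` = (top, left, height, width) is a hypothetical bounding box;
    the paper does not say how the region is located.
    """
    top, left, height, width = box
    return image[top:top + height, left:left + width]

def diagnose(probs, threshold=0.5):
    """Two-stage decision described in the abstract: first healthy vs.
    unhealthy, then the most probable of the 14 diseases.

    `probs` is assumed to be one sigmoid score per disease class;
    the 0.5 threshold is illustrative.
    """
    if probs.max() < threshold:
        return "healthy"
    return int(np.argmax(probs))  # index of the predicted disease

img = np.zeros((1024, 1024))                 # placeholder radiograph
roi = crop_rib_cage(img, (100, 150, 800, 700))
print(roi.shape)                             # (800, 700)
print(diagnose(np.array([0.1, 0.9, 0.2])))   # 1
print(diagnose(np.array([0.1, 0.2, 0.3])))   # healthy
```

In a full implementation the cropped region would be resized and passed through the fine-tuned ResNet-50 to produce `probs`.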
CLASSIFICATION OF THORAX DISEASES FROM CHEST X-RAY IMAGES
Chest X-ray images are crucial for medical decisions and patient care. However, their manual interpretation is time-consuming and prone to human error. This project aims to create an automated system that uses deep learning techniques to classify thorax diseases from chest X-ray images. We use the NIH Chest X-Ray Dataset, which contains many annotated images, as input data. The approach builds the classifier on a UNet architecture, well known for its efficiency in image segmentation, and adds residual blocks to enhance its ability to classify images. The goal is a robust and accurate classification model that exploits UNet's capabilities for feature representation and extraction, allowing discrimination between different forms of thorax disease with high precision. The project shows the effectiveness of a UNet architecture with residual blocks for accurately classifying thorax disease types; combined, these techniques produced results superior to many other architectures for medical image analysis, underscoring their importance.
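The key addition named above is the residual block: the input is carried around a pair of transform layers and summed back, so each block only has to learn a residual correction. A dependency-free sketch with dense weights standing in for the convolutions (layer shapes and two-layer depth are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity residual block of the kind added to the UNet here.

    The skip connection (h + x) lets gradients bypass the two
    transform layers, which is what makes deep stacks trainable.
    """
    h = relu(x @ w1)      # first transform + nonlinearity
    h = h @ w2            # second transform
    return relu(h + x)    # skip connection, then activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # batch of 4 feature vectors
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)   # (4, 8)
```

In the actual network the matrix multiplies would be convolutions over feature maps, but the skip-and-sum structure is the same.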
CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison
Large, labeled datasets have driven deep learning methods to achieve expert-level performance on a variety of medical imaging tasks. We present CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240 patients. We design a labeler to automatically detect the presence of 14 observations in radiology reports, capturing uncertainties inherent in radiograph interpretation. We investigate different approaches to using the uncertainty labels for training convolutional neural networks that output the probability of these observations given the available frontal and lateral radiographs. On a validation set of 200 chest radiographic studies manually annotated by 3 board-certified radiologists, we find that different uncertainty approaches are useful for different pathologies. We then evaluate our best model on a test set of 500 chest radiographic studies annotated by a consensus of 5 board-certified radiologists, and compare its performance to that of 3 additional radiologists in the detection of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the model's ROC and PR curves lie above all 3 radiologist operating points. We release the dataset to the public as a standard benchmark to evaluate the performance of chest radiograph interpretation models. The dataset is freely available at https://stanfordmlgroup.github.io/competitions/chexpert. Published in AAAI 2019.
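The uncertainty-handling approaches compared in the paper include mapping uncertain labels to positive (U-Ones), to negative (U-Zeros), or masking them out of the loss (U-Ignore). A sketch of those policies, assuming the dataset's convention of 1.0 = positive, 0.0 = negative, -1.0 = uncertain:

```python
import numpy as np

# CheXpert-style per-observation labels.
POSITIVE, NEGATIVE, UNCERTAIN = 1.0, 0.0, -1.0

def apply_uncertainty_policy(labels, policy):
    """Map uncertain labels before training.

    Returns (labels, mask), where `mask` marks entries that should
    contribute to the loss; under U-Ignore, uncertain entries are
    masked out instead of remapped.
    """
    labels = labels.copy()
    mask = np.ones_like(labels, dtype=bool)
    uncertain = labels == UNCERTAIN
    if policy == "U-Ones":
        labels[uncertain] = POSITIVE
    elif policy == "U-Zeros":
        labels[uncertain] = NEGATIVE
    elif policy == "U-Ignore":
        mask[uncertain] = False
    else:
        raise ValueError(f"unknown policy: {policy}")
    return labels, mask

y = np.array([1.0, -1.0, 0.0, -1.0])
print(apply_uncertainty_policy(y, "U-Ones")[0])    # [1. 1. 0. 1.]
print(apply_uncertainty_policy(y, "U-Ignore")[1])  # [ True False  True False]
```

The paper's finding that different policies suit different pathologies is why the policy is left as a per-observation choice here rather than a global constant.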
Anatomy X-Net: A Semi-Supervised Anatomy Aware Convolutional Neural Network for Thoracic Disease Classification
Thoracic disease detection from chest radiographs using deep learning methods has been an active area of research in the last decade. Most previous methods attempt to focus on the diseased organs of the image by identifying spatial regions responsible for significant contributions to the model's prediction. In contrast, expert radiologists first locate the prominent anatomical structures before determining whether those regions are anomalous. Therefore, integrating anatomical knowledge within deep learning models could bring substantial improvement in automatic disease classification. This work proposes an anatomy-aware attention-based architecture named Anatomy X-Net that prioritizes spatial features guided by pre-identified anatomy regions. We leverage a semi-supervised learning method using the JSRT dataset, which contains organ-level annotation, to obtain anatomical segmentation masks (for lungs and heart) for the NIH and CheXpert datasets. The proposed Anatomy X-Net uses a pre-trained DenseNet-121 as the backbone network with two corresponding structured modules, Anatomy Aware Attention (AAA) and Probabilistic Weighted Average Pooling (PWAP), in a cohesive framework for anatomical attention learning. Our proposed method sets new state-of-the-art performance on the official NIH test set with an AUC score of 0.8439, proving the efficacy of utilizing anatomy segmentation knowledge to improve thoracic disease classification. Furthermore, Anatomy X-Net yields an averaged AUC of 0.9020 on the Stanford CheXpert dataset, improving on existing methods and demonstrating the generalizability of the proposed framework.
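The pooling module named above, Probabilistic Weighted Average Pooling, can be sketched as attention-weighted spatial pooling: spatial scores are normalised into a probability map and used to average the backbone feature map, so anatomy regions with high attention dominate the pooled descriptor. The exact PWAP formulation may differ from this sketch; shapes and the softmax normalisation are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_average_pool(features, attention):
    """Attention-weighted spatial pooling in the spirit of PWAP.

    features:  (H, W, C) backbone feature map
    attention: (H, W) spatial attention scores
    Returns a (C,) pooled descriptor.
    """
    h, w, _ = features.shape
    p = softmax(attention.ravel()).reshape(h, w, 1)  # sums to 1 over space
    return (features * p).sum(axis=(0, 1))

rng = np.random.default_rng(1)
f = rng.standard_normal((7, 7, 16))
a = rng.standard_normal((7, 7))
pooled = weighted_average_pool(f, a)
print(pooled.shape)   # (16,)
```

With a flat (all-equal) attention map this reduces to ordinary global average pooling, which makes the module a strict generalisation of the usual DenseNet pooling head.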