4,138 research outputs found

    Deep Learning Paradigms for Existing and Imminent Lung Diseases Detection: A Review

    Get PDF
    Diagnosis of lung diseases such as asthma, chronic obstructive pulmonary disease, tuberculosis, and cancer by clinicians relies on images acquired through modalities such as X-ray and MRI. The Deep Learning (DL) paradigm has seen rapid growth in the medical imaging field in recent years. With the advancement of DL, lung diseases in medical images can be efficiently identified and classified; for example, DL can detect lung cancer with an accuracy of 99.49% in supervised models and 95.3% in unsupervised models. DL models can extract features without manual supervision, and these can be readily combined into the DL network architecture for better medical image examination of one or two lung diseases. In this review article, effective techniques are reviewed under the elementary DL models, viz. supervised, semi-supervised, and unsupervised learning, to represent the growth of DL in lung disease detection with less human intervention. Recent techniques are included to capture the paradigm shift and future research prospects. All three families of techniques used Computed Tomography (CT) image datasets until 2019, but since the pandemic period, chest radiograph (X-ray) datasets have become more common. X-rays enable economical early detection of lung diseases, which can save lives by allowing early treatment. Each DL model focuses on identifying a few features of lung diseases; researchers can explore DL to automate the detection of more lung diseases through a standard system using X-ray image datasets. Unsupervised DL has been extended from detection to prediction of lung diseases, a critical milestone for assessing the risk of lung disease before it occurs. Researchers can also work on prediction models that identify the severity stages of multiple lung diseases to reduce mortality rates and the associated cost. This review article aims to help researchers explore Deep Learning systems that can efficiently identify and predict lung diseases with enhanced accuracy.
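    To make the supervised setting surveyed above concrete, the following minimal PyTorch sketch shows a small CNN classifying chest X-ray images into disease classes. The label set, input size, and architecture are illustrative assumptions, not models taken from the review.

    # Minimal sketch (not from the review) of the supervised DL setup it surveys:
    # a small CNN that classifies chest X-ray images into disease classes.
    import torch
    import torch.nn as nn

    CLASSES = ["normal", "tuberculosis", "pneumonia", "lung_cancer"]  # hypothetical label set

    class ChestXrayCNN(nn.Module):
        def __init__(self, num_classes=len(CLASSES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):  # x: (batch, 1, H, W) grayscale X-ray
            return self.classifier(self.features(x).flatten(1))

    model = ChestXrayCNN()
    logits = model(torch.randn(4, 1, 224, 224))  # dummy batch of 4 X-rays
    print(logits.shape)                          # torch.Size([4, 4])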

    Revisiting Computer-Aided Tuberculosis Diagnosis

    Full text link
    Tuberculosis (TB) is a major global health threat, causing millions of deaths annually. Although early diagnosis and treatment can greatly improve the chances of survival, it remains a major challenge, especially in developing countries. Recently, computer-aided tuberculosis diagnosis (CTD) using deep learning has shown promise, but progress is hindered by limited training data. To address this, we establish a large-scale dataset, namely the Tuberculosis X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with corresponding bounding box annotations for TB areas. This dataset enables the training of sophisticated detectors for high-quality CTD. Furthermore, we propose a strong baseline, SymFormer, for simultaneous CXR image classification and TB infection area detection. SymFormer incorporates Symmetric Search Attention (SymAttention) to tackle the bilateral symmetry property of CXR images for learning discriminative features. Since CXR images may not strictly adhere to the bilateral symmetry property, we also propose Symmetric Positional Encoding (SPE) to facilitate SymAttention through feature recalibration. To promote future research on CTD, we build a benchmark by introducing evaluation metrics, evaluating baseline models reformed from existing detectors, and running an online challenge. Experiments show that SymFormer achieves state-of-the-art performance on the TBX11K dataset. The data, code, and models will be released.
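    As a rough illustration of the bilateral-symmetry idea only (not the paper's actual SymAttention or SPE design), the toy sketch below simply pairs a CNN feature map with its horizontally mirrored copy so a downstream detector could compare left and right lung fields; all shapes are hypothetical.

    # Toy analogue of exploiting the bilateral symmetry of a chest X-ray:
    # pair each feature map with its horizontally mirrored counterpart.
    # This is NOT the SymAttention/SPE mechanism from the paper.
    import torch
    import torch.nn as nn

    class MirrorPairing(nn.Module):
        def forward(self, feat):  # feat: (batch, C, H, W) CNN feature map
            mirrored = torch.flip(feat, dims=[-1])     # flip along the width axis
            return torch.cat([feat, mirrored], dim=1)  # (batch, 2C, H, W)

    feat = torch.randn(2, 64, 32, 32)
    paired = MirrorPairing()(feat)
    print(paired.shape)  # torch.Size([2, 128, 32, 32])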

    PadChest: A large chest x-ray image dataset with multi-label annotated reports

    Get PDF
    We present a labeled, large-scale, high-resolution chest x-ray dataset for the automated exploration of medical images along with their associated reports. This dataset includes more than 160,000 images obtained from 67,000 patients that were interpreted and reported by radiologists at San Juan Hospital (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demography. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses, and 104 anatomic locations organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. Of these reports, 27% were manually annotated by trained physicians and the remaining set was labeled using a supervised method based on a recurrent neural network with attention mechanisms. The labels generated were then validated on an independent test set, achieving a 0.93 Micro-F1 score. To the best of our knowledge, this is one of the largest public chest x-ray databases suitable for training supervised models on radiographs, and the first to contain radiographic reports in Spanish. The PadChest dataset can be downloaded from http://bimcv.cipf.es/bimcv-projects/padchest/
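    Since the automatic labels are validated with a Micro-F1 score, the short sketch below shows how micro-averaged F1 is computed for multi-label predictions: true positives, false positives, and false negatives are pooled across all labels before computing precision and recall. The toy label matrices are made up, not PadChest data.

    # Micro-F1 for multi-label predictions, computed from pooled counts.
    import numpy as np

    y_true = np.array([[1, 0, 1],   # each row: one report, each column: one finding label
                       [0, 1, 0],
                       [1, 1, 0]])
    y_pred = np.array([[1, 0, 0],
                       [0, 1, 0],
                       [1, 1, 1]])

    tp = np.logical_and(y_pred == 1, y_true == 1).sum()
    fp = np.logical_and(y_pred == 1, y_true == 0).sum()
    fn = np.logical_and(y_pred == 0, y_true == 1).sum()

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    micro_f1 = 2 * precision * recall / (precision + recall)
    print(round(micro_f1, 3))  # 0.8 for this toy example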

    Dimensionality Reduction in Deep Learning for Chest X-Ray Analysis of Lung Cancer

    Full text link
    The efficiency of several dimensionality reduction techniques, such as lung segmentation, bone shadow exclusion, and t-distributed stochastic neighbor embedding (t-SNE) for the exclusion of outliers, is estimated for the analysis of 2D chest X-ray (CXR) images with a deep learning approach, to help radiologists identify signs of lung cancer in CXR. Training and validation of a simple convolutional neural network (CNN) were performed on the open JSRT dataset (dataset #01), JSRT after bone shadow exclusion - BSE-JSRT (dataset #02), JSRT after lung segmentation (dataset #03), BSE-JSRT after lung segmentation (dataset #04), and segmented BSE-JSRT after exclusion of outliers by the t-SNE method (dataset #05). The results demonstrate that the pre-processed dataset obtained after lung segmentation, bone shadow exclusion, and filtering out outliers by t-SNE (dataset #05) achieves the highest training rate and the best accuracy in comparison to the other pre-processed datasets.
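    A rough sketch of the t-SNE outlier-filtering step mentioned above is given below: flattened images are embedded into 2D with t-SNE and samples far from the bulk of the embedding are dropped. The distance-from-centroid rule, the 95th-percentile cutoff, and the random stand-in images are illustrative assumptions, not the paper's exact criterion.

    # Illustrative t-SNE outlier filtering on flattened (preprocessed) CXR images.
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)
    images = rng.random((100, 64 * 64))      # stand-in for flattened, preprocessed CXR images

    embedding = TSNE(n_components=2, random_state=0).fit_transform(images)
    center = embedding.mean(axis=0)
    dist = np.linalg.norm(embedding - center, axis=1)

    keep = dist <= np.percentile(dist, 95)   # discard the 5% farthest points as outliers
    filtered_images = images[keep]
    print(filtered_images.shape)             # roughly (95, 4096)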