51 research outputs found

    A Review on Detection of Pneumonia in Chest X-ray Images Using Neural Networks

    The healthcare system in India suffers from a shortage of diagnostic support systems and physicians. Physicians struggle to treat large numbers of patients, and hospitals lack radiologists, especially in rural areas; as a result, almost all cases are handled by a single physician, leading to many misdiagnoses. Computer-aided diagnostic systems are being developed to address this problem. The current study reviews different methods for detecting pneumonia using neural networks and compares their approaches and results. For the fairest comparison, only papers using the same dataset, ChestX-ray14, are studied.
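
    The review itself does not prescribe an implementation; the sketch below shows the kind of baseline such surveys typically cover, an ImageNet-pretrained CNN fine-tuned to detect pneumonia on chest X-rays. The backbone choice, input size, and training loop are assumptions for illustration, not taken from any reviewed paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical baseline of the kind the review surveys: an ImageNet-pretrained
# DenseNet-121 fine-tuned to predict pneumonia vs. normal from a chest X-ray.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 1)  # single pneumonia logit

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimisation step; images: (B, 3, 224, 224), labels: (B, 1) in {0, 1}."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```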

    Anatomy X-Net: A Semi-Supervised Anatomy Aware Convolutional Neural Network for Thoracic Disease Classification

    Thoracic disease detection from chest radiographs using deep learning methods has been an active area of research in the last decade. Most previous methods attempt to focus on the diseased organs of the image by identifying spatial regions responsible for significant contributions to the model's prediction. In contrast, expert radiologists first locate the prominent anatomical structures before determining whether those regions are anomalous. Therefore, integrating anatomical knowledge within deep learning models could bring substantial improvement in automatic disease classification. This work proposes an anatomy-aware attention-based architecture, named Anatomy X-Net, that prioritizes the spatial features guided by the pre-identified anatomy regions. We leverage a semi-supervised learning method using the JSRT dataset, which contains organ-level annotations, to obtain anatomical segmentation masks (for the lungs and heart) for the NIH and CheXpert datasets. The proposed Anatomy X-Net uses a pre-trained DenseNet-121 as the backbone network with two structured modules, Anatomy Aware Attention (AAA) and Probabilistic Weighted Average Pooling (PWAP), in a cohesive framework for anatomical attention learning. Our proposed method sets a new state-of-the-art on the official NIH test set with an AUC score of 0.8439, demonstrating the efficacy of utilizing anatomy segmentation knowledge to improve thoracic disease classification. Furthermore, Anatomy X-Net yields an average AUC of 0.9020 on the Stanford CheXpert dataset, improving on existing methods and demonstrating the generalizability of the proposed framework.
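
    The abstract describes the architecture only at a high level; the following is a simplified sketch of the general idea of anatomy-guided spatial attention over DenseNet-121 features, with a learned per-location pooling weight standing in for the AAA and PWAP modules. The module internals here are assumptions, not the published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class AnatomyGuidedClassifier(nn.Module):
    """Simplified anatomy-aware attention over DenseNet-121 features.

    A float anatomy mask (e.g. lungs/heart segmentation), resized to the
    feature-map resolution, modulates a spatial attention map; a normalised
    per-location weighting then approximates weighted average pooling.
    """

    def __init__(self, num_classes=14):
        super().__init__()
        backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        self.features = backbone.features                   # (B, 1024, h, w)
        self.attn = nn.Conv2d(1024 + 1, 1, kernel_size=1)   # features + mask -> attention logit
        self.fc = nn.Linear(1024, num_classes)

    def forward(self, x, anatomy_mask):
        f = self.features(x)                                     # (B, 1024, h, w)
        m = F.interpolate(anatomy_mask, size=f.shape[-2:])       # (B, 1, h, w)
        a = torch.sigmoid(self.attn(torch.cat([f, m], dim=1)))   # spatial attention in [0, 1]
        w = a / (a.sum(dim=(2, 3), keepdim=True) + 1e-6)         # normalised pooling weights
        pooled = (f * w).sum(dim=(2, 3))                          # weighted average pooling
        return self.fc(pooled)                                    # multi-label logits
```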

    A novel augmented deep transfer learning for classification of COVID-19 and other thoracic diseases from X-rays

    Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks. However, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning. Moreover, the scarcity of annotated data for medical imaging tasks causes further problems for effective transfer learning. To address these problems, we develop an augmented ensemble transfer learning technique that leads to a significant performance gain over conventional transfer learning. Our technique uses an ensemble of deep learning models, where the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, the model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models. The dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestX-ray14 radiography dataset. Our experimental results show more than a 50% reduction in the error rate with our method compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 dataset for binary and multi-class classification tasks. Our technique achieves 99.49% accuracy for binary classification and 99.24% for multi-class classification.
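
    The abstract names the ingredients without implementation detail; the sketch below illustrates one of them, adapting an ImageNet-pretrained backbone to single-channel X-rays with an extra input layer and a new head, trained on augmented data. The layer choices, backbone, and augmentation policy are assumptions, not the published configuration.

```python
import torch.nn as nn
from torchvision import models, transforms

class AdaptedResNet(nn.Module):
    """Pretrained backbone with an extra layer mapping 1-channel X-rays to the
    3-channel input the ImageNet model expects (one possible adaptation)."""

    def __init__(self, num_classes=14):
        super().__init__()
        self.adapt = nn.Conv2d(1, 3, kernel_size=1)    # dimensionality-change layer
        self.backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):
        return self.backbone(self.adapt(x))

# Augmentation used to enlarge the scarce target-domain training set.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])
```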

    The Effectiveness of Transfer Learning Systems on Medical Images

    Deep neural networks have revolutionized the performance of many machine learning tasks such as medical image classification and segmentation. Current deep learning (DL) algorithms, specifically convolutional neural networks, are increasingly becoming the methodological choice for most medical image analysis. However, training these deep neural networks requires high computational resources and very large amounts of labeled data, which are often expensive and laborious to obtain. Meanwhile, recent studies have shown that the transfer learning (TL) paradigm is an attractive choice, providing promising solutions to the shortage of labeled medical images. Accordingly, TL enables us to leverage the knowledge learned from related data to solve a new problem. The objective of this dissertation is to examine the effectiveness of TL systems on medical images. First, a comprehensive systematic literature review was performed to provide an up-to-date status of TL systems on medical images. Specifically, we proposed a novel conceptual framework to organize the review. Second, a novel DL network was pretrained on natural images and utilized to evaluate the effectiveness of TL on a very large medical image dataset, specifically chest X-ray images. Lastly, domain adaptation using an autoencoder was evaluated on the medical image dataset, and the results confirmed the effectiveness of TL through fine-tuning strategies. We make several contributions to TL systems on medical image analysis: Firstly, we present a novel survey of TL on medical images and propose a new conceptual framework to organize the findings. Secondly, we propose a novel DL architecture to improve learned representations of medical images while mitigating the problem of vanishing gradients. Additionally, we identified the optimal cut-off layer (OCL) that provided the best model performance. We found that the higher layers in the proposed deep model give a better feature representation for our medical image task. Finally, we analyzed the effect of domain adaptation by fine-tuning an autoencoder on our medical images and provided theoretical contributions on the application of the transductive TL approach. The contributions herein reveal several research gaps to motivate future research and contribute to the body of literature in this active research area of TL systems on medical image analysis.
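
    The dissertation's optimal cut-off layer (OCL) idea, freezing layers below a chosen depth and fine-tuning the rest on the medical images, can be illustrated generically as below. The backbone, head size, and cut-off index are assumptions, not the dissertation's specific architecture.

```python
import torch.nn as nn
from torchvision import models

def freeze_up_to(module, cutoff):
    """Freeze every child layer below the cut-off index; layers at and above
    the cut-off stay trainable and are fine-tuned on the medical images."""
    for child in list(module.children())[:cutoff]:
        for p in child.parameters():
            p.requires_grad = False

# Generic ImageNet-pretrained backbone with a new head for 14 thoracic labels.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier[-1] = nn.Linear(backbone.classifier[-1].in_features, 14)

# Hypothetical cut-off: keep the early convolutional blocks frozen.
freeze_up_to(backbone.features, cutoff=17)
```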

    Parallel CNN-ELM: A Multiclass Classification of Chest X-Ray Images to Identify Seventeen Lung Diseases Including COVID-19

    Numerous epidemic lung diseases such as COVID-19, tuberculosis (TB), and pneumonia have spread over the world, killing millions of people. Medical specialists have experienced challenges in correctly identifying these diseases due to their subtle differences in chest X-ray (CXR) images. To assist medical experts, this study proposed a computer-aided lung illness identification method based on CXR images. For the first time, 17 different lung disorders were considered, and the study was divided into six trials containing two, two, three, four, fourteen, and seventeen classes, respectively. The proposed framework, named CNN-ELM, combines the robust feature extraction capabilities of a lightweight parallel convolutional neural network (CNN) with the classification abilities of the extreme learning machine algorithm. A promising accuracy of 90.92% and an area under the curve (AUC) of 96.93% were achieved when 17 classes were classified side by side. The framework also accurately identified COVID-19 and TB with 99.37% and 99.98% accuracy, respectively, in 0.996 microseconds for a single image. Additionally, the results demonstrated that the framework outperforms existing state-of-the-art (SOTA) models. A secondary conclusion of this study was that the proposed framework retains its effectiveness over a range of real-world conditions, including balanced or unbalanced and large or small datasets, large multiclass or simple binary classification, and high- or low-resolution images. A prototype Android app was also developed to demonstrate the potential of the framework for real-life deployment.
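
    The abstract names the two components but not their internals; the sketch below shows the general CNN-plus-ELM pattern, where feature vectors from a CNN are fed to an extreme learning machine whose output weights are solved in closed form. The feature dimensionality and hidden size are assumptions, and the paper's lightweight parallel CNN is not reproduced here.

```python
import numpy as np

class ELMClassifier:
    """Extreme learning machine: a random, fixed hidden layer followed by
    output weights solved by least squares (Moore-Penrose pseudoinverse)."""

    def __init__(self, n_hidden=1000, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        # X: (N, d) CNN feature vectors; y_onehot: (N, n_classes) targets.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random hidden activations
        self.beta = np.linalg.pinv(H) @ y_onehot  # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)     # predicted class indices
```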

    Deep Convolutional Networks For Oct Image Classification

    In this work, OCT (optical coherence tomography) images are classified into four distinct categories according to the pathology present. Three different, recent neural network models are used to classify the images, and each achieves excellent results on the test set, which was withheld from the network during training. Accuracy on the test set is higher than 98%, and only a few images are classified into the wrong category. This makes our approach promising for future automated use. To further improve results, all three models use transfer learning.
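
    The abstract does not name the three networks or the four categories; a minimal sketch of the underlying recipe, fine-tuning one ImageNet-pretrained model for a four-way OCT classification, is shown below. The backbone and class names are assumptions.

```python
import torch.nn as nn
from torchvision import models

# Assumed OCT categories (common in public OCT datasets): CNV, DME, drusen, normal.
NUM_CLASSES = 4

def build_transfer_model():
    """One ImageNet-pretrained backbone with a new 4-way head; an ensemble of
    three such models, as in the abstract, would average their softmax outputs."""
    m = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    return m
```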