
    Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection using Chest X-ray

    Pneumonia is a life-threatening disease of the lungs caused by either bacterial or viral infection. It can be fatal if not acted upon at the right time, so early diagnosis of pneumonia is vital. The aim of this paper is to automatically detect bacterial and viral pneumonia from digital chest X-ray images. It provides a detailed report on advances made in the accurate detection of pneumonia and then presents the methodology adopted by the authors. Four different pre-trained deep Convolutional Neural Networks (CNNs), AlexNet, ResNet18, DenseNet201, and SqueezeNet, were used for transfer learning. 5,247 bacterial, viral, and normal chest X-ray images underwent preprocessing, and the modified images were used to train the transfer-learning-based classifiers. The authors report three classification schemes: normal vs. pneumonia, bacterial vs. viral pneumonia, and normal vs. bacterial vs. viral pneumonia. The classification accuracies for these three schemes were 98%, 95%, and 93.3%, respectively, exceeding the accuracies reported in the literature for each scheme. The proposed study can therefore help radiologists diagnose pneumonia faster and can support rapid screening of pneumonia patients, for example at airports. Comment: 13 figures, 5 tables. arXiv admin note: text overlap with arXiv:2003.1314
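
As a rough illustration of the transfer-learning setup described above, the sketch below adapts a pretrained ResNet18 (one of the four backbones used) to a three-class task. It assumes PyTorch/torchvision; the class mapping, hyperparameters, and frozen-backbone strategy are illustrative choices, not the authors' exact pipeline.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision assumed).
# Hyperparameters and the frozen-backbone strategy are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the ImageNet-pretrained feature extractor; train only the new head.
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-way ImageNet classifier with a 3-way output layer.
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of preprocessed X-rays.
images = torch.randn(8, 3, 224, 224)   # stand-in for real image tensors
labels = torch.randint(0, 3, (8,))     # hypothetical: 0=normal, 1=bacterial, 2=viral
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```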

    HydraViT: Adaptive Multi-Branch Transformer for Multi-Label Disease Classification from Chest X-ray Images

    Chest X-ray is an essential diagnostic tool for identifying chest diseases, given its high sensitivity to pathological abnormalities in the lungs. However, image-driven diagnosis remains challenging due to heterogeneity in the size and location of pathology, as well as visual similarities and co-occurrence of separate pathologies. Since disease-related regions often occupy a relatively small portion of diagnostic images, classification models based on traditional convolutional neural networks (CNNs) are adversely affected by their locality bias. While CNNs have previously been augmented with attention maps or spatial masks to guide focus toward potentially critical regions, learning localization guidance under heterogeneity in the spatial distribution of pathology is challenging. To improve multi-label classification performance, here we propose a novel method, HydraViT, that synergistically combines a transformer backbone with a multi-branch output module with learned weighting. The transformer backbone enhances sensitivity to long-range context in X-ray images, while the self-attention mechanism adaptively focuses on task-critical regions. The multi-branch output module dedicates an independent branch to each disease label to attain robust learning across separate disease classes, along with an aggregated branch across labels to maintain sensitivity to co-occurrence relationships among pathologies. Experiments demonstrate that, on average, HydraViT outperforms competing attention-guided methods by 1.2%, region-guided methods by 1.4%, and semantic-guided methods by 1.0% in multi-label classification performance.
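
The sketch below gives one plausible reading of the multi-branch output module: an independent binary branch per disease label plus an aggregated branch over all labels, blended with learned per-label weights. It is an interpretation for illustration, assuming PyTorch, not the authors' released implementation; the class name MultiBranchHead and the sigmoid blending are assumptions.

```python
# Illustrative multi-branch output head in the spirit of HydraViT
# (PyTorch assumed; blending scheme is an assumption).
import torch
import torch.nn as nn

class MultiBranchHead(nn.Module):
    def __init__(self, feat_dim: int, num_labels: int):
        super().__init__()
        # One independent binary branch per disease label.
        self.label_branches = nn.ModuleList(
            [nn.Linear(feat_dim, 1) for _ in range(num_labels)]
        )
        # One aggregated branch predicting all labels jointly, intended to
        # retain sensitivity to co-occurrence among pathologies.
        self.agg_branch = nn.Linear(feat_dim, num_labels)
        # Learned per-label weighting between the two paths.
        self.alpha = nn.Parameter(torch.zeros(num_labels))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        per_label = torch.cat([b(feats) for b in self.label_branches], dim=1)
        joint = self.agg_branch(feats)
        w = torch.sigmoid(self.alpha)           # one weight in (0, 1) per label
        return w * per_label + (1 - w) * joint  # multi-label logits

# feats would come from the transformer backbone, e.g. a ViT [CLS] token.
head = MultiBranchHead(feat_dim=768, num_labels=14)
logits = head(torch.randn(4, 768))  # -> shape (4, 14)
```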

    Deep Learning in Chest Radiography: From Report Labeling to Image Classification

    Chest X-ray (CXR) is the most common examination performed by a radiologist. From CXRs, radiologists must correctly and promptly diagnose conditions of a patient's thorax to avoid the progression of life-threatening diseases. Certified radiologists are hard to find, and stress, fatigue, and lack of experience all affect the quality of an examination. As a result, a technique to aid radiologists in reading CXRs, and a tool to help bridge the gap for communities without adequate access to radiological services, would yield a huge advantage for patients and patient care. This thesis considers one essential task, CXR image classification, with Deep Learning (DL) technologies from the following three aspects: understanding the intersection of CXR interpretation and DL; extracting multiple image labels from radiology reports to facilitate the training of DL classifiers; and developing CXR classifiers using DL. First, we explain the core concepts and categorize the existing data and literature for researchers entering this field for ease of reference. Using CXRs and DL for medical image diagnosis is a relatively recent field of study because large, publicly available CXR datasets have not been around for very long. Second, we contribute to labeling large datasets with multi-label image annotations extracted from CXR reports. We describe the development of a DL-based report labeler named CXRlabeler, focusing on inductive sequential transfer learning. Lastly, we explain the design of three novel Convolutional Neural Network (CNN) classifiers, i.e., MultiViewModel, Xclassifier, and CovidXrayNet, for binary image classification, multi-label image classification, and multi-class image classification, respectively. This dissertation showcases significant progress in the field of automated CXR interpretation using DL; all source code used is publicly available. It provides methods and insights that can be applied to other medical image interpretation tasks.
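
Since the thesis spans binary, multi-label, and multi-class CXR classification, a short sketch may clarify how the three settings differ only in output head and loss. This is a generic illustration assuming PyTorch, not the released code of MultiViewModel, Xclassifier, or CovidXrayNet; feature dimensions and label counts are placeholders.

```python
# How the three classification settings differ (PyTorch assumed;
# sizes are placeholders, not taken from the thesis).
import torch
import torch.nn as nn

feats = torch.randn(4, 512)  # features from any CNN backbone

# Binary classification (cf. MultiViewModel): one logit per image + BCE.
binary_head = nn.Linear(512, 1)
binary_loss = nn.BCEWithLogitsLoss()(
    binary_head(feats).squeeze(1), torch.randint(0, 2, (4,)).float()
)

# Multi-label classification (cf. Xclassifier): one logit per finding;
# labels are not mutually exclusive, so BCE is applied per label.
multilabel_head = nn.Linear(512, 14)
multilabel_loss = nn.BCEWithLogitsLoss()(
    multilabel_head(feats), torch.randint(0, 2, (4, 14)).float()
)

# Multi-class classification (cf. CovidXrayNet): mutually exclusive
# classes, so softmax cross-entropy over a single label per image.
multiclass_head = nn.Linear(512, 3)
multiclass_loss = nn.CrossEntropyLoss()(
    multiclass_head(feats), torch.randint(0, 3, (4,))
)
```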

    Chest X-ray pneumothorax segmentation using U-Net with EfficientNet and ResNet architectures

    Medical imaging refers to visualization techniques that provide valuable information about the human body's internal structures for clinical applications, diagnosis, treatment, and scientific research. One of the essential methods for analyzing and processing medical images is segmentation, which helps doctors diagnose accurately by providing detailed information on the relevant part of the body. However, segmenting medical images faces several challenges: it requires trained medical experts and is time-consuming and error-prone. An automatic medical image segmentation system therefore appears necessary. Deep learning algorithms have recently shown outstanding performance on segmentation tasks, especially semantic segmentation networks that provide pixel-level image understanding. Since the introduction of the first Fully Convolutional Network (FCN) for semantic image segmentation, several segmentation networks have been proposed on its basis. One of the state-of-the-art convolutional networks in the medical imaging field is U-Net. This paper presents a novel end-to-end semantic segmentation model for medical images, named Ens4B-UNet (an ensemble of four U-Net architectures with pre-trained backbone networks). Ens4B-UNet builds on U-Net's success with several significant improvements: adapting powerful and robust Convolutional Neural Networks (CNNs) as backbones for the U-Net encoders and using nearest-neighbor up-sampling in the decoders. Ens4B-UNet is designed as a weighted-average ensemble of four encoder-decoder segmentation models. The backbone networks of all ensembled models are pre-trained on the ImageNet dataset to exploit the benefit of transfer learning. To improve the models, several training and prediction techniques are applied, including Stochastic Weight Averaging (SWA), data augmentation, Test-Time Augmentation (TTA), and different types of optimal thresholds. We evaluate and test our models on the 2019 Pneumothorax Challenge dataset, which contains 12,047 training images with 12,954 masks and 3,205 test images. Our proposed segmentation network achieves a 0.8608 mean Dice Similarity Coefficient (DSC) on the test set, which is among the top 1% of systems in the Kaggle competition.
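
A minimal sketch of the prediction-side machinery described above (weighted-average ensembling, flip-based test-time augmentation, thresholding, and Dice scoring), assuming PyTorch; model internals and ensemble weights are placeholders rather than the Ens4B-UNet implementation.

```python
# Weighted-average ensemble with flip TTA and Dice scoring (PyTorch assumed;
# models and weights are placeholders, not the Ens4B-UNet code).
import torch

def predict_with_tta(model, image: torch.Tensor) -> torch.Tensor:
    """Average sigmoid mask probabilities over identity + horizontal flip."""
    with torch.no_grad():
        p = torch.sigmoid(model(image))
        p_flip = torch.sigmoid(model(torch.flip(image, dims=[-1])))
    return 0.5 * (p + torch.flip(p_flip, dims=[-1]))  # un-flip before averaging

def ensemble_predict(models, weights, image, threshold=0.5):
    """Weighted average of per-model TTA probabilities, then threshold."""
    probs = sum(w * predict_with_tta(m, image) for m, w in zip(models, weights))
    probs = probs / sum(weights)
    return (probs > threshold).float()  # binary pneumothorax mask

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Dice Similarity Coefficient between two binary masks."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```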

    Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases

    Cardiothoracic and pulmonary diseases are a significant cause of mortality and morbidity worldwide. The COVID-19 pandemic has highlighted the lack of access to clinical care, the overburdened medical system, and the potential of artificial intelligence (AI) to improve medicine. A variety of diseases affect the cardiopulmonary system, including lung cancers, heart disease, and tuberculosis (TB), in addition to COVID-19-related diseases. Screening, diagnosis, and management of cardiopulmonary diseases have become difficult owing to the limited availability of diagnostic tools and experts, particularly in resource-limited regions. Early screening and accurate diagnosis and staging of these diseases could play a crucial role in treatment and care, and potentially aid in reducing mortality. Radiographic imaging methods such as computed tomography (CT), chest X-rays (CXRs), and echo ultrasound (US) are widely used in screening and diagnosis. Research on image-based AI and machine learning (ML) methods can help in rapid assessment, serve as surrogates for expert assessment, and reduce variability in human performance. In this Special Issue, "Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases", we highlight exemplary primary research studies and literature reviews focusing on novel AI/ML methods and their application in image-based screening, diagnosis, and clinical management of cardiopulmonary diseases. We hope that these articles help establish the state of AI advancements in this field.

    Automated Teeth Extraction and Dental Caries Detection in Panoramic X-ray

    Dental caries is one of the most common chronic diseases, affecting the majority of people at least once during their lifetime. This expensive disease accounts for 5-10% of the healthcare budget in developing countries. Caries lesions appear as the result of dental biofilm metabolic activity, caused by bacteria (most prominently Streptococcus mutans) feeding on uncleaned sugars and starches in the oral cavity. Also known as tooth decay, they are primarily diagnosed by general dentists based solely on clinical assessments. Since in many cases dental problems cannot be detected by simple observation, dental x-ray imaging is a standard tool for domain experts, i.e., dentists and radiologists, to distinguish dental diseases such as proximal caries. Among the different dental radiography methods, Panoramic or Orthopantomogram (OPG) imaging is commonly performed as the initial step toward assessment. OPG images are captured with a small dose of radiation and can depict the entire patient dentition in a single image. Dental caries can sometimes be hard to identify by general dentists relying only on visual inspection of dental radiography. Tooth decay can easily be misinterpreted as shadows for various reasons, such as low image quality. Moreover, OPG images have poor quality, and structures are not presented with strong edges due to low contrast, uneven exposure, etc. Thus, disease detection is a very challenging task using Panoramic radiography. With the recent development of Artificial Intelligence (AI) in dentistry and the introduction of Convolutional Neural Networks (CNNs) for image classification, developing medical decision support systems has become a topic of interest in both academia and industry. More accurate CNN-based decision support systems can enhance dentists' diagnostic performance, resulting in improved dental care for patients. In this thesis, the first automated teeth extraction system for Panoramic images, using evolutionary algorithms, is proposed. In contrast to intraoral radiography methods, Panoramic imaging is captured with the x-ray film outside the patient's mouth. Therefore, Panoramic x-rays contain regions outside of the jaw, which makes teeth segmentation extremely difficult. Since only an image of each tooth separately is needed to build a caries detection model, segmentation of teeth from the OPG image is essential. Due to the absence of significant pixel-intensity differences between regions in OPG radiography, teeth segmentation is very hard to implement. Consequently, an automated system is introduced that takes an OPG as input and produces images of single teeth as output. Since only a few research studies have addressed a similar task for Panoramic radiography, there is room for improvement. A genetic algorithm is applied along with different image processing methods to perform teeth extraction via jaw extraction, jaw separation, and teeth-gap valley detection, respectively. The proposed system is compared to the state of the art in teeth extraction on other image types. After teeth are segmented from each image, a model based on various untrained and pre-trained CNN architectures is proposed to detect dental caries for each tooth. An autoencoder-based model along with well-known CNN architectures is used for feature extraction, followed by capsule networks to perform classification.
    The dataset of Panoramic x-rays was prepared by the authors, with labels provided with the help of an expert radiologist. The proposed model demonstrates an acceptable detection rate of 86.05% and an increase in caries detection speed. Considering the challenges of performing such a task on low-quality OPG images, this work is a step toward a fully automated, efficient caries detection model to assist domain experts.
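
The teeth-gap valley step lends itself to a compact illustration: gaps between teeth are darker than the teeth themselves, so column-wise intensity sums over a cropped jaw strip dip at the gaps. The sketch below is a simplified, heuristic stand-in (assuming NumPy/SciPy) for the genetic-algorithm-driven extraction in the thesis; the function name and tuning parameters are hypothetical.

```python
# Simplified teeth-gap valley detection on a cropped jaw strip
# (NumPy/SciPy assumed; a heuristic stand-in, not the thesis's GA method).
import numpy as np
from scipy.signal import find_peaks

def gap_valleys(jaw_strip: np.ndarray, min_gap_px: int = 20) -> np.ndarray:
    """Return column indices of likely gaps between adjacent teeth.

    jaw_strip: 2-D grayscale array cropped to one jaw (rows x cols).
    min_gap_px: hypothetical minimum horizontal spacing between valleys.
    """
    profile = jaw_strip.sum(axis=0).astype(float)       # column intensity sums
    profile = (profile - profile.min()) / (np.ptp(profile) + 1e-9)
    # Valleys of the profile are peaks of its negation.
    valleys, _ = find_peaks(-profile, distance=min_gap_px, prominence=0.05)
    return valleys

# Each pair of consecutive valleys then bounds a single-tooth crop that can
# be passed to the caries classifier described above.
```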
