263 research outputs found

    High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    The study aimed to determine whether computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs of 909 patients, obtained between January 2013 and July 2015 at our institution, were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73-100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective for highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
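The Youden Index step can be made concrete: it chooses the score cutoff that maximizes J = sensitivity + specificity - 1 over all candidate thresholds. A minimal numpy sketch of that selection (the function name and toy data are illustrative, not from the study):

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    thresholds = np.unique(scores)
    pos, neg = labels == 1, labels == 0
    best_j, best_t = -1.0, thresholds[0]
    for t in thresholds:
        pred = scores >= t                       # predict "frontal" at or above cutoff
        sensitivity = np.mean(pred[pos])         # fraction of positives detected
        specificity = np.mean(~pred[neg])        # fraction of negatives rejected
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Toy example: well-separated frontal (1) vs lateral (0) network scores.
scores = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
labels = np.array([0, 0, 0, 1, 1, 1])
t, j = youden_cutoff(scores, labels)
```

On perfectly separable scores like these, the cutoff lands at the lowest positive score and J reaches its maximum of 1.0, matching the 100% classification the abstract reports on its test sets.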

    Feature Detection in Medical Images Using Deep Learning

    This project explores the use of deep learning to predict age from pediatric hand X-rays. Data from the Radiological Society of North America’s pediatric bone age challenge were used to train and evaluate a convolutional neural network. The project used InceptionV3, a CNN developed by Google and pre-trained on ImageNet, a popular online image dataset. Our fine-tuned version of InceptionV3 yielded an average error of less than 10 months between predicted and actual age. This project shows the effectiveness of deep learning in analyzing medical images and the potential for even greater improvements in the future. In addition to the technological and potential clinical benefits of these methods, this project will serve as a useful pedagogical tool for introducing the challenges and applications of deep learning to the Bryant community.
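The fine-tuning recipe the project describes, a pre-trained backbone reused as a feature extractor with a small regression head trained on top, can be illustrated without a deep learning framework. Below, a fixed random projection stands in for InceptionV3's frozen convolutional features and a least-squares fit stands in for training the head; all data and names are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: a fixed feature map, never updated.
W_backbone = rng.normal(size=(64, 16))

def extract_features(images):
    return np.maximum(images @ W_backbone, 0.0)   # ReLU feature activations

# Synthetic "hand X-rays" whose true bone age is a linear function of the features.
images = rng.normal(size=(200, 64))
feats = extract_features(images)
true_w = rng.normal(size=16)
ages = feats @ true_w + rng.normal(scale=0.1, size=200)   # months, plus noise

# "Fine-tuning": fit only the regression head on top of the frozen features.
head_w, *_ = np.linalg.lstsq(feats, ages, rcond=None)
pred = feats @ head_w
mae = np.mean(np.abs(pred - ages))                # mean absolute error in "months"
```

In the real project the head would be trained by gradient descent on bone-age labels, but the division of labor, frozen pre-trained features plus a small trainable head, is the same.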

    Domain Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs

    This study collected a pre-processed dataset of chest radiographs, formulated a deep neural network model for detecting abnormalities, evaluated the performance of the formulated model, and implemented a prototype of it, with the view to developing a deep neural network model that automatically classifies abnormalities in chest radiographs. To achieve the overall purpose of this research, a large set of chest X-ray images was sourced from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research group at Stanford University. The chest radiographs were preprocessed, using standardization and normalization, into a format that can be fed into a deep neural network. The classification problem was formulated as a multi-label binary classification model that used a convolutional neural network architecture to decide whether each abnormality was present in a chest radiograph. The classification model was evaluated using specificity, sensitivity, and Area Under the Curve (AUC) score as parameters. A prototype of the classification model was implemented using Keras, an open-source deep learning framework, in the Python programming language. Based on the AUC-ROC curve, the model was able to classify Atelectasis, Support devices, Pleural effusion, Pneumonia, a normal CXR (no finding), Pneumothorax, and Consolidation. However, Lung opacity and Cardiomegaly had probability outputs of less than 0.5 and were thus classified as absent. Precision, recall, and F1-score values were all 0.78; this implies that the numbers of false positives and false negatives are the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify abnormalities in chest radiographs as present or absent.
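The multi-label decision rule described here, one sigmoid output per abnormality thresholded at 0.5, and the precision/recall/F1 evaluation can be sketched as follows (the label names come from the abstract; the probabilities and ground truth are invented):

```python
import numpy as np

LABELS = ["Atelectasis", "Support devices", "Pleural effusion", "Pneumonia",
          "No finding", "Pneumothorax", "Consolidation", "Lung opacity", "Cardiomegaly"]

def classify(probs, threshold=0.5):
    """Mark each label 'present' (1) if its sigmoid output reaches the threshold."""
    return (np.asarray(probs) >= threshold).astype(int)

def precision_recall_f1(y_true, y_pred):
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented per-image probabilities: Lung opacity and Cardiomegaly fall below 0.5.
probs = [0.9, 0.8, 0.7, 0.6, 0.9, 0.55, 0.6, 0.3, 0.4]
pred = classify(probs)

# Invented ground truth producing one false positive and one false negative.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 1, 0])
p, r, f1 = precision_recall_f1(y_true, pred)
```

When the false-positive and false-negative counts are equal, precision, recall, and F1 coincide, which is exactly the situation the identical 0.78 values reported above indicate.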

    Deep Learning-Based Computer-Aided Diagnosis of Radiographic Bone Loss and Periodontitis Stage: A Multi-Device Study

    Thesis (Ph.D.) -- Graduate School of Convergence Science and Technology (Program in Radiation Convergence Biomedical Science), Seoul National University, February 2021. Advisor: Won-Jin Yi. Periodontal diseases, including gingivitis and periodontitis, are among the most common diseases that humankind suffers from. Resorption of alveolar bone in the oral and maxillofacial region is one of the main symptoms of periodontal disease. It leads to alveolar bone loss, tooth loss, edentulism, and masticatory dysfunction, which indirectly affects nutrition. In 2017, the American Academy of Periodontology and the European Federation of Periodontology proposed a new definition and classification criteria for periodontitis based on a staging system. Recently, computer-aided diagnosis (CAD) based on deep learning has been used extensively to solve complex problems in radiology. In my previous study, a deep learning hybrid framework was developed to automatically stage periodontitis on dental panoramic radiographs. It was a hybrid of a deep learning architecture for detection and conventional CAD processing for classification, proposed to automatically quantify periodontal bone loss and classify periodontitis for each individual tooth into three stages according to the criteria proposed at the 2017 World Workshop. In this study, the previously developed framework was improved to classify periodontitis into four stages by detecting the number of missing teeth/implants using an additional convolutional neural network (CNN). A multi-device study was performed to verify the generality of the method. A total of 500 panoramic radiographs (400, 50, and 50 images for devices 1, 2, and 3, respectively) from multiple devices were collected to train the CNN. For a baseline study, three CNNs commonly used for segmentation tasks and a CNN modified from the Mask Region-based CNN (Mask R-CNN) were trained and tested to compare detection accuracy on dental panoramic radiographs acquired from multiple devices.
In addition, pre-trained weights derived from the previous study were used as initial weights to train the CNN to detect the periodontal bone level (PBL), the cemento-enamel junction level (CEJL), and the teeth/implants with high training efficiency. The CNN, trained with multi-device images of sufficient variability, produced accurate detection and segmentation for input images with various aspects. When detecting missing teeth on the panoramic radiographs using CNNv4-tiny, the precision, recall, F1-score, and mean average precision (mAP) were 0.88, 0.85, 0.87, and 0.86, respectively. In the qualitative and quantitative evaluation of detecting the PBL, CEJL, and teeth/implants, the Mask R-CNN showed the highest Dice similarity coefficients (DSC) of 0.96, 0.92, and 0.94, respectively. Next, on 30 images not used for training (10 per device), the stages determined automatically by the framework were compared to those determined by three oral and maxillofacial radiologists with different levels of experience. The mean absolute difference (MAD) between the periodontitis staging performed by the automatic method and that by the radiologists was 0.31 overall for all the teeth in the whole jaw; the MADs for the images from devices 1, 2, and 3 were 0.25, 0.34, and 0.35, respectively. The overall Pearson correlation coefficient (PCC) values between the developed method and the radiologists’ diagnoses were 0.73, 0.77, and 0.75 for the images from devices 1, 2, and 3, respectively (p < 0.01), and the final PCC value for all the images was 0.76 (p < 0.01). The overall intraclass correlation coefficient (ICC) values between the developed method and the radiologists’ diagnoses were 0.91, 0.94, and 0.93 for the images from devices 1, 2, and 3, respectively (p < 0.01).
The final ICC value between the developed method and the radiologists’ diagnoses for all the images was 0.93 (p < 0.01). In the Passing-Bablok analysis, the slopes were 1.176 (p > 0.05), 1.100 (p > 0.05), and 1.111 (p > 0.05), with intercepts of -0.304, -0.199, and -0.371, for the radiologists with ten, five, and three years of experience, respectively. In the Bland-Altman analysis, the averages of the differences between the mean stages classified by the automatic method and those diagnosed by the radiologists with ten, five, and three years of experience were 0.007 (95% confidence interval (CI), -0.060 to 0.074), -0.022 (95% CI, -0.098 to 0.053), and -0.198 (95% CI, -0.291 to -0.104), respectively. The developed method for classifying the periodontitis stages, which combined a deep learning architecture with a conventional CAD approach, showed high accuracy, reliability, and generality in automatically diagnosing periodontal bone loss and staging periodontitis in the multi-device study. The results demonstrated that when the CNN was trained on data sets with increasing variability, its performance on an unseen data set also improved.
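The staging step maps the radiographic bone loss (RBL) ratio, the CEJL-to-PBL distance over the root length, to a periodontitis stage, with stage IV triggered by additional tooth loss. A hedged sketch using the commonly cited 2017 World Workshop thresholds (<15%, 15-33%, >33%); keying stage IV to five or more missing teeth is a simplification of the full criteria:

```python
def periodontitis_stage(rbl_percent, missing_teeth=0):
    """Map radiographic bone loss (%) and tooth loss to a periodontitis stage (1-4).

    Thresholds follow the 2017 World Workshop staging; treating >= 5 missing
    teeth as the stage IV trigger is a simplification of the full criteria.
    """
    if rbl_percent > 33 and missing_teeth >= 5:
        return 4                      # severe bone loss plus extensive tooth loss
    if rbl_percent > 33:
        return 3                      # bone loss extending to the middle third or beyond
    if rbl_percent >= 15:
        return 2                      # moderate loss within the coronal third
    return 1                          # mild loss, less than 15% of root length

stages = [periodontitis_stage(10), periodontitis_stage(20),
          periodontitis_stage(40), periodontitis_stage(40, missing_teeth=6)]
```

In the framework above, the RBL percentage comes from the CNN-detected PBL, CEJL, and tooth contours, and the missing-tooth count from the additional detection network; this function only illustrates the final rule-based classification step.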

    Automated X-ray image analysis for cargo security: Critical review and future promise

    We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concern that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.
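Threat Image Projection (TIP) inserts a synthetic threat signature into a benign scan for training data generation and screener evaluation. Because X-ray transmission is multiplicative in material attenuation (the Beer-Lambert law), a simple TIP composites the two transmission images by pixelwise multiplication; the sketch below is a generic illustration, not a method from the reviewed literature:

```python
import numpy as np

def project_threat(benign, threat, top_left):
    """Insert a threat transmission patch into a benign X-ray transmission image.

    Both arrays hold transmission values in (0, 1]; multiplying them is
    equivalent to summing attenuation along the beam (Beer-Lambert law).
    """
    out = benign.copy()
    r, c = top_left
    h, w = threat.shape
    out[r:r + h, c:c + w] *= threat   # composite the threat into the scene
    return out

benign = np.full((8, 8), 0.9)         # mostly empty container, high transmission
threat = np.full((2, 2), 0.5)         # dense object patch, low transmission
tip = project_threat(benign, threat, (3, 3))
```

Realistic TIP systems add further steps the review discusses, such as geometric plausibility checks and noise matching, but multiplicative compositing is the physical core of the technique.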

    Artificial Intelligence in Oral Health

    This Special Issue is intended to lay the foundation for AI applications in oral health, including general dentistry, periodontology, implantology, oral surgery, oral radiology, orthodontics, and prosthodontics, among others.

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st, 2017.

    Effect of large-scale pre-training on full and few-shot transfer learning for natural and medical images

    Transfer learning aims to exploit pre-trained models for more efficient follow-up training on a wide range of downstream tasks and datasets, enabling successful training on small data. A recent line of work posits strong benefits for model generalization and transfer when model size, data size, and compute budget are increased during pre-training. However, it remains largely unclear whether the observed transfer improvement due to increased scale also holds when the source and target data distributions are far apart. In this work we conduct large-scale pre-training on large source datasets of either natural (ImageNet-21k/1k) or medical chest X-ray images and compare full and few-shot transfer using different target datasets from both natural and medical imaging domains. Our observations provide evidence that while pre-training and transfer on closely related datasets show a clear benefit of increasing model and data size during pre-training, such benefits are not clearly visible when the source and target datasets are further apart. These observations hold across both full and few-shot transfer and indicate that scaling laws pointing to improved generalization and transfer with increasing model and data size are incomplete: they should be revised to take into account the type and proximity of the source and target data in order to correctly predict the effect of model and data scale during pre-training on transfer. Remarkably, in full-shot transfer to a large chest X-ray imaging target (PadChest), the largest model pre-trained on ImageNet-21k slightly outperforms the best models pre-trained on large chest X-ray imaging data. This indicates the possibility of obtaining high-quality models for domain-specific transfer even without access to large domain-specific data, by pre-training instead on comparably very large, generic source data. Comment: Preprint. Under review.
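Few-shot transfer as studied here can be sketched as: freeze the pre-trained encoder, embed k labelled examples per target class, and classify new samples by nearest class centroid in the embedding space. Below, a fixed random projection stands in for the pre-trained encoder; the data and all names are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)
W_enc = rng.normal(size=(32, 8))      # frozen stand-in for a pre-trained encoder

def embed(x):
    return x @ W_enc

def few_shot_classify(support_x, support_y, query_x):
    """Nearest-centroid classification in the frozen embedding space."""
    classes = np.unique(support_y)
    centroids = np.stack([embed(support_x[support_y == c]).mean(axis=0)
                          for c in classes])
    d = np.linalg.norm(embed(query_x)[:, None, :] - centroids[None], axis=-1)
    return classes[np.argmin(d, axis=1)]

# Two well-separated synthetic "target domain" classes, k = 5 shots each.
mu0, mu1 = np.zeros(32), np.full(32, 2.0)
support_x = np.vstack([rng.normal(mu0, 0.3, (5, 32)), rng.normal(mu1, 0.3, (5, 32))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = np.vstack([rng.normal(mu0, 0.3, (20, 32)), rng.normal(mu1, 0.3, (20, 32))])
pred = few_shot_classify(support_x, support_y, query_x)
acc = np.mean(pred == np.array([0] * 20 + [1] * 20))
```

The paper's point maps onto this picture directly: how well the frozen embedding separates the target classes depends on how close the pre-training (source) distribution is to the target, not only on how large the pre-trained model and dataset were.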