6 research outputs found

    An Attention-Guided Deep Regression Model for Landmark Detection in Cephalograms

    The cephalometric tracing method is widely used in orthodontic diagnosis and treatment planning. In this paper, we propose a deep learning-based framework to automatically detect anatomical landmarks in cephalometric X-ray images. We train a deep encoder-decoder for landmark detection and combine the global landmark configuration with local high-resolution feature responses. The proposed framework is based on a two-stage U-Net that regresses multi-channel heatmaps for landmark detection. In this framework, we embed an attention mechanism in which the global-stage heatmaps guide local-stage inference, regressing local heatmap patches at high resolution. In addition, an Expansive Exploration strategy improves robustness at inference time by expanding the search scope without increasing model complexity. We evaluated our framework on the most widely used public dataset for landmark detection in cephalometric X-ray images. With less computation and manual tuning, our framework achieves state-of-the-art results.
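    As a rough illustration of the heatmap-regression formulation described above (a sketch only, not the authors' code; the image size, landmark positions, and Gaussian width are placeholder assumptions), the snippet below encodes landmark coordinates as one Gaussian heatmap per landmark and decodes a predicted heatmap stack back to coordinates with a per-channel argmax, which is the input/output representation a network of this kind is trained on.

        # Minimal sketch: multi-channel Gaussian heatmap encoding/decoding for landmarks.
        import numpy as np

        def make_heatmaps(landmarks, height, width, sigma=5.0):
            """Encode (x, y) landmarks as one Gaussian heatmap per landmark."""
            ys, xs = np.mgrid[0:height, 0:width]
            maps = np.empty((len(landmarks), height, width), dtype=np.float32)
            for k, (x, y) in enumerate(landmarks):
                maps[k] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
            return maps

        def decode_heatmaps(heatmaps):
            """Recover (x, y) coordinates as the argmax of each predicted channel."""
            coords = []
            for m in heatmaps:
                y, x = np.unravel_index(np.argmax(m), m.shape)
                coords.append((int(x), int(y)))
            return coords

        if __name__ == "__main__":
            true_pts = [(40, 60), (120, 200), (300, 150)]   # hypothetical landmarks
            target = make_heatmaps(true_pts, height=256, width=512)
            print(decode_heatmaps(target))                  # -> the same points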

    A fully deep learning model for the automatic identification of cephalometric landmarks

    Purpose: This study aimed to propose a fully automatic landmark identification model based on a deep learning algorithm using real clinical data and to verify its accuracy while considering inter-examiner variability. Materials and methods: In total, 950 lateral cephalometric images from Yonsei Dental Hospital were used. Two calibrated examiners manually identified the 13 most important landmarks to set as references. The proposed deep learning model has a two-step structure, a region-of-interest machine followed by a detection machine, each consisting of 8 convolution layers, 5 pooling layers, and 2 fully connected layers. The distance errors of detection between the 2 examiners were used as the clinically acceptable range for performance evaluation. Results: The 13 landmarks were automatically detected using the proposed model. Inter-examiner agreement for all landmarks indicated excellent reliability based on the 95% confidence interval. The average clinically acceptable range for all 13 landmarks was 1.24 mm. The mean radial error between the reference values assigned by 1 expert and the proposed model was 1.84 mm, with a successful detection rate of 36.1%. The A-point, the incisal tips of the maxillary and mandibular incisors, and the ANS showed lower mean radial errors than the calibrated expert variability. Conclusion: This experiment demonstrated that the proposed deep learning model can perform fully automatic identification of cephalometric landmarks and achieve better results than examiners for some landmarks. It is meaningful to consider between-examiner variability when evaluating the performance of deep learning methods for cephalometric landmark identification, with a view to clinical applicability.
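    The evaluation measures quoted above, the mean radial error and the successful detection rate within a clinically acceptable range, can be sketched as follows. This is an illustration only, not the study's code; the 0.1 mm pixel spacing and the example coordinates are assumptions, while the 1.24 mm threshold is the average acceptable range reported in the abstract.

        # Minimal sketch: mean radial error (MRE) and successful detection rate (SDR)
        # for predicted vs. reference landmark coordinates.
        import numpy as np

        def mean_radial_error(pred, ref, pixel_size_mm=0.1):
            """MRE in mm between predicted and reference (x, y) landmark arrays."""
            pred, ref = np.asarray(pred, float), np.asarray(ref, float)
            radial = np.linalg.norm(pred - ref, axis=1) * pixel_size_mm
            return radial.mean(), radial

        def success_rate(radial_errors_mm, acceptable_mm=1.24):
            """Fraction of landmarks detected within the acceptable range."""
            radial_errors_mm = np.asarray(radial_errors_mm)
            return float((radial_errors_mm <= acceptable_mm).mean())

        if __name__ == "__main__":
            ref  = [(100, 120), (240, 310), (400, 95)]   # reference landmarks (pixels)
            pred = [(102, 118), (255, 300), (401, 96)]   # model output (pixels)
            mre, radial = mean_radial_error(pred, ref)
            print(f"MRE = {mre:.2f} mm, SDR = {success_rate(radial):.0%}")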

    Automatic Diagnosis of Odontogenic Cysts and Tumors on Panoramic Radiographs Using a Deep Learning Neural Network

    Doctoral dissertation (Ph.D.) -- Graduate School, Seoul National University: School of Dentistry (Department of Dentistry), February 2021. Won-Jin Yi. Objective: The purpose of this study was to automatically diagnose odontogenic cysts and tumors of the jaw on panoramic radiographs using a deep convolutional neural network. A novel deep convolutional neural network framework with data augmentation was proposed for the detection and classification of multiple diseases. Methods: A deep convolutional neural network modified from YOLOv3 was developed for detecting and classifying odontogenic cysts and tumors of the jaw. A total of 1,282 panoramic radiographs were analysed: 350 dentigerous cysts, 302 periapical cysts, 300 odontogenic keratocysts, and 230 ameloblastomas histopathologically confirmed at Seoul National University Dental Hospital between 1999 and 2017, together with 100 normal jaws with no disease as controls. The number of radiographs was augmented 12-fold by flipping, rotation, and intensity (gamma-correction) changes. The dataset was split into training (60%), validation (20%), and test (20%) sets, and the developed network was evaluated using 5-fold cross-validation. An intersection-over-union (IoU) threshold of 0.5 was used to evaluate detection and classification performance, and classification performance was assessed by calculating the sensitivity, specificity, accuracy, and area under the ROC curve (AUC) for each disease of the jaw. Results: The overall classification performance improved from 78.2% sensitivity, 93.9% specificity, 91.3% accuracy, and 0.86 AUC with the unaugmented dataset to 88.9% sensitivity, 97.2% specificity, 95.6% accuracy, and 0.94 AUC with the augmented dataset. With the augmented dataset, the network achieved sensitivities, specificities, accuracies, and AUCs of 91.4%, 99.2%, 97.8%, and 0.96 for dentigerous cysts; 82.8%, 99.2%, 96.2%, and 0.92 for periapical cysts; 98.4%, 92.3%, 94.0%, and 0.97 for odontogenic keratocysts; 71.7%, 100%, 94.3%, and 0.86 for ameloblastomas; and 100.0%, 95.1%, 96.0%, and 0.94 for normal jaws, respectively. Conclusion: A novel convolutional neural network framework was developed for automatically detecting and diagnosing odontogenic cysts and tumors of the jaw on panoramic radiographs using data augmentation. The proposed model showed high sensitivity, specificity, accuracy, and AUC despite the limited number of panoramic images involved, and could be useful for diagnosing these lesions early so that patients can be treated at the appropriate time.
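    The 12-fold augmentation described above (flips, rotations, and intensity changes) can be sketched roughly as follows; this is not the dissertation's pipeline, and the rotation angles and gamma values are placeholder assumptions chosen only to yield 2 x 3 x 2 = 12 variants per image.

        # Minimal sketch: 12-fold augmentation of a grayscale radiograph from
        # horizontal flips (x2), small rotations (x3), and gamma changes (x2).
        import numpy as np
        from scipy.ndimage import rotate

        def augment_12fold(image, angles=(-5.0, 0.0, 5.0), gammas=(0.8, 1.2)):
            """Return 12 augmented copies of a grayscale image scaled to [0, 1]."""
            image = image.astype(np.float32)
            image = (image - image.min()) / (np.ptp(image) + 1e-8)
            out = []
            for flipped in (image, np.fliplr(image)):        # horizontal flip
                for angle in angles:                         # small rotations
                    rot = rotate(flipped, angle, reshape=False, mode="nearest")
                    for gamma in gammas:                     # intensity change
                        out.append(np.clip(rot, 0.0, 1.0) ** gamma)
            return out

        if __name__ == "__main__":
            dummy = np.random.rand(128, 256)                 # stand-in radiograph
            print(len(augment_12fold(dummy)))                # -> 12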

    Odontology & artificial intelligence

    There are three factors that have made artificial intelligence (AI) an essential technology today, notably for odontology: computer performance, Big Data, and algorithmic advances. This literature review covers articles on AI and odontology published in PubMed up to April 2019. With the help of AI, a total of 1,511 articles were analysed; a decision tree (if/then) was run to select the 217 most relevant articles, and a k-means clustering algorithm was then used to summarise them and identify innovation opportunities. The author discusses the most interesting articles on AI research and compares them with the innovations presented at the International Dentistry Show 2019 in Cologne. Three currently available technologies are evaluated and three suggested options are developed. The author concludes, critically, that there is a gap between the technology and its clinical application: the AI provided by industry today can be considered a hold-up for the practitioner of tomorrow. A possible direction for the clinical application of AI is indicated, and the author gives his opinion on how to use AI for the benefit of patients.
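    A minimal sketch of the kind of k-means summarisation step described above, using TF-IDF features over a toy corpus; the texts and the number of clusters are placeholders, not the review's actual data or pipeline.

        # Minimal sketch: cluster article abstracts into topic groups with
        # TF-IDF features and k-means.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        abstracts = [  # placeholder texts standing in for the selected articles
            "deep learning for caries detection on bitewing radiographs",
            "convolutional networks segment teeth in panoramic images",
            "machine learning predicts orthodontic extraction decisions",
            "neural network staging of periodontitis from bone loss",
        ]

        features = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
        model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

        for label, text in zip(model.labels_, abstracts):
            print(label, text[:50])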

    Deep Learning-Based Computer-Aided Diagnosis of Radiographic Bone Loss and Periodontitis Stage: A Multi-Device Study

    Doctoral dissertation (Ph.D.) -- Graduate School of Convergence Science and Technology, Seoul National University (major in biomedical radiation science), February 2021. Won-Jin Yi. Periodontal diseases, including gingivitis and periodontitis, are among the most common diseases that humankind suffers from. Decay of the alveolar bone in the oral and maxillofacial region is one of the main symptoms of periodontal disease; it leads to alveolar bone loss, tooth loss, edentulism, and masticatory dysfunction, which indirectly affects nutrition. In 2017, the American Academy of Periodontology and the European Federation of Periodontology proposed a new definition and classification criteria for periodontitis based on a staging system. Recently, computer-aided diagnosis (CAD) based on deep learning has been used extensively to solve complex problems in radiology. In my previous study, a deep learning hybrid framework was developed to automatically stage periodontitis on dental panoramic radiographs; it combined a deep learning architecture for detection with conventional CAD processing for classification, automatically quantifying periodontal bone loss and classifying periodontitis for each individual tooth into three stages according to the criteria proposed at the 2017 World Workshop. In this study, the previously developed framework was improved to classify periodontitis into four stages by detecting the number of missing teeth/implants using an additional convolutional neural network (CNN), and a multi-device study was performed to verify the generality of the method. A total of 500 panoramic radiographs (400, 50, and 50 images from device 1, device 2, and device 3, respectively) were collected to train the CNN. For a baseline study, three CNNs commonly used for segmentation tasks and a CNN modified from the Mask Region-based CNN (Mask R-CNN) were trained and tested to compare detection accuracy on dental panoramic radiographs acquired from the multiple devices. In addition, pre-trained weights derived from the previous study were used as initial weights to train the CNN to detect the periodontal bone level (PBL), cemento-enamel junction level (CEJL), and teeth/implants, achieving high training efficiency. The CNN trained with multi-device images of sufficient variability produced accurate detection and segmentation for input images with varied characteristics. When detecting missing teeth on the panoramic radiographs with CNNv4-tiny, the precision, recall, F1-score, and mean average precision (mAP) were 0.88, 0.85, 0.87, and 0.86, respectively. In the qualitative and quantitative evaluation of detecting the PBL, CEJL, and teeth/implants, the Mask R-CNN showed the highest Dice similarity coefficients (DSC) of 0.96, 0.92, and 0.94, respectively. Next, the stages automatically determined by the framework for 30 test images (10 per device) that were not used for training were compared with those assigned by three oral and maxillofacial radiologists with different levels of experience. The mean absolute difference (MAD) between the periodontitis staging performed by the automatic method and that by the radiologists was 0.31 overall for all teeth in the whole jaw, and 0.25, 0.34, and 0.35 for the images from device 1, device 2, and device 3, respectively. The overall Pearson correlation coefficient (PCC) values between the developed method and the radiologists' diagnoses were 0.73, 0.77, and 0.75 for the images from device 1, device 2, and device 3, respectively (p < 0.01). The intraclass correlation coefficient (ICC) between the developed method and the radiologists' diagnoses for all the images was 0.76 (p < 0.01). The ICC values for the images from device 1, device 2, and device 3 were 0.91, 0.94, and 0.93, respectively (p < 0.01), and the final ICC value for all the images was 0.93 (p < 0.01). In the Passing-Bablok analysis, the slopes were 1.176 (p > 0.05), 1.100 (p > 0.05), and 1.111 (p > 0.05), with intercepts of -0.304, -0.199, and -0.371, for the radiologists with ten, five, and three years of experience, respectively. In the Bland-Altman analysis, the mean differences between the stages classified by the automatic method and those diagnosed by the radiologists with ten, five, and three years of experience were 0.007 (95% confidence interval (CI), -0.060 to 0.074), -0.022 (95% CI, -0.098 to 0.053), and -0.198 (95% CI, -0.291 to -0.104), respectively. The developed method for classifying periodontitis stages, which combines a deep learning architecture with a conventional CAD approach, showed high accuracy, reliability, and generality in automatically quantifying periodontal bone loss and staging periodontitis in the multi-device study. The results demonstrate that as the variability of the training data increases, the CNN's performance on unseen data also improves.
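    The agreement statistics reported above can be illustrated with a small sketch (hypothetical stage vectors, not the study's data): a Pearson correlation between the automatic and radiologist stagings, and the Bland-Altman bias with its 95% confidence interval.

        # Minimal sketch: Pearson correlation and Bland-Altman bias (with 95% CI)
        # between automatically determined and radiologist-assigned stages.
        import numpy as np
        from scipy import stats

        auto  = np.array([1, 2, 3, 2, 4, 1, 3, 2, 4, 3], dtype=float)  # automatic stages
        rater = np.array([1, 2, 3, 3, 4, 1, 2, 2, 4, 3], dtype=float)  # radiologist stages

        pcc, p_value = stats.pearsonr(auto, rater)

        diff = auto - rater
        bias = diff.mean()                             # Bland-Altman mean difference
        sem = diff.std(ddof=1) / np.sqrt(len(diff))    # standard error of the bias
        ci_low, ci_high = bias - 1.96 * sem, bias + 1.96 * sem

        print(f"PCC = {pcc:.2f} (p = {p_value:.3f})")
        print(f"bias = {bias:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")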