117 research outputs found

    Mandibular cortical width measurement based on dental panoramic radiographs with computer-aided system

    Full text link
    The paper presents a method for determining mandibular cortical width on dental panoramic radiographs. The cortical width of the lower border of the mandible may be associated with the recognition of osteoporosis in postmenopausal women. An algorithm was developed to perform a semi-automatic cortical width measurement in a given region of interest. The algorithm is based on the separate extraction of the lower and upper boundaries of the cortical bone. Results of boundary extraction performed on 34 panoramic radiographs of healthy and osteoporotic individuals are presented, together with automatic measurements of the relevant distances. They were compared with manual measurements made by two maxillofacial radiologists. The presented algorithm may be useful for screening patients for osteoporosis.
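    The boundary-based measurement can be sketched as follows: sample an intensity profile perpendicular to the lower mandibular border, take the strongest rising and falling gradients as the lower and upper cortical boundaries, and convert the pixel distance to millimetres. This is a minimal illustration of the idea, not the paper's algorithm; the synthetic profile, the `pixel_mm` scale, and the gradient-extremum edge rule are assumptions.

```python
import numpy as np

def cortical_width(profile, pixel_mm=0.1):
    """Estimate cortical width from a 1-D intensity profile sampled
    perpendicular to the lower mandibular border.

    The lower boundary is taken at the strongest positive gradient
    (background -> bright cortex) and the upper boundary at the
    strongest negative gradient (cortex -> darker trabecular bone),
    mirroring the separate lower/upper boundary extraction in the paper.
    """
    g = np.gradient(profile.astype(float))
    lower = int(np.argmax(g))                   # rising edge into the cortex
    upper = lower + int(np.argmin(g[lower:]))   # falling edge out of it
    return (upper - lower) * pixel_mm

# Synthetic profile: dark background, bright cortical band, mid-gray trabecular bone
profile = np.concatenate([np.full(20, 30), np.full(25, 200), np.full(30, 110)])
print(cortical_width(profile))  # 2.5 (mm), for a 25-pixel band at 0.1 mm/pixel
```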

    Trabecular Bone Segmentation Based On Segment Profile Characteristics Using Extreme Learning Machine On Dental Panoramic Radiographs

    Get PDF
    A dental panoramic radiograph contains a great deal of information, some of which can be identified from the trabecular bone structure. This research proposes segmentation of the trabecular bone area on dental panoramic radiographs based on segment profile characteristics, using an Extreme Learning Machine (ELM) as the classification method. The input is a dental panoramic radiograph. A region of interest (ROI) is selected on the lower jawbone over the trabecular bone area, which also contains teeth and cortical bone. The ROI is subdivided into two parts: the upper ROI contains the teeth and the lower ROI contains cortical bone. Each part is then preprocessed, using mean and median filters for the upper ROI and a motion blur filter for the lower ROI. For the upper ROI, each pixel is described by four features: image intensity, a 2D Gaussian filter with two different sigmas, and a Log-Gabor filter. For the lower ROI, five features are extracted: image intensity, a 2D Gaussian filter with two different sigmas, phase congruency, and the Laplacian of Gaussian. A sample of pixels is used as training data to build the Extreme Learning Machine model, whose output is the segmented trabecular bone area. On the upper ROI, the average sensitivity, specificity, and accuracy were 82.31%, 93.67%, and 90.33%, respectively, while on the lower ROI they were 95.01%, 96.50%, and 95.29%.
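    An Extreme Learning Machine is a single-hidden-layer network whose hidden weights are random and fixed; only the output weights are learned, in closed form via the pseudo-inverse. A minimal sketch on toy data (the feature matrix and labels below are stand-ins, not the paper's per-pixel features):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    """Extreme Learning Machine: random hidden layer, least-squares output.
    Only the output weights (beta) are learned, which is what makes ELM
    training fast for per-pixel classification."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # random nonlinear features
    beta = np.linalg.pinv(H) @ y    # closed-form output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for per-pixel features (the paper uses intensity, Gaussian,
# Log-Gabor, phase-congruency, and LoG features per ROI).
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
model = elm_train(X, y)
acc = np.mean((elm_predict(X, model) > 0.5) == (y > 0.5))
print(acc)  # training accuracy on this toy set, typically near 1.0
```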

    Artificial Intelligence in Oral Health

    Get PDF
    This Special Issue is intended to lay the foundation for AI applications in oral health, including general dentistry, periodontology, implantology, oral surgery, oral radiology, orthodontics, and prosthodontics, among others.

    Computer aided detection of oral lesions on CT images

    Get PDF
    Oral lesions are important findings on computed tomography (CT) images. They are difficult to detect on CT images because of low contrast, arbitrary object orientation, complicated topology, and the lack of clear lines indicating lesions. In this thesis, a fully automatic method to detect oral lesions in dental CT images is proposed for two classes: (1) closed-boundary lesions and (2) bone-deformation lesions. Two algorithms were developed to recognize these two types, which cover most of the lesion types found on CT images. The results were validated on a dataset of 52 patients. On a non-training dataset, the closed-boundary lesion detection algorithm yielded 71% sensitivity with 0.31 false positives per patient, and the bone-deformation lesion detection algorithm achieved 100% sensitivity with 0.13 false positives per patient. The results suggest that the proposed framework has the potential to be used in a clinical context and to assist radiologists in diagnosis.
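    The two reported figures, sensitivity and false positives per patient, come from simple per-patient bookkeeping, which can be made concrete as follows (the tuple format and helper name are hypothetical, not from the thesis):

```python
def detection_summary(results):
    """Per-patient detection tallies -> (sensitivity, false positives per
    patient), the two figures reported in the thesis.

    `results` is a list of (true_positives, false_positives, total_lesions)
    tuples, one per patient.
    """
    tp = sum(r[0] for r in results)
    fp = sum(r[1] for r in results)
    lesions = sum(r[2] for r in results)
    return tp / lesions, fp / len(results)

# 4 patients: (detected lesions, false alarms, actual lesions)
sens, fp_rate = detection_summary([(2, 0, 3), (1, 1, 1), (0, 0, 1), (2, 0, 2)])
print(sens, fp_rate)  # 0.7142857142857143 0.25
```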

    A comprehensive artificial intelligence framework for dental diagnosis and charting

    Get PDF
    Background: The aim of this study was to develop an artificial intelligence (AI)-guided framework to recognize tooth numbers in panoramic and intraoral radiographs (periapical and bitewing) without prior domain knowledge, and to arrange the intraoral radiographs into a full mouth series (FMS) arrangement template. This model can be integrated with disease-diagnosis models, such as those for periodontitis or caries, to facilitate clinical examination and diagnosis. Methods: The framework utilized image segmentation models to generate masks of the bone area, teeth, and cemento-enamel junction (CEJ) lines from intraoral radiographs. These masks were used to detect and extract tooth bounding boxes using several image analysis methods. Individual teeth were then matched with a patient's panoramic images (if available) or with tooth repositories for assigning tooth numbers, using a multi-scale matching strategy. The framework was tested on 1,240 intraoral radiographs distinct from the training and internal validation cohorts to avoid data snooping. In addition, a web interface was designed to generate a report of dental abnormalities with tooth numbers to evaluate the framework's practicality in clinical settings. Results: The proposed method achieved precision and recall of 0.96 and 0.96 via panoramic-view matching and 0.87 and 0.87 via repository matching, handling tooth shape variation and outperforming other state-of-the-art methods. Additionally, the proposed framework could accurately arrange a set of intraoral radiographs into an FMS arrangement template based on positions and tooth numbers, with an accuracy of 95% for periapical images and 90% for bitewing images. The accuracy of the framework was 94% on images with missing teeth and 89% on images with restorations.
Conclusions: The proposed tooth numbering model is robust and self-contained and can be integrated with other dental diagnosis modules, such as alveolar bone assessment and caries detection. This AI-based tooth detection and tooth number assignment in dental radiographs will help dentists with communication, documentation, and accurate treatment planning. In addition, the proposed framework can correctly specify detailed diagnostic information associated with a single tooth without human intervention.
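    The matching step can be illustrated with normalized cross-correlation over a few scales. This is a minimal sketch of multi-scale matching, not the paper's implementation: the nearest-neighbour rescaling, the scale set, and the `ncc`/`best_match` helpers are hypothetical.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def best_match(tooth, candidates, scales=(0.9, 1.0, 1.1)):
    """Match a cropped tooth against candidate crops at several scales,
    in the spirit of the paper's multi-scale matching strategy.
    Rescaling is nearest-neighbour index sampling to stay dependency-free."""
    def resize(img, s):
        h, w = img.shape
        rows = np.clip((np.arange(int(h * s)) / s).astype(int), 0, h - 1)
        cols = np.clip((np.arange(int(w * s)) / s).astype(int), 0, w - 1)
        return img[np.ix_(rows, cols)]

    scores = []
    for i, cand in enumerate(candidates):
        # Crop both patches to a common size before correlating.
        best = max(
            ncc(resize(tooth, s)[:cand.shape[0], :cand.shape[1]],
                cand[:int(tooth.shape[0] * s), :int(tooth.shape[1] * s)])
            for s in scales
        )
        scores.append((best, i))
    return max(scores)[1]   # index of the best-scoring candidate
```

Matching a tooth crop against itself and against noise picks the true candidate, since its correlation at the native scale is close to 1.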

    Deep learning-based computer-aided diagnosis of radiographic bone loss and periodontitis stage: a multi-device study

    Get PDF
    Doctoral dissertation (Ph.D.), Graduate School of Convergence Science and Technology, Seoul National University, February 2021. Advisor: Won-Jin Yi. Periodontal diseases, including gingivitis and periodontitis, are among the most common diseases that humankind suffers from. The decay of alveolar bone in the oral and maxillofacial region is one of the main symptoms of periodontal disease. It leads to alveolar bone loss, tooth loss, edentulism, and masticatory dysfunction, which indirectly affects nutrition. In 2017, the American Academy of Periodontology and the European Federation of Periodontology proposed a new definition and classification criteria for periodontitis based on a staging system. Recently, computer-aided diagnosis (CAD) based on deep learning has been used extensively for solving complex problems in radiology. In my previous study, a deep learning hybrid framework was developed to automatically stage periodontitis on dental panoramic radiographs: a hybrid of a deep learning architecture for detection and conventional CAD processing for classification. The framework automatically quantified periodontal bone loss and classified periodontitis for each individual tooth into three stages according to the criteria proposed at the 2017 World Workshop. In this study, the previously developed framework was improved to classify periodontitis into four stages by detecting the number of missing teeth/implants using an additional convolutional neural network (CNN). A multi-device study was performed to verify the generality of the method. A total of 500 panoramic radiographs (400, 50, and 50 images for devices 1, 2, and 3, respectively) from multiple devices were collected to train the CNN. For a baseline study, three CNNs commonly used for segmentation tasks and a CNN modified from the Mask Region-based CNN (Mask R-CNN) were trained and tested to compare detection accuracy on dental panoramic radiographs acquired from multiple devices.
In addition, pre-trained weights derived from the previous study were used as initial weights to train the CNN to detect the periodontal bone level (PBL), cemento-enamel junction level (CEJL), and teeth/implants with high training efficiency. The CNN, trained on multi-device images with sufficient variability, produced accurate detection and segmentation for input images of various aspects. When detecting missing teeth on the panoramic radiographs, the precision, recall, F1-score, and mean average precision (mAP) were 0.88, 0.85, 0.87, and 0.86, respectively, using CNNv4-tiny. In the qualitative and quantitative evaluation of PBL, CEJL, and teeth/implant detection, the Mask R-CNN showed the highest Dice similarity coefficients (DSC) of 0.96, 0.92, and 0.94, respectively. Next, the stages determined automatically by the framework were compared with those assigned by three oral and maxillofacial radiologists with different levels of experience, on 30 test images (10 per device) that were not used for training. The mean absolute difference (MAD) between the periodontitis staging performed by the automatic method and that by the radiologists was 0.31 overall for all teeth in the whole jaw; per device, the MADs were 0.25, 0.34, and 0.35 for devices 1, 2, and 3, respectively. The Pearson correlation coefficient (PCC) values between the developed method and the radiologists' diagnoses were 0.73, 0.77, and 0.75 for the images from devices 1, 2, and 3, respectively (p < 0.01), and the overall PCC value for all the images was 0.76 (p < 0.01). The intraclass correlation coefficient (ICC) values between the developed method and the radiologists' diagnoses were 0.91, 0.94, and 0.93 for the images from devices 1, 2, and 3, respectively (p < 0.01).
The final ICC value between the developed method and the radiologists' diagnoses for all the images was 0.93 (p < 0.01). In the Passing and Bablok analysis, the slopes were 1.176 (p > 0.05), 1.100 (p > 0.05), and 1.111 (p > 0.05), with intercepts of -0.304, -0.199, and -0.371, for the radiologists with ten, five, and three years of experience, respectively. In the Bland and Altman analysis, the mean differences between the stages classified by the automatic method and those diagnosed by the radiologists with ten, five, and three years of experience were 0.007 (95% confidence interval (CI), -0.060 to 0.074), -0.022 (95% CI, -0.098 to 0.053), and -0.198 (95% CI, -0.291 to -0.104), respectively. The developed method for classifying periodontitis stages, which combined a deep learning architecture with a conventional CAD approach, showed high accuracy, reliability, and generality in automatically diagnosing periodontal bone loss and staging periodontitis in the multi-device study. The results demonstrated that as the variability of the CNN's training data increased, performance on an unseen data set also improved.
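    The conventional-CAD half of such a framework reduces to two steps: quantify radiographic bone loss (RBL) as the CEJ-to-bone-level distance over the root length, then map that percentage to a stage, with missing teeth promoting severe cases to stage IV. Below is a simplified sketch under one common reading of the 2017 World Workshop thresholds (15% and 33% RBL, five or more missing teeth for stage IV); the exact rules and the y-coordinate convention are assumptions, not the thesis implementation.

```python
def rbl_percent(cej_y, pbl_y, apex_y):
    """Radiographic bone loss: distance from the CEJ to the bone level as a
    fraction of root length (CEJ to apex), from the detected line positions
    (y increases downward, as in image coordinates)."""
    return 100.0 * (pbl_y - cej_y) / (apex_y - cej_y)

def stage(rbl, missing_teeth):
    """Stage periodontitis from bone-loss %, promoting stage III to IV when
    five or more teeth are missing (simplified 2017 World Workshop reading)."""
    if rbl < 15:
        s = 1
    elif rbl <= 33:
        s = 2
    else:
        s = 3
    if s == 3 and missing_teeth >= 5:
        s = 4
    return s

print(stage(rbl_percent(100, 118, 160), missing_teeth=0))  # RBL 30% -> 2
print(stage(40, missing_teeth=6))  # severe loss + missing teeth -> 4
```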

    Dental Biometrics: Human Identification Using Dental Radiograph

    Get PDF
    Biometrics is the science of measuring and analyzing biological information. In information technology, biometrics refers to technologies that measure and analyze human body attributes, for example DNA, retinas, fingerprints, irises, face patterns, voice patterns, and hand geometry, for identification purposes. The primary motivation behind forensic dentistry is to identify deceased individuals for whom other means of identification (e.g., fingerprints or face) are not available. Dental features survive most post-mortem (PM) events that disrupt or change other body tissues, e.g., in victims of motor vehicle accidents, violent crimes, and workplace accidents, whose bodies may be deformed to such a degree that identification even by a family member is neither desirable nor reliable. Dental biometrics uses dental radiographs to identify victims. Radiographs acquired after the victim's death are called post-mortem radiographs, while radiographs obtained while the victim was alive are called ante-mortem radiographs. The objective of dental biometrics is to match an unidentified individual's post-mortem radiograph against a database of labelled ante-mortem radiographs. This thesis proposes a method for contour extraction from dental radiographs based on the Active Contour Model, or snake model. A correctly detected contour is essential for proper feature extraction, and this thesis addresses contour detection only. The method has been tested on a set of radiograph images and found to produce the desired output. However, an input radiograph may be of low quality or may lack a clear separation between two adjacent teeth, in which case the method will not produce a satisfactory result; pre-processing (e.g., contrast enhancement) is then needed before the active contour model can be applied.
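    A minimal greedy variant of the snake model conveys the idea: each contour point iteratively moves to the neighbouring pixel that minimises an image energy term plus an internal elasticity term pulling it toward its neighbours' midpoint. This is an illustrative sketch under assumed weights, not the thesis implementation, which would use an edge-based image energy computed from the radiograph.

```python
import numpy as np

def greedy_snake(energy, snake, iters=50):
    """Greedy active-contour iteration: each control point of the closed
    contour moves to the 8-neighbourhood position minimising
    image energy + 0.5 * squared distance to its neighbours' midpoint."""
    h, w = energy.shape
    snake = snake.copy()
    for _ in range(iters):
        for i in range(len(snake)):
            prev_pt, next_pt = snake[i - 1], snake[(i + 1) % len(snake)]
            mid = (prev_pt + next_pt) / 2.0
            best, best_cost = snake[i], np.inf
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = snake[i] + np.array([dy, dx], dtype=float)
                    if not (0 <= p[0] < h and 0 <= p[1] < w):
                        continue
                    cost = energy[int(p[0]), int(p[1])] + 0.5 * np.sum((p - mid) ** 2)
                    if cost < best_cost:
                        best, best_cost = p, cost
            snake[i] = best
    return snake

# Toy edge: energy is the squared distance from the column x = 10,
# so the contour points should settle on that vertical line.
energy = (np.arange(20)[None, :] - 10.0) ** 2 * np.ones((20, 1))
init = np.array([[r, 15.0] for r in range(5, 15)])
out = greedy_snake(energy, init)
print(out[:, 1])  # columns converge onto the low-energy line near x = 10
```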

    Automatic diagnosis of odontogenic cysts and tumors on panoramic radiographs using a deep learning neural network

    Get PDF
    Doctoral dissertation (Ph.D.), School of Dentistry, Seoul National University, February 2021. Advisor: Won-Jin Yi. Objective: The purpose of this study was to automatically diagnose odontogenic cysts and tumors of the jaw on panoramic radiographs using a deep convolutional neural network. A novel deep convolutional neural network framework with data augmentation was proposed for the detection and classification of multiple diseases. Methods: A deep convolutional neural network modified from YOLOv3 was developed for detecting and classifying odontogenic cysts and tumors of the jaw. Our dataset of 1,282 panoramic radiographs comprised 350 dentigerous cysts, 302 periapical cysts, 300 odontogenic keratocysts, 230 ameloblastomas, and 100 normal jaws with no disease. The radiographs were acquired from patients with histopathologically confirmed diagnoses at Seoul National University Dental Hospital between 1999 and 2017. The number of radiographs was augmented 12-fold by flips, rotations, and intensity changes; 60% of the data were used for training, 20% for validation, and 20% for testing, and the model was evaluated using 5-fold cross-validation. An intersection-over-union threshold of 0.5 was used to assess detection and classification performance. The classification performance of the developed network was evaluated by calculating the sensitivity, specificity, accuracy, and AUC (area under the ROC curve) for each disease of the jaw. Results: The overall classification performance improved from 78.2% sensitivity, 93.9% specificity, 91.3% accuracy, and 0.86 AUC with the unaugmented dataset to 88.9% sensitivity, 97.2% specificity, 95.6% accuracy, and 0.94 AUC with the augmented dataset. With the augmented dataset, the network achieved the following sensitivities, specificities, accuracies, and AUCs: 91.4%, 99.2%, 97.8%, and 0.96 for dentigerous cysts; 82.8%, 99.2%, 96.2%, and 0.92 for periapical cysts; 98.4%, 92.3%, 94.0%, and 0.97 for odontogenic keratocysts; 71.7%, 100%, 94.3%, and 0.86 for ameloblastomas; and 100.0%, 95.1%, 96.0%, and 0.94 for normal jaws, respectively.
Conclusion: A novel convolutional neural network framework was developed for automatically diagnosing odontogenic cysts and tumors of the jaw on panoramic radiographs using data augmentation. The proposed model showed high sensitivity, specificity, accuracy, and AUC despite the limited number of panoramic images involved, suggesting that the developed system could help diagnose these diseases early and treat them at the appropriate time.
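    The 12-fold augmentation step can be sketched as combinations of flips, rotations, and intensity (gamma) changes. A dependency-free sketch: the paper uses its own rotation angles and intensity transforms, so the 90-degree rotation, the gamma values, and the 2 x 2 x 3 factorisation below are assumptions chosen only to yield 12 variants.

```python
import numpy as np

def augment(img):
    """Produce 12 augmented variants of an 8-bit grayscale image:
    2 flips x 2 rotations x 3 gamma values = 12."""
    variants = []
    for flipped in (img, np.fliplr(img)):
        for rotated in (flipped, np.rot90(flipped)):
            for gamma in (0.8, 1.0, 1.2):
                # Gamma correction on the [0, 1]-normalised image.
                v = np.clip(rotated.astype(float) / 255.0, 0.0, 1.0) ** gamma
                variants.append(v * 255.0)
    return variants

batch = augment(np.random.default_rng(0).integers(0, 256, size=(64, 64)))
print(len(batch))  # 12
```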