3,765 research outputs found

    Computer aided analysis of dental radiographic images

    This paper is the result of fruitful cooperation between computer science and dental diagnostic expertise. The study presents a new approach to applying computer algorithms to radiographic images of dental implantation used for bone regeneration. We focus here only on the contribution of computer assistance to the clinical research, as the periodontal therapy itself is beyond the scope of this paper. The proposed system is based on a pattern recognition approach directed at recognizing density changes in the intra-bony affected areas of patients. It comprises different modules with new algorithms specially designed to treat the patients' radiographic images more accurately. The system includes digitizing the images, detecting the complicated region of interest (ROI), defining a reference area to correct any projection discrepancy in the follow-up images, and finally extracting the distinguishing features of the ROI as a basis for determining the rate of new bone density accumulation. This study is applied to two typical dental cases for a patient who received two different operations. The results are very encouraging and more accurate than traditional techniques reported before.
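    As a rough sketch of the kind of processing described above (not the authors' actual algorithm), the Python snippet below normalizes the mean gray level of the ROI against a reference area before comparing baseline and follow-up radiographs; the function names and the normalization-by-reference step are illustrative assumptions.

```python
import numpy as np

def normalized_roi_density(image: np.ndarray, roi: tuple, reference: tuple) -> float:
    """Mean gray level of the ROI divided by the mean of a reference area.

    The reference area is assumed to be radiographically stable, so the ratio
    compensates for exposure and projection differences between visits.
    """
    r0, r1, c0, c1 = roi
    q0, q1, p0, p1 = reference
    roi_mean = image[r0:r1, c0:c1].mean()
    ref_mean = image[q0:q1, p0:p1].mean()
    return float(roi_mean / ref_mean)

def bone_density_change(baseline: np.ndarray, follow_up: np.ndarray,
                        roi: tuple, reference: tuple) -> float:
    """Relative change (%) in normalized ROI density between two visits."""
    d0 = normalized_roi_density(baseline, roi, reference)
    d1 = normalized_roi_density(follow_up, roi, reference)
    return 100.0 * (d1 - d0) / d0
```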

    3D-printing techniques in a medical setting: a systematic literature review

    Background: Three-dimensional (3D) printing has numerous applications and has gained much interest in the medical world. The constantly improving quality of 3D-printing applications has contributed to their increased use on patients. This paper summarizes the literature on surgical 3D-printing applications used on patients, with a focus on reported clinical and economic outcomes. Methods: Three major literature databases were screened for case series (more than three cases described in the same study) and trials of surgical applications of 3D printing in humans. Results: 227 surgical papers were analyzed and summarized using an evidence table. The papers described the use of 3D printing for surgical guides, anatomical models, and custom implants. 3D printing is used in multiple surgical domains, such as orthopedics, maxillofacial surgery, cranial surgery, and spinal surgery. In general, the advantages of 3D-printed parts are said to include reduced surgical time, improved medical outcomes, and decreased radiation exposure. The costs of printing and additional scans generally increase the overall cost of the procedure. Conclusion: 3D printing is well integrated into surgical practice and research. Applications vary from anatomical models mainly intended for surgical planning to surgical guides and implants. Our research suggests that there are several advantages to 3D-printed applications, but that further research is needed to determine whether the increased intervention costs can be balanced with the observable advantages of this new technology. There is a need for a formal cost-effectiveness analysis.
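    Since the review explicitly calls for a formal cost-effectiveness analysis, the minimal sketch below shows the standard incremental cost-effectiveness ratio (ICER) calculation such an analysis would rest on; the numbers in the example are purely hypothetical and are not taken from the review.

```python
def icer(cost_new: float, cost_standard: float,
         effect_new: float, effect_standard: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect
    (e.g. EUR per quality-adjusted life year gained)."""
    return (cost_new - cost_standard) / (effect_new - effect_standard)

# Purely hypothetical numbers for illustration, not data from the review:
# 3D printing adds 400 EUR per procedure and improves the outcome by 0.02 QALYs.
print(icer(cost_new=10_400.0, cost_standard=10_000.0,
           effect_new=0.82, effect_standard=0.80))  # -> 20000.0 EUR per QALY
```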

    Thickness of the buccal bone wall and root angulation in the maxilla and mandible: an approach to cone beam computed tomography

    Background: The objective of this paper is to anatomically describe the bone morphology in the maxillary and mandibular tooth areas, which might help in planning post-extraction implants. Methods: CBCT images (Planmeca ProMax 3D) of 403 teeth (208 upper teeth and 195 lower teeth) were obtained from 49 patients referred to the Dental School of Seville from January to December 2014. The thickness of the facial wall was measured at the crest (point A), 4 mm below the crest (point B), and at the apex (point C). The second parameter was the angle formed between the dental axis and the axis of the basal bone. Results: A total of 403 teeth were measured. In the maxilla, 89.4% of incisors, 93.94% of canines, 78% of premolars, and 70.5% of molars had a buccal bone wall thickness less than the ideal 2 mm. In the mandible, 73.5% of incisors, 49% of canines, 64% of premolars, and 53% of molars had <1 mm buccal bone thickness as measured at point B. The mean angulation in the maxilla was 11.67±6.37° for incisors, 16.88±7.93° for canines, 13.93±8.6° for premolars, and 9.89±4.8° for molars. In the mandible, the mean values were 10.63±8.76° for incisors, 10.98±7.36° for canines, 10.54±5.82° for premolars, and 16.19±11.22° for molars. Conclusions: The high incidence of a buccal wall thickness of less than 2 mm in over 80% of the assessed sites indicates the need for additional regeneration procedures, and several locations may also require custom abutments to solve the angulation problems for screw-retained crowns.
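    As a minimal illustration of the two quantities reported above (the paper does not describe its measurement software), the sketch below computes the angle between a tooth axis and the basal bone axis from CBCT landmark direction vectors and tallies how many sites fall below a thickness threshold; the function names and example values are assumptions.

```python
import numpy as np

def axis_angle_deg(dental_axis: np.ndarray, basal_axis: np.ndarray) -> float:
    """Angle (degrees) between the tooth axis and the basal bone axis,
    each given as a direction vector derived from two CBCT landmarks."""
    cos = np.dot(dental_axis, basal_axis) / (
        np.linalg.norm(dental_axis) * np.linalg.norm(basal_axis))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def fraction_below(thicknesses_mm, threshold_mm: float = 2.0) -> float:
    """Share of measured sites whose buccal wall is thinner than the threshold."""
    return float(np.mean(np.asarray(thicknesses_mm) < threshold_mm))

# Example with made-up landmark vectors and thickness values:
print(axis_angle_deg(np.array([0.1, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # ~5.7 degrees
print(fraction_below([0.8, 1.4, 2.3, 1.9], 2.0))                             # -> 0.75
```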

    A deep learning-based computer-aided diagnosis method for radiographic bone loss and periodontitis staging: a multi-device study

    Doctoral dissertation, Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Radiation Convergence Biomedical Science major), February 2021. Advisor: 이원진. Periodontal diseases, including gingivitis and periodontitis, are among the most common diseases that humankind suffers from. Resorption of alveolar bone in the oral and maxillofacial region is one of the main symptoms of periodontal disease; it leads to alveolar bone loss, tooth loss, edentulism, and masticatory dysfunction, which indirectly affects nutrition. In 2017, the American Academy of Periodontology and the European Federation of Periodontology proposed a new definition and classification criteria for periodontitis based on a staging system. Recently, computer-aided diagnosis (CAD) based on deep learning has been used extensively for solving complex problems in radiology. In my previous study, a deep learning hybrid framework was developed to automatically stage periodontitis on dental panoramic radiographs; it combined a deep learning architecture for detection with conventional CAD processing for classification. The framework automatically quantified periodontal bone loss and classified periodontitis for each individual tooth into three stages according to the criteria proposed at the 2017 World Workshop. In this study, the previously developed framework was improved to classify periodontitis into four stages by detecting the number of missing teeth/implants with an additional convolutional neural network (CNN), and a multi-device study was performed to verify the generality of the method. A total of 500 panoramic radiographs (400, 50, and 50 images from device 1, device 2, and device 3, respectively) were collected to train the CNN. For a baseline study, three CNNs commonly used for segmentation tasks and a CNN modified from the Mask R-CNN were trained and tested to compare detection accuracy on dental panoramic radiographs acquired from the multiple devices. In addition, pre-trained weights derived from the previous study were used as initial weights when training the CNN to detect the periodontal bone level (PBL), the cemento-enamel junction level (CEJL), and teeth/implants, in order to achieve high training efficiency. The CNN, trained with multi-device images of sufficient variability, produced accurate detection and segmentation for input images with various characteristics. When detecting missing teeth on the panoramic radiographs with CNNv4-tiny, the precision, recall, F1-score, and mean average precision (mAP) were 0.88, 0.85, 0.87, and 0.86, respectively. In the qualitative and quantitative evaluation of detecting the PBL, CEJL, and teeth/implants, the Mask R-CNN showed the highest dice similarity coefficients (DSC) of 0.96, 0.92, and 0.94, respectively. Next, the stages determined automatically by the framework on 30 test images (10 per device) that were not used for training were compared with those assigned by three oral and maxillofacial radiologists with different levels of experience. The mean absolute difference (MAD) between the periodontitis staging performed by the automatic method and that by the radiologists was 0.31 overall for all the teeth in the whole jaw, and the MADs for the images from device 1, device 2, and device 3 were 0.25, 0.34, and 0.35, respectively.
The overall Pearson correlation coefficient (PCC) values between the developed method and the radiologists' diagnoses were 0.73, 0.77, and 0.75 for the images from device 1, device 2, and device 3, respectively (p < 0.01), and the final PCC value for all the images was 0.76 (p < 0.01). The overall intraclass correlation coefficient (ICC) values between the developed method and the radiologists' diagnoses were 0.91, 0.94, and 0.93 for the images from device 1, device 2, and device 3, respectively (p < 0.01), and the final ICC value for all the images was 0.93 (p < 0.01). In the Passing-Bablok analysis, the slopes were 1.176 (p > 0.05), 1.100 (p > 0.05), and 1.111 (p > 0.05), with intercepts of -0.304, -0.199, and -0.371 for the radiologists with ten, five, and three years of experience, respectively. In the Bland-Altman analysis, the mean differences between the stages classified by the automatic method and those diagnosed by the radiologists with ten, five, and three years of experience were 0.007 (95% confidence interval (CI), -0.060 to 0.074), -0.022 (95% CI, -0.098 to 0.053), and -0.198 (95% CI, -0.291 to -0.104), respectively. The developed method for classifying the periodontitis stages, which combines a deep learning architecture with a conventional CAD approach, showed high accuracy, reliability, and generality in automatically quantifying periodontal bone loss and staging periodontitis across the multi-device study. The results demonstrated that as the CNN was trained on data sets with increasing variability, its performance on an unseen data set also improved.
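    The staging logic that the framework automates can be pictured with the hedged sketch below: radiographic bone loss (RBL) is the CEJL-to-PBL distance expressed as a percentage of root length, and the stage is then read off thresholds based on my reading of the 2017 World Workshop criteria, with an upgrade to stage IV when enough teeth are missing. The exact thresholds, the tooth-loss cutoff, and the function names are assumptions, not the dissertation's published implementation.

```python
def radiographic_bone_loss(cejl_to_pbl_mm: float, root_length_mm: float) -> float:
    """Bone loss as a percentage of root length, measured on the radiograph."""
    return 100.0 * cejl_to_pbl_mm / root_length_mm

def periodontitis_stage(rbl_percent: float, missing_teeth: int) -> int:
    """Per-tooth stage from RBL, upgraded to stage IV by periodontitis-related
    tooth loss (cutoff assumed here to be 5 or more missing teeth)."""
    if rbl_percent < 15.0:       # assumed stage I threshold (coronal third)
        stage = 1
    elif rbl_percent <= 33.0:    # assumed stage II band
        stage = 2
    else:                        # bone loss into the middle/apical third
        stage = 3
    if stage == 3 and missing_teeth >= 5:
        stage = 4
    return stage

print(periodontitis_stage(radiographic_bone_loss(4.2, 11.0), missing_teeth=6))  # -> 4
```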
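    For readers less familiar with the agreement statistics quoted above, the generic sketch below computes the mean absolute difference, the Pearson correlation, and the Bland-Altman bias with 95% limits of agreement between two raters' stage vectors; it is an illustration, not the evaluation code used in the dissertation.

```python
import numpy as np

def agreement_stats(auto_stages, reader_stages):
    """MAD, Pearson r, and Bland-Altman bias with 95% limits of agreement."""
    a = np.asarray(auto_stages, dtype=float)
    b = np.asarray(reader_stages, dtype=float)
    mad = float(np.mean(np.abs(a - b)))
    pcc = float(np.corrcoef(a, b)[0, 1])
    diff = a - b
    bias = float(diff.mean())
    spread = 1.96 * diff.std(ddof=1)
    limits = (bias - spread, bias + spread)
    return mad, pcc, bias, limits

# Toy example with made-up stages for eight teeth:
print(agreement_stats([1, 2, 2, 3, 4, 1, 3, 2], [1, 2, 3, 3, 4, 1, 2, 2]))
```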

    Geometrical modeling of complete dental shapes by using panoramic X-ray, digital mouth data and anatomical templates

    In the field of orthodontic planning, the creation of a complete digital dental model to simulate and predict treatments is of utmost importance. Nowadays, orthodontists use panoramic radiographs (PAN) and dental crown representations obtained by optical scanning. However, these data do not contain any 3D information regarding tooth root geometries. A reliable orthodontic treatment should instead take into account entire geometrical models of dental shapes in order to better predict tooth movements. This paper presents a methodology to create complete 3D patient dental anatomies by combining digital mouth models and panoramic radiographs. The modeling process uses crown surfaces, reconstructed by optical scanning, and root geometries, obtained by adapting anatomical CAD templates to patient-specific information extracted from radiographic data. The radiographic process is virtually replicated on the digital crown geometries through the Discrete Radon Transform (DRT). The resulting virtual PAN image is used to register the actual radiographic data to the digital mouth model. This procedure provides the root references on the 3D digital crown models, which guide a shape adjustment of the dental CAD templates. The complete geometrical models are finally created by merging the dental crowns, captured by optical scanning, and the root geometries obtained from the CAD templates.
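    The virtual PAN step can be illustrated in a much-simplified 2D form: the sketch below uses scikit-image's Radon transform to project an occlusal slice of a voxelized crown layout over a fan of angles, which is the same principle the authors apply with the Discrete Radon Transform to relate the digital mouth model to the real panoramic radiograph. The toy slice and the single-slice simplification are assumptions for illustration only.

```python
import numpy as np
from skimage.transform import radon

# Toy "occlusal slice" of a voxelized dental arch: a blank image with a few
# bright blobs standing in for crowns (a real pipeline would first rasterize
# the optically scanned crown meshes into a volume).
slice_2d = np.zeros((128, 128))
slice_2d[60:70, 30:40] = 1.0
slice_2d[55:65, 60:72] = 1.0
slice_2d[60:70, 90:100] = 1.0

# Parallel-beam projections at a fan of angles; each column of the sinogram is
# one simulated X-ray projection, playing the role of the virtual PAN data used
# to align the digital crowns with the real radiograph.
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(slice_2d, theta=angles)
print(sinogram.shape)  # (detector position, projection angle)
```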

    Advances in Radiographic Techniques Used in Dentistry


    An automatic diagnosis method for odontogenic cysts and tumors on panoramic radiographs using a deep learning neural network

    Doctoral dissertation, Seoul National University, School of Dentistry, Department of Dentistry, February 2021. Advisor: 이원진. Objective: The purpose of this study was to automatically diagnose odontogenic cysts and tumors of the jaw on panoramic radiographs using a deep convolutional neural network. A novel deep convolutional neural network framework with data augmentation was proposed for the detection and classification of multiple diseases. Methods: A deep convolutional neural network modified from YOLOv3 was developed for detecting and classifying odontogenic cysts and tumors of the jaw. Our dataset of 1,282 panoramic radiographs comprised 350 dentigerous cysts, 302 periapical cysts, 300 odontogenic keratocysts, 230 ameloblastomas, and 100 normal jaws with no disease; the lesion images were acquired at Seoul National University Dental Hospital between 1999 and 2017 and were histopathologically confirmed. The number of radiographs was augmented 12-fold by flipping, rotation, and intensity (gamma) changes, and the data were split into training (60%), validation (20%), and test (20%) sets, with the developed network evaluated by 5-fold cross-validation. An intersection-over-union threshold of 0.5 was used to assess detection and classification performance. The classification performance of the developed convolutional neural network was evaluated by calculating the sensitivity, specificity, accuracy, and area under the ROC curve (AUC) for the diseases of the jaw. Results: The overall classification performance for the diseases improved from 78.2% sensitivity, 93.9% specificity, 91.3% accuracy, and 0.86 AUC with the unaugmented dataset to 88.9% sensitivity, 97.2% specificity, 95.6% accuracy, and 0.94 AUC with the augmented dataset. The convolutional neural network trained on the augmented dataset achieved the following sensitivity, specificity, accuracy, and AUC: 91.4%, 99.2%, 97.8%, and 0.96 for dentigerous cysts; 82.8%, 99.2%, 96.2%, and 0.92 for periapical cysts; 98.4%, 92.3%, 94.0%, and 0.97 for odontogenic keratocysts; 71.7%, 100%, 94.3%, and 0.86 for ameloblastomas; and 100.0%, 95.1%, 96.0%, and 0.94 for normal jaws, respectively. Conclusion: A novel convolutional neural network framework was developed for automatically diagnosing odontogenic cysts and tumors of the jaw on panoramic radiographs using data augmentation. The proposed model showed high sensitivity, specificity, accuracy, and AUC despite the limited number of panoramic images available.
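    A 12-fold augmentation of the kind described can be sketched as follows, combining horizontal flips, small rotations, and gamma (intensity) changes; the specific angles and gamma values are illustrative assumptions rather than the settings used in the dissertation.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_12_fold(image: np.ndarray) -> list:
    """Return 12 variants of a panoramic radiograph:
    {identity, horizontal flip} x {3 rotations} x {2 gamma values} = 12 images."""
    img = image.astype(np.float64) / image.max()          # normalize to [0, 1]
    variants = []
    for flipped in (img, np.fliplr(img)):                 # horizontal flip
        for angle in (-5.0, 0.0, 5.0):                    # small rotations (degrees)
            rotated = rotate(flipped, angle, reshape=False, mode="nearest")
            for gamma in (0.8, 1.2):                      # intensity (gamma) change
                variants.append(np.clip(rotated, 0.0, 1.0) ** gamma)
    return variants

augmented = augment_12_fold(np.random.rand(256, 512))
print(len(augmented))  # -> 12
```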
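    The evaluation criteria quoted above can be made concrete with the generic sketch below: a detection counts as a true positive when its bounding box overlaps the ground truth with an IoU of at least 0.5, and per-class sensitivity, specificity, and accuracy then follow from the confusion counts; the example counts are made up.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def sensitivity_specificity_accuracy(tp, fp, tn, fn):
    """Per-class metrics from confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))          # -> ~0.143, below the 0.5 cutoff
print(sensitivity_specificity_accuracy(32, 2, 240, 4))  # made-up confusion counts
```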