
    Deep Learning-based Computer-aided Diagnosis Method for Radiographic Bone Loss and Periodontitis Staging: A Multi-device Study

    Thesis (Ph.D.) -- Seoul National University Graduate School: Graduate School of Convergence Science and Technology, Department of Convergence Science (Radiation Convergence Biomedical Science major), February 2021. Advisor: Won-Jin Yi. Periodontal diseases, including gingivitis and periodontitis, are among the most common diseases that humankind suffers from. The resorption of alveolar bone in the oral and maxillofacial region is one of the main symptoms of periodontal disease. This leads to alveolar bone loss, tooth loss, edentulism, and masticatory dysfunction, which indirectly affects nutrition. In 2017, the American Academy of Periodontology and the European Federation of Periodontology proposed a new definition and classification criteria for periodontitis based on a staging system. Recently, computer-aided diagnosis (CAD) based on deep learning has been used extensively for solving complex problems in radiology. In my previous study, a deep learning hybrid framework was developed to automatically stage periodontitis on dental panoramic radiographs: a hybrid of a deep learning architecture for detection and conventional CAD processing for classification. The framework automatically quantified periodontal bone loss and classified periodontitis for each individual tooth into three stages according to the criteria proposed at the 2017 World Workshop. In this study, the previously developed framework was improved to classify periodontitis into four stages by detecting the number of missing teeth/implants using an additional convolutional neural network (CNN). A multi-device study was performed to verify the generality of the method. A total of 500 panoramic radiographs (400, 50, and 50 images from devices 1, 2, and 3, respectively) acquired with multiple devices was collected to train the CNN. For a baseline study, three CNNs commonly used for segmentation tasks and a modified CNN derived from the Mask Region-based CNN (Mask R-CNN) were trained and tested to compare their detection accuracy on dental panoramic radiographs acquired from multiple devices.
In addition, pre-trained weights derived from the previous study were used as initial weights to train the CNN to detect the periodontal bone level (PBL), cemento-enamel junction level (CEJL), and teeth/implants with high training efficiency. The CNN, trained with multi-device images of sufficient variability, could produce accurate detection and segmentation for input images with various characteristics. When detecting missing teeth on the panoramic radiographs with CNNv4-tiny, the precision, recall, F1-score, and mean average precision (AP) were 0.88, 0.85, 0.87, and 0.86, respectively. In the qualitative and quantitative evaluation of PBL, CEJL, and teeth/implant detection, the Mask R-CNN showed the highest Dice similarity coefficients (DSC) of 0.96, 0.92, and 0.94, respectively. Next, the stages determined automatically by the framework were compared with those assigned by three oral and maxillofacial radiologists with different levels of experience. The mean absolute difference (MAD) between the periodontitis staging performed by the automatic method and that by the radiologists was 0.31 overall for all teeth in the whole jaw; the MADs for the images from devices 1, 2, and 3 were 0.25, 0.34, and 0.35, respectively. The Pearson correlation coefficient (PCC) values between the developed method and the radiologists' diagnoses were 0.73, 0.77, and 0.75 for the images from devices 1, 2, and 3, respectively (p < 0.01), and the overall PCC value for all the images was 0.76 (p < 0.01). The ICC values between the developed method and the radiologists' diagnoses were 0.91, 0.94, and 0.93 for the images from devices 1, 2, and 3, respectively (p < 0.01).
The final intraclass correlation coefficient (ICC) value between the developed method and the radiologists' diagnoses for all the images was 0.93 (p < 0.01). In the Passing-Bablok analysis, the slopes were 1.176 (p > 0.05), 1.100 (p > 0.05), and 1.111 (p > 0.05), with intercepts of -0.304, -0.199, and -0.371 for the radiologists with ten, five, and three years of experience, respectively. In the Bland-Altman analysis, the mean differences between the stages classified by the automatic method and those diagnosed by the radiologists with ten, five, and three years of experience were 0.007 (95 % confidence interval (CI), -0.060 to 0.074), -0.022 (95 % CI, -0.098 to 0.053), and -0.198 (95 % CI, -0.291 to -0.104), respectively. The developed method for classifying periodontitis stages, which combined a deep learning architecture with a conventional CAD approach, showed high accuracy, reliability, and generality in automatically diagnosing periodontal bone loss and staging periodontitis in the multi-device study. The results demonstrated that as the variability of the CNN's training data sets increased, its performance on an unseen data set also improved.
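The Bland-Altman figures quoted above reduce to a short computation: the bias is the mean of the per-image stage differences, and its 95 % confidence interval is bias ± 1.96 · SD / √n. A minimal sketch, using made-up toy stage values rather than data from the thesis:

```python
# Illustrative Bland-Altman bias computation: mean difference between
# automatic and radiologist stages, with a 95 % confidence interval of
# bias +/- 1.96 * SD / sqrt(n). The stage values are invented toy numbers.
import math
import statistics

def bland_altman_bias(auto_stages, reader_stages):
    diffs = [a - r for a, r in zip(auto_stages, reader_stages)]
    bias = statistics.mean(diffs)
    half = 1.96 * statistics.stdev(diffs) / math.sqrt(len(diffs))
    return bias, bias - half, bias + half

auto_mean_stages   = [1.8, 2.4, 3.1, 2.0, 3.6]
reader_mean_stages = [1.9, 2.2, 3.3, 2.1, 3.5]
bias, lo, hi = bland_altman_bias(auto_mean_stages, reader_mean_stages)
print(f"bias={bias:+.3f}, 95% CI=({lo:+.3f}, {hi:+.3f})")
```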

    Tooth Instance Segmentation from Cone-Beam CT Images through Point-based Detection and Gaussian Disentanglement

    Individual tooth segmentation and identification from cone-beam computed tomography images are preoperative prerequisites for orthodontic treatments. Instance segmentation methods using convolutional neural networks have demonstrated ground-breaking results on individual tooth segmentation tasks, and are used in various medical imaging applications. While point-based detection networks achieve superior results on dental images, it is still a challenging task to distinguish adjacent teeth because of their similar topologies and proximate nature. In this study, we propose a point-based tooth localization network that effectively disentangles each individual tooth based on a Gaussian disentanglement objective function. The proposed network first performs heatmap regression accompanied by box regression for all the anatomical teeth. A novel Gaussian disentanglement penalty is employed by minimizing the sum of the pixel-wise multiplication of the heatmaps for all adjacent teeth pairs. Subsequently, individual tooth segmentation is performed by converting a pixel-wise labeling task to a distance map regression task to minimize false positives in adjacent regions of the teeth. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art approaches by increasing the average precision of detection by 9.1%, which results in a high performance in terms of individual tooth segmentation. The primary significance of the proposed method is two-fold: 1) the introduction of a point-based tooth detection framework that does not require additional classification and 2) the design of a novel loss function that effectively separates Gaussian distributions based on heatmap responses in the point-based detection framework. Comment: 11 pages, 7 figures
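The disentanglement penalty described in the abstract has a direct reading: sum the pixel-wise product of the predicted heatmaps over every adjacent tooth pair, so overlapping Gaussian responses are penalized. A minimal sketch, using 1-D toy heatmaps in place of the network's 2-D outputs (names and values are illustrative, not from the paper):

```python
# Sum of pixel-wise heatmap products over adjacent tooth pairs: overlapping
# responses make the product large, so minimizing this term pushes the
# Gaussian peaks of neighboring teeth apart.
def disentanglement_penalty(heatmaps, adjacent_pairs):
    total = 0.0
    for i, j in adjacent_pairs:
        total += sum(a * b for a, b in zip(heatmaps[i], heatmaps[j]))
    return total

# Toy responses: tooth 0 and 1 overlap heavily; tooth 2 barely overlaps 1.
h = [
    [0.0, 0.5, 1.0, 0.5, 0.0, 0.0],  # tooth 0
    [0.0, 0.0, 0.5, 1.0, 0.5, 0.0],  # tooth 1
    [0.0, 0.0, 0.0, 0.0, 0.5, 1.0],  # tooth 2
]
print(disentanglement_penalty(h, [(0, 1), (1, 2)]))
```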

    Automated Analysis of Dental Medical Images Using Deep Neural Networks

    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Dental Science, School of Dentistry, August 2021. Advisor: Jung-Suk Han. Purpose: In dentistry, deep neural network models have been applied in areas such as implant classification or lesion detection in radiographs. However, few studies have applied the recently developed keypoint detection or panoptic segmentation models to medical or dental images. The purpose of this study is to train two neural network models to be used as aids in clinical practice and to evaluate them: a model that determines the extent of implant bone loss using keypoint detection in periapical radiographs, and a model that segments various structures on panoramic radiographs using panoptic segmentation. Methods: Mask-RCNN, a widely studied convolutional neural network for object detection and instance segmentation, was constructed in a form capable of keypoint detection and trained to detect six points of an implant in a periapical radiograph: the left and right of the top, apex, and bone level. Next, a test dataset was used to evaluate the inference results. Object keypoint similarity (OKS), a metric for evaluating keypoint detection, and average precision (AP) based on the OKS values were calculated. Furthermore, the results of the model and those arrived at by a dentist were compared using the mean OKS. Based on the detected keypoints, the peri-implant bone loss ratio was obtained from the radiograph. For panoptic segmentation, Panoptic DeepLab, a neural network model that performed well on existing benchmarks, was trained to segment key structures in panoramic radiographs: maxillary sinus, maxilla, mandibular canal, mandible, natural tooth, treated tooth, and dental implant. Then, the evaluation metrics of panoptic, semantic, and instance segmentation were each applied to the inference results on the test dataset.
Finally, the confusion matrix for the ground-truth class of pixels and the class inferred by the model was obtained. Results: The AP of keypoint detection, averaged over all OKS thresholds, was 0.761 for the upper implants and 0.786 for the lower implants. The mean OKS was 0.8885 for the model and 0.9012 for the dentist; the difference was not statistically significant (p = 0.41). The mean OKS of the model was at the top 66.92% level of the normal distribution of human keypoint annotations. In panoramic radiograph segmentation, the average panoptic quality (PQ) over all classes was 80.47; the treated teeth showed the lowest PQ of 57.13, and the mandibular canal the second lowest of 65.97. The global Intersection over Union (IoU) was 0.795 on average for all classes, where the mandibular canal showed the lowest IoU of 0.639 and the treated tooth the second lowest of 0.656. In the confusion matrix, the proportion of correctly inferred pixels among the ground-truth pixels was lowest for the mandibular canal, at 0.802. The AP, averaged over all IoU thresholds, was 0.316 for the treated tooth, 0.414 for the dental implant, and 0.520 for the natural tooth. Conclusion: Using the keypoint detection neural network model, it was possible to detect major landmarks around dental implants in periapical radiographs to a degree similar to that of human experts. In addition, it was possible to automate the calculation of the peri-implant bone loss ratio on periapical radiographs based on the detected keypoints, and this value could be used to classify the severity of peri-implantitis. In panoramic radiographs, major structures including the maxillary sinus and the mandibular canal could be segmented using a neural network model capable of panoptic segmentation. Thus, if deep neural networks suitable for each task are trained on suitable datasets, the proposed approach can be used to assist dental clinicians.
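For reference, the panoptic quality metric quoted above is defined as PQ = Σ_TP IoU / (|TP| + ½|FP| + ½|FN|), i.e., segmentation quality times recognition quality. A minimal sketch with toy IoU values (not data from this thesis):

```python
# Panoptic quality for one class: average IoU of matched (true-positive)
# segments, divided by TP + half the false positives and false negatives.
def panoptic_quality(matched_ious, num_fp, num_fn):
    tp = len(matched_ious)  # matches require IoU > 0.5 by definition
    if tp + num_fp + num_fn == 0:
        return 0.0
    return sum(matched_ious) / (tp + 0.5 * num_fp + 0.5 * num_fn)

# Three matched segments, one false positive, one missed segment.
print(panoptic_quality([0.9, 0.8, 0.7], num_fp=1, num_fn=1))
```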

    3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge

    Teeth localization, segmentation, and labeling from intra-oral 3D scans are essential tasks in modern dentistry to enhance dental diagnostics, treatment planning, and population-based studies on oral health. However, developing automated algorithms for teeth analysis presents significant challenges due to variations in dental anatomy, imaging protocols, and limited availability of publicly accessible data. To address these challenges, the 3DTeethSeg'22 challenge was organized in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022, with a call for algorithms tackling teeth localization, segmentation, and labeling from intraoral 3D scans. A dataset comprising a total of 1800 scans from 900 patients was prepared, and each tooth was individually annotated by a human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this dataset. In this study, we present the evaluation results of the 3DTeethSeg'22 challenge. The challenge code can be accessed at https://github.com/abenhamadou/3DTeethSeg22_challenge. Comment: 29 pages, MICCAI 2022 Singapore, Satellite Event, Challenge

    Machine learning methods as an aid in planning orthodontic treatment on the example of Cone-Beam Computed Tomography analysis: a literature review

    Convolutional neural networks (CNNs) are used in many areas of computer vision, such as object tracking and recognition, security, military applications, and biomedical image analysis. In this work, we describe the current methods and architectures of deep convolutional neural networks used in CBCT analysis. Literature from 2000-2020 was analyzed from the PubMed database and Google Scholar, taking into account publications in English that describe architectures of deep convolutional neural networks used in CBCT. The results of the reviewed studies indicate that deep learning methods employed in orthodontics can be far superior to other high-performing algorithms.

    XAS: Automatic yet eXplainable Age and Sex determination by combining imprecise per-tooth predictions

    Chronological age and biological sex estimation are two key tasks in a variety of procedures, including human identification and migration control. Issues such as these have led to the development of both semiautomatic and automatic prediction models, but the former are expensive in terms of time and human resources, while the latter lack the interpretability required to be applicable in real-life scenarios. This paper therefore proposes a new, fully automatic methodology for the estimation of age and sex. It first applies tooth detection by means of a modified CNN to extract the oriented bounding box of each tooth. Then, it feeds the image features inside the tooth boxes into a second CNN module designed to produce per-tooth age and sex probability distributions. The method then adopts an uncertainty-aware policy to aggregate these estimated distributions. Our approach yielded a lower mean absolute error than any other previously described, at 0.97 years. The accuracy of the sex classification was 91.82%, confirming the suitability of the teeth for this purpose. The proposed model also allows analyses of age and sex estimations for every tooth, enabling experts to identify the most relevant teeth for each task or population cohort, or to detect potential developmental problems. In conclusion, the performance of the method in both age and sex prediction is excellent, and its high degree of interpretability makes it suitable for use in a wide range of application scenarios.
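The abstract states only that the per-tooth age distributions are combined by an uncertainty-aware policy; the exact rule is not given here. As one plausible stand-in, an inverse-variance weighting can be sketched (the function, weighting scheme, and numbers below are assumptions for illustration, not the paper's actual method):

```python
# Hypothetical aggregation of per-tooth age estimates: each tooth
# contributes a (mean age, variance) pair, and teeth with lower variance
# (higher certainty) get proportionally more weight.
def aggregate_age(per_tooth):
    weights = [1.0 / var for _, var in per_tooth]
    total = sum(weights)
    return sum(w * mean for w, (mean, _) in zip(weights, per_tooth)) / total

teeth = [(24.0, 1.0), (26.0, 4.0), (25.0, 2.0)]  # made-up estimates
print(round(aggregate_age(teeth), 3))
```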