72 research outputs found
The reliability of cephalometric tracing using AI
Introduction: The objective of this study is to compare manual cephalometric analysis with automated analysis by artificial intelligence (AI) in order to assess the reliability of the latter. Our research hypothesis is that the manual technique is the more reliable of the two methods and is still considered the gold standard.
Method: A total of 99 lateral cephalometric radiographs were collected in this study. Manual technique (MT) and automatic localization by artificial intelligence (AI) tracings were performed for all radiographs. The localization of 29 commonly used landmarks was compared between the two groups. Mean radial error (MRE) and a successful detection rate (SDR) at 2 mm were used to compare the groups. AudaxCeph software version 6.2.57.4225 (Audax d.o.o., Ljubljana, Slovenia) was used for both the manual and AI analyses.
Results: The MRE and SDR for the inter-examiner reliability test were 0.87 ± 0.61 mm and 95%, respectively. For the comparison between the manual technique (MT) and AI landmarking, the MRE and SDR across all landmarks were 1.48 ± 1.42 mm and 78%, respectively. When dental landmarks were excluded, the MRE decreased to 1.33 ± 1.39 mm and the SDR increased to 84%. When only hard tissue landmarks were included (excluding soft tissue and dental points), the MRE decreased further to 1.25 ± 1.09 mm and the SDR increased to 85%. When only soft tissue landmarks were included, the MRE increased to 1.68 ± 1.89 mm and the SDR decreased to 78%.
Conclusion: The software performed similarly to what has previously been reported in the literature for software using an analogous modeling framework. Comparing the software's landmarking to manual landmarking, our results reveal that manual landmarking yielded higher accuracy. The software performed very well for hard tissue points, but its accuracy declined for soft tissue and dental points. We conclude that this technology shows great promise for application in clinical settings under the doctor's supervision.
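The two metrics used throughout these studies, mean radial error and the successful detection rate at a 2 mm threshold, can be sketched as follows. This is a minimal illustration with hypothetical coordinates, not study data:

```python
import numpy as np

def mre_and_sdr(pred, ref, threshold_mm=2.0):
    """Mean radial error (MRE) and successful detection rate (SDR).

    pred, ref: arrays of shape (n_landmarks, 2) in millimetres.
    The radial error of a landmark is the Euclidean distance between
    its predicted and reference positions; SDR is the fraction of
    landmarks whose radial error falls within the threshold.
    """
    radial = np.linalg.norm(np.asarray(pred) - np.asarray(ref), axis=1)
    return radial.mean(), (radial <= threshold_mm).mean()

# Hypothetical coordinates for illustration (radial errors: 1.0, 1.5, 3.0 mm):
pred = [[10.0, 10.0], [20.0, 21.5], [30.0, 33.0]]
ref  = [[10.0, 11.0], [20.0, 20.0], [30.0, 30.0]]
mre, sdr = mre_and_sdr(pred, ref)
# mre ≈ 1.83 mm; sdr ≈ 0.67 (two of three landmarks within 2 mm)
```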
Aariz: A Benchmark Dataset for Automatic Cephalometric Landmark Detection and CVM Stage Classification
The accurate identification and precise localization of cephalometric
landmarks enable the classification and quantification of anatomical
abnormalities. The traditional way of marking cephalometric landmarks on
lateral cephalograms is a monotonous and time-consuming job. Endeavours to
develop automated landmark detection systems have persistently been made;
however, they remain inadequate for orthodontic applications due to the
unavailability of a reliable dataset. We propose a new state-of-the-art
dataset to facilitate
the development of robust AI solutions for quantitative morphometric analysis.
The dataset includes 1000 lateral cephalometric radiographs (LCRs) obtained
from 7 different radiographic imaging devices with varying resolutions, making
it the most diverse and comprehensive cephalometric dataset to date. The
clinical experts of our team meticulously annotated each radiograph with 29
cephalometric landmarks, including the most significant soft tissue landmarks
ever marked in any publicly available dataset. Our experts also
labelled the cervical vertebral maturation (CVM) stage of the patient in each
radiograph, making this dataset the first standard resource for CVM
classification. We believe that this dataset will be instrumental in the
development of reliable automated landmark detection frameworks for use in
orthodontics and beyond.
Automatic Three-Dimensional Cephalometric Annotation System Using Three-Dimensional Convolutional Neural Networks
Background: Three-dimensional (3D) cephalometric analysis using computed
tomography data has been rapidly adopted for dysmorphosis and anthropometry.
Several different approaches to automatic 3D annotation have been proposed to
overcome the limitations of traditional cephalometry. The purpose of this study
was to evaluate the accuracy of our newly-developed system using a deep
learning algorithm for automatic 3D cephalometric annotation. Methods: To
overcome current technical limitations, some measures were developed to
directly annotate 3D human skull data. Our deep learning-based model system
mainly consisted of a 3D convolutional neural network and image data
resampling. Results: The discrepancies between the referenced and predicted
coordinate values in three axes and in 3D distance were calculated to evaluate
system accuracy. Our new model system yielded prediction errors of 3.26, 3.18,
and 4.81 mm (for three axes) and 7.61 mm (for 3D). Moreover, there was no
difference among the landmarks of the three groups, including the midsagittal
plane, horizontal plane, and mandible (p>0.05). Conclusion: A new 3D
convolutional neural network-based automatic annotation system for 3D
cephalometry was developed. The strategies used to implement the system were
detailed and measurement results were evaluated for accuracy. Further
development of this system is planned for full clinical application of
automatic 3D cephalometric annotation.
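Per-axis and 3-D prediction errors of the kind reported above can be computed from paired coordinate lists; a minimal sketch with hypothetical data, not the study's:

```python
import numpy as np

def annotation_errors(pred, ref):
    """Mean absolute error per axis and mean 3-D Euclidean distance.

    pred, ref: arrays of shape (n_landmarks, 3), in millimetres.
    """
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    per_axis = np.abs(pred - ref).mean(axis=0)           # x, y, z errors
    dist_3d = np.linalg.norm(pred - ref, axis=1).mean()  # 3-D distance
    return per_axis, dist_3d

# One hypothetical landmark, off by (1, 2, 2) mm:
per_axis, dist_3d = annotation_errors([[1.0, 2.0, 2.0]], [[0.0, 0.0, 0.0]])
# per_axis is [1, 2, 2]; dist_3d is 3.0
```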
An Attention-Guided Deep Regression Model for Landmark Detection in Cephalograms
Cephalometric tracing is commonly used in orthodontic diagnosis and
treatment planning. In this paper, we propose a deep learning based framework
to automatically detect anatomical landmarks in cephalometric X-ray images. We
train a deep encoder-decoder for landmark detection, combining the global
landmark configuration with local high-resolution feature responses. The
proposed framework is based on a two-stage U-Net that regresses multi-channel
heatmaps for landmark detection. In this framework, we embed an attention
mechanism into the global-stage heatmaps to guide local-stage inference,
regressing local heatmap patches at high resolution. In addition, an Expansive
Exploration strategy improves robustness during inference by expanding the
search scope without increasing model complexity. We have evaluated our
framework on the most widely used public dataset for landmark detection in
cephalometric X-ray images. With less computation and manual tuning, our
framework achieves state-of-the-art results.
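The heatmap-regression idea at the core of such frameworks (one target heatmap per landmark, with the coordinate read back from the peak) can be sketched as follows; this is a schematic of target construction and decoding, not the authors' implementation:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=3.0):
    """Target heatmap for one landmark: a 2-D Gaussian peaked at `center`."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center  # (row, col) of the landmark
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def decode_heatmap(heatmap):
    """Recover the (row, col) landmark coordinate as the heatmap argmax."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

# Hypothetical 64x64 heatmap; decoding recovers the landmark at (20, 41):
hm = gaussian_heatmap((64, 64), center=(20, 41))
```

A trained network would output such heatmaps directly; the decoding step stays the same.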
Deep learning for cephalometric landmark detection: systematic review and meta-analysis
Objectives: Deep learning (DL) has been increasingly employed for automated landmark detection, e.g., for cephalometric purposes. We performed a systematic review and meta-analysis to assess the accuracy and underlying evidence for DL for cephalometric landmark detection on 2-D and 3-D radiographs.
Methods: Diagnostic accuracy studies published in 2015-2020 in Medline/Embase/IEEE/arXiv and employing DL for cephalometric landmark detection were identified and extracted by two independent reviewers. Random-effects meta-analysis, subgroup, and meta-regression were performed, and study quality was assessed using QUADAS-2. The review was registered (PROSPERO no. 227498).
Data: From 321 identified records, 19 studies (published 2017-2020), all employing convolutional neural networks, mainly on 2-D lateral radiographs (n=15), using data from publicly available datasets (n=12), and testing the detection of a mean of 30 (SD: 25; range: 7-93) landmarks, were included. The reference test was established by two experts (n=11), one expert (n=4), three experts (n=3), or a set of annotators (n=1). Risk of bias was high, and applicability concerns were detected for most studies, mainly regarding data selection and reference test conduct. Landmark prediction error centered around the 2-mm error threshold (mean difference: -0.581 mm; 95% CI: -1.264 to 0.102 mm). The proportion of landmarks detected within this 2-mm threshold was 0.799 (95% CI: 0.770 to 0.824).
Conclusions: DL shows relatively high accuracy for detecting landmarks on cephalometric imagery. The overall body of evidence is consistent but suffers from high risk of bias. Demonstrating robustness and generalizability of DL for landmark detection is needed.
Clinical significance: Existing DL models show consistent and largely high accuracy for automated detection of cephalometric landmarks. The majority of studies so far focused on 2-D imagery; data on 3-D imagery are sparse, but promising. Future studies should focus on demonstrating generalizability, robustness, and clinical usefulness of DL for this objective.
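The random-effects pooling behind estimates like the -0.581 mm figure above can be sketched with the DerSimonian-Laird method; the study-level inputs here are hypothetical, not the review's data:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.

    effects: per-study effect estimates (e.g., mean landmark error minus
    the 2-mm threshold); variances: their within-study variances.
    Returns the pooled effect and its 95% confidence interval.
    """
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Two hypothetical studies with equal variance; the pooled effect is their mean:
pooled, ci = dersimonian_laird([0.5, 1.5], [1.0, 1.0])
# pooled is 1.0
```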
Pattern of change in the mean error of AI-based hard tissue landmark identification on serial lateral cephalometric radiographs of skeletal Class III malocclusion patients who underwent two-jaw orthognathic surgery and orthodontic treatment
Doctoral dissertation, Seoul National University Graduate School, August 2022.
Objective: Recently, auto-digitization of hard tissue landmarks on lateral cephalograms (Lat-cephs) has been reported for artificial intelligence (AI) models using cascade convolutional neural networks (CNNs). The aim of this study was to investigate the pattern of accuracy change in AI-assisted hard tissue landmark identification in serial Lat-cephs of Class III patients who underwent two-jaw orthognathic surgery and orthodontic treatment, using a cascade CNN algorithm.
Materials and Methods: A total of 3,188 Lat-cephs of 797 Class III patients were allocated into the training and validation sets (3,004 Lat-cephs of 751 patients) and the test set (184 Lat-cephs of 46 patients; subdivided into genioplasty and non-genioplasty groups, n=23 per group) for landmark identification using a cascade CNN model. Each Class III patient in the test set had four Lat-cephs: initial (T0), pre-surgery [T1, presence of orthodontic brackets (OBs)], post-surgery [T2, presence of OBs and surgical plates and screws (SPS)], and debonding [T3, presence of SPS and fixed retainers (FRs)]. After the mean errors of 20 hard tissue landmarks between the human gold standard and the cascade CNN model were calculated, statistical analysis was performed.
Results: Results are as follows. (1) The total mean error was 1.17 mm, without significant difference among the four time-points (T0, 1.20 mm; T1, 1.14 mm; T2, 1.18 mm; T3, 1.15 mm). (2) In the comparison of time-point pairs [(T0, T1) vs. (T2, T3)], ANS, A point, and B point showed an increase in error (P<0.01; P<0.05; P<0.01), while the distal contact point of the maxillary first molar (Mx6D) and the distal contact point of the mandibular first molar (Md6D) showed a decrease in error (P<0.01; P<0.01). (3) No difference in errors existed at B point, Pogonion, Menton, the crown tip of the mandibular central incisor (Md1C), or the root apex of the mandibular central incisor (Md1R) between the genioplasty and non-genioplasty groups.
Conclusion: The cascade CNN model can be used for auto-digitization of hard tissue landmarks in serial Lat-cephs, including the initial, pre-surgery, post-surgery, and debonding time points, despite the presence of OBs, SPS, FRs, genioplasty, and bone remodeling.
Empirical Evaluation of Deep Learning Approaches for Landmark Detection in Fish Bioimages
In this paper we perform an empirical evaluation of variants of deep learning methods to automatically localize anatomical landmarks in bioimages of fishes acquired using different imaging modalities (microscopy and radiography). We compare two methodologies, namely heatmap-based regression and multivariate direct regression, and evaluate them in combination with several convolutional neural network (CNN) architectures. Heatmap-based regression approaches employ Gaussian or Exponential heatmap generation functions combined with CNNs to output heatmaps corresponding to landmark locations, whereas direct regression approaches directly output the (x, y) coordinates of landmark locations. In our experiments, we use two microscopy datasets of Zebrafish and Medaka fish and one radiography dataset of gilthead Seabream. On all three datasets, the heatmap approach with the Exponential function and the U-Net architecture performs best.
Datasets and open-source code for training and prediction are made available to ease future landmark detection research and bioimaging applications.
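The Gaussian and Exponential heatmap generation functions compared above differ only in how activation decays with distance from the landmark; a minimal sketch, with illustrative parameter choices:

```python
import numpy as np

def heatmap(shape, center, sigma=3.0, kind="gaussian"):
    """Target heatmap for one landmark at `center` = (row, col).

    'gaussian' decays with squared distance; 'exponential' decays with
    plain distance, giving a sharper peak but heavier tails.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - center[1]) ** 2 + (ys - center[0]) ** 2
    if kind == "gaussian":
        return np.exp(-d2 / (2 * sigma ** 2))
    return np.exp(-np.sqrt(d2) / sigma)

g = heatmap((32, 32), (16, 16), kind="gaussian")
e = heatmap((32, 32), (16, 16), kind="exponential")
# Both peak at 1.0 on the landmark; near the peak the exponential
# map falls off faster (at 3 px: exp(-1) vs exp(-0.5) for sigma=3).
```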
A comparative study of accuracy and computational performance between the latest machine learning algorithms for automatic identification of cephalometric landmarks: YOLOv3 vs. SSD
Doctoral dissertation, Seoul National University Graduate School, August 2019.
Introduction: The purpose of this study was to compare two of the latest deep learning algorithms for automatic identification of cephalometric landmarks in terms of accuracy and computational efficiency. This study uses two different algorithms for automated cephalometric landmark identification with an extended number of landmarks: 1) a You-Only-Look-Once version 3 (YOLOv3) based method with modification, and 2) a Single Shot Detector (SSD) based method.
Materials and methods: A total of 1,028 cephalometric radiographic images were selected as learning data to train the YOLOv3 and SSD methods. The number of target labels was 80 landmarks. After the deep learning process, the algorithms were tested on a new test data set comprising 283 images. Accuracy was determined by measuring the mean point-to-point error and the success detection rate (SDR), and visualized by drawing 2-dimensional scattergrams. The computational time of both algorithms was also recorded.
Results: The YOLOv3 algorithm outperformed SSD in accuracy for 38 of 80 landmarks. The other 42 landmarks did not show a statistically significant difference between YOLOv3 and SSD. Error plots of YOLOv3 showed not only a smaller error range but also a more isotropic tendency. The mean computational time spent per image was 0.05 seconds for YOLOv3 and 2.89 seconds for SSD. YOLOv3 showed approximately 5% higher accuracy compared with the top benchmarks in the literature.
Conclusions: Between the two algorithms applied, YOLOv3 seems promising as a fully automated cephalometric landmark identification system for use in clinical practice.
- โฆ