72 research outputs found

    The reliability of cephalometric tracing using AI

    Introduction: The objective of this study is to compare manual cephalometric analysis with automated analysis by artificial intelligence in order to assess the reliability of the latter. Our research hypothesis is that the manual technique is the more reliable of the two methods and is still considered the gold standard. Method: A total of 99 lateral cephalometric radiographs were collected. Manual-technique (MT) tracings and automated landmark localization by artificial intelligence (AI) were performed for all radiographs. The localization of 29 commonly used landmarks was compared between the two groups using the mean radial error (MRE) and the successful detection rate (SDR) at a 2 mm threshold. AudaxCeph software version 6.2.57.4225 (Audax d.o.o., Ljubljana, Slovenia) was used for both the manual and the AI analyses. Results: The MRE and SDR for the inter-examiner reliability test were 0.87 ± 0.61 mm and 95%, respectively. For the comparison between MT and AI landmarking, the MRE and SDR over all landmarks were 1.48 ± 1.42 mm and 78%, respectively. When dental landmarks were excluded, the MRE decreased to 1.33 ± 1.39 mm and the SDR increased to 84%. When only hard tissue landmarks were included (excluding soft tissue and dental points), the MRE decreased further to 1.25 ± 1.09 mm and the SDR increased to 85%. When only soft tissue landmarks were included, the MRE increased to 1.68 ± 1.89 mm and the SDR decreased to 78%. Conclusion: The software performed similarly to what has previously been reported in the literature for software using an analogous modeling framework. Comparing the software's landmarking with manual landmarking, our results show that manual landmarking was more accurate. The software performed very well for hard tissue points, but its accuracy declined for soft tissue and dental points. We conclude that this technology shows great promise for application in clinical settings under the clinician's supervision.
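
    The MRE and 2 mm SDR used here (and in several studies below) can be made concrete with a minimal sketch, assuming paired landmark coordinate arrays in millimetres; the function name and array layout are illustrative choices, not code from the study.

```python
import numpy as np

def mre_and_sdr(pred, ref, threshold_mm=2.0):
    """Mean radial error (MRE) and successful detection rate (SDR).

    pred, ref: (n_landmarks, 2) arrays of coordinates in mm.
    """
    radial = np.linalg.norm(pred - ref, axis=1)    # per-landmark Euclidean distance
    mre, sd = radial.mean(), radial.std()          # reported as mean +/- SD, e.g. 1.48 +/- 1.42
    sdr = 100.0 * (radial <= threshold_mm).mean()  # % of landmarks within 2 mm
    return mre, sd, sdr
```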

    'Aariz: A Benchmark Dataset for Automatic Cephalometric Landmark Detection and CVM Stage Classification

    The accurate identification and precise localization of cephalometric landmarks enable the classification and quantification of anatomical abnormalities. The traditional way of marking cephalometric landmarks on lateral cephalograms is a monotonous and time-consuming job. Endeavours to develop automated landmark detection systems have persistently been made; however, they remain inadequate for orthodontic applications due to the unavailability of a reliable dataset. We propose a new state-of-the-art dataset to facilitate the development of robust AI solutions for quantitative morphometric analysis. The dataset includes 1000 lateral cephalometric radiographs (LCRs) obtained from 7 different radiographic imaging devices with varying resolutions, making it the most diverse and comprehensive cephalometric dataset to date. The clinical experts on our team meticulously annotated each radiograph with 29 cephalometric landmarks, including the most significant soft tissue landmarks marked in any publicly available dataset so far. Additionally, our experts labelled the cervical vertebral maturation (CVM) stage of the patient in each radiograph, making this dataset the first standard resource for CVM classification. We believe that this dataset will be instrumental in the development of reliable automated landmark detection frameworks for use in orthodontics and beyond.

    Automatic Three-Dimensional Cephalometric Annotation System Using Three-Dimensional Convolutional Neural Networks

    Background: Three-dimensional (3D) cephalometric analysis using computed tomography data has been rapidly adopted for dysmorphosis and anthropometry. Several approaches to automatic 3D annotation have been proposed to overcome the limitations of traditional cephalometry. The purpose of this study was to evaluate the accuracy of our newly developed system, which uses a deep learning algorithm for automatic 3D cephalometric annotation. Methods: To overcome current technical limitations, several measures were developed to directly annotate 3D human skull data. Our deep learning-based system mainly consists of a 3D convolutional neural network and image data resampling. Results: The discrepancies between the reference and predicted coordinate values along the three axes and in 3D distance were calculated to evaluate system accuracy. Our new system yielded prediction errors of 3.26, 3.18, and 4.81 mm (along the three axes) and 7.61 mm (in 3D). Moreover, there was no significant difference among the three landmark groups (midsagittal plane, horizontal plane, and mandible; p > 0.05). Conclusion: A new 3D convolutional neural network-based automatic annotation system for 3D cephalometry was developed. The strategies used to implement the system were detailed, and measurement results were evaluated for accuracy. Further development of this system is planned for full clinical application of automatic 3D cephalometric annotation.
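
    To relate the per-axis figures to the single 3D figure, here is a minimal sketch of both computations, assuming (n, 3) coordinate arrays in millimetres; it is generic, not the authors' code.

```python
import numpy as np

def annotation_errors(pred, ref):
    """Per-axis mean absolute errors and mean 3D Euclidean error (mm)."""
    diff = pred - ref
    per_axis = np.abs(diff).mean(axis=0)           # mean |dx|, |dy|, |dz|
    dist_3d = np.linalg.norm(diff, axis=1).mean()  # aggregates all three axes
    return per_axis, dist_3d
```

    Because the 3D distance aggregates the discrepancies of all three axes, the 7.61 mm 3D error reported above is necessarily larger than any single per-axis error.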

    An Attention-Guided Deep Regression Model for Landmark Detection in Cephalograms

    The cephalometric tracing method is commonly used in orthodontic diagnosis and treatment planning. In this paper, we propose a deep learning-based framework to automatically detect anatomical landmarks in cephalometric X-ray images. We train a deep encoder-decoder for landmark detection and combine the global landmark configuration with local high-resolution feature responses. The proposed framework is based on a 2-stage U-Net, regressing multi-channel heatmaps for landmark detection. In this framework, we embed an attention mechanism with global-stage heatmaps, guiding the local-stage inference to regress local heatmap patches at high resolution. In addition, an Expansive Exploration strategy improves robustness at inference time, expanding the search scope without increasing model complexity. We have evaluated our framework on the most widely used public dataset for landmark detection in cephalometric X-ray images. With less computation and manual tuning, our framework achieves state-of-the-art results.
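
    The heatmap-regression formulation can be sketched in a few lines: the training target is a smooth peak centred on each landmark, and decoding takes the location of the strongest response. The Gaussian target and argmax decoding below are common generic choices and may differ from the framework's exact functions.

```python
import numpy as np

def gaussian_heatmap(height, width, center_xy, sigma=3.0):
    """Training target for one landmark: a 2-D Gaussian centred on it."""
    ys, xs = np.mgrid[0:height, 0:width]
    d2 = (xs - center_xy[0]) ** 2 + (ys - center_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decode_heatmap(heatmap):
    """Predicted landmark: coordinates of the peak response."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return float(x), float(y)
```

    In the two-stage design described above, the global-stage heatmaps provide coarse peaks and the attention mechanism guides a local stage that re-regresses high-resolution patches around them.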

    Deep learning for cephalometric landmark detection: systematic review and meta-analysis

    Objectives: Deep learning (DL) has been increasingly employed for automated landmark detection, e.g., for cephalometric purposes. We performed a systematic review and meta-analysis to assess the accuracy of, and underlying evidence for, DL for cephalometric landmark detection on 2-D and 3-D radiographs. Methods: Diagnostic accuracy studies published in 2015-2020 in Medline/Embase/IEEE/arXiv that employed DL for cephalometric landmark detection were identified and extracted by two independent reviewers. Random-effects meta-analysis, subgroup analysis, and meta-regression were performed, and study quality was assessed using QUADAS-2. The review was registered (PROSPERO no. 227498). Data: From 321 identified records, 19 studies (published 2017-2020) were included, all employing convolutional neural networks, mainly on 2-D lateral radiographs (n=15), using data from publicly available datasets (n=12), and testing the detection of a mean of 30 (SD: 25; range: 7-93) landmarks. The reference test was established by two experts (n=11), one expert (n=4), three experts (n=3), or a set of annotators (n=1). Risk of bias was high, and applicability concerns were detected for most studies, mainly regarding data selection and reference test conduct. Landmark prediction error centered around the 2-mm error threshold (pooled mean difference: -0.581 mm; 95% CI: -1.264 to 0.102 mm). The proportion of landmarks detected within this 2-mm threshold was 0.799 (95% CI: 0.770 to 0.824). Conclusions: DL shows relatively high accuracy for detecting landmarks on cephalometric imagery. The overall body of evidence is consistent but suffers from a high risk of bias. Demonstrating the robustness and generalizability of DL for landmark detection is needed. Clinical significance: Existing DL models show consistent and largely high accuracy for automated detection of cephalometric landmarks. The majority of studies so far focused on 2-D imagery; data on 3-D imagery are sparse, but promising. Future studies should focus on demonstrating the generalizability, robustness, and clinical usefulness of DL for this objective.
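
    The pooled figures come from a random-effects meta-analysis; as a rough illustration of such pooling, here is a DerSimonian-Laird sketch with hypothetical per-study inputs (the review's actual model and data are not reproduced here).

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate and 95% CI (DerSimonian-Laird)."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                            # fixed-effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()         # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1.0 / (variances + tau2)                # random-effects weights
    pooled = (w_re * effects).sum() / w_re.sum()
    se = (1.0 / w_re.sum()) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical mean landmark errors (mm) and their variances from four studies
print(dersimonian_laird([1.2, 1.8, 2.4, 1.5], [0.04, 0.09, 0.16, 0.06]))
```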

    ์–‘์•… ์•…๊ต์ •์ˆ˜์ˆ ๊ณผ ์ˆ˜์ˆ  ๊ต์ •์น˜๋ฃŒ๋ฅผ ๋ฐ›์€ ๊ณจ๊ฒฉ์„ฑ III๊ธ‰ ๋ถ€์ •๊ตํ•ฉ ํ™˜์ž์˜ ์ธก๋ชจ ๋‘๋ถ€๊ณ„์ธก๋ฐฉ์‚ฌ์„ ์‚ฌ์ง„ ์˜์ƒ์—์„œ ์ธ๊ณต์ง€๋Šฅ์„ ์ด์šฉํ•œ ๊ฒฝ์กฐ์ง ๊ณ„์ธก์  ์‹๋ณ„ ์‹œ ํ‰๊ท  ์˜ค์ฐจ์˜ ๋ณ€ํ™” ์–‘์ƒ

    Doctoral dissertation, Seoul National University Graduate School, Department of Dental Science, School of Dentistry, August 2022 (advisor: Seung-Hak Baek). Objective: Automatic digitization of hard tissue landmarks on lateral cephalograms (Lat-cephs) by artificial intelligence models using a cascade convolutional neural network (CNN) has recently been reported. The aim of this study was to investigate the pattern of accuracy change in artificial intelligence (AI)-assisted hard tissue landmark identification on serial Lat-cephs of Class III patients who underwent two-jaw orthognathic surgery and orthodontic treatment, using a cascade CNN algorithm. Materials and Methods: A total of 3,188 Lat-cephs of 797 Class III patients were allocated to the training and validation sets (3,004 Lat-cephs of 751 patients) and the test set (184 Lat-cephs of 46 patients, subdivided into genioplasty and non-genioplasty groups, n=23 per group) for landmark identification using a cascade CNN model. Each Class III patient in the test set had four Lat-cephs: initial (T0); pre-surgery (T1, with orthodontic brackets (OBs) present); post-surgery (T2, with OBs and surgical plates and screws (SPS) present); and debonding (T3, with SPS and fixed retainers (FR) present). Mean errors of 20 hard tissue landmarks between the human gold standard and the cascade CNN model were calculated and statistically analyzed. Results: (1) The total mean error was 1.17 mm, with no significant difference among the four time points (T0, 1.20 mm; T1, 1.14 mm; T2, 1.18 mm; T3, 1.15 mm). (2) In the comparison of the grouped time points [(T0, T1) vs. (T2, T3)], ANS, A point, and B point showed an increase in error (P<0.01; P<0.05; P<0.01), while the distal contact points of the maxillary and mandibular first molars (Mx6D and Md6D) showed a decrease in error (P<0.01; P<0.01). (3) There was no difference in error at B point, Pogonion, Menton, the crown tip of the mandibular central incisor (Md1C), or the root apex of the mandibular central incisor (Md1R) between the genioplasty and non-genioplasty groups. Conclusion: The cascade CNN model can be used for auto-digitization of hard tissue landmarks on serial Lat-cephs spanning the initial, pre-surgery, post-surgery, and debonding time points, despite the presence of OBs, SPS, FR, genioplasty, and postsurgical bone remodeling.

    Empirical Evaluation of Deep Learning Approaches for Landmark Detection in Fish Bioimages

    In this paper, we perform an empirical evaluation of variants of deep learning methods to automatically localize anatomical landmarks in bioimages of fishes acquired using different imaging modalities (microscopy and radiography). We compare two methodologies, heatmap-based regression and multivariate direct regression, and evaluate them in combination with several convolutional neural network (CNN) architectures. Heatmap-based regression approaches employ Gaussian or exponential heatmap generation functions combined with CNNs to output heatmaps corresponding to landmark locations, whereas direct regression approaches directly output the (x, y) coordinates of the landmarks. In our experiments, we use two microscopy datasets of zebrafish and medaka and one radiography dataset of gilthead seabream. On all three datasets, the heatmap approach with the exponential function and the U-Net architecture performs best. Datasets and open-source code for training and prediction are made available to ease future landmark detection research and bioimaging applications.
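
    The two methodologies compared above differ mainly in how the network is supervised; here is a minimal sketch of the two loss formulations, with generic shapes and plain MSE assumed rather than the paper's exact code.

```python
import numpy as np

def heatmap_loss(pred_maps, target_maps):
    """Heatmap-based regression: pixel-wise MSE against target heatmaps
    built with a Gaussian or exponential generation function."""
    return np.mean((pred_maps - target_maps) ** 2)

def direct_regression_loss(pred_xy, true_xy):
    """Multivariate direct regression: MSE directly on (x, y) coordinates."""
    return np.mean(np.sum((pred_xy - true_xy) ** 2, axis=-1))
```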

    A comparative study of accuracy and computational performance between the latest machine learning algorithms for automatic identification of cephalometric landmarks – YOLOv3 vs SSD

    Doctoral dissertation, Seoul National University Graduate School of Dentistry, Department of Dentistry, August 2019 (advisor: Shin-Jae Lee). Introduction: The purpose of this study was to compare two of the latest deep learning algorithms for automatic identification of cephalometric landmarks in terms of accuracy and computational efficiency. Two algorithms were applied to automated cephalometric landmark identification with an extended number of landmarks: 1) a modified You-Only-Look-Once version 3 (YOLOv3)-based method, and 2) a Single Shot Detector (SSD)-based method. Materials and methods: A total of 1,028 cephalometric radiographic images were selected as learning data to train the YOLOv3 and SSD methods. The number of target labels was 80 landmarks. After the deep learning process, the algorithms were tested on a new test set of 283 images. Accuracy was determined by measuring the mean point-to-point error and the success detection rate (SDR), and was visualized with 2-dimensional scattergrams. The computational time of both algorithms was also recorded. Results: The YOLOv3 algorithm outperformed SSD in accuracy for 38 of 80 landmarks; the remaining 42 landmarks showed no statistically significant difference between the two algorithms. Error plots of YOLOv3 showed not only a smaller error range but also a more isotropic tendency. The mean computational time per image was 0.05 seconds for YOLOv3 and 2.89 seconds for SSD. YOLOv3 showed approximately 5% higher accuracy than the top benchmarks in the literature. Conclusions: Of the two algorithms, YOLOv3 appears promising as a fully automated cephalometric landmark identification system for use in clinical practice.
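
    Detector-based landmarking of this kind treats each landmark as a small object class; a common post-processing step, assumed here since the thesis's exact pipeline is not shown, keeps the highest-confidence box per class and takes its centre as the landmark coordinate.

```python
def boxes_to_landmarks(detections):
    """detections: iterable of (landmark_class, confidence, x1, y1, x2, y2).
    Returns {landmark_class: (x, y)} from each class's best box centre."""
    best = {}
    for cls, conf, x1, y1, x2, y2 in detections:
        if cls not in best or conf > best[cls][0]:
            best[cls] = (conf, ((x1 + x2) / 2.0, (y1 + y2) / 2.0))
    return {cls: xy for cls, (_, xy) in best.items()}

# Example: two candidate boxes for 'sella', one for 'nasion'
dets = [("sella", 0.91, 100, 120, 110, 130),
        ("sella", 0.55, 98, 118, 108, 128),
        ("nasion", 0.88, 300, 80, 310, 90)]
print(boxes_to_landmarks(dets))  # {'sella': (105.0, 125.0), 'nasion': (305.0, 85.0)}
```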