90 research outputs found

    The reliability of cephalometric tracing using AI

    Introduction: The objective of this study is to compare manual cephalometric analysis with automatic analysis by artificial intelligence in order to assess the reliability of the latter. Our research hypothesis is that the manual technique is the more reliable of the two methods and is still considered the gold standard. Method: A total of 99 lateral cephalometric radiographs were collected. Manual technique (MT) and automatic artificial-intelligence (AI) landmark-localization tracings were performed for all radiographs. The localization of 29 commonly used landmarks was compared between the two groups, using the mean radial error (MRE) and a successful detection rate (SDR) at 2 mm. AudaxCeph software version 6.2.57.4225 (Audax d.o.o., Ljubljana, Slovenia) was used for both the manual and the AI analysis. Results: The MRE and SDR for the inter-examiner reliability test were 0.87 ± 0.61 mm and 95%, respectively. For the comparison between MT and AI landmarking, the MRE and SDR over all landmarks were 1.48 ± 1.42 mm and 78%, respectively. When dental landmarks were excluded, the MRE decreased to 1.33 ± 1.39 mm and the SDR increased to 84%. When only hard-tissue landmarks were included (excluding soft-tissue and dental points), the MRE decreased further to 1.25 ± 1.09 mm and the SDR increased to 85%. When only soft-tissue landmarks were included, the MRE increased to 1.68 ± 1.89 mm and the SDR decreased to 78%. Conclusion: The software performed similarly to what was previously reported in the literature for software using an analogous modeling framework. Comparing the software's landmarking to manual landmarking, our results reveal that manual landmarking resulted in higher accuracy. The software performed very well for hard-tissue points, but its accuracy decreased for soft-tissue and dental points. We conclude that this technology shows great promise for application in clinical settings under the doctor's supervision.
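The two metrics used throughout this abstract are straightforward to compute: the MRE is the mean Euclidean distance between reference and predicted landmark positions, and the SDR is the percentage of landmarks falling within a threshold (2 mm here, following the abstract). A minimal sketch with made-up 2D coordinates:

```python
import numpy as np

def mre_and_sdr(reference, predicted, threshold_mm=2.0):
    """Mean radial error (mm) and successful detection rate (%) over
    a set of landmarks, both arrays of shape (n_landmarks, 2) in mm."""
    errors = np.linalg.norm(np.asarray(reference) - np.asarray(predicted), axis=1)
    mre = errors.mean()                            # mean radial error
    sdr = (errors <= threshold_mm).mean() * 100    # % of landmarks within threshold
    return mre, sdr

# Illustrative coordinates for three landmarks (mm)
ref = np.array([[10.0, 12.0], [30.0, 25.0], [50.0, 40.0]])
pred = np.array([[10.5, 12.5], [33.0, 25.0], [50.0, 41.0]])
mre, sdr = mre_and_sdr(ref, pred)   # mre ≈ 1.57 mm, sdr ≈ 66.7 %
```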

    Artificial Intelligence in Orthodontics: Where Are We Now? A Scoping Review

    Objective: This scoping review aims to determine the applications of Artificial Intelligence (AI) that are extensively employed in the field of Orthodontics, to evaluate its benefits, and to discuss its potential implications in this speciality. Recent decades have witnessed enormous changes in our profession. The arrival of new and more aesthetic options in orthodontic treatment, the transition to a fully digital workflow, the emergence of temporary anchorage devices and new imaging methods all provide both patients and professionals with a new focus in orthodontic care. Materials and methods: This review was performed following the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines. The electronic literature search was performed through the MEDLINE/PubMed, Scopus, Web of Science, Cochrane and IEEE Xplore databases with an 11-year time restriction: January 2010 to March 2021. No additional manual searches were performed. Results: The electronic literature search initially returned 311 records, and 115 after removing duplicate references. Finally, the application of the inclusion criteria resulted in 17 eligible publications in the qualitative synthesis review. Conclusion: The analysed studies demonstrated that Convolutional Neural Networks can be used for the automatic detection of anatomical reference points on radiological images. In the growth and development research area, the Cervical Vertebral Maturation stage can be determined using an Artificial Neural Network model, obtaining the same results as expert human observers. AI technology can also improve the diagnostic accuracy for orthodontic treatments, thereby helping the orthodontist work more accurately and efficiently.

    Automatic Three-Dimensional Cephalometric Annotation System Using Three-Dimensional Convolutional Neural Networks

    Background: Three-dimensional (3D) cephalometric analysis using computed tomography data has been rapidly adopted for dysmorphosis and anthropometry. Several different approaches to automatic 3D annotation have been proposed to overcome the limitations of traditional cephalometry. The purpose of this study was to evaluate the accuracy of our newly developed system using a deep learning algorithm for automatic 3D cephalometric annotation. Methods: To overcome current technical limitations, some measures were developed to directly annotate 3D human skull data. Our deep learning-based model system mainly consisted of a 3D convolutional neural network and image data resampling. Results: The discrepancies between the referenced and predicted coordinate values in three axes and in 3D distance were calculated to evaluate system accuracy. Our new model system yielded prediction errors of 3.26, 3.18, and 4.81 mm (for the three axes) and 7.61 mm (for 3D). Moreover, there was no difference among the landmarks of the three groups, including the midsagittal plane, horizontal plane, and mandible (p > 0.05). Conclusion: A new 3D convolutional neural network-based automatic annotation system for 3D cephalometry was developed. The strategies used to implement the system were detailed and the measurement results were evaluated for accuracy. Further development of this system is planned for full clinical application of automatic 3D cephalometric annotation.
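The reported accuracy figures are simple coordinate discrepancies: a mean error along each axis plus a mean 3D Euclidean distance. A minimal sketch of such a computation, with illustrative synthetic landmarks (not the study's data):

```python
import numpy as np

def annotation_errors(reference, predicted):
    """Mean absolute discrepancy along each axis (x, y, z) and mean
    3D Euclidean distance between reference and predicted landmarks,
    both given as (n_landmarks, 3) arrays in millimetres."""
    diff = np.asarray(predicted) - np.asarray(reference)
    per_axis = np.abs(diff).mean(axis=0)           # mean error per axis
    dist_3d = np.linalg.norm(diff, axis=1).mean()  # mean 3D distance
    return per_axis, dist_3d

# Two illustrative landmarks, reference at the origin for simplicity
ref = np.zeros((2, 3))
pred = np.array([[3.0, 0.0, 4.0], [0.0, 3.0, 4.0]])
per_axis, dist_3d = annotation_errors(ref, pred)   # [1.5, 1.5, 4.0] and 5.0
```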

    Anatomical Structure Sketcher for Cephalograms by Bimodal Deep Learning

    The lateral cephalogram is a commonly used medium to acquire patient-specific morphology for diagnosis and treatment planning in clinical dentistry. Robust anatomical structure detection and accurate annotation remain challenging given personal skeletal variations and the image blur caused by device-specific projection magnification, together with structure overlapping in lateral cephalograms. We propose a novel cephalogram sketcher system, where the contour extraction of anatomical structures is formulated as a cross-modal morphology transfer from regular image patches to arbitrary curves. Specifically, the image patches of structures of interest are located by a hierarchical pictorial model. The automatic contour sketcher converts the image patch to a morphable boundary curve via a bimodal deep Boltzmann machine. The deep machine learns a joint representation of patch textures and contours, and forms a path from one modality (patches) to the other (contours). Thus, the sketcher can infer the contours by alternating Gibbs sampling along the path, in a manner similar to data completion. The proposed method is not only robust in structure detection, but also tends to produce accurate structure shapes and landmarks even in blurry X-ray images. The experiments performed on clinically captured cephalograms demonstrate the effectiveness of our method.
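The inference step described above, clamping the observed modality and alternating Gibbs sampling through a shared hidden layer to fill in the other modality, can be illustrated with a toy model. This is only a schematic sketch of the sampling idea: the dimensions are invented and the weights are random, untrained stand-ins, not the authors' bimodal deep Boltzmann machine:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One shared hidden layer connected to a "patch" modality and a
# "contour" modality; weights are random placeholders for a trained model.
n_patch, n_contour, n_hidden = 16, 8, 12
W_patch = rng.normal(scale=0.1, size=(n_patch, n_hidden))
W_contour = rng.normal(scale=0.1, size=(n_contour, n_hidden))

patch = rng.integers(0, 2, size=n_patch).astype(float)  # observed modality (clamped)
contour = rng.random(n_contour)                         # unknown modality to infer

for _ in range(50):  # alternating Gibbs sampling with the patch clamped
    h_prob = sigmoid(patch @ W_patch + contour @ W_contour)
    hidden = (rng.random(n_hidden) < h_prob).astype(float)
    c_prob = sigmoid(hidden @ W_contour.T)
    contour = (rng.random(n_contour) < c_prob).astype(float)
```

With trained weights, the sampled contour units would converge toward contours consistent with the observed patch texture, which is the data-completion behaviour the abstract describes.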

    Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to the proposal of less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected of suffering from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way, in order to test a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation called the i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied to the facial features and i-vectors to estimate the AHI. The activities in this paper were funded by the Spanish Ministry of Economy and Competitiveness and the European Union (FEDER) as part of the TEC2012-37585-C02 (CMC-V2) project. The authors also thank Sonia Martinez Diaz for her effort in collecting the OSA database that is used in this study.
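The regression step above amounts to concatenating the speech and facial feature vectors and fitting an SVR against the AHI. A schematic sketch with purely synthetic data, using scikit-learn's SVR; the feature dimensions, subject count, and hyperparameters here are assumptions for illustration, not the paper's values:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins: a 400-dimensional i-vector per subject plus a
# handful of craniofacial measurements, with synthetic AHI labels.
n_subjects = 50
i_vectors = rng.normal(size=(n_subjects, 400))
facial_features = rng.normal(size=(n_subjects, 6))
ahi = rng.uniform(0.0, 60.0, size=n_subjects)

X = np.hstack([i_vectors, facial_features])     # fuse speech and image features
model = SVR(kernel="rbf", C=1.0, epsilon=0.5).fit(X, ahi)
ahi_pred = model.predict(X)                     # one AHI estimate per subject
```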

    Applications of artificial intelligence in dentistry: A comprehensive review

    This work was funded by the Spanish Ministry of Sciences, Innovation and Universities under Projects RTI2018-101674-B-I00 and PGC2018-101904-A-100, University of Granada project A.TEP. 280.UGR18, I+D+I Junta de Andalucia 2020 project P20-00200, and Fapergs/Capes do Brasil grant 19/25510000928-3. Funding for open-access charge: Universidad de Granada/CBUA. Objective: To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with broad insight into the different advances that these technologies and tools have produced, paying special attention to the area of esthetic dentistry and color research. Materials and methods: The comprehensive review was conducted in the MEDLINE/PubMed, Web of Science, and Scopus databases, for papers published in English in the last 20 years. Results: Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. Conclusions: The studies reviewed report outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry lies in the design of integrated approaches providing personalized treatments to patients. In addition, esthetic dentistry can benefit from these advances by developing models allowing a complete characterization of tooth color, enhancing the accuracy of dental restorations.
Clinical significance: The use of AI and ML has an increasing impact on the dental profession and is complementing the development of digital technologies and tools, with wide application in treatment planning and esthetic dentistry procedures.