
    Relational Reasoning Network (RRN) for Anatomical Landmarking

    Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for craniomaxillofacial (CMF) bones. Available methods require segmentation of the object of interest for precise landmarking. In contrast, our purpose in this study is to perform anatomical landmarking using the inherent relations of CMF bones without explicitly segmenting them. We propose a new deep network architecture, called the relational reasoning network (RRN), to accurately learn the local and global relations of the landmarks; specifically, we are interested in learning landmarks in the CMF region: the mandible, maxilla, and nasal bones. The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units, without the need for segmentation. Given a few landmarks as input, the system accurately and efficiently localizes the remaining landmarks on the aforementioned bones. For a comprehensive evaluation of RRN, we used cone-beam computed tomography (CBCT) scans of 250 patients. The system identifies landmark locations very accurately even when there are severe pathologies or deformations in the bones. The RRN has also revealed unique relationships among the landmarks that help us draw inferences about the informativeness of the landmark points. RRN is invariant to the order of landmarks, and it allowed us to discover the optimal configurations (number and location) of landmarks to be localized within the object of interest (mandible) or nearby objects (maxilla and nasal bones). To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations of objects using deep learning. Comment: 10 pages, 6 figures, 3 tables
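The core idea of relation-based landmarking — inferring unseen landmarks from a few given ones — can be sketched in miniature. This is a hypothetical illustration, not the authors' RRN: instead of a deep network, it "learns" mean pairwise offsets between landmarks from training scans and predicts each missing landmark by averaging the offsets applied from the known landmarks.

```python
import numpy as np

# Hypothetical sketch of relation-based landmarking (not the authors' RRN):
# store the mean offset between every landmark pair at training time, then
# predict a missing landmark by averaging the estimates obtained from each
# of the few landmarks that are given.

def learn_pairwise_offsets(training_sets):
    """training_sets: list of (L, 3) arrays of landmark coordinates."""
    stacked = np.stack(training_sets)  # (N, L, 3)
    # offsets[i, j] = mean vector pointing from landmark i to landmark j
    return (stacked[:, None, :, :] - stacked[:, :, None, :]).mean(axis=0)

def predict_missing(known_idx, known_xyz, offsets, n_landmarks):
    """Estimate all landmarks from the known subset via learned offsets."""
    pred = np.zeros((n_landmarks, 3))
    for j in range(n_landmarks):
        # average the estimates x_i + offset(i -> j) over all known landmarks i
        pred[j] = np.mean([known_xyz[k] + offsets[i, j]
                           for k, i in enumerate(known_idx)], axis=0)
    return pred
```

A learned network replaces these fixed mean offsets with relations conditioned on the input, which is what lets it cope with pathology and deformation.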

    Automatic Three-Dimensional Cephalometric Annotation System Using Three-Dimensional Convolutional Neural Networks

    Background: Three-dimensional (3D) cephalometric analysis using computerized tomography data has been rapidly adopted for dysmorphosis and anthropometry. Several different approaches to automatic 3D annotation have been proposed to overcome the limitations of traditional cephalometry. The purpose of this study was to evaluate the accuracy of our newly-developed system using a deep learning algorithm for automatic 3D cephalometric annotation. Methods: To overcome current technical limitations, some measures were developed to directly annotate 3D human skull data. Our deep learning-based model system mainly consisted of a 3D convolutional neural network and image data resampling. Results: The discrepancies between the referenced and predicted coordinate values in three axes and in 3D distance were calculated to evaluate system accuracy. Our new model system yielded prediction errors of 3.26, 3.18, and 4.81 mm (for the three axes) and 7.61 mm (for 3D distance). Moreover, there was no difference among the landmarks of the three groups, including the midsagittal plane, horizontal plane, and mandible (p > 0.05). Conclusion: A new 3D convolutional neural network-based automatic annotation system for 3D cephalometry was developed. The strategies used to implement the system were detailed, and the measurement results were evaluated for accuracy. Further development of this system is planned for full clinical application of automatic 3D cephalometric annotation.
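The evaluation described above — per-axis discrepancies plus a 3D Euclidean distance between referenced and predicted landmark coordinates — can be sketched as follows (hypothetical arrays, not the study's data). Note that the 3D distance is generally larger than any single per-axis error because it accumulates all three components.

```python
import numpy as np

# Sketch of landmark-annotation error metrics (illustrative, not the
# study's data): mean absolute per-axis error and mean 3D Euclidean
# distance between reference and predicted coordinates, in mm.

def annotation_errors(reference, predicted):
    """reference, predicted: (N, 3) arrays of landmark coordinates in mm."""
    diff = predicted - reference
    per_axis = np.abs(diff).mean(axis=0)          # mean |error| in x, y, z
    dist3d = np.linalg.norm(diff, axis=1).mean()  # mean 3D distance
    return per_axis, dist3d
```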

    Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis

    Objectives: The aim of the present systematic review and meta-analysis is to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images. Methods: The PubMed/Medline, IEEE Xplore, Scopus and arXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric image data suitable for 3D landmarking (Problem); a minimum of five landmarks automatically localized by a deep learning method (Intervention); manual landmarking (Comparison); and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported, as outcome, mean values and standard deviation of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication. Results: The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, whereas 11 studies were used for the meta-analysis. An overall random-effects model revealed a mean value of 2.44 mm, with high heterogeneity (I² = 98.13%, τ² = 1.018, p-value < 0.001); risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p-value = 0.012). Conclusion: Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed and improvements in landmark annotation accuracy have been achieved.
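The pooled 2.44 mm mean and the heterogeneity statistics (I², τ²) above come from a random-effects model. A minimal sketch of the standard DerSimonian-Laird pooling that produces such summaries is shown below, with illustrative inputs; the review's actual per-study data are not reproduced here.

```python
import numpy as np

# Hedged sketch of DerSimonian-Laird random-effects pooling: per-study mean
# errors and variances go in, the pooled mean, between-study variance tau^2,
# and heterogeneity I^2 come out (illustrative, not the review's data).

def random_effects(means, variances):
    y, v = np.asarray(means, float), np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()             # Cochran's Q
    df = len(y) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = (w_star * y).sum() / w_star.sum()
    return pooled, tau2, i2
```

High I² (as reported, 98.13%) indicates that most of the observed variation reflects genuine between-study differences rather than sampling error.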

    Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model and Multiscale Statistical Features

    The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, addressing the challenges caused by large morphological variations across patients and the image artifacts of CBCT.

    The reliability of cephalometric tracing using AI

    Introduction: The objective of this study is to compare manual cephalometric analysis with automatic analysis by artificial intelligence, in order to confirm the reliability of the latter. Our research hypothesis is that the manual technique is the more reliable of the two methods and is still considered the gold standard. Method: A total of 99 lateral cephalometric radiographs were collected in this study. Manual technique (MT) and automatic localization by artificial intelligence (AI) tracings were performed for all radiographs. The localization of 29 commonly used landmarks was compared between both groups. Mean radial error (MRE) and a successful detection rate (SDR) at 2 mm were used to compare both groups. AudaxCeph software version 6.2.57.4225 (Audax d.o.o., Ljubljana, Slovenia) was used for both manual and AI analysis. Results: The MRE and SDR for the inter-examiner reliability test were 0.87 ± 0.61 mm and 95%, respectively. For the comparison between the manual technique (MT) and landmarking with artificial intelligence (AI), the MRE and SDR for all landmarks were 1.48 ± 1.42 mm and 78%, respectively. When dental landmarks were excluded, the MRE decreased to 1.33 ± 1.39 mm and the SDR increased to 84%. When only hard tissue landmarks were included (excluding soft tissue and dental points), the MRE decreased further to 1.25 ± 1.09 mm and the SDR increased to 85%. When only soft tissue landmarks were included, the MRE increased to 1.68 ± 1.89 mm and the SDR decreased to 78%. Conclusion: The software performed similarly to what was previously reported in the literature for software using an analogous modeling framework. Comparing the software's landmarking to manual landmarking, our results reveal that manual landmarking resulted in higher accuracy. The software performed very well for hard tissue points, but its accuracy decreased for soft tissue and dental points. We conclude that this technology shows great promise for application in clinical settings under the doctor's supervision.
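The two metrics used throughout this abstract can be computed directly from paired landmark positions. Below is a minimal sketch with hypothetical coordinates: the mean radial error (MRE) is the average distance between the manual and AI positions, and the SDR is the fraction of landmarks falling within a clinical threshold, here 2 mm as in the study.

```python
import numpy as np

# Sketch of MRE and SDR computation over paired landmark positions
# (hypothetical coordinates, not the study's data).

def mre_sdr(manual, automatic, threshold_mm=2.0):
    """manual, automatic: (N, 2) arrays of landmark positions in mm."""
    radial = np.linalg.norm(automatic - manual, axis=1)  # per-landmark error
    mre = radial.mean()
    sd = radial.std()
    sdr = (radial <= threshold_mm).mean() * 100          # % within threshold
    return mre, sd, sdr
```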

    How accurate are the fusion of Cone-beam CT and 3-D stereophotographic images?

    Background: Cone-beam computed tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were 1) to evaluate the feasibility of integrating 3-D photos and CBCT images, 2) to assess the degree of error that may occur during the above processes, and 3) to identify facial regions that would be most appropriate for 3-D image registration. Methodology: CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where distance differences between the CBCT and 3-D photo were recorded as the signed average and the root mean square (RMS) error. Principal Findings: The signed average and RMS of the distance differences between the registered surfaces were -0.018 (±0.129) mm and 0.739 (±0.239) mm, respectively. Most errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose and zygoma. Conclusions: CBCT and 3-D photographic data can be successfully fused with minimal errors. When compared to RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning. © 2012 Jayaratne et al.
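The conclusion that the signed average under-represents registration error has a simple arithmetic explanation: surface distances with opposite signs cancel in the signed mean but not in the RMS. A minimal sketch with illustrative distances (not the study's surface data):

```python
import numpy as np

# Why the signed average under-represents registration error: positive and
# negative surface distances cancel in the signed mean, but every deviation
# contributes to the RMS (illustrative values, not the study's data).

def signed_avg_and_rms(distances_mm):
    d = np.asarray(distances_mm, float)
    return d.mean(), np.sqrt((d ** 2).mean())

# Example: errors of +1 mm and -1 mm cancel to a signed average of 0 mm,
# while the RMS still reports 1 mm of misalignment.
```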

    Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images

    The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings for a precise diagnosis. Medical imaging is one of the most frequently used non-invasive screening methods for acquiring insight into the human body. Medical imaging is not only essential for accurate diagnosis; it can also enable early prevention. Medical data visualization refers to projecting medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without performing any interpretation that may lead to clinical intervention. In contrast to medical visualization, quantification refers to extracting the information in the medical scan to enable clinicians to make fast and accurate decisions. Despite extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often performed independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be adopted in routine clinics due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere, and by performing fast, accurate and fully-automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods. Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking for aiding diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the data sufficient and necessary to solve large-scale problems.

    Contributions to the three-dimensional virtual treatment planning of orthognathic surgery

    Advisors: José Mario De Martino, Luis Augusto Passeri. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: The latest technology available for orthognathic surgery allows the diagnosis and treatment planning of dentofacial deformities based on a three-dimensional (3D) virtual representation of the patient's head. In order to contribute to the improvement of this technology, the work presented in this thesis identified and addressed four problems. The first contribution consisted in testing the validity of the hypothesis that changing the definition of the Frankfort horizontal plane does not produce clinically relevant measurement differences for subjects whose skulls are considerably symmetrical. The results of the analysis performed in this thesis indicate that, contrary to what was presumed, the hypothesis is false. The second contribution is an extension of McNamara's method of cephalometric analysis to produce 3D values. Unlike other methods of 3D cephalometric analysis, the extension produces truly 3D values, does not lose information captured by the original method, and preserves the original geometric definitions of the cephalometric lines and planes. The third contribution consisted in a) establishing cephalometric norms for Brazilian adults of European descent, based on images from cone-beam computed tomography, which produces a more accurate and reliable craniofacial image than cephalometric radiography; and b) evaluating sexual dimorphism, to identify distinct anatomic features between males and females of this population. The fourth contribution consisted in automating the main stage of the technology in question, in which the surgeon performs the positioning of jaw bone segments in the skull. The created method is able to automatically correct the most common dentofacial problems treated by orthognathic surgery, which involve skeletal malocclusion, facial asymmetry, and jaw discrepancy. The contributions of this work were published in international journals in the field of Dentistry and related areas.

    Orthognathic surgical simulation of Class III patients using 3-D cone beam CT images

    Objective: Our aim was to determine whether virtual surgery performed on 3-D cone beam CT models correctly simulated the actual surgical outcome of Class III orthognathic surgical patients. Methods: All data were acquired from the UNC orthognathic surgery stability studies. We created segmentations of the maxillofacial hard tissues of twenty Class III patients. We performed virtual surgeries on cone beam CT images using the CranioMaxilloFacial Application software. Results: The virtual surgical models were superimposed on the models of the actual surgical outcomes. The virtual surgery accurately recreated all surgical movements. Surgery residents showed greater variability in lateral ramus positioning than attending faculty. Conclusions: Our methodology demonstrated valid recreation of the subjects' craniofacial skeleton. It allows the surgeon to better predict surgical outcomes. Future validation of occlusal and soft tissue components would be valuable. Virtual surgical training for surgical residents could be beneficial. Supported by NIDCR DE 005215 and the SA