
    Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model and Multiscale Statistical Features

    The goal of this paper is to digitize craniomaxillofacial (CMF) landmarks automatically, efficiently, and accurately from cone-beam computed tomography (CBCT) images, addressing the challenges posed by large morphological variations across patients and by CBCT image artifacts.

    Clinically applicable artificial intelligence system for dental diagnosis with CBCT

    In this study, a novel deep-learning-based AI system was evaluated for its real-time performance in CBCT imaging diagnosis of anatomical landmarks and pathologies, as well as its clinical effectiveness and safety, when used by dentists in a clinical setting. The system consists of 5 modules: an ROI-localization module (segmentation of teeth and jaws), a tooth-localization-and-numeration module, a periodontitis module, a caries-localization module, and a periapical-lesion-localization module. These modules use CNNs based on state-of-the-art architectures. In total, 1346 CBCT scans were used to train the modules. After annotation and model development, the diagnostic capabilities of the Diagnocat AI system were tested: 24 dentists participated in the clinical evaluation, and 30 CBCT scans were examined by two groups of dentists, one aided by Diagnocat and the other unaided. Overall sensitivity and specificity for the aided and unaided groups were calculated as an aggregate of all conditions. Sensitivity was 0.8537 for the aided group and 0.7672 for the unaided group, while specificity was 0.9672 and 0.9616, respectively; the difference between the groups was statistically significant (p = 0.032). This study showed that the proposed AI system significantly improved the diagnostic capabilities of dentists.
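
    As an illustration of the aggregate metrics reported above, a minimal Python sketch computes sensitivity and specificity from pooled confusion-matrix counts; the counts used here are hypothetical and are not taken from the study.

    # Minimal sketch: aggregate sensitivity and specificity from pooled
    # confusion-matrix counts. The example counts are hypothetical.
    def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
        sensitivity = tp / (tp + fn)  # true-positive rate
        specificity = tn / (tn + fp)  # true-negative rate
        return sensitivity, specificity

    sens, spec = sensitivity_specificity(tp=350, fn=60, tn=1180, fp=40)
    print(f"sensitivity={sens:.4f}, specificity={spec:.4f}")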

    Accuracy of artificial intelligence in the detection and segmentation of oral and maxillofacial structures using cone-beam computed tomography images: a systematic review and meta-analysis

    Purpose: The aim of the present systematic review and meta-analysis was to resolve conflicting evidence on the diagnostic accuracy of artificial intelligence systems in detecting and segmenting oral and maxillofacial structures using cone-beam computed tomography (CBCT) images. Material and methods: We performed a literature search of the Embase, PubMed, and Scopus databases for reports published from their inception to 31 October 2022. We included studies that explored the accuracy of artificial intelligence in the automatic detection or segmentation of oral and maxillofacial anatomical landmarks or lesions using CBCT images. The extracted data were pooled, and the estimates were presented with 95% confidence intervals (CIs). Results: In total, 19 eligible studies were identified. The overall pooled diagnostic accuracy of artificial intelligence was 0.93 (95% CI: 0.91-0.94). This rate was 0.93 (95% CI: 0.89-0.96) for anatomical landmarks, based on 7 studies, and 0.92 (95% CI: 0.90-0.94) for lesions, according to 12 reports. Moreover, the pooled accuracy for detection and segmentation tasks was 0.93 (95% CI: 0.91-0.94) and 0.92 (95% CI: 0.85-0.95), based on 14 and 5 studies, respectively. Conclusions: Excellent accuracy was observed for the detection and segmentation objectives of artificial intelligence using oral and maxillofacial CBCT images. These systems have the potential to streamline oral and dental healthcare services.
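
    To illustrate how per-study accuracy estimates can be pooled into a single estimate with a 95% CI, a minimal Python sketch of fixed-effect inverse-variance pooling on the logit scale follows; this is a generic approach, not necessarily the exact model used in the review, and the per-study counts are hypothetical.

    # Minimal sketch: inverse-variance pooling of accuracy proportions on the
    # logit scale, with a 95% CI. Counts below are hypothetical, not review data.
    import math

    def pool_proportions(events, totals):
        logits, weights = [], []
        for e, n in zip(events, totals):
            p = e / n
            logits.append(math.log(p / (1 - p)))
            weights.append(1.0 / (1.0 / e + 1.0 / (n - e)))  # inverse variance of the logit
        pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        expit = lambda x: 1.0 / (1.0 + math.exp(-x))
        return expit(pooled), expit(pooled - 1.96 * se), expit(pooled + 1.96 * se)

    est, lo, hi = pool_proportions(events=[92, 180, 45], totals=[100, 195, 50])
    print(f"pooled accuracy {est:.3f} (95% CI {lo:.3f}-{hi:.3f})")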

    Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis

    Objectives: The aim of the present systematic review and meta-analysis is to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images. Methods: PubMed/Medline, IEEE Xplore, Scopus, and arXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric image data suitable for 3D landmarking (Problem), a minimum of five landmarks annotated automatically by a deep learning method (Intervention), manual landmarking (Comparison), and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported, as outcome, mean values and standard deviations of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication. Results: The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, whereas 11 studies were used for the meta-analysis. The overall random-effects model revealed a mean error of 2.44 mm, with high heterogeneity (I² = 98.13%, τ² = 1.018, p < 0.001); risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p = 0.012). Conclusion: Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed, and landmark annotation accuracy has improved.
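
    The random-effects pooling and heterogeneity statistics mentioned above (τ², I²) can be reproduced with a standard DerSimonian-Laird estimator; a minimal Python sketch follows, using hypothetical per-study mean errors (in mm) and standard errors rather than data from the review.

    # Minimal sketch: DerSimonian-Laird random-effects pooling of per-study mean
    # errors, with tau^2 and I^2 heterogeneity statistics. Inputs are hypothetical.
    import math

    def dersimonian_laird(means, ses):
        w = [1.0 / se ** 2 for se in ses]                          # fixed-effect weights
        fixed = sum(wi * m for wi, m in zip(w, means)) / sum(w)
        q = sum(wi * (m - fixed) ** 2 for wi, m in zip(w, means))  # Cochran's Q
        df = len(means) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                              # between-study variance
        i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0      # heterogeneity (%)
        w_re = [1.0 / (se ** 2 + tau2) for se in ses]              # random-effects weights
        pooled = sum(wi * m for wi, m in zip(w_re, means)) / sum(w_re)
        se_pooled = math.sqrt(1.0 / sum(w_re))
        return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), tau2, i2

    pooled, ci, tau2, i2 = dersimonian_laird([1.8, 2.9, 2.3], [0.15, 0.22, 0.18])
    print(f"pooled error {pooled:.2f} mm (95% CI {ci[0]:.2f}-{ci[1]:.2f}), "
          f"tau2={tau2:.3f}, I2={i2:.1f}%")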

    Automatic Recognition of Dental Pathologies as Part of a Clinical Decision Support Platform

    The current work was done within the context of the Romanian National Program II (PNII) research project "Application for Using Image Data Mining and 3D Modeling in Dental Screening" (AIMMS). The AIMMS project aims to design a program that can detect anatomical information and possible pathological formations from a collection of digital imaging and communications in medicine (DICOM) images. The main function of the AIMMS platform is to give the user an integrated dental support platform that relies on image processing techniques and 3D modeling. The literature review shows that existing studies on the detection and classification of teeth and dental pathologies are still in their infancy; therefore, the work reported in this article makes a scientific contribution to this field. This article presents the relevant literature review and the algorithms created for the detection of dental pathologies within the AIMMS research project.

    Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images

    The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings to make a precise diagnosis. Medical imaging is one of the most frequently used non-invasive screening methods for gaining insight into the human body. Medical imaging is not only essential for accurate diagnosis; it can also enable early prevention. Medical data visualization refers to projecting the medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without adding any interpretation that might lead to clinical intervention. In contrast to visualization, quantification refers to extracting the information in the medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often performed independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be used in routine clinics due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere and performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods. Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking for aiding diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the sufficient and necessary data for large-scale problems.

    Comparative linear accuracy and reliability of cone beam CT derived 2-dimensional and 3-dimensional images constructed using an orthodontic volumetric rendering program.

    The purpose of this project was to compare the accuracy and reliability of linear measurements made on 2D projections and 3D reconstructions using Dolphin 3D software (Chatsworth, CA), as compared to direct measurements made on human skulls. The linear dimensions between 6 bilateral and 8 mid-sagittal anatomical landmarks on 23 dentate dry human skulls were measured three times by multiple observers using a digital caliper to provide twenty orthodontic linear measurements. The skulls were stabilized and imaged via PSP digital cephalometry as well as CBCT. The PSP cephalograms were imported into Dolphin (Chatsworth, CA, USA), and the 3D volumetric data set was imported into Dolphin 3D (Version 2.3, Chatsworth, CA, USA). Using Dolphin 3D, planar cephalograms as well as 3D volumetric surface reconstructions (3D CBCT) were generated. The linear measurements between landmarks for each of the three modalities were then computed by a single observer three times. For 2D measurements, a one-way ANOVA for each measurement dimension was calculated, as well as a post hoc Scheffé multiple comparison test with the anatomic distance as the control group. 3D measurements were compared to anatomic truth using Student's t test (P ≤ 0.05). The intraclass correlation coefficient (ICC) and absolute linear and percentage error were determined as indices of intraobserver reliability. Our results show that, for 2D mid-sagittal measurements, simulated LC images are accurate and similar to those from PSP images (except for Ba-Na); for bilateral measurements, simulated LC measurements were similar to PSP but less accurate, underestimating dimensions by between 4.7% and 17%. For 3D volumetric renderings, two-thirds of CBCT measurements were statistically different from the actual measurements; however, this is possibly not clinically relevant.
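
    As a small illustration of the comparison against anatomic truth and the error indices named above, a minimal Python sketch runs a paired t-test and computes absolute and percentage error; the distance values are hypothetical, not measurements from the study.

    # Minimal sketch: paired comparison of 3D CBCT distances against caliper
    # ("anatomic truth") distances, plus absolute and percentage error.
    # The measurement values are hypothetical.
    import numpy as np
    from scipy import stats

    truth = np.array([98.4, 52.1, 110.6, 35.2, 88.9])  # caliper distances (mm)
    cbct = np.array([97.1, 51.4, 108.9, 34.6, 87.5])   # 3D CBCT distances (mm)

    abs_err = np.abs(cbct - truth)
    pct_err = 100.0 * abs_err / truth
    t_stat, p_value = stats.ttest_rel(cbct, truth)     # paired t-test

    print(f"mean absolute error {abs_err.mean():.2f} mm ({pct_err.mean():.1f}%), "
          f"paired t-test p = {p_value:.3f}")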