34 research outputs found

    Deep Segmentation of the Mandibular Canal: a New 3D Annotated Dataset of CBCT Volumes

    Inferior Alveolar Nerve (IAN) canal detection has been the focus of multiple recent works in dentistry and maxillofacial imaging. Deep learning-based techniques have achieved promising results in this research field, although the small size of 3D maxillofacial datasets has strongly limited their performance. Researchers have been forced to build their own private datasets, precluding any opportunity to reproduce results and fairly compare proposals. This work describes a novel, large, and publicly available mandibular Cone Beam Computed Tomography (CBCT) dataset, with 2D and 3D manual annotations provided by expert clinicians. Leveraging this dataset and employing deep learning techniques, we are able to improve the state of the art in 3D mandibular canal segmentation. The source code that allows all the reported experiments to be reproduced exactly is released as an open-source project along with this article.

    Inferior Alveolar Nerve Segmentation in CBCT images using Connectivity-Based Selective Re-training

    Inferior Alveolar Nerve (IAN) canal detection in CBCT is an important step in many dental and maxillofacial surgery applications, helping to prevent irreversible damage to the nerve during the procedure. The ToothFairy2023 Challenge aims to establish a 3D maxillofacial dataset in which all volumes carry sparse labels and only a subset carries dense labels, and to improve automatic IAN segmentation. In this work, to avoid the negative impact of sparse labeling, we transform the mixed-supervision problem into a semi-supervised problem. Inspired by self-training via pseudo-labeling, we propose a selective re-training framework based on IAN connectivity. Our method is quantitatively evaluated on the ToothFairy validation cases, achieving a Dice similarity coefficient (DSC) of 0.7956 and a 95% Hausdorff distance (HD95) of 4.4905, and winning first place in the competition. Code is available at https://github.com/GaryNico517/SSL-IAN-Retraining.
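
    As a rough illustration of the connectivity criterion described above, the following sketch (not the authors' released code; the function name and threshold are assumptions) keeps a pseudo-labeled volume for re-training only when the predicted canal forms essentially a single connected component:

```python
# Hypothetical sketch of connectivity-based pseudo-label selection; the paper's
# actual criterion is not reproduced here, only the general idea.
import numpy as np
from scipy import ndimage

def select_pseudo_label(pred_mask: np.ndarray, min_fraction: float = 0.95) -> bool:
    """Accept a binary 3D prediction if its largest connected component
    holds at least `min_fraction` of all foreground voxels."""
    labeled, num = ndimage.label(pred_mask > 0)
    if num == 0:
        return False  # empty prediction, nothing to re-train on
    sizes = ndimage.sum(pred_mask > 0, labeled, index=range(1, num + 1))
    return float(sizes.max() / sizes.sum()) >= min_fraction

# Usage: retained = [m for m in pseudo_masks if select_pseudo_label(m)]
```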

    Automatic mandibular canal detection using a deep convolutional neural network

    The practicability of deep learning techniques has been demonstrated by their successful implementation in varied fields, including diagnostic imaging for clinicians. In accordance with the increasing demands of the healthcare industry, techniques for automatic prediction and detection are being widely researched. Particularly in dentistry, for various reasons, automated mandibular canal detection has become highly desirable. The positioning of the inferior alveolar nerve (IAN), one of the major structures in the mandible, is crucial to prevent nerve injury during surgical procedures. However, automatic segmentation using cone beam computed tomography (CBCT) poses certain difficulties, such as the complex appearance of the human skull, the limited number of datasets, unclear edges, and noisy images. Using work-in-progress automation software, experiments were conducted with models based on 2D SegNet and 2D and 3D U-Nets as preliminary research for a dental segmentation automation tool. The 2D U-Net with adjacent images demonstrates a higher global accuracy of 0.82 than the naïve U-Net variants. The 2D SegNet showed the second-highest global accuracy of 0.96, and the 3D U-Net showed the best global accuracy of 0.99. An automated canal detection system based on deep learning will contribute significantly to efficient treatment planning and to reducing patient discomfort. This study is a preliminary report and an opportunity to explore the application of deep learning to other dental fields.
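
    For reference, the "global accuracy" compared above is plain voxel-wise accuracy; a minimal sketch (not the study's evaluation code) is:

```python
# Minimal sketch of voxel-wise "global accuracy" for a segmentation mask.
import numpy as np

def global_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    # Fraction of all voxels, canal and background alike, labeled correctly.
    return float((pred == gt).mean())
```

    Because the canal occupies only a tiny fraction of a CBCT volume, this metric is dominated by background voxels, which is one reason overlap measures such as Dice and IoU are reported by the other works in this list.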

    Enhancing Patch-Based Learning for the Segmentation of the Mandibular Canal

    Segmentation of the Inferior Alveolar Canal (IAC) is a critical aspect of dentistry and maxillofacial imaging, garnering considerable attention in recent research endeavors. Deep learning techniques have shown promising results in this domain, yet their efficacy is still significantly hindered by the limited availability of 3D maxillofacial datasets. A further challenge is posed by the size of the input volumes, which necessitates a patch-based processing approach that compromises neural network performance due to the absence of global contextual information. This study introduces a novel approach that harnesses the spatial information of the extracted patches and incorporates it into a Transformer architecture, thereby enhancing the segmentation process through prior knowledge of the patch location. Our method improves the Dice score by 4 points with respect to the previous work by Cipriano et al., while also reducing the training steps required by the entire pipeline. By integrating spatial information and leveraging the power of Transformer architectures, this research not only advances the accuracy of IAC segmentation but also streamlines the training process, offering a promising direction for improving dental and maxillofacial image analysis.
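
    The sketch below illustrates one plausible way to inject the normalized 3D location of a patch into its Transformer tokens; the names, shapes, and embedding scheme are assumptions for illustration, not taken from the paper:

```python
# Hypothetical positional conditioning of patch tokens on the patch location.
import torch
import torch.nn as nn

class PatchPositionEmbedding(nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Map the patch origin (z, y, x), normalized to [0, 1], into token space.
        self.mlp = nn.Sequential(
            nn.Linear(3, embed_dim), nn.GELU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, patch_tokens: torch.Tensor, patch_origin: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, embed_dim) tokens extracted from one patch
        # patch_origin: (B, 3) location of the patch inside the full CBCT volume
        pos = self.mlp(patch_origin).unsqueeze(1)  # (B, 1, embed_dim)
        return patch_tokens + pos  # broadcast over the N tokens
```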

    Accuracy of artificial intelligence in the detection and segmentation of oral and maxillofacial structures using cone-beam computed tomography images : a systematic review and meta-analysis

    Purpose: The aim of the present systematic review and meta-analysis was to resolve the conflicts on the diagnostic accuracy of artificial intelligence systems in detecting and segmenting oral and maxillofacial structures using cone-beam computed tomography (CBCT) images. Material and methods: We performed a literature search of the Embase, PubMed, and Scopus databases for reports published from their inception to 31 October 2022. We included studies that explored the accuracy of artificial intelligence in the automatic detection or segmentation of oral and maxillofacial anatomical landmarks or lesions using CBCT images. The extracted data were pooled, and the estimates were presented with 95% confidence intervals (CIs). Results: In total, 19 eligible studies were identified. The overall pooled diagnostic accuracy of artificial intelligence was 0.93 (95% CI: 0.91-0.94). This rate was 0.93 (95% CI: 0.89-0.96) for anatomical landmarks based on 7 studies and 0.92 (95% CI: 0.90-0.94) for lesions according to 12 reports. Moreover, the pooled accuracy for detection and segmentation tasks was 0.93 (95% CI: 0.91-0.94) and 0.92 (95% CI: 0.85-0.95), based on 14 and 5 studies, respectively. Conclusions: Excellent accuracy was observed for the detection and segmentation objectives of artificial intelligence using oral and maxillofacial CBCT images. These systems have the potential to streamline oral and dental healthcare services.
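
    Purely as an illustration of how study-level accuracies can be pooled with a 95% CI, the sketch below applies fixed-effect inverse-variance pooling on the logit scale; the review's actual statistical model may well differ (e.g., a random-effects model):

```python
# Illustrative fixed-effect pooling of study accuracies on the logit scale.
# Assumes 0 < successes < totals for every study (no degenerate proportions).
import numpy as np

def pool_accuracies(successes: np.ndarray, totals: np.ndarray):
    p = successes / totals
    logit = np.log(p / (1.0 - p))
    var = 1.0 / successes + 1.0 / (totals - successes)  # variance of each logit
    w = 1.0 / var
    pooled = (w * logit).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))  # back-transform to a proportion
    return expit(pooled), (expit(lo), expit(hi))
```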

    Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review

    Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning in OMFS. Segmented mandible structures are used to effectively visualize mandible volumes and to quantitatively evaluate particular mandible properties. However, mandible segmentation remains challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as teeth (fillings) or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary to a large extent between individuals. Therefore, mandible segmentation is a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review is to present the available fully and semi-automatic segmentation methods for the mandible published in scientific articles. This review provides a clear description of the scientific advancements in this field to help clinicians and researchers develop novel automatic methods for clinical applications.

    System of gender identification and age estimation from radiography: a review

    Under extreme postmortem conditions, dental radiography examinations can play an essential role in individual identification. In forensic odontology, identification traditionally relies on comparing antemortem dental record radiographs with those obtained at postmortem examination. These traditional methods are vulnerable to oversights and mistakes when identifying unidentified bodies. Digital technology can substantially advance forensic odontology, and automatic identification systems are needed to make the process easier and faster; considerable opportunities for development remain. We aimed to review the full range of recent developments in identifying individuals from panoramic radiographs, studying methods for gender identification, age estimation, radiographic segmentation, and performance analysis, as well as promising future directions.

    Inferior Alveolar Canal Automatic Detection with Deep Learning CNNs on CBCTs: Development of a Novel Model and Release of Open-Source Dataset and Algorithm

    Featured Application: Convolutional neural networks can accurately identify the Inferior Alveolar Canal, rapidly generating precise 3D data. The datasets and source code used in this paper are publicly available, allowing the reported experiments to be reproduced. Introduction: The need for accurate three-dimensional data of anatomical structures is increasing in the surgical field. The development of convolutional neural networks (CNNs) is helping to fill this gap by providing efficient tools to clinicians. Nonetheless, the lack of fully accessible datasets and open-source algorithms is slowing progress in this field. In this paper, we focus on the fully automatic segmentation of the Inferior Alveolar Canal (IAC), which is of immense interest in dental and maxillofacial surgery. Conventionally, only a bidimensional annotation of the IAC is used in common clinical practice. A reliable CNN could save time in daily practice and improve the quality of assistance. Materials and methods: Cone Beam Computed Tomography (CBCT) volumes obtained from a single radiological center using the same machine were gathered and annotated. The course of the IAC was annotated on the CBCT volumes. A secondary dataset with sparse annotations and a primary dataset with both dense and sparse annotations were generated. Three separate experiments were conducted to evaluate the CNN. The IoU and Dice scores of each experiment were recorded as the primary endpoint, while the time needed to produce the annotation was assessed as the secondary endpoint. Results: A total of 347 CBCT volumes were collected and divided into primary and secondary datasets. Across the three experiments, an IoU score of 0.64 and a Dice score of 0.79 were obtained by pre-training the CNN on the secondary dataset, applying a novel deep label propagation model, and then training on the primary dataset. To the best of our knowledge, these results are the best published to date for IAC segmentation. The datasets are publicly available and the algorithm is released as open-source software. On average, the CNN produced a 3D annotation of the IAC in 6.33 s, compared to the 87.3 s needed by a radiology technician to produce a bidimensional annotation. Conclusions: In summary, the following achievements were reached. A new state of the art in terms of Dice score was established, surpassing the threshold of 0.75 commonly considered adequate for use in clinical practice. The CNN can fully automatically produce accurate three-dimensional segmentations of the IAC far more rapidly than the bidimensional annotations commonly used in clinical practice, which are generated in a time-consuming manner. We introduced an innovative deep label propagation method to optimize the performance of the CNN in IAC segmentation. For the first time in this field, the datasets and source code used were publicly released, granting reproducibility of the experiments and helping to improve IAC segmentation.
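
    For reference, the two primary-endpoint metrics mentioned here can be computed on binary 3D masks as in the sketch below (illustrative only, not the released code); Dice and IoU are related by Dice = 2*IoU / (1 + IoU):

```python
# Illustrative IoU and Dice computation for binary 3D segmentation masks.
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    p, g = pred > 0, gt > 0
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    iou = inter / (union + eps)
    dice = 2.0 * inter / (p.sum() + g.sum() + eps)  # equals 2*iou / (1 + iou)
    return float(iou), float(dice)
```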