43 research outputs found

    Automatic Tooth Segmentation from 3D Dental Model using Deep Learning: A Quantitative Analysis of what can be learnt from a Single 3D Dental Model

    3D tooth segmentation is an important task for digital orthodontics. Several deep learning methods have been proposed for automatic tooth segmentation from 3D dental models or intraoral scans. These methods require annotated 3D intraoral scans, and manually annotating such scans is a laborious task. One approach is to devise self-supervision methods that reduce the manual labeling effort. Compared to other types of point cloud data, such as scene or shape point clouds, 3D tooth point cloud data has a very regular structure and a strong shape prior. We examine how much representative information can be learnt from a single 3D intraoral scan, evaluating this quantitatively with ten different methods, of which six are generic point cloud segmentation methods and four are tooth-segmentation-specific methods. Surprisingly, we find that training on a single 3D intraoral scan can yield a Dice score as high as 0.86, whereas the full training set gives a Dice score of 0.94. We conclude that segmentation methods can learn a great deal of information from a single 3D tooth point cloud scan under suitable conditions, e.g. data augmentation. We are the first to quantitatively evaluate and demonstrate the representation learning capability of deep learning methods from a single 3D intraoral scan. This can enable building self-supervision methods for tooth segmentation under extreme data-limitation scenarios by leveraging the available data to the fullest possible extent.
    Comment: accepted to SIPAIM 202

    3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge

    Teeth localization, segmentation, and labeling from intra-oral 3D scans are essential tasks in modern dentistry to enhance dental diagnostics, treatment planning, and population-based studies on oral health. However, developing automated algorithms for teeth analysis presents significant challenges due to variations in dental anatomy, imaging protocols, and limited availability of publicly accessible data. To address these challenges, the 3DTeethSeg'22 challenge was organized in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022, with a call for algorithms tackling teeth localization, segmentation, and labeling from intraoral 3D scans. A dataset comprising a total of 1800 scans from 900 patients was prepared, and each tooth was individually annotated by a human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this dataset. In this study, we present the evaluation results of the 3DTeethSeg'22 challenge. The 3DTeethSeg'22 challenge code can be accessed at: https://github.com/abenhamadou/3DTeethSeg22_challenge
    Comment: 29 pages, MICCAI 2022 Singapore, Satellite Event, Challeng

    3D Scanning, Imaging, and Printing in Orthodontics


    Craniofacial Growth Series Volume 56

    Full text link: https://deepblue.lib.umich.edu/bitstream/2027.42/153991/1/56th volume CF growth series FINAL 02262020.pdf
    Proceedings of the 46th Annual Moyers Symposium and 44th Moyers Presymposium

    3D Morphometric Quantification of Maxillae and Palatal Defects for Patients with Unilateral Cleft Lip and Palate via Auto-segmentation

    The accurate quantification of the complex 3D cleft defect structure is key for optimal treatment planning and patient outcomes. Furthermore, very little is known about the morphometric differences between the affected versus the unaffected maxillary halves. The aim of this study is to characterize the 3D morphometry of the maxillae and cleft defects in non-syndromic patients with unilateral cleft lip and palate. To test the hypothesis that the defect size is positively correlated with the affected maxillary half, CBCT images were acquired from 60 patients presenting with unilateral cleft lip and palate. The machine learning program LINKS was used to segment the maxilla and defect. The height, width, and length of the maxilla and defect were measured from the segmented images. To fully characterize the defect, the distribution probability was mapped from superimposed 3D models, paired t-tests were performed for statistical analysis, and a multiple linear regression was completed. The defect side demonstrated a significant decrease in maxillary length, anterior width, and volume, with mean measurements of 34.31 ± 2.56 mm, 17.83 ± 2.06 mm, and 18.02 ± 3.24 × 10³ mm³, respectively, and an increased maxillary anterior height with a mean of 25.91 ± 4.12 mm as compared to the non-defect side. Defect superimposition displayed a concentrated distribution near the alveolar bone region, and anterior maxillary structures appeared to contribute to defect variability.
    Master of Science
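The side-by-side comparisons above use a paired t-test, where each patient serves as their own control. A sketch with SciPy follows; the measurements are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (mm) of maxillary length for the
# same patients on the defect vs. non-defect side. The values are
# placeholders for illustration, NOT taken from the study.
defect_side = np.array([34.1, 33.8, 35.0, 34.6, 33.5])
non_defect_side = np.array([36.0, 35.4, 36.8, 36.1, 35.2])

# Paired (related-samples) t-test on the per-patient differences.
t_stat, p_value = stats.ttest_rel(defect_side, non_defect_side)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

A significant negative t statistic here would indicate a systematically shorter defect side, mirroring the direction of the study's reported result.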

    ImplantFormer: Vision Transformer based Implant Position Regression Using Dental CBCT Data

    Implant prosthesis is the most appropriate treatment for dentition defect or dentition loss, and it usually involves a surgical guide design process to decide the implant position. However, such design heavily relies on the subjective experience of dentists. In this paper, a transformer-based Implant Position Regression Network, ImplantFormer, is proposed to automatically predict the implant position from oral CBCT data. We propose to predict the implant position using the 2D axial view of the tooth crown area and to fit a centerline of the implant to obtain the actual implant position at the tooth root. A convolutional stem and a decoder are designed to coarsely extract image features before patch embedding and to integrate multi-level feature maps for robust prediction, respectively. As both long-range relationships and local features are involved, our approach better represents global information and achieves better localization performance. Extensive experiments on a dental implant dataset with five-fold cross-validation demonstrate that the proposed ImplantFormer outperforms existing methods.
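The centerline-fitting step described above, predicting implant centres on 2D axial crown slices and extrapolating down to the root, can be sketched roughly as a straight-line fit over slice depth. All coordinates and depths below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical per-slice predictions: estimated implant-centre (x, y)
# at several axial depths z near the tooth crown (units: voxels).
# These numbers are illustrative only.
z = np.array([10.0, 12.0, 14.0, 16.0])
x = np.array([50.1, 50.9, 52.2, 53.0])
y = np.array([40.2, 40.1, 39.8, 39.9])

# Fit x(z) and y(z) as straight lines: the implant "centerline".
ax, bx = np.polyfit(z, x, 1)
ay, by = np.polyfit(z, y, 1)

def centerline(z_query):
    """Extrapolate the fitted centerline to a deeper slice, e.g. root level."""
    return ax * z_query + bx, ay * z_query + by

root_x, root_y = centerline(30.0)  # assumed root-level depth
```

The least-squares line smooths out per-slice prediction noise, which is presumably why a fitted centerline is preferred over reading the position from any single slice.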

    Conformal Predictions Enhanced Expert-guided Meshing with Graph Neural Networks

    Computational Fluid Dynamics (CFD) is widely used in different engineering fields, but accurate simulations depend on proper meshing of the simulation domain. While highly refined meshes may ensure precision, they come with high computational costs. Similarly, adaptive remeshing techniques require multiple simulations, also at great computational cost. This means that the meshing process relies on expert knowledge and years of experience. Automating mesh generation can save significant time and effort and lead to a faster, more efficient design process. This paper presents a machine learning-based scheme that utilizes Graph Neural Networks (GNN) and expert guidance to automatically generate CFD meshes for aircraft models. In this work, we introduce a new 3D segmentation algorithm that outperforms two state-of-the-art models, PointNet++ and PointMLP, for surface classification. We also present a novel approach to project predictions from 3D mesh segmentation models to CAD surfaces using the conformal predictions method, which provides marginal statistical guarantees and robust uncertainty quantification and handling. We demonstrate that the addition of conformal predictions effectively enables the model to avoid under-refinement, and hence failure, in CFD meshing even for weak and less accurate models. Finally, we demonstrate the efficacy of our approach through a real-world case study showing that our automatically generated mesh is comparable in quality to expert-generated meshes and enables the solver to converge and produce accurate results. Furthermore, we compare our approach to the alternative of adaptive remeshing in the same case study and find that our method is 5 times faster in the overall simulation process. The code and data for this project are made publicly available at https://github.com/ahnobari/AutoSurf
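Split-conformal prediction, the generic technique underlying the marginal guarantees mentioned above, calibrates a score threshold on held-out data and then emits prediction *sets* rather than single labels. A minimal sketch (not the authors' implementation; class names and data are illustrative):

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration for a classifier.

    Nonconformity score: 1 - p(true class) on a held-out calibration set.
    Returns the threshold q such that prediction sets {c : 1 - p(c) <= q}
    cover the true class with probability >= 1 - alpha (marginally).
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    k = int(np.ceil((n + 1) * (1 - alpha)))  # finite-sample correction
    return np.sort(scores)[min(k, n) - 1]

def prediction_set(probs, q):
    """Indices of all classes whose nonconformity score is within q."""
    return np.nonzero(1.0 - probs <= q)[0]

# Toy 2-class calibration set (e.g. "needs refinement" vs. "keep coarse").
cal_probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4]])
cal_labels = np.array([0, 0, 0, 0])
q = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
```

Because an uncertain surface yields a set containing both classes, a meshing pipeline can conservatively refine whenever "needs refinement" appears in the set, which is one plausible way such guarantees help avoid under-refinement even with a weak model.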