
    Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review

    Medical imaging techniques such as (cone-beam) computed tomography and magnetic resonance imaging have proven valuable in oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step toward building a personalized 3D digital mandible model for 3D printing and OMFS treatment planning. Segmented mandible structures are used to visualize mandible volumes effectively and to evaluate particular mandible properties quantitatively. However, mandible segmentation remains challenging for both clinicians and researchers because of complex structures and highly attenuating materials, such as dental fillings or metal implants, which easily introduce strong noise and artifacts during scanning. Moreover, the size and shape of the mandible vary considerably between individuals. Mandible segmentation is therefore a tedious, time-consuming task that requires adequate training to perform properly. With the advancement of computer vision, researchers have developed several algorithms over the last two decades to segment the mandible automatically. The objective of this review is to present the available fully automatic and semi-automatic mandible segmentation methods published in the scientific literature. The review gives clinicians and researchers a clear account of the advances in this field to help develop novel automatic methods for clinical applications.
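A conventional, pre-deep-learning baseline of the kind this review covers is intensity thresholding on Hounsfield units (HU). The sketch below is illustrative only (the HU cut-offs are assumed, not taken from the review); it also hints at why metal implants are problematic, since their intensities sit far above normal bone:

```python
import numpy as np

def threshold_bone(ct_hu, lower=300, upper=2000):
    """Naive bone segmentation by HU thresholding (illustrative cut-offs).

    Real pipelines add morphological cleanup and connected-component
    filtering; metal implants (HU >> 2000) and the streak artifacts they
    cause are a main reason simple thresholding fails on the mandible.
    """
    return (ct_hu >= lower) & (ct_hu <= upper)

# Toy "scan": soft tissue (~40 HU), bone (~700/500 HU), metal (~3000 HU)
scan = np.array([[  40.0, 700.0, 3000.0],
                 [-100.0, 500.0,   20.0]])
mask = threshold_bone(scan)  # only the bone-range voxels are kept
```

The metal voxel is excluded by the upper bound, but in practice its streak artifacts corrupt neighboring bone intensities as well, which is one motivation for the learning-based methods the review surveys.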

    Modeling of Craniofacial Anatomy, Variation, and Growth


    Automatic Segmentation of the Mandible for Three-Dimensional Virtual Surgical Planning

    Three-dimensional (3D) medical imaging techniques play a fundamental role in oral and maxillofacial surgery (OMFS). 3D images are used to guide diagnosis, assess disease severity, and support pre-operative planning, intra-operative guidance, and virtual surgical planning (VSP). In oral cancer, where surgical resection requiring partial removal of the mandible is a common treatment, resection surgery is often based on 3D VSP to design an accurate resection plan around the tumor margins. 3D VSP is also used extensively in orthognathic and dental implant surgery to guide mandibular surgery precisely. Image segmentation, the process of creating a 3D volume of the target tissue from head and neck radiographic images, is a useful tool to visualize the mandible and quantify geometric parameters. Studies have shown that 3D VSP requires accurate segmentation of the mandible, which is currently performed by medical technicians, usually manually, in a time-consuming and poorly reproducible process. This thesis presents four algorithms for mandible segmentation from CT and CBCT and contributes novel ideas toward automatic mandible segmentation for 3D VSP. We implement the segmentation approaches on head and neck CT/CBCT datasets and evaluate their performance. Experimental results show that the proposed approaches for mandible segmentation in CT/CBCT datasets achieve high accuracy.
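Segmentation accuracy in work like this is commonly reported with the Dice similarity coefficient, the overlap between a predicted and a reference binary mask. A minimal sketch of the metric itself (not code from the thesis):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ gt| / (|pred| + |gt|). Returns a value in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Flattened toy masks: one true positive, one false positive, one false negative
pred = np.array([1, 1, 0, 0])
gt   = np.array([1, 0, 1, 0])
score = dice(pred, gt)  # 2*1 / (2 + 2) = 0.5
```

In practice the same formula is applied to full 3D volumes, often alongside surface-distance metrics, since Dice alone is insensitive to small but clinically relevant boundary errors.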

    Advances in Groupwise Image Registration


    Relational Reasoning Network (RRN) for Anatomical Landmarking

    Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for craniomaxillofacial (CMF) bones. Available methods require segmentation of the object of interest for precise landmarking. In contrast, our purpose in this study is to perform anatomical landmarking using the inherent relations of CMF bones without explicitly segmenting them. We propose a new deep network architecture, the relational reasoning network (RRN), to accurately learn the local and global relations of landmarks in the CMF region: the mandible, maxilla, and nasal bones. RRN works end to end, utilizing landmark relations learned with dense-block units and without any need for segmentation. Given a few landmarks as input, the proposed system accurately and efficiently localizes the remaining landmarks on the aforementioned bones. For a comprehensive evaluation of RRN, we used cone-beam computed tomography (CBCT) scans of 250 patients. The system identifies landmark locations accurately even in the presence of severe pathology or bone deformation. RRN also reveals unique relationships among the landmarks that let us draw inferences about the informativeness of individual landmark points. RRN is invariant to the order of the landmarks, and it allowed us to discover the optimal configurations (number and location) of landmarks to be localized within the object of interest (mandible) or nearby objects (maxilla and nasal bones). To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations among objects using deep learning. (10 pages, 6 figures, 3 tables)
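As a rough illustration of the relational-reasoning idea (not the authors' architecture; the weights, layer sizes, and sum aggregation below are placeholder assumptions), a relation module can embed every ordered pair of known landmarks and decode an unknown landmark's position from the aggregated pair embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def relation_predict(known_xyz, W_rel, W_out):
    """Toy relation-network step: embed each ordered pair of known
    landmarks, sum the embeddings (making the result invariant to
    landmark order), and decode one unknown (x, y, z) position.
    RRN instead learns these mappings end to end with dense-block
    units; the random weights here are placeholders."""
    pairs = [np.concatenate([a, b]) for a in known_xyz for b in known_xyz]
    h = sum(np.tanh(W_rel @ p) for p in pairs)  # permutation-invariant aggregate
    return W_out @ h                            # decode to (x, y, z)

known = rng.normal(size=(4, 3))      # four given landmark coordinates
W_rel = rng.normal(size=(16, 6))     # pair (6-dim) -> 16-dim embedding
W_out = rng.normal(size=(3, 16))     # embedding -> coordinate decoder
pred = relation_predict(known, W_rel, W_out)
```

The sum over pairs is what gives the order invariance noted in the abstract: permuting the input landmarks leaves the aggregate, and hence the prediction, unchanged.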

    Applications of a Biomechanical Patient Model for Adaptive Radiation Therapy

    Biomechanical patient modeling incorporates physical knowledge of the human anatomy into the image processing required for tracking anatomical deformations during adaptive radiation therapy, especially particle therapy. In contrast to standard image registration, this enforces a bio-fidelic image transformation. This thesis investigates the potential of a kinematic skeleton model and soft-tissue motion propagation for crucial image analysis steps in adaptive radiation therapy. The first application is the integration of the kinematic model into a deformable image registration process (KinematicDIR). For monomodal CT scan pairs, the median target registration error based on skeleton landmarks is smaller than (1.6 ± 0.2) mm. In addition, the concept transfers successfully to otherwise challenging multimodal registration between CT and CBCT as well as CT and MRI scan pairs, yielding median target registration errors on the order of 2 mm. This meets the accuracy requirement for adaptive radiation therapy and is especially interesting for MR-guided approaches. Another emerging aspect of radiotherapy is deep-learning-based organ segmentation. Because radiotherapy-specific labeled data is scarce, training such methods relies heavily on augmentation techniques. In this work, generating synthetically yet realistically deformed scans used as Bionic Augmentation in the training phase improved the predicted segmentations by up to 15% in Dice similarity coefficient, depending on the training strategy. Finally, it is shown that the biomechanical model can be built up from automatic segmentations without deteriorating the KinematicDIR application, which is essential for use in a clinical workflow.
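The median target registration error quoted above is the median Euclidean distance between corresponding landmarks after registration. A minimal sketch of the metric with made-up landmark coordinates (not data from the thesis):

```python
import numpy as np

def median_tre(landmarks_moved, landmarks_ref):
    """Median target registration error: the median Euclidean distance
    between corresponding landmark pairs after registration (in the
    same units as the coordinates, e.g. mm)."""
    d = np.linalg.norm(landmarks_moved - landmarks_ref, axis=1)
    return float(np.median(d))

# Three hypothetical landmark pairs (mm); per-pair distances: 1.0, 2.0, 0.0
moved = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 0.0, 0.0]])
ref   = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 3.0], [2.0, 0.0, 0.0]])
tre = median_tre(moved, ref)  # median of [1.0, 2.0, 0.0] = 1.0 mm
```

The median (rather than the mean) is robust to a few poorly localized landmarks, which is why it is a common summary statistic for registration accuracy.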