57 research outputs found

    Pose-Aware Instance Segmentation Framework from Cone Beam CT Images for Tooth Segmentation

    Individual tooth segmentation from cone beam computed tomography (CBCT) images is an essential prerequisite for an anatomical understanding of orthodontic structures in several applications, such as tooth reformation planning and implant guide simulations. However, the presence of severe metal artifacts in CBCT images hinders the accurate segmentation of each individual tooth. In this study, we propose a neural network for pixel-wise labeling that exploits an instance segmentation framework robust to metal artifacts. Our method comprises three steps: 1) image cropping and realignment by pose regression, 2) metal-robust individual tooth detection, and 3) segmentation. We first extract the alignment information of the patient with pose regression neural networks to obtain a volume-of-interest (VOI) region and realign the input image, which reduces the overlap between tooth bounding boxes. Individual tooth regions are then localized within the realigned VOI image using a convolutional detector. We improve the accuracy of the detector by employing non-maximum suppression and multiclass classification metrics in the region proposal network. Finally, we apply a convolutional neural network (CNN) to perform individual tooth segmentation by converting the pixel-wise labeling task into a distance regression task. Metal-intensive image augmentation is also employed for robust segmentation in the presence of metal artifacts. The results show that our proposed method outperforms other state-of-the-art methods, especially for teeth with metal artifacts. The primary significance of the proposed method is two-fold: 1) the introduction of pose-aware VOI realignment followed by robust tooth detection, and 2) a metal-robust CNN framework for accurate tooth segmentation. Comment: 10 pages, 10 figures
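
    As an illustration of the distance-regression reformulation described above, the sketch below converts a binary tooth mask into a clipped, signed distance target and recovers a mask from a predicted distance map; the clipping value, normalization, and function names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mask_to_distance_target(mask: np.ndarray, clip: float = 10.0) -> np.ndarray:
    """Signed, clipped distance map: positive inside the tooth, negative outside."""
    inside = distance_transform_edt(mask)        # distance to background for tooth pixels
    outside = distance_transform_edt(1 - mask)   # distance to the tooth for background pixels
    signed = np.clip(inside, 0, clip) - np.clip(outside, 0, clip)
    return signed / clip                         # normalize to [-1, 1] for regression

def distance_to_mask(pred: np.ndarray) -> np.ndarray:
    """Recover a binary mask by thresholding the regressed distance map at zero."""
    return (pred > 0).astype(np.uint8)

# Toy 2D slice with a rectangular "tooth"
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 25:45] = 1
target = mask_to_distance_target(mask)
assert np.array_equal(distance_to_mask(target), mask)
```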

    Automatic Segmentation of the Mandible for Three-Dimensional Virtual Surgical Planning

    Three-dimensional (3D) medical imaging techniques play a fundamental role in the field of oral and maxillofacial surgery (OMFS). 3D images are used to guide diagnosis, assess the severity of disease, and support pre-operative planning, intra-operative guidance, and virtual surgical planning (VSP). In the field of oral cancer, where surgical resection requiring the partial removal of the mandible is a common treatment, resection surgery is often based on 3D VSP to accurately design a resection plan around tumor margins. In orthognathic surgery and dental implant surgery, 3D VSP is also extensively used to precisely guide mandibular surgery. Image segmentation of head and neck radiographic images, the process of creating a 3D volume of the target tissue, is a useful tool for visualizing the mandible and quantifying geometric parameters. Studies have shown that 3D VSP requires accurate segmentation of the mandible, which is currently performed by medical technicians. Mandible segmentation is usually done manually, which is a time-consuming and poorly reproducible process. This thesis presents four algorithms for mandible segmentation from CT and CBCT images and contributes novel ideas for the development of automatic mandible segmentation for 3D VSP. We implement the segmentation approaches on head and neck CT/CBCT datasets and then evaluate their performance. Experimental results show that our proposed approaches for mandible segmentation in CT/CBCT datasets exhibit high accuracy.
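
    The abstract does not state which evaluation metrics the thesis uses; as a generic illustration of how an automatic mandible mask can be scored against a manual reference, the sketch below computes the Dice overlap between two binary volumes.

```python
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary volumes (1.0 = perfect overlap)."""
    a = auto_mask.astype(bool)
    m = manual_mask.astype(bool)
    intersection = np.logical_and(a, m).sum()
    denom = a.sum() + m.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3D example: two slightly different cubic "mandible" masks
auto = np.zeros((32, 32, 32), dtype=np.uint8)
auto[8:24, 8:24, 8:24] = 1
manual = np.zeros((32, 32, 32), dtype=np.uint8)
manual[10:24, 8:24, 8:24] = 1
print(f"Dice = {dice(auto, manual):.3f}")
```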

    Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images

    The medical diagnosis process starts with an interview with the patient and continues with a physical exam. In practice, the medical professional may require additional screenings to make a precise diagnosis. Medical imaging is one of the most frequently used non-invasive screening methods for acquiring insight into the human body. Medical imaging is not only essential for accurate diagnosis; it can also enable early prevention. Medical data visualization refers to projecting medical data into a human-understandable format on media such as 2D or head-mounted displays, without performing any interpretation that may lead to clinical intervention. In contrast to medical visualization, quantification refers to extracting information from the medical scan to enable clinicians to make fast and accurate decisions. Despite extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often made independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be adopted in routine clinics due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere and performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods. Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking to aid diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the sufficient and necessary data for large-scale problems.
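
    A rough sketch of the out-of-core idea behind sub-problem (1): the volume is streamed in slabs rather than loaded whole, here to build a maximum-intensity projection as a stand-in for the renderer. The file name, shape, dtype, and slab size are hypothetical, not taken from the thesis.

```python
import numpy as np

SHAPE = (512, 512, 512)   # (z, y, x) voxels, assumed
DTYPE = np.int16          # assumed CT-like intensities
SLAB = 32                 # slices processed per step

def max_intensity_projection(path: str) -> np.ndarray:
    """Compute an axial MIP of a large raw volume without loading it fully into RAM."""
    vol = np.memmap(path, dtype=DTYPE, mode="r", shape=SHAPE)   # lazily mapped, not read at once
    mip = np.full(SHAPE[1:], np.iinfo(DTYPE).min, dtype=DTYPE)
    for z in range(0, SHAPE[0], SLAB):
        slab = np.asarray(vol[z:z + SLAB])     # only this slab is materialized in memory
        np.maximum(mip, slab.max(axis=0), out=mip)
    return mip

# mip = max_intensity_projection("volume_512x512x512_int16.raw")  # hypothetical file
```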

    Modeling of the human upper airway from multimodal 3D dentofacial images

    Ph.D. (Doctor of Philosophy)

    Automated tooth crown design with optimized shape and biomechanics properties

    Despite the large demand for dental restoration each year, the design of crown restorations is still mainly performed through manual software operation, which is tedious and subjective. Moreover, the current design process lacks biomechanical optimization, leading to localized stress concentration and reduced working life. To tackle these challenges, we develop a fully automated algorithm for crown restoration based on deformable model fitting and biomechanical optimization. From a library of dental oral scans, a conditional shape model (CSM) is constructed to represent the inter-teeth shape correlation. By matching the CSM to the patient’s oral scan, the optimal crown shape is estimated so that it is consistent with the surrounding teeth. Next, the crown is seamlessly integrated into the finish line of the preparation via a surface warping step. Finally, porous internal supporting structures of the crown are generated to avoid excessive localized stresses. The algorithm was validated on clinical oral scan data and achieved less than 2 mm mean surface distance compared to the manual designs of experienced human operators. A mechanical simulation was conducted to demonstrate that the internal supporting structures lead to a uniform stress distribution throughout the model.
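
    A minimal sketch of how the reported mean surface distance between an automatic and a manual crown design could be computed, here as a symmetric nearest-neighbour distance between two point samplings of the surfaces; the sampling step and the toy data are assumptions, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric mean nearest-neighbour distance between two (N, 3) point sets, in mm."""
    d_ab, _ = cKDTree(points_b).query(points_a)   # each point of A to its nearest point of B
    d_ba, _ = cKDTree(points_a).query(points_b)   # each point of B to its nearest point of A
    return 0.5 * (d_ab.mean() + d_ba.mean())

# Toy example: two jittered samplings of the same surface patch (coordinates in mm)
rng = np.random.default_rng(0)
base = rng.uniform(0.0, 10.0, size=(2000, 3))
noisy = base + rng.normal(0.0, 0.05, size=base.shape)
print(f"Mean surface distance = {mean_surface_distance(base, noisy):.3f} mm")
```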

    GPU Optimization of Algorithms for Improved Enhancement and Segmentation of Liver Images

    This doctoral thesis investigates GPU acceleration for liver image enhancement and segmentation. With this motivation, detailed research is carried out as a compendium of articles. The work is structured in three scientific contributions: the first addresses enhancement and tumor segmentation, the second explores vessel segmentation, and the last addresses liver segmentation. These works are implemented on the GPU with significant speedups and have substantial scientific impact and relevance within this doctoral thesis. The first work proposes cross-modality-based contrast enhancement for tumor segmentation on the GPU. It takes a target image and a guidance image as input and enhances the low-quality target image by applying a two-dimensional histogram approach. It has further been observed that the enhanced image yields more accurate tumor segmentation using GPU-based dynamic seeded region growing. The second contribution concerns fast parallel gradient-based seeded region growing, in which a static approach is proposed and implemented on the GPU for accurate vessel segmentation. The third contribution describes GPU acceleration of the Chan-Vese model and cross-modality-based contrast enhancement for liver segmentation.
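
    For reference, the sketch below is a plain-Python (CPU) version of seeded region growing, the algorithm the first two contributions parallelize on the GPU; the intensity-difference criterion, tolerance, and toy image are illustrative assumptions, not the thesis's exact growing rule or its GPU implementation.

```python
import numpy as np
from collections import deque

def seeded_region_growing(img: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a 2D region from `seed`, accepting 4-neighbours within `tol` of the seed intensity."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    ref = float(img[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx] \
                    and abs(float(img[ny, nx]) - ref) <= tol:
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

# Toy example: a bright square (a stand-in for a lesion) on a dark background
img = np.zeros((64, 64), dtype=np.float32)
img[16:48, 16:48] = 100.0
mask = seeded_region_growing(img, seed=(32, 32), tol=10.0)
print(mask.sum())  # 1024 pixels: the whole bright square
```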