
    Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Get PDF
    Progress in Biomedical Optics and Imaging, vol. 16, nr. 42. The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a methodology that is simultaneously fast, accurate, and operator-independent is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant pose using cone-beam computed tomography (CBCT) and prior knowledge of the implant model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using two operator-defined points on the implant's main axis; (2) a simulated CBCT volume of the known implant model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, and the implant's pose is extracted from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed across 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements. This work has been supported by FCT – Fundação para a Ciência e Tecnologia in the scope of the Ph.D. grants SFRH/BD/68270/2010 and SFRH/BD/93443/2013 and the project EXPL/BBB-BMD/2146/2013.
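The pose read-out in step (3) above amounts to applying the optimal rigid transformation to the implant model's reference geometry. The sketch below is illustrative only; the frame conventions (implant apex at the model origin, main axis along +z) are assumptions, not the paper's actual conventions:

```python
import numpy as np

def implant_pose_from_transform(T, apex_local=np.zeros(3),
                                axis_local=np.array([0.0, 0.0, 1.0])):
    """Extract implant position and orientation from a 4x4 rigid transform.

    T maps implant-model coordinates into patient CBCT coordinates, as a
    voxel-based rigid registration would produce.  The implant apex and
    main axis are given in the model's own frame (hypothetical defaults:
    origin and +z).
    """
    R, t = T[:3, :3], T[:3, 3]
    position = R @ apex_local + t   # implant apex in patient space
    axis = R @ axis_local           # implant main axis in patient space
    axis = axis / np.linalg.norm(axis)
    return position, axis

def angular_misfit_deg(axis_a, axis_b):
    """Angle (degrees) between two implant axes, e.g. estimated vs. reference."""
    c = np.clip(abs(np.dot(axis_a, axis_b)), -1.0, 1.0)
    return float(np.degrees(np.arccos(c)))
```

The angular-misfit helper mirrors how angular errors between an estimated and a reference implant axis are typically reported.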

    Accuracy and the role of experience in dynamic computer guided dental implant surgery: An in-vitro study

    Get PDF
    Final Degree Project (Treball Final de Grau) in Dentistry, Faculty of Medicine and Health Sciences, Universitat de Barcelona, academic year 2017-2018. Director: Rui Pedro Barbosa de Figueiredo. Background: The development of new imaging technologies such as cone-beam computed tomography (CBCT) has enabled great advances in pre-surgical implant planning. Computer-assisted surgery (CAS) in implantology has been described with the aim of minimizing the differences between the preoperative plan and the final treatment outcome. Dynamic CAS, also known as a surgical navigation system, makes it possible to determine the real position of the surgical drill on the reconstructed 3D image of the CBCT, guiding the surgeon to the preoperatively planned position while the surgical procedure is performed. Aim: To assess accuracy and the role of the surgeon's experience by comparing implant placement using the freehand method and a dynamic navigation system. Materials and methods: A randomized in-vitro study was conducted. Six resin mandible models and 36 implants were used. Two investigators with different degrees of clinical experience placed implants using either the CAS Navident® system (Navident group) or the conventional freehand method (freehand group). Accuracy was assessed by overlapping the virtual presurgical placement of the implant in a CBCT with the real position in the postoperative CBCT. Descriptive and bivariate analyses of the data were performed. Results: The Navident group had significantly higher accuracy for all studied variables except the 3D entry and depth deviations. The system significantly enhanced the accuracy of the inexperienced professional in several outcome variables compared with the freehand implant placement method. On the other hand, when the implants were placed by the experienced clinician, the Navident® system only improved the angular deviation. When the two levels of experience were compared, significant differences were found only with the freehand method; implants placed with the Navident® system had similar deviations. Conclusion: The dynamic computer-assisted surgery system Navident® allows more accurate implant placement than the conventional freehand method, regardless of the surgeon's experience. However, the system seems to offer important advantages to inexperienced professionals, since they can significantly reduce their deviations and achieve the same results as experienced clinicians.
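The accuracy outcomes used in guided-implant-surgery studies such as the one above (entry, apex, depth, and angular deviation between the planned and placed implant) can be computed from two pairs of 3D points. This is an illustrative sketch, not the study's actual measurement software:

```python
import numpy as np

def implant_deviations(entry_p, apex_p, entry_a, apex_a):
    """Deviation metrics between a planned implant (entry_p -> apex_p) and
    the actually placed implant (entry_a -> apex_a); all points in mm.

    Returns the common accuracy outcomes: 3D entry deviation, 3D apex
    deviation, signed depth deviation along the planned axis, and the
    angular deviation in degrees.
    """
    axis_p = (apex_p - entry_p) / np.linalg.norm(apex_p - entry_p)
    axis_a = (apex_a - entry_a) / np.linalg.norm(apex_a - entry_a)
    entry_dev = float(np.linalg.norm(entry_a - entry_p))
    apex_dev = float(np.linalg.norm(apex_a - apex_p))
    # Positive depth deviation = implant placed deeper than planned.
    depth_dev = float(np.dot(entry_a - entry_p, axis_p))
    ang_dev = float(np.degrees(np.arccos(
        np.clip(np.dot(axis_p, axis_a), -1.0, 1.0))))
    return entry_dev, apex_dev, depth_dev, ang_dev
```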

    DEEP LEARNING IN COMPUTER-ASSISTED MAXILLOFACIAL SURGERY

    Get PDF

    Accurate 3D-reconstruction and -navigation for high-precision minimal-invasive interventions

    Get PDF
    Current lateral skull base surgery is largely invasive, since it requires wide exposure and direct visualization of anatomical landmarks to avoid damaging critical structures. A multi-port approach aiming to reduce this invasiveness has recently been investigated: three canals are drilled from the skull surface to the surgical region of interest, the first canal for the instrument, the second for the endoscope, and the third for material removal or an additional instrument. The transition to minimally invasive approaches in lateral skull base surgery requires sub-millimeter accuracy and high outcome predictability, which places high demands on image acquisition as well as on navigation. Computed tomography (CT) is a non-invasive imaging technique that allows visualization of the patient's internal organs. Planning optimal drill channels based on patient-specific models requires highly accurate three-dimensional (3D) CT images. This thesis focuses on the reconstruction of high-quality CT volumes. Two conventional imaging systems are investigated: spiral CT scanners and C-arm cone-beam CT (CBCT) systems. Spiral CT scanners typically acquire volumes with anisotropic resolution, i.e. the voxel spacing in the slice-selection direction is larger than the in-plane spacing. A new super-resolution reconstruction approach is proposed to recover images with high isotropic resolution from two orthogonal low-resolution CT volumes. C-arm CBCT systems offer CT-like 3D imaging capabilities while being appropriate for interventional suites. A main drawback of these systems is the CT artifacts commonly encountered due to several limitations of the imaging system, such as mechanical inaccuracies. This thesis contributes new methods to enhance CBCT reconstruction quality by addressing two main reconstruction artifacts: misalignment artifacts caused by mechanical inaccuracies, and metal artifacts caused by the presence of metal objects in the scanned region. CBCT scanners are suitable for intra-operative image-guided navigation; for instance, they can be used to control the drilling process based on intra-operatively acquired 2D fluoroscopic images. Successful navigation requires an accurate estimate of the C-arm pose relative to the patient anatomy and the associated surgical plan. A new algorithm has been developed to fulfill this task with high precision. The performance of the introduced methods is demonstrated on simulated and real data.
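The complementary-resolution idea behind reconstructing an isotropic volume from two orthogonal anisotropic acquisitions can be shown in toy form. The thesis describes an iterative super-resolution reconstruction; the sketch below only illustrates naive upsample-and-average fusion of two volumes whose thick-slice directions are orthogonal:

```python
import numpy as np

def fuse_orthogonal_volumes(vol_axial, vol_coronal, factor):
    """Toy fusion of two anisotropic CT volumes into one isotropic grid.

    vol_axial   has coarse spacing along axis 0 (its slice direction),
    vol_coronal has coarse spacing along axis 1 (its slice direction).
    Each is nearest-neighbour upsampled by `factor` along its coarse axis
    and the two are averaged, so each output voxel gets high-resolution
    detail from at least one of the inputs.
    """
    up_axial = np.repeat(vol_axial, factor, axis=0)
    up_coronal = np.repeat(vol_coronal, factor, axis=1)
    assert up_axial.shape == up_coronal.shape, "volumes must cover the same grid"
    return 0.5 * (up_axial + up_coronal)
```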

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    Get PDF
    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, the surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are central concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered merely a visualization device improving traditional workflows. Consequently, the technology has yet to gain the maturity required to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We show how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are fully co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out effective interventions with reduced complications.

    Image-Guided Interventions Using Cone-Beam CT: Improving Image Quality with Motion Compensation and Task-Based Modeling

    Get PDF
    Cone-beam CT (CBCT) is an increasingly important modality for intraoperative 3D imaging in interventional radiology (IR). However, CBCT exhibits several factors that diminish image quality, notably the major challenges of patient motion and detectability of low-contrast structures, which motivate the work undertaken in this thesis. A 3D–2D registration method is presented to compensate for rigid patient motion. The method is fiducial-free, works naturally within standard clinical workflow, and is applicable to image-guided interventions in locally rigid anatomy, such as the head and pelvis. A second method addresses the challenge of deformable motion, presenting a 3D autofocus concept that is purely image-based and does not require additional fiducials, tracking hardware, or prior images. The proposed method is intended to improve interventional CBCT in scenarios where patient motion may not be sufficiently managed by immobilization and breath-hold, such as the prostate, liver, and lungs. Furthermore, the work aims to improve the detectability of low-contrast structures by computing source–detector trajectories that are optimal for a particular imaging task. The approach is applicable to CBCT systems capable of general source–detector positioning, such as a robotic C-arm. A “task-driven” analytical framework is introduced, various objective functions and optimization methods are described, and the method is investigated via simulation and phantom experiments and translated to task-driven source–detector trajectories on a clinical robotic C-arm to demonstrate the potential for improved image quality in intraoperative CBCT. Overall, the work demonstrates how novel optimization-based imaging techniques can address major challenges to CBCT image quality.
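The image-based autofocus concept above can be sketched as a search over candidate motion trajectories, scoring the reconstruction produced by each candidate with a sharpness metric. The gradient-magnitude metric below is a stand-in; published autofocus methods typically use measures such as gradient entropy or total variation:

```python
import numpy as np

def sharpness(volume):
    """Simple autofocus metric: mean gradient magnitude of a 3D volume.
    Motion-corrupted reconstructions tend to be blurrier, so sharper
    candidates are preferred."""
    gz, gy, gx = np.gradient(volume.astype(float))
    return float(np.mean(np.sqrt(gx**2 + gy**2 + gz**2)))

def best_motion_candidate(candidates):
    """Return the index of the sharpest reconstruction.  `candidates` is a
    list of reconstructed volumes, one per trial motion trajectory."""
    scores = [sharpness(v) for v in candidates]
    return int(np.argmax(scores))
```

In a real pipeline the candidate volumes would come from re-reconstructing the same projection data under different motion-parameter estimates, with the search driven by an optimizer rather than an exhaustive list.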

    Quantitative Analysis of Three-Dimensional Cone-Beam Computed Tomography Using Image Quality Phantoms

    Get PDF
    In the clinical setting, weight-bearing static 2D radiographic imaging and supine 3D radiographic imaging modalities are used to evaluate radiographic changes such as joint space narrowing, subchondral sclerosis, and osteophyte formation. These modalities, however, either cannot distinguish between tissues of similar density (2D imaging) or do not accurately represent functional joint loading (supine 3D imaging). Recent advances in cone-beam CT (CBCT) have allowed scanner designs that can obtain weight-bearing 3D volumetric scans. The purpose of this thesis was to analyze, design, and implement advanced imaging techniques to quantify image quality parameters of reconstructed image volumes generated by a commercially available CBCT scanner and a novel ceiling-mounted CBCT scanner. In addition, imperfections during rotation of the novel ceiling-mounted CBCT scanner were characterized using a 3D-printed calibration object with a modified single-marker-bead method, and prospective geometric calibration matrices.
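A geometric calibration matrix of the kind mentioned above encodes, for each gantry angle, how a 3D point (such as a calibration bead centre) maps onto the flat-panel detector. A minimal sketch of that forward projection, assuming a standard 3x4 homogeneous projection matrix:

```python
import numpy as np

def project_point(P, x_world):
    """Project a 3D point onto the detector using a 3x4 projection matrix P,
    as used in CBCT geometric calibration.  Returns detector coordinates
    (u, v) in pixels."""
    xh = np.append(x_world, 1.0)  # homogeneous coordinates
    u, v, w = P @ xh
    return np.array([u / w, v / w])
```

Calibration proceeds in the opposite direction: observed bead positions across the rotation are used to estimate P per view, and residual projection errors reveal the scanner's mechanical imperfections.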

    Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images

    Get PDF
    The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings to make a precise diagnosis. Medical imaging is one of the most frequently used non-invasive screening methods for acquiring insight into the human body. Medical imaging is not only essential for accurate diagnosis; it can also enable early prevention. Medical data visualization refers to projecting medical data into a human-understandable format on media such as 2D or head-mounted displays, without performing any interpretation that might lead to clinical intervention. In contrast to visualization, quantification refers to extracting the information in the medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often pursued independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be adopted in routine clinics due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms within the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere and performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods. Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering; (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data; (3) automatic landmarking to aid diagnosis and surgical planning; and (4) novel artificial intelligence algorithms to determine the data sufficient and necessary for addressing large-scale problems.