Image-Guided Interventions Using Cone-Beam CT: Improving Image Quality with Motion Compensation and Task-Based Modeling

Abstract

Cone-beam CT (CBCT) is an increasingly important modality for intraoperative 3D imaging in interventional radiology (IR). However, CBCT image quality is diminished by several factors, most notably patient motion and the limited detectability of low-contrast structures, which motivate the work undertaken in this thesis. A 3D–2D registration method is presented to compensate for rigid patient motion. The method is fiducial-free, fits naturally within standard clinical workflow, and is applicable to image-guided interventions in locally rigid anatomy, such as the head and pelvis. A second method addresses the challenge of deformable motion via a 3D autofocus concept that is purely image-based and requires no additional fiducials, tracking hardware, or prior images. The proposed method is intended to improve interventional CBCT in scenarios where patient motion cannot be sufficiently managed by immobilization and breath-hold, such as imaging of the prostate, liver, and lungs. Furthermore, the work aims to improve the detectability of low-contrast structures by computing source–detector trajectories that are optimal for a particular imaging task. The approach is applicable to CBCT systems capable of general source–detector positioning, such as a robotic C-arm. A “task-driven” analytical framework is introduced, various objective functions and optimization methods are described, and the method is investigated via simulation and phantom experiments and translated to task-driven source–detector trajectories on a clinical robotic C-arm, demonstrating the potential for improved image quality in intraoperative CBCT. Overall, the work demonstrates how novel optimization-based imaging techniques can address major challenges to CBCT image quality.