
    A Learning-based Method for Online Adjustment of C-arm Cone-Beam CT Source Trajectories for Artifact Avoidance

    During spinal fusion surgery, screws are placed close to critical nerves, suggesting the need for highly accurate screw placement. Verifying screw placement on high-quality tomographic imaging is essential. C-arm cone-beam CT (CBCT) provides intraoperative 3D tomographic imaging that would allow for immediate verification and, if needed, revision. However, the reconstruction quality attainable with commercial CBCT devices is insufficient, predominantly due to severe metal artifacts in the presence of pedicle screws. These artifacts arise from a mismatch between the true physics of image formation and the idealized model thereof assumed during reconstruction. Prospectively acquiring views of the anatomy that are least affected by this mismatch can, therefore, improve reconstruction quality. We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a certain task, i.e., verification of screw placement. Adjustments are performed on the fly using a convolutional neural network that regresses a quality index for possible next views given the current x-ray image. Adjusting the CBCT trajectory to acquire the recommended views results in non-circular source orbits that avoid poor images and, thus, data inconsistencies. We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory. Using both realistically simulated data and real CBCT acquisitions of a semi-anthropomorphic phantom, we show that tomographic reconstructions of the resulting scene-specific CBCT acquisitions exhibit improved image quality, particularly in terms of metal artifacts. Since the optimization objective is implicitly encoded in a neural network, the proposed approach overcomes the need for 3D information at run-time.
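    The on-the-fly adjustment described above can be sketched as a greedy loop: at each step, a learned scorer rates candidate next views and the source moves to the best one. The toy quality model below stands in for the CNN regressor, and all names and numbers are illustrative assumptions, not the authors' implementation:

    ```python
    import math

    def greedy_next_view(current_angle, candidate_offsets, score_fn):
        """Pick the candidate next view with the highest predicted quality.

        `score_fn` stands in for the CNN that regresses a quality index
        for each possible next view given the current x-ray image.
        """
        best_offset, best_score = None, -math.inf
        for offset in candidate_offsets:
            s = score_fn(current_angle + offset)
            if s > best_score:
                best_offset, best_score = offset, s
        return current_angle + best_offset

    def plan_trajectory(start_angle, n_views, candidate_offsets, score_fn):
        """Build a non-circular source orbit by chaining greedy choices."""
        angles = [start_angle]
        for _ in range(n_views - 1):
            angles.append(greedy_next_view(angles[-1], candidate_offsets, score_fn))
        return angles

    # Toy quality model: views near 45 degrees are "least affected" by metal.
    toy_score = lambda a: -abs(a - 45.0)
    orbit = plan_trajectory(0.0, 5, [5.0, 10.0, 15.0], toy_score)
    ```

    The orbit takes large steps while far from the preferred view and smaller ones once it arrives, which is the qualitative behavior a scene-specific trajectory would show.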

    Object Specific Trajectory Optimization for Industrial X-ray Computed Tomography

    In industrial settings, X-ray computed tomography scans are a common tool for the inspection of objects. Often the object cannot be imaged using standard circular or helical trajectories because of constraints in space or time. Compared to medical applications, the variance in size and materials is much larger. Adapting the acquisition trajectory to the object is beneficial and sometimes inevitable. There are currently no sophisticated methods for this adaptation; typically, the operator places the object according to their best knowledge. We propose a detectability-index-based optimization algorithm that determines the scan trajectory from a CAD model of the object. The detectability index is computed solely from simulated projections for multiple user-defined features. By adapting the features, the algorithm is tailored to different imaging tasks. Performance on simulated and measured data was qualitatively and quantitatively assessed. The results illustrate that our algorithm not only allows more accurate detection of features but also delivers images with high overall quality in comparison to standard-trajectory reconstructions. This work makes it possible to reduce the number of projections, and in consequence scan time, by introducing an optimization algorithm that composes an object-specific trajectory.
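    The selection step could be sketched as follows, using a simplified nonprewhitening detectability index over discrete frequency bins. Here `simulate` is a hypothetical stand-in for the CAD-based projection simulator, and all values are toy numbers, not the paper's model:

    ```python
    def detectability_index(task_power, mtf, nps):
        """Simplified nonprewhitening detectability index over discrete
        frequency bins: d'^2 = (sum W^2 T^2)^2 / sum(W^2 T^4 NPS)."""
        num = sum(w * w * t * t for w, t in zip(task_power, mtf)) ** 2
        den = sum(w * w * t ** 4 * n for w, t, n in zip(task_power, mtf, nps))
        return (num / den) ** 0.5 if den else 0.0

    def best_trajectory(trajectories, features, simulate):
        """Pick the trajectory maximizing worst-case detectability over all
        user-defined features. `simulate(traj, feat)` is a hypothetical
        stand-in returning (task_power, mtf, nps) from simulated projections."""
        def score(traj):
            return min(detectability_index(*simulate(traj, f)) for f in features)
        return max(trajectories, key=score)

    # Toy simulator: the adapted trajectory yields a lower noise power spectrum.
    sim = lambda traj, feat: ([1.0, 1.0], [1.0, 1.0],
                              [0.5, 0.5] if traj == "optimized" else [2.0, 2.0])
    choice = best_trajectory(["circular", "optimized"], [0], sim)
    ```

    Scoring by the worst feature, rather than the average, reflects that every user-defined feature must remain detectable in the final reconstruction.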

    Automated Image-Based Procedures for Adaptive Radiotherapy


    Robotically Steered Needles: A Survey of Neurosurgical Applications and Technical Innovations

    This paper surveys both the clinical applications and main technical innovations related to steered needles, with an emphasis on neurosurgery. Technical innovations generally center on curvilinear robots that can adopt a complex path that circumvents critical structures and eloquent brain tissue. These advances include several needle-steering approaches, consisting of tip-based, lengthwise, base-motion-driven, and tissue-centered steering strategies. This paper also describes foundational mathematical models for steering, citing potential fields, nonholonomic bicycle-like models, spring models, and stochastic approaches. In addition, practical path-planning systems are addressed, including uncertainty modeling in path planning, intraoperative soft-tissue shift estimation through imaging scans acquired during the procedure, and simulation-based prediction. Neurosurgical scenarios have so far tended to emphasize straight needles, and span deep-brain stimulation (DBS), stereoelectroencephalography (SEEG), intracerebral drug delivery (IDD), stereotactic brain biopsy (SBB), stereotactic needle aspiration for hematomas, cysts, and abscesses, and brachytherapy, as well as thermal ablation of brain tumors and seizure-generating regions. We emphasize therapeutic considerations and complications that have been documented in conjunction with these applications.
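    As a concrete illustration of the nonholonomic bicycle-like models cited above, here is a minimal planar kinematic sketch of a bevel-tip needle whose tip traces a circular arc of constant curvature; the function name and parameters are assumptions of this sketch, not from the survey:

    ```python
    import math

    def step_needle(x, y, theta, kappa, ds):
        """Advance a bevel-tip needle by arc length ds under a planar
        nonholonomic bicycle-like model: the tip follows a circular arc
        of curvature kappa fixed by the bevel orientation."""
        if abs(kappa) < 1e-12:                     # straight segment
            return x + ds * math.cos(theta), y + ds * math.sin(theta), theta
        dtheta = kappa * ds                        # heading change along the arc
        x_new = x + (math.sin(theta + dtheta) - math.sin(theta)) / kappa
        y_new = y + (math.cos(theta) - math.cos(theta + dtheta)) / kappa
        return x_new, y_new, theta + dtheta

    # Straight insertion (kappa = 0) vs. a quarter-circle arc (kappa = 1)
    straight = step_needle(0.0, 0.0, 0.0, 0.0, 1.0)
    arc = step_needle(0.0, 0.0, 0.0, 1.0, math.pi / 2)
    ```

    Axial rotation of the needle shaft reorients the bevel and flips the sign of kappa, which is how tip-based steering reaches targets off the initial insertion line.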

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
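    One of the simplest optical techniques in this family is passive stereo from a rectified laparoscope pair, where depth follows from disparity via z = f·b/d. The focal length, baseline, and pixel values below are illustrative assumptions, not from any reviewed system:

    ```python
    def depth_from_disparity(focal_px, baseline_mm, disparity_px):
        """Depth of a matched point from a rectified stereo pair: z = f*b/d.
        Returns None where disparity is zero or negative (no valid match)."""
        if disparity_px <= 0:
            return None
        return focal_px * baseline_mm / disparity_px

    def reconstruct_point(focal_px, cx, cy, baseline_mm, u, v, disparity_px):
        """Back-project pixel (u, v) with its disparity into 3D camera
        coordinates (millimeters), using the pinhole model."""
        z = depth_from_disparity(focal_px, baseline_mm, disparity_px)
        if z is None:
            return None
        return ((u - cx) * z / focal_px, (v - cy) * z / focal_px, z)

    # Toy calibration: f = 700 px, 5 mm baseline, principal point (320, 240)
    p = reconstruct_point(700.0, 320.0, 240.0, 5.0, 320.0, 240.0, 35.0)
    ```

    Dense surface reconstruction repeats this per pixel after stereo matching; the hard part in laparoscopy is robust matching on specular, deforming tissue, not the triangulation itself.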

    Novel PET Systems and Image Reconstruction with Actively Controlled Geometry

    Positron emission tomography (PET) provides in vivo measurement of imaging ligands that are labeled with a positron-emitting radionuclide. Since its invention, most PET scanners have been designed with a group of gamma-ray detectors arranged in a ring geometry accommodating the whole patient body. Virtual Pinhole PET incorporates higher-resolution detectors placed close to the region of interest (ROI) within the imaging field of view (FOV) of the whole-body scanner, providing better image resolution and contrast recovery. To further adapt this technology to a wider range of diseases, we proposed a second generation of Virtual Pinhole PET using actively controlled high-resolution detectors integrated on a robotic arm. With the whole system integrated into a commercial PET scanner, we achieved positioning repeatability within 0.5 mm. Monte Carlo simulation shows that by focusing the high-resolution detectors on a specific organ of interest, we can achieve better resolution, sensitivity, and contrast recovery. In another direction, we proposed a portable, versatile, and low-cost PET imaging system for point-of-care (POC) applications. It consists of one or more movable detectors in coincidence with a detector array behind the patient. The movable detectors make it possible for the operator to control the scanning trajectory freely to achieve optimal coverage and sensitivity for patient-specific imaging tasks. Since this system does not require a conventional full-ring geometry, it can be built portable and at low cost for bedside or intraoperative use. We developed a proof-of-principle prototype that consists of a compact high-resolution silicon photomultiplier detector mounted on a hand-held probe and a half ring of conventional detectors. The probe is attached to a MicroScribe device, which tracks the location and orientation of the probe as it moves.
We also performed Monte Carlo simulations for two POC PET geometries with time-of-flight (TOF) capability. To support the development of such PET systems with unconventional geometries, a fully 3D image reconstruction framework has been developed for PET systems with arbitrary geometry. For POC PET and the second-generation robotic Virtual Pinhole PET, new challenges emerge, and our targeted applications require more efficient image reconstruction that provides imaging results in near real time. Inspired by previous work, we developed a list-mode, GPU-based image reconstruction framework with the capability to model dynamically changing geometry. An ordered-subset MAP-EM algorithm is implemented on a multi-GPU platform to achieve fast reconstruction on the order of seconds per iteration at practical data rates. We tested this using both experimental and simulated data, for a whole-body PET scanner and for unconventional PET scanners. Future applications of adaptive imaging require near-real-time performance for large statistics, which will require additional acceleration of this framework.
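    The list-mode EM update at the heart of such a framework can be sketched at toy scale (pure Python, no subsets, prior, or GPU; event and sensitivity values are illustrative, and the real system additionally models arbitrary, time-varying geometry):

    ```python
    def listmode_mlem(events, sensitivity, n_voxels, n_iters):
        """List-mode MLEM at toy scale. Each event is a list of
        (voxel_index, weight) system-matrix entries along its line of
        response (LOR); sensitivity[j] is the total weight into voxel j
        over all detectable LORs."""
        img = [1.0] * n_voxels                      # uniform initial image
        for _ in range(n_iters):
            backproj = [0.0] * n_voxels
            for lor in events:
                fwd = sum(w * img[j] for j, w in lor)   # forward-project event
                if fwd > 0.0:
                    for j, w in lor:
                        backproj[j] += w / fwd          # back-project the ratio
            img = [img[j] * backproj[j] / sensitivity[j] if sensitivity[j] else 0.0
                   for j in range(n_voxels)]
        return img

    # Toy data: three events through voxel 0, one through voxel 1
    events = [[(0, 1.0)]] * 3 + [[(1, 1.0)]]
    img = listmode_mlem(events, [1.0, 1.0], n_voxels=2, n_iters=5)
    ```

    Because each event carries its own system-matrix row, a geometry change mid-scan only alters how new events are projected, which is what makes list-mode reconstruction a natural fit for actively controlled detectors.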

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are central concerns in image-guided therapies. State-of-the-art approaches often support the surgeon by using external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often been considered only as a visualization device that improves traditional workflows. Consequently, the technology is only now gaining the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies.
    The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
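    The frustum-chaining idea, moving spatial information between the imager, the shared room frame, and a head-mounted display, reduces to composing rigid transforms. A minimal sketch with 4x4 homogeneous matrices follows; the frame names and translation values are hypothetical calibration data, not from the dissertation:

    ```python
    def compose(A, B):
        """Product of two 4x4 homogeneous transforms (apply B, then A)."""
        return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    def apply_tf(T, p):
        """Map a 3D point through a 4x4 homogeneous transform."""
        v = (p[0], p[1], p[2], 1.0)
        return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

    def translation(tx, ty, tz):
        """Pure-translation rigid transform."""
        return [[1.0, 0.0, 0.0, tx],
                [0.0, 1.0, 0.0, ty],
                [0.0, 0.0, 1.0, tz],
                [0.0, 0.0, 0.0, 1.0]]

    # Hypothetical calibration: imager and HMD each localized in the room frame.
    T_room_from_imager = translation(1.0, 0.0, 0.0)
    T_hmd_from_room = translation(0.0, 2.0, 0.0)
    T_hmd_from_imager = compose(T_hmd_from_room, T_room_from_imager)

    # A point at the imager's origin, expressed in the HMD frame
    point_in_hmd = apply_tf(T_hmd_from_imager, (0.0, 0.0, 0.0))
    ```

    Co-localizing every device against the same room frame means any pair of frustums can exchange spatial information by chaining two such transforms, without pairwise calibration.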