
    Virtual Reality Aided Mobile C-arm Positioning for Image-Guided Surgery

    Image-guided surgery (IGS) is a minimally invasive procedure that relies on the pre-operative volume in conjunction with intra-operative X-ray images, which are commonly captured by mobile C-arms to confirm surgical outcomes. Although some commercial navigation systems are currently employed, one critical issue of such systems is that they neglect the radiation exposure to the patient and surgeons. In practice, when one surgical stage is finished, several X-ray images have to be acquired repeatedly by the mobile C-arm to obtain the desired image, and excessive radiation exposure may increase the risk of complications. It is therefore necessary to develop a positioning system for mobile C-arms that achieves one-time imaging and avoids the additional radiation exposure. In this dissertation, a mobile C-arm positioning system is proposed with the aid of virtual reality (VR). The surface model of the patient is reconstructed by a camera mounted on the mobile C-arm. A novel registration method is proposed to align this model with the pre-operative volume based on a tracker, so that surgeons can visualize the hidden anatomy directly from the outside view and determine a reference pose for the C-arm. Considering the congested operating room, the C-arm is modeled as a manipulator with a movable base to maneuver the image intensifier to the desired pose. In the registration procedure above, intensity-based 2D/3D registration is used to transform the pre-operative volume into the coordinate system of the tracker. Although it provides high accuracy, its small capture range hinders clinical use because a good initial guess is required. To address this problem, a robust and fast initialization method is proposed that combines automatic tracker-based initialization with multi-resolution estimation in the frequency domain. This hardware-software integrated approach provides nearly optimal transformation parameters for the intensity-based registration. To determine the pose of the mobile C-arm, high-quality visualization is necessary to locate the pathology within the hidden anatomy. A novel dimensionality reduction method based on sparse representation is proposed for the design of multi-dimensional transfer functions in direct volume rendering. It not only achieves performance similar to conventional methods but is also able to handle large data sets.
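    The abstract describes the registration pipeline only at a high level, so the following is a minimal illustrative sketch rather than the dissertation's implementation: intensity-based 2D/3D registration posed as maximizing normalized cross-correlation between an intra-operative X-ray image and a crude parallel-projection DRR of the pre-operative volume, optimized over two rotation angles. The function names (`drr`, `ncc`, `register`), the parallel-beam projection, and the Nelder-Mead optimizer are assumptions for illustration; the narrow capture range of such an objective is exactly why a good initialization, as proposed in the dissertation, matters.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.optimize import minimize

def drr(volume, angles_deg):
    """Very crude digitally reconstructed radiograph: rotate the CT volume
    about two axes and integrate along the first axis (a parallel-beam
    approximation, not a calibrated C-arm cone-beam model)."""
    r = rotate(volume, angles_deg[0], axes=(0, 1), reshape=False, order=1)
    r = rotate(r, angles_deg[1], axes=(0, 2), reshape=False, order=1)
    return r.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register(volume, xray, init_angles):
    """Intensity-based 2D/3D registration over two rotation angles,
    starting from an initial guess (hence the limited capture range)."""
    cost = lambda p: -ncc(drr(volume, p), xray)
    result = minimize(cost, np.asarray(init_angles, dtype=float),
                      method="Nelder-Mead")
    return result.x

# usage (hypothetical arrays): pose = register(ct_volume, intraop_xray, [0.0, 0.0])
```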

    A subspace-based resolution-enhancing image reconstruction method for few-view differential phase-contrast tomography

    It is well known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities, such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task due to the high-frequency information loss caused by data incompleteness. In this work, a subspace-based reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. By adopting a two-step approach, the proposed method can simultaneously recover high-frequency details within a certain region of interest while suppressing noise and/or artifacts globally. The proposed method is evaluated using few-view experimental data acquired by an edge-illumination D-XPCT scanner.
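    The abstract does not spell out the subspace method itself, so the sketch below only illustrates the few-view setting it targets: a plain SIRT-style iterative reconstruction from a sparsely sampled sinogram, using scikit-image's `radon`/`iradon` as stand-in forward and back projectors rather than a D-XPCT forward model. The function name, step size, and iteration count are assumptions; the paper's two-step subspace strategy is not reproduced here.

```python
import numpy as np
from skimage.transform import radon, iradon

def sirt_few_view(sinogram, angles_deg, n_iter=50, step=0.1):
    """Plain SIRT-style reconstruction from a few-view sinogram.
    radon/iradon act as stand-in forward and back projectors; a real
    D-XPCT model would include the differential phase-contrast physics."""
    size = sinogram.shape[0]
    recon = np.zeros((size, size))
    for _ in range(n_iter):
        residual = sinogram - radon(recon, theta=angles_deg, circle=True)
        # unfiltered backprojection of the residual as the update direction
        recon += step * iradon(residual, theta=angles_deg, circle=True,
                               filter_name=None)
    return recon

# usage (hypothetical data), e.g. 20 views over 180 degrees:
# angles = np.linspace(0.0, 180.0, 20, endpoint=False)
# recon = sirt_few_view(measured_sinogram, angles)
```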

    Dictionary learning for data recovery in positron emission tomography

    Compressed sensing (CS) aims to recover images from fewer measurements than required by the Nyquist sampling theorem. Most CS methods use analytical, predefined sparsifying domains such as total variation, wavelets, curvelets, and finite transforms to perform this task. In this study, we evaluated the use of dictionary learning (DL) as a sparsifying domain to reconstruct PET images from partially sampled data, and compared the results to the partially sampled and fully sampled (baseline) images. A CS model based on learning an adaptive dictionary over image patches was developed to recover missing observations in PET data acquisition. The recovery was done iteratively in two steps: a dictionary learning step and an image reconstruction step. Two experiments were performed to evaluate the proposed CS recovery algorithm: an IEC phantom study and five patient studies. In each case, 11% of the detectors of a GE PET/CT system were removed and the acquired sinogram data were recovered using the proposed DL algorithm. The recovered images (DL) as well as the partially sampled images (with detector gaps) for both experiments were then compared to the baseline. Comparisons were done by calculating RMSE, contrast recovery, and SNR in ROIs drawn in the background and spheres of the phantom, as well as in patient lesions. For the phantom experiment, the RMSE of the DL-recovered images was 5.8% when compared with the baseline images, while it was 17.5% for the partially sampled images. In the patient studies, the RMSE of the DL-recovered images was 3.8%, while it was 11.3% for the partially sampled images. The proposed CS approach with DL is thus a good way to recover partially sampled PET data, with implications for reducing scanner cost while maintaining accurate PET image quantification.
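    As a rough illustration of the two-step recovery loop described above (a dictionary learning step followed by a reconstruction step), the sketch below alternates patch-based dictionary learning on the current sinogram estimate with a sparse-coding approximation whose values are kept only in the gap bins. It uses scikit-learn's `MiniBatchDictionaryLearning` with OMP sparse coding; the function name `recover_sinogram`, the patch size, the number of atoms, and the sparsity level are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def recover_sinogram(sino_gaps, measured_mask, n_atoms=128, patch=(8, 8), n_outer=5):
    """Alternate (1) dictionary learning on the current estimate and
    (2) sparse-coding reconstruction, re-imposing the measured bins."""
    sino = sino_gaps.copy()
    for _ in range(n_outer):
        # --- dictionary learning step: learn atoms from the current estimate ---
        patches = extract_patches_2d(sino, patch)
        flat = patches.reshape(patches.shape[0], -1)
        means = flat.mean(axis=1, keepdims=True)
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=5)
        code = dico.fit(flat - means).transform(flat - means)
        # --- reconstruction step: sparse approximation of every patch ---
        approx = code @ dico.components_ + means
        sino_hat = reconstruct_from_patches_2d(approx.reshape(patches.shape),
                                               sino.shape)
        # keep the measured bins, use the sparse estimate only in the gaps
        sino = np.where(measured_mask, sino_gaps, sino_hat)
    return sino

# usage (hypothetical data): recovered = recover_sinogram(sinogram_with_gaps, mask)
```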

    Learning Regularization Parameter-Maps for Variational Image Reconstruction Using Deep Neural Networks and Algorithm Unrolling

    We introduce a method for the fast estimation of data-adapted, spatially and temporally dependent regularization parameter-maps for variational image reconstruction, focusing on total variation (TV) minimization. The proposed approach is inspired by recent developments in algorithm unrolling using deep neural networks (NNs) and relies on two distinct subnetworks. The first subnetwork estimates the regularization parameter-map from the input data. The second subnetwork unrolls iterations of an iterative algorithm which approximately solves the corresponding TV-minimization problem incorporating the previously estimated regularization parameter-map. The overall network is then trained end-to-end in a supervised learning fashion using pairs of clean and corrupted data, but crucially without the need for access to labels for the optimal regularization parameter-maps. We first prove consistency of the unrolled scheme by showing that the unrolled energy functional used for the supervised learning Γ-converges, as the number of unrolled iterations tends to infinity, to the corresponding functional that incorporates the exact solution map of the TV-minimization problem. Then, we apply and evaluate the proposed method on a variety of large-scale and dynamic imaging problems with retrospectively simulated measurement data for which the automatic computation of such regularization parameters has so far been challenging using state-of-the-art methods: a 2D dynamic cardiac magnetic resonance imaging (MRI) reconstruction problem, a quantitative brain MRI reconstruction problem, a low-dose computed tomography problem, and a dynamic image denoising problem. The proposed method consistently improves on TV reconstructions that use scalar regularization parameters, and the obtained regularization parameter-maps adapt well to the imaging problems and data, preserving detailed features. Although the choice of the regularization parameter-maps is data-driven and based on NNs, the subsequent reconstruction algorithm is interpretable since it inherits the properties (e.g., convergence guarantees) of the iterative reconstruction method from which the network is implicitly defined.
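    The learned part of the method (the subnetwork that maps input data to a regularization parameter-map) is not reproduced here; the sketch below only shows the kind of interpretable inner solver that gets unrolled, namely a few PDHG (Chambolle-Pock) iterations for weighted-TV denoising in which a per-pixel parameter-map `lam_map` scales the TV term. The function names, step sizes, and iteration count are assumptions for a plain NumPy illustration, not the paper's implementation.

```python
import numpy as np

def grad(u):
    """Forward-difference image gradient, returned as a (2, H, W) array."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def div(p):
    """Discrete divergence, the negative adjoint of grad."""
    px, py = p
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def unrolled_weighted_tv(y, lam_map, n_iter=20, sigma=0.35, tau=0.35):
    """A few unrolled PDHG (Chambolle-Pock) iterations for
    min_x 0.5*||x - y||^2 + ||lam_map * grad(x)||_1,
    where lam_map is a per-pixel regularization parameter-map."""
    x = y.copy()
    x_bar = y.copy()
    p = np.zeros((2,) + y.shape)
    for _ in range(n_iter):
        # dual step: ascent, then pointwise projection onto {|p| <= lam_map}
        p = p + sigma * grad(x_bar)
        scale = np.maximum(np.sqrt((p ** 2).sum(axis=0)) / np.maximum(lam_map, 1e-12), 1.0)
        p = p / scale
        # primal step: proximal map of the quadratic data-fidelity term
        x_new = (x + tau * div(p) + tau * y) / (1.0 + tau)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# usage (hypothetical data): x_hat = unrolled_weighted_tv(noisy_image, lam_map)
```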