
    A Deep Learning Reconstruction Framework for Differential Phase-Contrast Computed Tomography with Incomplete Data

    Differential phase-contrast computed tomography (DPC-CT) is a powerful analysis tool for soft-tissue and low-atomic-number samples. Because of practical implementation constraints, DPC-CT with incomplete projections occurs quite often. Conventional reconstruction algorithms do not handle incomplete data well: they usually involve complicated parameter selection, and they are sensitive to noise and time-consuming. In this paper, we report a new deep learning reconstruction framework for incomplete-data DPC-CT. It tightly couples a deep neural network with the DPC-CT reconstruction algorithm in the phase-contrast projection sinogram domain. The quantity estimated by the network is the complete phase-contrast projection sinogram, not the artifacts caused by the incomplete data. After training, the framework is fixed and can reconstruct the final DPC-CT image from a given incomplete phase-contrast projection sinogram. Taking sparse-view DPC-CT as an example, the framework has been validated and demonstrated on synthetic and experimental data sets. Because the DPC-CT reconstruction is embedded, the framework naturally encapsulates the physical imaging model of DPC-CT systems and is easily extended to other challenges. This work helps push the application of state-of-the-art deep learning theory into the field of DPC-CT.
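
    A minimal sketch of the sinogram-domain coupling described here, under our own assumptions: a small residual CNN maps a zero-filled sparse-view sinogram to an estimate of the complete sinogram, after which a conventional DPC-CT reconstruction would be applied. Architecture, layer sizes, and names are illustrative, not the authors' code.

```python
# Hypothetical sketch: CNN completing a sparse-view DPC-CT sinogram.
import torch
import torch.nn as nn

class SinogramCompletionNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, sparse_sinogram):
        # Residual learning: predict only the missing content on top of the
        # zero-filled input rather than the full sinogram from scratch.
        return sparse_sinogram + self.body(sparse_sinogram)

# Dummy usage: batch of 1, single channel, 360 angles x 512 detector bins,
# with every 4th view measured (sizes are illustrative).
net = SinogramCompletionNet()
sparse = torch.zeros(1, 1, 360, 512)
sparse[:, :, ::4, :] = torch.randn(1, 1, 90, 512)
full_estimate = net(sparse)
print(full_estimate.shape)  # torch.Size([1, 1, 360, 512])
```

    Because the network output is a sinogram rather than an image, the physical imaging model stays in the loop: a standard DPC-CT reconstruction turns the estimated sinogram into the final image.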

    Machine-learning-based nonlinear decomposition of CT images for metal artifact reduction

    Computed tomography (CT) images containing metallic objects commonly show severe streaking and shadow artifacts. Metal artifacts are caused by nonlinear beam-hardening effects combined with other factors such as scatter and Poisson noise. In this paper, we propose an implant-specific method that extracts beam-hardening artifacts from CT images without affecting the background image. We found that when metal is inserted into water (or tissue), the resulting beam-hardening artifacts can be approximately extracted by subtracting the artifacts generated by the metal alone. We use a deep learning technique to learn nonlinear representations of the beam-hardening artifacts arising from metals, which appear as shadows and streaks. The proposed network is not designed to identify ground-truth CT images (i.e., CT images before corruption by metal artifacts); consequently, such images are not required for training. The proposed method was tested on a dataset consisting of real CT scans of pelvises containing simulated hip prostheses. The results demonstrate that the proposed deep learning method successfully extracts both shadowing and streaking artifacts.
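
    The decomposition can be sketched in a few lines, again as an assumption-laden illustration rather than the paper's network: a CNN is trained to output only the artifact component, which is then subtracted from the corrupted image, leaving the background untouched.

```python
# Hypothetical sketch: predict the metal-induced artifact component and
# subtract it. The network and shapes are illustrative placeholders.
import torch
import torch.nn as nn

artifact_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),
)

ct_with_metal = torch.randn(1, 1, 512, 512)   # dummy corrupted CT slice
artifact = artifact_net(ct_with_metal)        # predicted shadow/streak component
corrected = ct_with_metal - artifact          # background image left intact
```

    The training targets would be artifact-only images obtained from metal-alone simulations, which is consistent with the abstract's point that clean ground-truth clinical images are not required.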

    Spatio-Temporal Deep Learning-Based Undersampling Artefact Reduction for 2D Radial Cine MRI with Limited Data

    In this work we reduce undersampling artefacts in two-dimensional (2D) golden-angle radial cine cardiac MRI by applying a modified version of the U-net. We train the network on 2D spatio-temporal slices which are extracted from the image sequences beforehand. We compare our approach to two 2D and one 3D deep-learning-based post-processing methods and to three iterative reconstruction methods for dynamic cardiac MRI. Our method outperforms the 2D spatially trained U-net and the 2D spatio-temporal U-net. Compared to the 3D spatio-temporal U-net, our method delivers comparable results, but with shorter training times and less training data. Compared to the compressed-sensing-based method kt-FOCUSS and a total variation regularised reconstruction approach, our method improves image quality with respect to all reported metrics. Further, it achieves competitive results when compared to an iterative reconstruction method based on adaptive regularization with dictionary learning and total variation, while requiring only a small fraction of the computational time. A persistent homology analysis demonstrates that the data manifold of the spatio-temporal domain has a lower complexity than that of the spatial domain, and therefore the learning of a projection-like mapping is facilitated. Even when trained on only one single subject without data augmentation, our approach yields results similar to those obtained on a large training dataset. This makes the method particularly suitable for training a network on limited training data. Finally, in contrast to the spatial 2D U-net, our proposed method is shown to be naturally robust with respect to image rotation in image space and almost achieves rotation equivariance, where neither data augmentation nor a particular network design is required. Comment: To be published in IEEE Transactions on Medical Imaging.
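
    The key data preparation step, extracting 2D spatio-temporal slices from a dynamic series, is simple array slicing; the sketch below (our illustration, with made-up shapes) shows how fixing one spatial coordinate turns temporal redundancy into a spatial pattern the 2D U-net can exploit.

```python
# Hypothetical sketch: extract x-t and y-t slices from a (time, y, x) series.
import numpy as np

cine = np.random.rand(30, 320, 320)  # dummy radial cine series: (t, y, x)

# Each slice is a 2D image in which one axis is time.
xt_slices = [cine[:, y, :] for y in range(cine.shape[1])]  # each (t, x)
yt_slices = [cine[:, :, x] for x in range(cine.shape[2])]  # each (t, y)
print(xt_slices[0].shape, yt_slices[0].shape)  # (30, 320) (30, 320)
```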

    Monochromatic CT Image Reconstruction from Current-Integrating Data via Deep Learning

    In clinical CT, the x-ray source emits polychromatic x-rays, which are detected in the current-integrating mode. This physical process is accurately described by an energy-dependent nonlinear integral model based on the Beer-Lambert law. However, the nonlinear model is too complicated to be solved directly for image reconstruction, and is often approximated by a linear integral model in the form of the Radon transform, essentially ignoring energy-dependent information. This model approximation generates inaccurate quantification of the attenuation image and significant beam-hardening artifacts. In this paper, we develop a deep-learning-based CT image reconstruction method to address the mismatch between the computational model and the physical model. Our method learns from big data a nonlinear transformation that corrects measured projection data to accurately match the linear integral model, realizing monochromatic imaging and effectively overcoming beam hardening. The deep learning network is trained and tested on a clinical dual-energy dataset to demonstrate the feasibility of the proposed methodology. Results show that the proposed method can achieve highly accurate projection correction, with a relative error of less than 0.2%.
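
    The model mismatch can be made concrete by writing out both measurement models from the Beer-Lambert law; the notation below is ours, not the paper's.

```latex
% Polychromatic, current-integrating measurement along a ray L, with a
% normalized effective spectrum S(E) and attenuation map \mu(E, x):
\[
  p_{\mathrm{poly}}
    = -\ln \int S(E)\, \exp\!\Big( -\int_L \mu(E, x)\, \mathrm{d}l \Big)\, \mathrm{d}E ,
  \qquad \int S(E)\, \mathrm{d}E = 1 .
\]
% Linear model assumed by analytic reconstruction (the Radon transform),
% which ignores the energy dependence of \mu:
\[
  p_{\mathrm{lin}} = \int_L \mu(x)\, \mathrm{d}l .
\]
% The learned correction maps the measured polychromatic value to the
% monochromatic line integral at a chosen energy E_0:
\[
  p_{\mathrm{poly}} \;\longmapsto\; p_{\mathrm{mono}}(E_0)
    = \int_L \mu(E_0, x)\, \mathrm{d}l .
\]
```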

    Deconvolution-Based Backproject-Filter (BPF) Computed Tomography Image Reconstruction Method Using Deep Learning Technique

    For conventional computed tomography (CT) image reconstruction, the most popular method is the so-called filtered back-projection (FBP) algorithm, in which the acquired Radon projections are first filtered by a ramp kernel and then back-projected to generate CT images. In this work, by contrast, we realize for the first time the idea of image-domain backproject-filter (BPF) CT image reconstruction using deep learning techniques. With a properly designed convolutional neural network (CNN), preliminary results demonstrate that it is feasible to reconstruct CT images with high spatial resolution and accurate pixel values from the highly blurred back-projection image, i.e., the laminogram. In addition, experimental results show that this deconvolution-based CT image reconstruction network has the potential to reduce CT image noise (by up to 20%), indicating that patient radiation dose may be reduced. Owing to these advantages, the proposed CNN-based image-domain BPF-type CT image reconstruction scheme offers promising prospects for generating high-spatial-resolution, low-noise CT images in future clinical applications.
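
    A sketch of how the network's input, the laminogram, can be generated: skipping the ramp filter in backprojection yields the ground truth blurred by an approximately 1/r kernel. This uses scikit-image's parallel-beam operators as a stand-in (a recent version with the `filter_name` argument is assumed); the deconvolution CNN itself is omitted.

```python
# Hypothetical sketch: produce (laminogram, image) training pairs.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                        # ground-truth slice
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)

# filter_name=None skips the ramp filter: plain (unfiltered) backprojection,
# i.e. the laminogram the deconvolution network takes as input.
laminogram = iradon(sinogram, theta=theta, filter_name=None)

# A deconvolution CNN would then be trained on (laminogram, image) pairs.
```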

    Deep Neural Network Assisted Iterative Reconstruction Method for Low Dose CT

    Low-dose computed tomography suffers from a high amount of noise and/or undersampling artefacts in the reconstructed image. In this article, a deep learning technique is exploited as a regularization term for the iterative reconstruction method SIRT. While SIRT minimizes the error in the sinogram space, the proposed regularization model additionally steers intermediate SIRT reconstructions towards the desired output. Extensive evaluations demonstrate the superior outcomes of the proposed method compared to state-of-the-art techniques. Comparing the forward projection of the reconstructed image with the original signal shows that, among learning-based methods, the current approach has a higher fidelity to the sinogram space.
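
    The interleaving described here can be sketched as follows; this is our illustration, with the system matrix `A`, the `denoiser` network, and the weighting `lam` as placeholders, not the article's implementation.

```python
# Hypothetical sketch: SIRT data-consistency updates interleaved with a
# learned regularization step pulling iterates toward the network output.
import numpy as np

def sirt_with_prior(A, b, denoiser, n_iters=50, lam=0.2):
    # Standard SIRT scaling: inverse row/column sums of A (guarded from 0).
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # per-ray weights
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # per-pixel weights
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + C * (A.T @ (R * (b - A @ x)))    # sinogram-fidelity step
        x = (1 - lam) * x + lam * denoiser(x)    # steer toward learned prior
    return x

# Dummy usage with a random system and an identity "denoiser".
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((200, 100)))
x_true = rng.random(100)
x = sirt_with_prior(A, A @ x_true, denoiser=lambda v: v)
```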

    Deep-neural-network based sinogram synthesis for sparse-view CT image reconstruction

    Recently, a number of approaches to low-dose computed tomography (CT) have been developed and deployed in commercial CT scanners. Tube current reduction is perhaps the most actively explored technology, together with advanced image reconstruction algorithms. Sparse data sampling is another viable option for low-dose CT, and sparse-view CT has been of particular interest among researchers in the CT community. Since analytic image reconstruction algorithms would lead to severe image artifacts, various iterative algorithms have been developed for reconstructing images from sparsely view-sampled projection data. However, iterative algorithms take much longer computation time than analytic algorithms, and the images are usually prone to different types of artifacts that depend heavily on the reconstruction parameters. Interpolation methods have also been utilized to fill the missing data in the sinogram of sparse-view CT, thus providing synthetically full data for analytic image reconstruction. In this work, we introduce a deep-neural-network-enabled sinogram synthesis method for sparse-view CT, and show that it outperforms existing interpolation methods as well as an iterative image reconstruction approach.
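
    For concreteness, here is a sketch of the interpolation baseline such methods compare against: filling missing views by linear interpolation along the angular axis. The function name and shapes are ours; the paper's network replaces exactly this step.

```python
# Hypothetical sketch: angular linear interpolation of a sparse-view sinogram.
import numpy as np

def interpolate_missing_views(sparse_sino, measured_idx, n_views_full):
    # sparse_sino: (n_measured, n_detectors); measured_idx: measured view indices
    n_det = sparse_sino.shape[1]
    full = np.empty((n_views_full, n_det))
    grid = np.arange(n_views_full)
    for d in range(n_det):
        # Interpolate each detector channel across the angular dimension.
        full[:, d] = np.interp(grid, measured_idx, sparse_sino[:, d])
    return full

measured_idx = np.arange(0, 360, 4)              # every 4th view acquired
sparse = np.random.rand(len(measured_idx), 512)  # dummy measured views
full = interpolate_missing_views(sparse, measured_idx, 360)
print(full.shape)  # (360, 512)
```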

    A Gentle Introduction to Deep Learning in Medical Image Processing

    This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we review the fundamentals of the perceptron and neural networks, along with some basic theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously, medical image processing is one of the areas that has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modelling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future. Comment: Accepted by the Journal of Medical Physics; final version after review.
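
    Since the perceptron is the starting point of such an introduction, a minimal sketch of it may help: a single linear unit with a step activation, trained with the classic perceptron update rule (our toy example, not from the paper).

```python
# Hypothetical sketch: the classic perceptron learning rule on toy data.
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=20):
    # X: (n_samples, n_features); y: labels in {0, 1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = float(w @ xi + b > 0)   # step activation
            w += lr * (yi - pred) * xi     # update only on mistakes
            b += lr * (yi - pred)
    return w, b

# Linearly separable toy data: class is 1 iff x0 + x1 > 1.
X = np.random.rand(100, 2)
y = (X.sum(axis=1) > 1).astype(float)
w, b = perceptron_train(X, y)
```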

    Virtual Reality Aided Mobile C-arm Positioning for Image-Guided Surgery

    Image-guided surgery (IGS) is a minimally invasive procedure based on a pre-operative volume in conjunction with intra-operative X-ray images, commonly captured by mobile C-arms, for the confirmation of surgical outcomes. Although some commercial navigation systems are currently employed, one critical issue of such systems is that they neglect the radiation exposure to the patient and surgeons. In practice, when one surgical stage is finished, several X-ray images have to be acquired repeatedly by the mobile C-arm to obtain the desired image. Excessive radiation exposure may increase the risk of complications. Therefore, it is necessary to develop a positioning system for mobile C-arms that achieves one-time imaging and avoids additional radiation exposure. In this dissertation, a mobile C-arm positioning system aided by virtual reality (VR) is proposed. The surface model of the patient is reconstructed by a camera mounted on the mobile C-arm. A novel registration method is proposed to align this model and the pre-operative volume based on a tracker, so that surgeons can visualize the hidden anatomy directly from the outside view and determine a reference pose for the C-arm. Considering the congested operating room, the C-arm is modeled as a manipulator with a movable base to maneuver the image intensifier to the desired pose. In the registration procedure above, intensity-based 2D/3D registration is used to transform the pre-operative volume into the coordinate system of the tracker. Although it provides high accuracy, its small capture range hinders clinical use because a good initial guess is required. To address this problem, a robust and fast initialization method is proposed, based on automatic tracking-based initialization and multi-resolution estimation in the frequency domain. This hardware-software integrated approach provides nearly optimal transformation parameters for the intensity-based registration. To determine the pose of the mobile C-arm, high-quality visualization is necessary to locate the pathology in the hidden anatomy. A novel dimensionality reduction method based on sparse representation is proposed for the design of multi-dimensional transfer functions in direct volume rendering. It not only achieves performance similar to conventional methods, but can also handle large data sets.
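
    The core of intensity-based 2D/3D registration can be sketched in a few lines: a candidate pose is scored by the similarity (here normalized cross-correlation, a common choice, assumed rather than taken from the dissertation) between the intra-operative X-ray and a digitally reconstructed radiograph (DRR) rendered from the pre-operative volume at that pose. The `render_drr` function is a placeholder; real systems ray-cast through the CT volume.

```python
# Hypothetical sketch: NCC-based cost for intensity-based 2D/3D registration.
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two images of equal shape.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def registration_cost(pose, xray, render_drr):
    # An optimizer maximizes NCC over the 6 pose parameters; the narrow
    # capture range of this cost is why a good initialization matters.
    return -ncc(xray, render_drr(pose))
```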

    Data Consistent Artifact Reduction for Limited Angle Tomography with Deep Learning Prior

    Robustness of deep learning methods for limited angle tomography is challenged by two major factors: a) due to insufficient training data, the network may not generalize well to unseen data; b) deep learning methods are sensitive to noise. Thus, generating reconstructed images directly from a neural network appears inadequate. We propose to constrain the reconstructed images to be consistent with the measured projection data, while the unmeasured information is complemented by learning-based methods. For this purpose, a data consistent artifact reduction (DCAR) method is introduced: first, a prior image is generated from an initial limited angle reconstruction via deep learning as a substitute for the missing information. Afterwards, a conventional iterative reconstruction algorithm is applied, integrating the data consistency in the measured angular range and the prior information in the missing angular range. This ensures data integrity in the measured area, while inaccuracies introduced by the deep learning prior lie only in areas where no information was acquired. The proposed DCAR method achieves significant image quality improvement: for 120-degree cone-beam limited angle tomography, it achieves more than 10% RMSE reduction in the noise-free case and more than 24% RMSE reduction in the noisy case compared with a state-of-the-art U-Net based method. Comment: Accepted by the MICCAI MLMIR workshop.
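
    A simplified sketch of the DCAR idea: forward-project the learned prior, overwrite the measured angular range with the actual measurements, and reconstruct from the merged sinogram. This uses scikit-image's parallel-beam radon/iradon and a plain FBP as stand-ins for the paper's cone-beam geometry and iterative solver, and uses the phantom itself as a placeholder for the U-net prior.

```python
# Hypothetical sketch of data-consistent merging for limited angle tomography.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

theta_full = np.linspace(0.0, 180.0, 180, endpoint=False)
measured = theta_full < 120.0                     # 120-degree limited angle

image = shepp_logan_phantom()
sino_measured = radon(image, theta=theta_full[measured])

# Placeholder: in DCAR the prior is the deep learning output, not the truth.
prior = image
sino_prior = radon(prior, theta=theta_full)

# Merge: keep real data where measured, prior-derived data where not, so
# any prior inaccuracies are confined to the unmeasured angular range.
sino_merged = sino_prior.copy()
sino_merged[:, measured] = sino_measured
recon = iradon(sino_merged, theta=theta_full)
```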