
    4D X-Ray CT Reconstruction using Multi-Slice Fusion

    There is an increasing need to reconstruct objects in four or more dimensions corresponding to space, time, and other independent parameters. The best 4D reconstruction algorithms use regularized iterative reconstruction approaches such as model-based iterative reconstruction (MBIR), which depends critically on the quality of the prior modeling. Recently, Plug-and-Play methods have been shown to be an effective way to incorporate advanced prior models using state-of-the-art denoising algorithms designed to remove additive white Gaussian noise (AWGN). However, state-of-the-art denoising algorithms such as BM4D and deep convolutional neural networks (CNNs) are primarily available for 2D and sometimes 3D images. In particular, CNNs are difficult and computationally expensive to implement in four or more dimensions, and training may be impossible if there is no associated high-dimensional training data. In this paper, we present Multi-Slice Fusion, a novel algorithm for 4D and higher-dimensional reconstruction based on the fusion of multiple low-dimensional denoisers. Our approach uses multi-agent consensus equilibrium (MACE), an extension of Plug-and-Play, as a framework for integrating the multiple lower-dimensional prior models. We apply our method to the problem of 4D cone-beam X-ray CT reconstruction for Non-Destructive Evaluation (NDE) of moving parts. This is done by solving the MACE equations using lower-dimensional CNN denoisers implemented in parallel on a heterogeneous cluster. Results on experimental CT data demonstrate that Multi-Slice Fusion can substantially improve the quality of reconstructions relative to traditional 4D priors, while also being practical to implement and train.
    Comment: 8 pages, 8 figures, IEEE International Conference on Computational Photography 2019, Tokyo
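
    The MACE formulation mentioned in the abstract reduces to a fixed-point (Mann) iteration over a set of agents. The following is a minimal, illustrative Python sketch of that consensus averaging, assuming generic callables for the data-fidelity proximal map and the lower-dimensional denoisers; the function names, parameters, and toy usage are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mace_fusion(prox_forward, denoisers, x_init, n_iters=50, rho=0.5):
    """Fuse one data-fidelity agent with several denoising agents via a
    MACE-style fixed-point (Mann) iteration.

    prox_forward : callable, proximal map of the tomographic data term
    denoisers    : list of callables, e.g. CNN denoisers applied slice-by-slice
                   along different planes of the 4D volume
    x_init       : initial reconstruction (NumPy array)
    """
    agents = [prox_forward] + list(denoisers)
    n = len(agents)
    # One state per agent, all initialized to the same image.
    W = [x_init.copy() for _ in range(n)]

    for _ in range(n_iters):
        X = [agent(w) for agent, w in zip(agents, W)]   # F: apply every agent
        V = [2.0 * x - w for x, w in zip(X, W)]         # (2F - I)(w)
        z_bar = sum(V) / n                              # consensus average
        # Mann iteration: w <- (1 - rho) w + rho (2G - I)(2F - I)(w)
        W = [(1.0 - rho) * w + rho * (2.0 * z_bar - v) for w, v in zip(W, V)]

    return sum(W) / n                                   # consensus reconstruction


if __name__ == "__main__":
    # Toy smoke test with identity agents; a real run would pass a tomographic
    # proximal map and per-plane CNN denoisers instead.
    x0 = np.zeros((8, 8, 8, 4))  # illustrative (x, y, z, time) volume
    x_hat = mace_fusion(lambda v: v, [lambda v: v, lambda v: v, lambda v: v], x0)
    print(x_hat.shape)
```

    Because each agent is applied independently within an iteration, the agent evaluations can run in parallel, which is how the abstract describes distributing the lower-dimensional CNN denoisers across a heterogeneous cluster.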

    High Speed Imaging Via Advanced Modeling

    There is an increasing need to image objects accurately at high temporal resolution in order to analyze the underlying physical, chemical, or biological processes. In this thesis, we use advanced models that exploit the image structure and the measurement process to achieve improved temporal resolution. The thesis is divided into three chapters, each corresponding to a different imaging application.

    In the first chapter, we propose a novel method to localize neurons in fluorescence microscopy images. Accurate localization of neurons enables us to scan only the neuron locations instead of the full brain volume and thus improve the temporal resolution of neuron activity monitoring. We formulate the neuron localization problem as an inverse problem in which we reconstruct an image that encodes the locations of the neuron centers. The sparsity of the neuron centers serves as a prior model, while the forward model comprises shape models estimated from training data.

    In the second chapter, we introduce Multi-Slice Fusion, a novel framework for incorporating advanced prior models into inverse problems spanning many dimensions, such as 4D computed tomography (CT) reconstruction. State-of-the-art 4D reconstruction methods use model-based iterative reconstruction (MBIR), which depends critically on the quality of the prior modeling. Incorporating deep convolutional neural networks (CNNs) into the 4D reconstruction problem is difficult because of the computational cost and the lack of high-dimensional training data. Multi-Slice Fusion integrates the tomographic forward model with multiple low-dimensional CNN denoisers along different planes to produce a 4D regularized reconstruction. The improved regularization in Multi-Slice Fusion allows each time-frame to be reconstructed from fewer measurements, resulting in improved temporal resolution in the reconstruction. Experimental results on sparse-view and limited-angle CT data demonstrate that Multi-Slice Fusion can substantially improve the quality of reconstructions relative to traditional methods, while also being practical to implement and train.

    In the final chapter, we introduce CodEx, a synergistic combination of coded acquisition and non-convex Bayesian reconstruction for improving acquisition speed in computed tomography (CT). In an ideal “step-and-shoot” tomographic acquisition, the object is rotated to each desired angle and a view is taken. However, step-and-shoot acquisition is slow and can waste photons, so in practice the object typically rotates continuously in time, leading to blurred views. This blur can then result in reconstructions with severe motion artifacts. CodEx works by encoding the acquisition with a known binary code that the reconstruction algorithm then inverts. The CodEx reconstruction method uses the alternating direction method of multipliers (ADMM) to split the inverse problem into iterative deblurring and reconstruction sub-problems, making reconstruction practical. CodEx allows for fast data acquisition, leading to good temporal resolution in the reconstruction.
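
    For the neuron-localization chapter, the abstract describes reconstructing a sparse image of neuron centers through an inverse problem with a sparsity prior and a shape-model forward operator. The sketch below shows one standard way such a problem can be solved (ISTA with soft-thresholding); the operator A, the regularization weight, and the toy example are assumptions for illustration, not the thesis' actual forward model or solver.

```python
import numpy as np

def soft_threshold(v, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_localize(A, y, lam=0.1, step=None, n_iters=500):
    """Recover a sparse map x of neuron centers from measurements y ~= A @ x.

    A : (m, n) forward matrix, e.g. convolution with an estimated neuron shape
    y : (m,) observed fluorescence image, flattened
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                 # gradient of 0.5 * ||A x - y||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x


if __name__ == "__main__":
    # Toy example: three point-like "neuron centers" observed through a random operator.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 128))
    x_true = np.zeros(128)
    x_true[[10, 40, 90]] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(64)
    x_hat = ista_localize(A, y, lam=0.5, n_iters=2000)
    print(np.argsort(-x_hat)[:3])                # indices of the strongest detections
```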
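
    For CodEx, the abstract describes an ADMM splitting of the coded-acquisition inverse problem into a deblurring (decoding) sub-problem and a regularized reconstruction sub-problem. Below is a generic sketch of such an ADMM loop with the two sub-problem solvers passed in as callables; the variable names and the exact form of the splitting are simplified assumptions rather than the thesis' algorithm.

```python
def codex_admm(deblur, reconstruct, forward_project, x_init, n_iters=20):
    """Generic ADMM loop alternating a coded-deblurring step and a regularized
    reconstruction step.

    deblur(target)      : returns the decoded/deblurred sinogram that best explains
                          the coded measurements while staying close to `target`
    reconstruct(target) : returns a regularized reconstruction whose forward
                          projection matches `target`
    forward_project(x)  : applies the CT forward model to an image x
    """
    x = x_init
    v = forward_project(x)      # auxiliary variable: the decoded sinogram
    u = 0.0 * v                 # scaled dual variable
    for _ in range(n_iters):
        v = deblur(forward_project(x) + u)   # deblurring (decoding) sub-problem
        x = reconstruct(v - u)               # reconstruction sub-problem
        u = u + forward_project(x) - v       # dual update
    return x
```

    Keeping the two sub-problems separate is what makes the approach practical: the deblurring step only has to invert the known binary code, while the reconstruction step can reuse a standard regularized CT solver.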