
    Stage-by-stage Wavelet Optimization Refinement Diffusion Model for Sparse-View CT Reconstruction

    Diffusion models have emerged as promising tools for tackling the challenge of sparse-view CT reconstruction, displaying superior performance compared to conventional methods. Nevertheless, prevailing diffusion models predominantly operate in the sinogram or image domain, which can lead to instability during model training and convergence to local minima. The wavelet transform disentangles image contents and features into distinct frequency bands at varying scales, adeptly capturing diverse directional structures. Employing the wavelet transform as a guiding sparsity prior significantly enhances the robustness of diffusion models. In this study, we present an innovative approach named the Stage-by-stage Wavelet Optimization Refinement Diffusion (SWORD) model for sparse-view CT reconstruction. Specifically, we establish a unified mathematical model integrating low-frequency and high-frequency generative models and solve it with an optimization procedure. Furthermore, we apply the low-frequency and high-frequency generative models to the wavelet-decomposed components rather than the sinogram or image domains, ensuring stable model training. Our method is rooted in established optimization theory and comprises three distinct stages: low-frequency generation, high-frequency refinement, and domain transform. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods both quantitatively and qualitatively.
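    Purely as an illustration of the wavelet-domain staging described above (not the authors' implementation), the sketch below uses PyWavelets to split an image into a low-frequency approximation band and high-frequency detail bands, applies placeholder refinement steps to each, and transforms back to the image domain; `refine_low` and `refine_high` are hypothetical stand-ins for the paper's learned generative models.

```python
# Sketch of the wavelet split/refine/merge structure that a stage-by-stage
# scheme like SWORD operates on. PyWavelets handles the domain transform;
# the refine_* callables are hypothetical stand-ins for learned models.
import numpy as np
import pywt


def staged_wavelet_refinement(image, refine_low, refine_high, wavelet="db4"):
    """Decompose, refine each band, and transform back to the image domain."""
    # Domain transform: image -> one approximation band + three detail bands
    low, (lh, hl, hh) = pywt.dwt2(image, wavelet)

    # Low-frequency generation on the approximation band
    low = refine_low(low)

    # High-frequency refinement on the detail bands
    lh, hl, hh = (refine_high(b) for b in (lh, hl, hh))

    # Inverse transform back to the image domain
    return pywt.idwt2((low, (lh, hl, hh)), wavelet)


if __name__ == "__main__":
    img = np.random.rand(256, 256).astype(np.float32)
    identity = lambda band: band  # trivial placeholder "models"
    print(staged_wavelet_refinement(img, identity, identity).shape)
```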

    Low-dose CBCT reconstruction via joint non-local total variation denoising and cubic B-spline interpolation

    This study develops an improved Feldkamp-Davis-Kress (FDK) reconstruction algorithm using non-local total variation (NLTV) denoising and a cubic B-spline interpolation-based backprojector to enhance the image quality of low-dose cone-beam computed tomography (CBCT). The NLTV objective function is minimized on all log-transformed projections using steepest gradient descent optimization with adaptive control of the step size to accentuate the difference between real structure and noise. The proposed algorithm was evaluated using a phantom data set acquired with a low-dose protocol at reduced milliampere-seconds (mAs). The combination of NLTV minimization and cubic B-spline interpolation yielded reconstructed images with significantly reduced noise compared to conventional FDK and local total variation with an anisotropic penalty, and artifacts were markedly suppressed. Quantitative analysis of images reconstructed from low-mAs projections showed a contrast-to-noise ratio and spatial resolution comparable to those of images reconstructed from high-mAs projections. The proposed approach produced the lowest RMSE and the highest correlation. These results indicate that the proposed algorithm enables the conventional FDK algorithm to be applied to low-mAs image reconstruction in low-dose CBCT imaging, eliminating the need for more computationally demanding algorithms. The substantial reduction in radiation exposure associated with low-mAs projection acquisition may facilitate wider practical application of daily online CBCT imaging.
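    For orientation only, the following is a minimal sketch of a TV-regularized denoising loop with steepest gradient descent and a crude adaptive step size. It uses the ordinary local TV gradient rather than the non-local TV objective the study minimizes, and the boundary handling, step-size rule, and parameter values are simplifying assumptions, not the paper's scheme.

```python
# Simplified TV-regularized denoising of a single log-transformed projection by
# steepest gradient descent with a basic accept/reject step-size adaptation.
import numpy as np


def tv_value(u, eps=1e-6):
    """Smoothed isotropic TV: sum of sqrt(|grad u|^2 + eps) over all pixels."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sum(np.sqrt(ux**2 + uy**2 + eps))


def tv_gradient(u, eps=1e-6):
    """Gradient of the smoothed TV term, -div(grad u / |grad u|)."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    # Backward-difference divergence (periodic boundary for simplicity).
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div


def denoise_projection(proj, lam=0.1, n_iter=200, step=0.25):
    """Minimize 0.5*||u - proj||^2 + lam*TV(u) by gradient descent."""
    u = proj.astype(np.float64)
    obj = 0.5 * np.sum((u - proj) ** 2) + lam * tv_value(u)
    for _ in range(n_iter):
        grad = (u - proj) + lam * tv_gradient(u)
        u_new = u - step * grad
        obj_new = 0.5 * np.sum((u_new - proj) ** 2) + lam * tv_value(u_new)
        if obj_new < obj:              # accept and cautiously grow the step
            u, obj, step = u_new, obj_new, step * 1.05
        else:                          # reject and shrink the step
            step *= 0.5
    return u
```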

    Autonomous Electron Tomography Reconstruction with Machine Learning

    Modern electron tomography has progressed to higher resolution at lower doses by leveraging compressed sensing methods that minimize total variation (TV). However, these sparsity-emphasizing reconstruction algorithms introduce tunable parameters that strongly influence reconstruction quality. Here, Pareto front analysis shows that high-quality tomograms are reproducibly achieved when TV minimization is heavily weighted; in excess, however, compressed sensing tomography produces overly smoothed 3D reconstructions. Adding momentum to the gradient descent during reconstruction reduces the risk of over-smoothing and better ensures that compressed sensing is well behaved. For simulated data, the tedious process of tomography parameter selection is solved efficiently using Bayesian optimization with Gaussian processes. In combination, Bayesian optimization with momentum-based compressed sensing greatly reduces the required compute time: an 80% reduction was observed for the 3D reconstruction of SrTiO3 nanocubes. Automated parameter selection is necessary for large-scale tomographic simulations that enable 3D characterization of a wider range of inorganic and biological materials.
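    A minimal sketch of the momentum idea, assuming a heavy-ball update on a generic data-fidelity-plus-TV objective; `data_grad` and `tv_grad` are hypothetical callables standing in for the real reconstruction operators, and the default parameters are illustrative only.

```python
# Heavy-ball (momentum) gradient descent on f(x) = data term + tv_weight * TV(x).
import numpy as np


def momentum_descent(x0, data_grad, tv_grad, tv_weight,
                     step=1e-3, momentum=0.9, n_iter=200):
    """Gradient descent with momentum for a TV-weighted reconstruction objective."""
    x = x0.copy()
    velocity = np.zeros_like(x)
    for _ in range(n_iter):
        grad = data_grad(x) + tv_weight * tv_grad(x)
        velocity = momentum * velocity - step * grad   # accumulate momentum
        x = x + velocity                               # heavy-ball update
        np.clip(x, 0.0, None, out=x)                   # keep densities non-negative
    return x
```

    A Gaussian-process optimizer (for example, skopt.gp_minimize from scikit-optimize) could then search tv_weight and step against a reconstruction-quality metric, in the spirit of the automated parameter selection described above.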

    Neural Network Methods for Radiation Detectors and Imaging

    Recent advances in image data processing through machine learning, and especially deep neural networks (DNNs), allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware through data-endowed artificial intelligence. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. Once trained, however, DNNs can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing with lower energy consumption (hundreds of watts or less) and real-time analysis potential. While popular for edge computing, electronics-based hardware accelerators, ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs), are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits motivate next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to accelerate deep learning.
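    As a rough illustration of the offline-training / edge-deployment workflow mentioned in this overview, the snippet below exports a toy PyTorch network to ONNX, a common interchange format consumed by edge accelerators; the network is a made-up placeholder, not a model from the paper.

```python
# Export a small, hypothetical detector-image denoising CNN to ONNX so it can
# be run by an edge inference runtime after offline training.
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    """Toy convolutional network standing in for a detector-image DNN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


model = TinyDenoiser().eval()
dummy = torch.randn(1, 1, 256, 256)  # one single-channel detector frame
torch.onnx.export(model, dummy, "denoiser.onnx",
                  input_names=["frame"], output_names=["denoised"])
```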