
    GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging

    Tomography has made a radical impact on diverse fields, ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite these diverse applications, the core of tomography remains the same: a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. In many scientific applications, however, the number of projections that can be measured is limited by geometric constraints, tolerable radiation dose and/or acquisition speed. It thus becomes an important problem to obtain the best possible reconstruction from a limited number of projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE). By iterating between real and reciprocal space, GENFIRE searches for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and, experimentally, by reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. Equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
    Comment: 18 pages, 6 figures
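    As a rough illustration of the real/reciprocal-space iteration described above (a minimal sketch, not the released GENFIRE implementation), the core loop alternates between enforcing the measured Fourier data and a physical constraint such as non-negativity. The inputs `measured_k` and `known_mask` are hypothetical stand-ins for the Fourier grid assembled from the 2D projections.

```python
import numpy as np

def fourier_iterative_reconstruct(measured_k, known_mask, n_iter=100):
    """Toy real/reciprocal-space iteration; not GENFIRE's actual code.

    measured_k: 3D array of Fourier coefficients assembled from projections.
    known_mask: boolean array marking which Fourier grid points were measured.
    """
    vol = np.zeros(measured_k.shape)            # initial real-space estimate
    for _ in range(n_iter):
        F = np.fft.fftn(vol)
        F[known_mask] = measured_k[known_mask]  # enforce measured Fourier data
        vol = np.real(np.fft.ifftn(F))          # back to real space
        vol[vol < 0] = 0.0                      # physical constraint: density >= 0
    return vol
```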

    Automatic Differentiation for Inverse Problems in X-ray Imaging and Microscopy

    Computational techniques allow breaking the limits of traditional imaging methods, such as time restrictions, limited resolution, and optical flaws. While simple computational methods can be enough for highly controlled microscope setups or for previews, an increased level of complexity is required for advanced setups, acquisition modalities, or cases where uncertainty is high; this need for complex computational methods clashes with rapid design and execution. In all these cases, Automatic Differentiation, one of the subtopics of Artificial Intelligence, may offer a functional solution, but only if a GPU implementation is available. In this paper, we show how a framework built to solve just one optimisation problem can be employed for many different X-ray imaging inverse problems.
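    The pattern the paper builds on can be sketched generically: define a differentiable forward model, compare its output to the measured data, and let automatic differentiation supply the gradients for an optimizer. The PyTorch sketch below is illustrative only; the operator `A`, the data `y`, and all dimensions are made-up placeholders rather than the paper's framework.

```python
import torch

# Stand-in measured data and a stand-in linear forward operator; in a real
# setup this would encode, e.g., free-space propagation or a projection.
y = torch.randn(128)
A = torch.randn(128, 256)

x = torch.zeros(256, requires_grad=True)  # unknown image/object
opt = torch.optim.Adam([x], lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    residual = A @ x - y                  # forward model vs. measured data
    loss = (residual ** 2).sum()          # data-fidelity term
    loss.backward()                       # gradients via automatic differentiation
    opt.step()
```

    Swapping in a different forward model changes only the `residual` line, which is what makes a single optimisation framework reusable across many inverse problems.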

    Compressed Sensing Based Reconstruction Algorithm for X-ray Dose Reduction in Synchrotron Source Micro Computed Tomography

    Synchrotron computed tomography requires a large number of angular projections to reconstruct tomographic images with high resolution for detailed and accurate diagnosis. However, this exposes the specimen to a large amount of x-ray radiation, and it increases scan time and, consequently, the likelihood of involuntary specimen movements. One approach to decreasing the total scan time and radiation dose is to reduce the number of projection views needed to reconstruct the images. However, the aliasing artifacts that appear in the image due to the reduced projection data visibly degrade the image quality. According to compressed sensing theory, a signal can be accurately reconstructed from highly undersampled data by solving an optimization problem, provided that the signal can be sparsely represented in a predefined transform domain. This thesis is therefore mainly concerned with designing compressed sensing-based reconstruction algorithms that suppress aliasing artifacts while preserving spatial resolution in the reconstructed image.

    First, the reduced-view synchrotron computed tomography reconstruction is formulated as a total variation regularized compressed sensing problem. The Douglas-Rachford splitting and randomized Kaczmarz methods are utilized to solve the optimization problem of the compressed sensing formulation.

    In contrast with the first part, where consistent simulated projection data are generated for image reconstruction, reduced-view, inconsistent, real ex-vivo synchrotron absorption-contrast micro computed tomography bone data are used in the second part. A gradient regularized compressed sensing problem is formulated, and the Douglas-Rachford splitting and preconditioned conjugate gradient methods are utilized to solve it. A wavelet image denoising algorithm is used as a post-processing step to attenuate the unwanted staircase artifact generated by the reconstruction algorithm.

    Finally, noisy and highly reduced-view inconsistent real in-vivo synchrotron phase-contrast computed tomography bone data are used for image reconstruction. A combination of the prior image constrained compressed sensing framework and wavelet regularization is formulated, and the Douglas-Rachford splitting and preconditioned conjugate gradient methods are utilized to solve the optimization problem. The prior image constrained compressed sensing framework takes advantage of the prior image to promote the sparsity of the target image; because it may lead to an unwanted staircase artifact when applied to noisy and textured images, wavelet regularization is used to attenuate that artifact.

    Visual and quantitative performance assessments with reduced-view simulated and real computed tomography data from canine prostate tissue, rat forelimb, and femoral cortical bone samples show that the proposed algorithms have fewer artifacts and reconstruction errors than other conventional reconstruction algorithms at the same x-ray dose.
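    The reconstruction problems treated in the thesis share a common generic form (notation mine, not the thesis's): with A the projection operator, b the measured projection data, and λ a regularization weight,

```latex
\min_{x \ge 0} \; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 \;+\; \lambda\,\mathrm{TV}(x),
\qquad \mathrm{TV}(x) = \sum_i \lVert (\nabla x)_i \rVert_2
```

    Splitting methods such as Douglas-Rachford handle this sum of a smooth data term and a non-smooth regularizer by alternating proximal steps on each term, which is why they recur throughout the thesis.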

    PyHST2: an hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities

    We present the PyHST2 code, which is in service at the ESRF for phase-contrast and absorption tomography. This code has been engineered to sustain the high data flow typical of third-generation synchrotron facilities (10 terabytes per experiment) by adopting a distributed and pipelined architecture. Besides a default filtered backprojection reconstruction, the code implements iterative reconstruction techniques with a priori knowledge. The latter are used to improve reconstruction quality or to reduce the required data volume while reaching a given quality goal. The implemented a priori knowledge techniques are based on total variation penalisation and on a recently introduced convex functional based on overlapping patches. We give details of the different methods and their implementations; the code is distributed under a free license. We also provide methods for estimating, in the absence of ground-truth data, the optimal parameter values for the a priori techniques.
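    For orientation, the filtered backprojection step that PyHST2 provides by default can be reproduced in a few lines with scikit-image, used here purely as a stand-in; this is not PyHST2's distributed pipeline, and the `filter_name` keyword assumes a recent scikit-image version.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                    # standard test image
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)            # simulated projections
recon = iradon(sinogram, theta=angles, filter_name='ramp')  # filtered backprojection
```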

    Ultrafast Radiographic Imaging and Tracking: An overview of instruments, methods, data, and applications

    Ultrafast radiographic imaging and tracking (U-RadIT) uses state-of-the-art ionizing particle and light sources to experimentally study sub-nanosecond dynamic processes in physics, chemistry, biology, geology, materials science and other fields. These processes, fundamental to nuclear fusion energy, advanced manufacturing, green transportation and others, often involve one mole or more of atoms, and are thus challenging to compute from the first principles of quantum physics or other forward models. One of the central problems in U-RadIT is to optimize information yield through, e.g., high-luminosity X-ray and particle sources, efficient imaging and tracking detectors, novel methods of data collection, and large-bandwidth online and offline data processing, regulated by the underlying physics, statistics, and computing power. We review and highlight recent progress in (a) detectors; (b) U-RadIT modalities; (c) data and algorithms; and (d) applications. Hardware-centric approaches to U-RadIT optimization are constrained by detector material properties, low signal-to-noise ratio, and the high cost and long development cycles of critical hardware components such as ASICs. Interpretation of experimental data, including comparison with forward models, is frequently hindered by sparse measurements, model and measurement uncertainties, and noise. Alternatively, U-RadIT makes increasing use of data science and machine learning algorithms, including experimental implementations of compressed sensing. Machine learning and artificial intelligence approaches, refined by physics and materials information, may also contribute significantly to data interpretation, uncertainty quantification and U-RadIT optimization.
    Comment: 51 pages, 31 figures; overview of ultrafast radiographic imaging and tracking as part of the ULITIMA 2023 conference, Mar. 13-16, 2023, Menlo Park, CA, US

    Deep Learning for Automated Experimentation in Scanning Transmission Electron Microscopy

    Machine learning (ML) has become critical for post-acquisition data analysis in (scanning) transmission electron microscopy, (S)TEM, imaging and spectroscopy. An emerging trend is the transition to real-time analysis and closed-loop microscope operation. The effective use of ML in electron microscopy now requires the development of strategies for microscopy-centered experiment workflow design and optimization. Here, we discuss the challenges associated with the transition to active ML, including sequential data analysis and out-of-distribution drift effects, the requirements for edge operation, local and cloud data storage, and theory-in-the-loop operation. Specifically, we discuss the relative contributions of human scientists and ML agents in the ideation, orchestration, and execution of experimental workflows, and the need to develop universal hyper languages that can apply across multiple platforms. These considerations will collectively inform the operationalization of ML in next-generation experimentation.
    Comment: Review Article
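    A toy version of the closed-loop operation discussed above might look as follows; the measurement function, the surrogate model, and the uncertainty proxy are all illustrative assumptions, standing in for a real microscope interface and a proper ML agent (e.g., a Gaussian process running on edge hardware).

```python
import numpy as np

def acquire(point):
    """Hypothetical stand-in for a microscope measurement at `point`."""
    return float(np.sin(point)) + np.random.normal(0.0, 0.05)

candidates = np.linspace(0.0, 10.0, 200)  # possible probe positions
X = [float(np.random.choice(candidates))]
y = [acquire(X[0])]

for _ in range(19):
    # Crude uncertainty proxy: distance to the nearest measured point;
    # an active-ML agent would use model-based uncertainty instead.
    dist = np.min(np.abs(candidates[:, None] - np.array(X)[None, :]), axis=1)
    nxt = float(candidates[np.argmax(dist)])
    X.append(nxt)
    y.append(acquire(nxt))                # closed loop: measure the chosen point

surrogate = np.poly1d(np.polyfit(X, y, deg=3))  # refit model on all data
```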

    Neural Network Methods for Radiation Detectors and Imaging

    Recent advances in image data processing through machine learning, and especially deep neural networks (DNNs), allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware through data-endowed artificial intelligence. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. However, once trained, DNNs can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing, with lower energy consumption (hundreds of watts or less) and real-time analysis potential. While popularly used for edge computing, electronic hardware accelerators, ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs), are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration.

    Distributed optimization for nonrigid nano-tomography

    Resolution and reconstruction quality in nano-computed tomography (nano-CT) are limited in part by the stability of the microscope, because the magnitude of mechanical vibrations during scanning becomes comparable to the imaging resolution, and by the ability of the sample to resist beam damage during data acquisition. In such cases there is no incentive to recover the sample state at different time steps, as in time-resolved reconstruction methods; instead, the goal is to retrieve a single reconstruction at the highest possible spatial resolution and without imaging artifacts. Here we propose a joint solver for imaging samples at the nanoscale with projection alignment, unwarping and regularization. Projection data consistency is regulated by dense optical flow estimated by Farneback's algorithm, leading to sharp sample reconstructions with fewer artifacts. Synthetic data tests show the robustness of the method to Poisson and low-frequency background noise. The applicability of the method is demonstrated on two large-scale nano-imaging experimental data sets.
    Comment: Manuscript and supplementary material
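    Farneback's algorithm, which the solver uses to regulate projection consistency, is available in OpenCV; the fragment below only illustrates estimating a dense flow field between two neighbouring projections and warping one onto the other. The random arrays stand in for real projection images, and the parameter values are arbitrary, not those of the paper.

```python
import numpy as np
import cv2

# Stand-ins for two neighbouring projections (8-bit grayscale).
proj_a = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
proj_b = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

# Dense optical flow via Farneback's algorithm.
flow = cv2.calcOpticalFlowFarneback(
    proj_a, proj_b, None,
    pyr_scale=0.5, levels=4, winsize=32,
    iterations=5, poly_n=7, poly_sigma=1.5, flags=0)

# Warp proj_a onto proj_b's coordinates using the estimated flow field.
h, w = proj_a.shape
gx, gy = np.meshgrid(np.arange(w), np.arange(h))
map_x = (gx + flow[..., 0]).astype(np.float32)
map_y = (gy + flow[..., 1]).astype(np.float32)
warped = cv2.remap(proj_a, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```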

    Efficient sampling strategies for x-ray micro computed tomography with an intensity-modulated beam

    The term "cycloidal CT" refers to a family of efficient sampling strategies that can be applied to x-ray micro-computed tomography (CT) systems which operate with an intensity-modulated beam. Such a beam can be employed to provide access to a phase contrast channel and high spatial resolutions (a few um). Phase contrast can offer better image contrast of samples which have traditionally been "invisible” to x-rays due to their weak attenuation, and high resolutions help view crucial details in samples. Cycloidal sampling strategies provide images more quickly than the gold standard in the field ("dithering”). I conceived and compared four practical implementation strategies for cycloidal CT, three of which are "flyscans” (the sample moves continuously). Flyscans acquire images of similar resolution to dithering with no overheads, reducing acquisition time to exposure time. I also developed a "knife-edge” position tracking method which tracks subpixel motions of the sample stage. This information can be used to facilitate, automate, and improve the reconstruction of cycloidal data. I analysed the effects of different levels of dose on the signal-to-noise ratio (SNR) of an image acquired with cycloidal CT. The results show that cycloidal images yield the same SNR as dithered images with less dose, although a more extensive study is required. Finally, I explored the potential of using cycloidal CT for intraoperative specimen imaging and tissue engineering. My results are encouraging for tissue engineering; for intraoperative imaging, the cycloidal images did not show comparable resolution to the dithered images, although that is possibly linked to issues with the dataset. Overall, my work has provided a benchmark for the implementation and application of cycloidal CT for the first time. Besides a summary of my research, this thesis is meant to be a comprehensive guide for facilitating uptake of cycloidal CT within the scientific community and beyond