10 research outputs found

    Comparison of different image reconstruction algorithms for Digital Breast Tomosynthesis and assessment of their potential to reduce radiation dose

    Get PDF
    Master's thesis, Engineering Physics, 2022, Universidade de Lisboa, Faculdade de Ciências.
    Digital Breast Tomosynthesis is a three-dimensional medical imaging technique that provides sectional views of the breast. Obtaining multiple slices of the breast is an advantage over conventional mammography examination because it increases the potential for breast cancer detectability. Conventional mammography, despite being a screening success, has limited specificity and sensitivity and high recall rates owing to the overlapping of tissues. Although this new technique promises better diagnostic results, its acquisition methods and image reconstruction algorithms are still under research. Several articles suggest the use of analytic algorithms; however, more recent articles highlight the potential of iterative algorithms to increase image quality compared with analytic ones. The scope of this dissertation was to test the hypothesis that iterative algorithms can produce higher-quality images from acquisitions at lower doses than analytic algorithms. In a first stage, the open-source Tomographic Iterative GPU-based Reconstruction (TIGRE) toolbox for fast and accurate 3D X-ray image reconstruction was used to reconstruct images acquired with an acrylic phantom. The algorithms used from the toolbox were the Feldkamp, Davis, and Kress algorithm, the Simultaneous Algebraic Reconstruction Technique, and the Maximum Likelihood Expectation Maximization algorithm. In a second and final stage, the possibility of further reducing the radiation dose using image post-processing tools was evaluated: a Total Variation Minimization filter was applied to the images reconstructed with the TIGRE algorithm that provided the best image quality, and these were then compared to the images from the commercial unit used for the acquisitions. Based on image quality parameters, the Maximum Likelihood Expectation Maximization algorithm performed best of the three at lower radiation doses, especially with the filter. In sum, the results showed the potential of this algorithm to obtain images of adequate quality at low doses.
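    A hedged sketch of the reconstruction-and-filtering workflow described above, using the Python TIGRE toolbox. The geometry, acquisition arc, and the availability of `algs.mlem` in the installed TIGRE release are assumptions, not the thesis's actual settings; scikit-image's Chambolle TV denoiser stands in for the Total Variation Minimization post-filter.

```python
# Sketch only: geometry, arc, and algs.mlem availability are assumptions;
# denoise_tv_chambolle is a stand-in for the thesis's TV Minimization filter.
import numpy as np
import tigre
import tigre.algorithms as algs
from skimage.restoration import denoise_tv_chambolle

geo = tigre.geometry(mode="cone", default=True)        # default cone-beam geometry
n_angles = 15
angles = np.linspace(-0.13, 0.13, n_angles)            # narrow, DBT-like arc (radians, assumed)
projections = np.random.rand(n_angles, geo.nDetector[0], geo.nDetector[1]).astype(np.float32)  # replace with measured data

rec_fdk = algs.fdk(projections, geo, angles)                 # analytic reconstruction
rec_sart = algs.sart(projections, geo, angles, niter=20)     # iterative, algebraic
rec_mlem = algs.mlem(projections, geo, angles, niter=50)     # iterative, statistical

rec_filtered = denoise_tv_chambolle(rec_mlem, weight=0.05)   # TV post-processing step
```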

    On Krylov Methods for Large Scale CBCT Reconstruction

    Full text link
    Krylov subspace methods are a powerful family of iterative solvers for linear systems of equations, which are commonly used for inverse problems due to their intrinsic regularization properties. Moreover, these methods are naturally suited to solving large-scale problems, as they only require matrix-vector products with the system matrix (and its adjoint) to compute approximate solutions, and they display very fast convergence. Even though this class of methods has been widely researched and studied in the numerical linear algebra community, its use in applied medical physics and applied engineering is still very limited, e.g., in realistic large-scale Computed Tomography (CT) problems, and more specifically in Cone Beam CT (CBCT). This work attempts to bridge this gap by providing a general framework for the most relevant Krylov subspace methods applied to 3D CT problems, including the most well-known Krylov solvers for non-square systems (CGLS, LSQR, LSMR), possibly in combination with Tikhonov regularization, and methods that incorporate total variation (TV) regularization. This is provided within an open-source framework: the Tomographic Iterative GPU-based Reconstruction (TIGRE) toolbox, with the idea of promoting accessibility and reproducibility of the results for the algorithms presented. Finally, numerical results in synthetic and real-world 3D CT applications (medical CBCT and µ-CT datasets) are provided to showcase and compare the different Krylov subspace methods presented in the paper, as well as their suitability for different kinds of problems.
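    A minimal, runnable sketch of the matrix-free property highlighted above: CGLS touches the system matrix only through products with A and its adjoint, so the same loop could be driven by a GPU projector/backprojector pair (such as TIGRE's) instead of the small dense stand-in matrix used here.

```python
# Plain-NumPy CGLS; A_mv / AT_mv are the only points of contact with the operator.
import numpy as np

def cgls(A_mv, AT_mv, b, x0, n_iter):
    """Solve min_x ||A x - b||_2 using only products A·v (A_mv) and Aᵀ·v (AT_mv)."""
    x = x0.copy()
    r = b - A_mv(x)
    s = AT_mv(r)
    p = s.copy()
    gamma = np.dot(s, s)
    for _ in range(n_iter):
        q = A_mv(p)
        alpha = gamma / np.dot(q, q)
        x += alpha * p
        r -= alpha * q
        s = AT_mv(r)
        gamma_new = np.dot(s, s)
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))      # stand-in "system matrix"
x_true = rng.standard_normal(50)
b = A @ x_true                          # noiseless "projections"
x_rec = cgls(lambda v: A @ v, lambda v: A.T @ v, b, np.zeros(50), n_iter=50)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # ~0 for this consistent system
```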

    Tomosipo: fast, flexible, and convenient 3D tomography for complex scanning geometries in Python

    Get PDF
    Tomography is a powerful tool for reconstructing the interior of an object from a series of projection images. Typically, the source and detector traverse a standard path (e.g., circular or helical). Recently, various techniques have emerged that use more complex acquisition geometries. Current software packages either require significant manual work or lack the flexibility to handle such geometries. Therefore, software is needed that can concisely represent, visualize, and compute reconstructions of complex acquisition geometries. We present tomosipo, a Python package that provides these capabilities in a concise and intuitive way. Case studies demonstrate the power and flexibility of tomosipo.
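    A hedged sketch of tomosipo's operator-style API for a simple circular geometry (keyword names and the `domain_shape` attribute are assumed from the tomosipo documentation; running it requires the ASTRA toolbox and a CUDA-capable GPU). Complex trajectories are built by composing geometric transforms with geometries, which is not shown here.

```python
# Assumed API usage, not an official example; needs ASTRA + CUDA to run.
import numpy as np
import tomosipo as ts

vg = ts.volume(shape=(64, 64, 64))            # 3D volume geometry
pg = ts.parallel(angles=48, shape=(64, 64))   # simple circular parallel-beam geometry
A = ts.operator(vg, pg)                       # tomographic projection as a linear operator

x = np.zeros(A.domain_shape, dtype=np.float32)
x[24:40, 24:40, 24:40] = 1.0                  # cube phantom
y = A(x)                                      # forward projection
bp = A.T(y)                                   # adjoint: unfiltered backprojection
```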

    Robust Single-view Cone-beam X-ray Pose Estimation with Neural Tuned Tomography (NeTT) and Masked Neural Radiance Fields (mNeRF)

    Full text link
    Many tasks performed in image-guided, minimally invasive medical procedures can be cast as pose estimation problems, where an X-ray projection is utilized to reach a target in 3D space. Expanding on recent advances in the differentiable rendering of optically reflective materials, we introduce new methods for pose estimation of radiolucent objects using X-ray projections, and we demonstrate the critical role of optimal view synthesis in performing this task. We first develop an algorithm (DiffDRR) that efficiently computes Digitally Reconstructed Radiographs (DRRs) and leverages automatic differentiation within TensorFlow. Pose estimation is performed by iterative gradient descent using a loss function that quantifies the similarity of the DRR synthesized from a randomly initialized pose and the true fluoroscopic image at the target pose. We propose two novel methods for high-fidelity view synthesis, Neural Tuned Tomography (NeTT) and masked Neural Radiance Fields (mNeRF). Both methods rely on classic Cone-Beam Computerized Tomography (CBCT); NeTT directly optimizes the CBCT densities, while the non-zero values of mNeRF are constrained by a 3D mask of the anatomic region segmented from CBCT. We demonstrate that both NeTT and mNeRF distinctly improve pose estimation within our framework. Defining a successful pose estimate as a 3D angle error of less than 3 degrees, we find that NeTT and mNeRF achieve similar results, both with overall success rates above 93%. However, the computational cost of NeTT is significantly lower than that of mNeRF in both training and pose estimation. Furthermore, we show that a NeTT trained for a single subject can generalize to synthesize high-fidelity DRRs and ensure robust pose estimation for all other subjects. Therefore, we suggest that NeTT is an attractive option for robust pose estimation using fluoroscopic projections.
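    A toy, runnable sketch of the gradient-descent loop described above. The "renderer" here is a hypothetical differentiable function of two pose parameters, not the paper's DiffDRR, NeTT, or mNeRF; it only illustrates synthesizing a view from the current pose, scoring its similarity to the target image, and backpropagating through the renderer to update the pose.

```python
# Toy pose search by gradient descent through a differentiable "renderer".
import torch

H = W = 64
ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")

def render(pose):
    # Toy view synthesis: a Gaussian blob whose position depends on the pose.
    cx = 32.0 + 20.0 * torch.sin(pose[0])
    cy = 32.0 + 20.0 * torch.cos(pose[1])
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 50.0)

target = render(torch.tensor([0.6, -0.3])).detach()   # "true fluoroscopic image"

pose = torch.zeros(2, requires_grad=True)             # initial pose guess
opt = torch.optim.Adam([pose], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = torch.mean((render(pose) - target) ** 2)   # image-similarity loss
    loss.backward()                                    # gradient flows through the renderer
    opt.step()
print(pose.detach())
```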

    Deep learning for tomographic reconstruction with limited data

    Get PDF
    Tomography is a powerful technique to non-destructively determine the interior structure of an object. Usually, a series of projection images (e.g., X-ray images) is acquired from a range of different positions. From these projection images, a reconstruction of the object's interior is computed. Many advanced applications require fast acquisition, effectively limiting the number of projection images and imposing a level of noise on these images. These limitations result in artifacts (deficiencies) in the reconstructed images. Recently, deep neural networks have emerged as a powerful technique to remove these limited-data artifacts from reconstructed images, often outperforming conventional state-of-the-art techniques. To perform this task, the networks are typically trained on a dataset of paired low-quality and high-quality images of similar objects, which is a major obstacle to their use in many practical applications. In this thesis, we explore techniques to employ deep learning in advanced experiments where measuring additional objects is not possible. Financial support was provided by the Netherlands Organisation for Scientific Research (NWO), programme 639.073.506.
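    A minimal sketch of the paired supervised training setup mentioned above. The tiny CNN and the random tensors standing in for a dataset of paired low-/high-quality reconstructions are placeholders, not the thesis's models or data.

```python
# Toy supervised artifact-removal training loop on placeholder paired data.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

low = torch.rand(8, 1, 64, 64)    # low-quality (e.g. few-projection) reconstructions
high = torch.rand(8, 1, 64, 64)   # matching high-quality reconstructions

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(net(low), high)   # learn the low- to high-quality mapping
    loss.backward()
    opt.step()
```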

    Efficient Computing for Three-Dimensional Quantitative Phase Imaging

    Get PDF
    Quantitative Phase Imaging (QPI) is a powerful imaging technique for measuring the refractive index distribution of transparent objects such as biological cells and optical fibers. The quantitative, non-invasive approach of QPI provides preeminent advantages in biomedical applications and the characterization of optical fibers. Tomographic Deconvolution Phase Microscopy (TDPM) is a promising 3D QPI method that combines diffraction tomography, deconvolution, and through-focal scanning with object rotation to achieve isotropic spatial resolution. However, due to the large data size, 3D TDPM has the drawback of requiring extensive computational power and time. To overcome this shortcoming, CPU/GPU parallel computing and application-specific embedded systems can be utilized. In this research, OpenMP Tasking and CUDA Streaming with Unified Memory (TSUM) is proposed to speed up the tomographic angle computations in 3D TDPM. TSUM leverages CPU multithreading and GPU computing on a System on a Chip (SoC) with unified memory. Unified memory eliminates data transfer between CPU and GPU memories, which is a major bottleneck in GPU computing. This research presents a speedup of 3D TDPM with TSUM for a large dataset and demonstrates the potential of TSUM in realizing real-time 3D TDPM.
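    A small Python illustration of the unified-memory idea, using Numba's `cuda.managed_array` (the thesis's actual TSUM implementation is OpenMP/CUDA and is not reproduced here): a managed allocation is addressable from both host and device, so no explicit copies surround the kernel launch.

```python
# Unified (managed) memory illustration with Numba; requires a CUDA-capable GPU.
import numpy as np
from numba import cuda

buf = cuda.managed_array(1024, dtype=np.float32)   # visible to both CPU and GPU
buf[:] = np.arange(1024, dtype=np.float32)         # host write, no explicit upload

@cuda.jit
def scale(a, s):
    i = cuda.grid(1)
    if i < a.size:
        a[i] *= s

scale[(buf.size + 255) // 256, 256](buf, 2.0)      # GPU kernel works on the same buffer
cuda.synchronize()
print(buf[:4])                                     # host read, no explicit download
```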

    Arbitrarily large tomography with iterative algorithms on multiple GPUs using the TIGRE toolbox

    No full text
    3D tomographic imaging requires the computation of solutions to very large inverse problems. In many applications, iterative algorithms provide superior results; however, memory limits in available computing hardware restrict the size of problems that can be solved. For this reason, iterative methods are not normally used to reconstruct typical data sets acquired with lab-based CT systems. We thus use state-of-the-art techniques such as dual buffering to develop an efficient strategy to compute the required operations for iterative reconstruction. This allows the iterative reconstruction of volumetric images of arbitrary size using any number of GPUs, each with arbitrarily small memory. Strategies for both the forward and backprojection operators are presented, along with two regularization approaches that are easily generalized to other projection types or regularizers. The proposed improvement also accelerates reconstruction of smaller images on single or multiple GPU systems, providing faster code for time-critical applications. The resulting algorithm has been added to the TIGRE toolbox, a repository of iterative reconstruction algorithms for general CT, but this memory-saving and problem-splitting strategy can be easily adapted for use with other GPU-based tomographic reconstruction code.
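    A tiny, runnable sketch (dense stand-in matrix, CPU only) of the problem-splitting principle behind such strategies: the backprojection is accumulated over chunks of the projection data, so only one chunk needs to be resident in device memory at a time, and the chunked result matches the monolithic one.

```python
# Chunked accumulation of a backprojection-style product A^T y.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_rays = 1_000, 5_000
A = rng.random((n_rays, n_vox))      # stand-in for the CT system matrix
y = rng.random(n_rays)               # stand-in for measured projections

def backproject_chunked(A, y, chunk):
    x = np.zeros(A.shape[1])
    for start in range(0, len(y), chunk):
        sl = slice(start, start + chunk)
        x += A[sl].T @ y[sl]         # partial backprojection of one chunk
    return x

assert np.allclose(backproject_chunked(A, y, chunk=512), A.T @ y)
```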