
    A constrained, total-variation minimization algorithm for low-intensity X-ray CT

    Full text link
    Purpose: We develop an iterative image-reconstruction algorithm for low-intensity computed tomography (CT) projection data, based on constrained total-variation (TV) minimization. The algorithm design focuses on recovering structure on length scales comparable to a detector-bin width. Method: Recovering resolution on the scale of a detector bin requires that the pixel size be much smaller than the bin width. The resulting image array contains many more pixels than there are data, and this undersampling is overcome with a combination of Fourier upsampling of each projection and the use of constrained TV minimization, as suggested by compressive sensing. The presented pseudo-code for solving the constrained TV-minimization problem is designed to yield an accurate solution within 100 iterations. Results: The proposed image-reconstruction algorithm is applied to a low-intensity scan of a rabbit with a thin wire, to test resolution, and is compared with filtered back-projection (FBP). Conclusion: The algorithm may have some advantage over FBP in that the resulting noise level is lowered at equivalent contrast levels of the wire. Comment: This article has been submitted to "Medical Physics" on 9/13/201
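The alternation this abstract describes, a data-consistency step interleaved with a TV-descent step, can be sketched on a toy undersampled linear system. This is a generic POCS-style illustration with made-up operators and sizes, not the paper's actual pseudo-code:

```python
import numpy as np

def art_step(x, A, b, relax=1.0):
    """One Kaczmarz/ART sweep pushing x toward consistency with A @ x = b."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x

def tv(x, shape):
    """Anisotropic total variation of the image stored in the flat vector x."""
    img = x.reshape(shape)
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def tv_subgrad(x, shape):
    """A subgradient of the anisotropic TV, used for the descent step."""
    img = x.reshape(shape)
    g = np.zeros_like(img)
    sx = np.sign(np.diff(img, axis=0))
    sy = np.sign(np.diff(img, axis=1))
    g[1:, :] += sx
    g[:-1, :] -= sx
    g[:, 1:] += sy
    g[:, :-1] -= sy
    return g.ravel()

# Toy problem: 40 random "projection" measurements of an 8x8 piecewise-constant
# image (64 unknowns), i.e. the undersampled regime the paper targets.
rng = np.random.default_rng(0)
shape = (8, 8)
truth = np.zeros(shape)
truth[2:6, 2:6] = 1.0
A = rng.standard_normal((40, truth.size))
b = A @ truth.ravel()

x = np.zeros(truth.size)
for _ in range(200):
    g = tv_subgrad(x, shape)
    n = np.linalg.norm(g)
    if n > 0:
        x -= 0.02 * g / n      # TV-descent step
    x = art_step(x, A, b)      # then re-enforce data consistency
```

The interleaving order (TV step, then data step) is one of several reasonable choices; the paper's algorithm fixes these details precisely to guarantee accuracy within 100 iterations.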

    New pixellation scheme for CT algebraic reconstruction to exploit matrix symmetries

    Full text link
    In this article we propose a new pixellation scheme which makes it possible to speed up reconstruction. The proposal consists of splitting the field of view of the scanner into as many circular sectors as there are rotation positions of the detector. The sectors are pixellated using circular pixels whose size is always smaller than the required resolution. The geometry of the pixels and their arrangement on circular sectors make it possible to compute the entire matrix from only one position of the scanner, so the size of the stored matrix decreases by a factor equal to the number of rotation positions. This significant reduction of the system matrix allows algebraic methods to be applied within a reasonable reconstruction time and speeds up matrix generation. The new model is studied by means of analytical CT simulations, which are reconstructed using the Maximum Likelihood Expectation Maximization algorithm for transmission tomography, and is compared to the cartesian pixellation model. To this end, two different grids of pixels were developed for the same scanner geometry: one that employs circular pixels within a cartesian grid, and another that divides the field of view into a polar grid composed of identical sectors, also with circular pixels. The results show that the polar matrix is built in a few seconds while the cartesian one needs several hours, that the polar matrix is significantly smaller than the cartesian one, and that the reconstruction time per iteration using the same iterative method is lower in the polar pixel model than in the square pixel model. Several figures of merit have been computed in order to compare the original phantom with the reconstructed images. We conclude that both reconstructions have sufficient quality, but the polar pixel model is more efficient than the square pixel model. © 2008 Elsevier Ltd. All rights reserved.
    Mora Mora, MTC.; Rodríguez Álvarez, MJ.; Romero Bauset, JV. (2008). New pixellation scheme for CT algebraic reconstruction to exploit matrix symmetries. Computers and Mathematics with Applications. 56(3):717-726. doi:10.1016/j.camwa.2008.02.019
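The symmetry this scheme exploits, that rotating the detector by one sector merely permutes the pixel indices, can be illustrated with a toy expansion routine. The function name, sizes, and random weights below are all hypothetical; only the column-permutation idea comes from the abstract:

```python
import numpy as np

def full_matrix_from_one_view(A0, n_views, px_per_sector):
    """Expand a single stored detector position into the full system matrix.

    A0 has shape (rays_per_view, n_views * px_per_sector): the weights of one
    detector position over all pixels, with pixels grouped by circular sector.
    Rotating the detector by one sector is the same as cyclically shifting the
    sector index of every pixel, so each further view is just a column
    permutation of A0 and never has to be recomputed.
    """
    rays, n_px = A0.shape
    blocks = A0.reshape(rays, n_views, px_per_sector)
    views = [np.roll(blocks, shift=k, axis=1).reshape(rays, n_px)
             for k in range(n_views)]
    return np.vstack(views)

# Toy check: 3 rays per view, 4 detector positions, 5 pixels per sector.
rng = np.random.default_rng(1)
A0 = rng.random((3, 4 * 5))
A = full_matrix_from_one_view(A0, n_views=4, px_per_sector=5)
```

In practice one would not materialize the full matrix at all: the stored block A0 plus the shift rule is enough to apply the matrix on the fly, which is where the memory and generation-time savings come from.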

    Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods

    Get PDF
    We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods, which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
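The core idea, fitting a 1-D reconstruction filter so that FBP mimics the output of a slower algebraic method, can be sketched as a least-squares problem, since FBP is linear in the filter coefficients. The backprojector and data here are crude toy stand-ins, not the authors' operators:

```python
import numpy as np

def fit_fbp_filter(backproject, sino, x_target, filt_len):
    """Least-squares fit of a 1-D convolution filter h so that backprojecting
    the filtered sinogram reproduces a target (e.g. algebraic) reconstruction.
    `backproject` must be a linear map from a sinogram to an image vector.
    """
    G_cols = []
    for j in range(filt_len):
        e = np.zeros(filt_len)
        e[j] = 1.0
        # Column j: reconstruction obtained with a unit impulse filter at tap j.
        filt = np.apply_along_axis(lambda r: np.convolve(r, e, mode="same"), 1, sino)
        G_cols.append(backproject(filt))
    G = np.stack(G_cols, axis=1)   # reconstruction is linear in h: x = G @ h
    h, *_ = np.linalg.lstsq(G, x_target, rcond=None)
    return h, G

# Toy setup: a deliberately crude linear "backprojector" and a random target,
# standing in for the algebraic reconstruction the filter is trained to mimic.
rng = np.random.default_rng(2)
sino = rng.standard_normal((4, 8))            # 4 angles, 8 detector bins
backproject = lambda fs: fs.mean(axis=0)      # linear toy backprojection
x_target = rng.standard_normal(8)
h, G = fit_fbp_filter(backproject, sino, x_target, filt_len=8)
```

Once h is fitted on the blueprint, reconstructing new data costs only one filtered backprojection, which is where the FBP-level speed comes from.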

    Inversion of the star transform

    Full text link
    We define the star transform as a generalization of the broken ray transform introduced by us in previous work. The advantages of using the star transform include the possibility to reconstruct the absorption and the scattering coefficients of the medium separately and simultaneously (from the same data) and the possibility to utilize scattered radiation which, in the case of conventional X-ray tomography, is discarded. In this paper, we derive the star transform from physical principles, discuss its mathematical properties and analyze the numerical stability of inversion. In particular, it is shown that stable inversion of the star transform can be obtained only for configurations involving an odd number of rays. Several computationally efficient inversion algorithms are derived and tested numerically. Comment: Accepted to Inverse Problems in this form
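A discrete version of the forward star transform, a weighted sum of divergent ray integrals sharing one vertex, can be written down in a few lines. The lattice directions and weights below are illustrative only, and this sketch covers the forward map, not the inversion the paper analyzes:

```python
import numpy as np

def ray_sum(img, v, u):
    """Approximate divergent ray transform: sum of image samples along the
    lattice ray starting at pixel v in integer direction u, e.g. (0, 1)."""
    h, w = img.shape
    r, c = v
    total = 0.0
    while 0 <= r < h and 0 <= c < w:
        total += img[r, c]
        r += u[0]
        c += u[1]
    return total

def star_transform(img, v, directions, weights):
    """Weighted sum of ray transforms sharing the common vertex v.
    Note the vertex pixel is counted once per ray in this crude discretization."""
    return sum(wt * ray_sum(img, v, u) for wt, u in zip(weights, directions))

# Three rays from the center of a 3x3 image (an odd-ray configuration,
# the kind the paper shows admits stable inversion).
img = np.arange(9.0).reshape(3, 3)
val = star_transform(img, (1, 1), [(0, 1), (1, 0), (0, -1)], [1.0, 1.0, 1.0])
```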

    Application of constrained optimisation techniques in electrical impedance tomography

    Get PDF
    A constrained optimisation technique is described for the reconstruction of temporal resistivity images. The approach solves the inverse problem by optimising a cost function under constraints, in the form of normalised boundary potentials. Mathematical models have been developed for two different data collection methods for the chosen criterion. Both of these models express the reconstructed image in terms of one-dimensional (1-D) Lagrange multiplier functions, and the reconstruction problem becomes one of estimating these 1-D functions from the normalised boundary potentials. The models are based on a cost criterion of minimising the variance between the reconstructed resistivity distribution and the true resistivity distribution. The methods presented in this research extend the algorithms previously developed for X-ray systems. Computational efficiency is enhanced by exploiting the structure of the associated system matrices. This structure was preserved in the Electrical Impedance Tomography (EIT) implementations by applying a weighting, due to the non-linear current distribution, during the backprojection of the Lagrange multiplier functions. In order to obtain the best possible reconstruction it is important to consider the effects of noise in the boundary data. This is achieved by using a fast algorithm which matches the statistics of the error in the approximate inverse of the associated system matrix with the statistics of the noise error in the boundary data, yielding the optimum solution with the available boundary data. Novel approaches have been developed to produce the Lagrange multiplier functions. Two alternative methods are given for the design of VLSI implementations of hardware accelerators to improve computational efficiency. These accelerators are designed to implement parallel geometries and are modelled using a verification description language to assess their performance capabilities.
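The general pattern described here, minimising a quadratic cost subject to boundary-potential constraints via Lagrange multipliers, can be sketched as one dense KKT solve on a linearised problem. The function `constrained_update` and the sensitivity matrix `J` are assumptions for illustration, not the thesis's actual models:

```python
import numpy as np

def constrained_update(x_prior, J, b):
    """Minimise ||x - x_prior||^2 subject to the linearised data fit J @ x = b.

    Solved via the KKT system
        [ I   J^T ] [ x   ]   [ x_prior ]
        [ J    0  ] [ lam ] = [ b       ]
    where `lam` holds the Lagrange multipliers determined by the boundary data.
    """
    m, n = J.shape
    K = np.block([[np.eye(n), J.T],
                  [J, np.zeros((m, m))]])
    rhs = np.concatenate([x_prior, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Toy linearised problem: 3 boundary measurements, 6 resistivity pixels.
rng = np.random.default_rng(3)
J = rng.standard_normal((3, 6))
x_prior = rng.standard_normal(6)
b = rng.standard_normal(3)
x, lam = constrained_update(x_prior, J, b)
```

The thesis avoids dense solves of this kind by exploiting the structure of the system matrices; the sketch only shows where the multipliers enter the reconstruction.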

    Quality bounds for binary tomography with arbitrary projection matrices

    Get PDF
    Binary tomography deals with the problem of reconstructing a binary image from a set of its projections. The problem of finding binary solutions of underdetermined linear systems is, in general, very difficult and many such solutions may exist. In a previous paper we developed error bounds on differences between solutions of binary tomography problems restricted to projection models where the corresponding matrix has constant column sums. In this paper, we present a series of computable bounds that can be used with any projection model. In fact, th

    Network Flow Algorithms for Discrete Tomography

    Get PDF
    Tomography is a powerful technique to obtain images of the interior of an object in a nondestructive way. First, a series of projection images (e.g., X-ray images) is acquired, and subsequently a reconstruction of the interior is computed from the available projection data. The algorithms that are used to compute such reconstructions are known as tomographic reconstruction algorithms. Discrete tomography is concerned with the tomographic reconstruction of images that are known to contain only a few different gray levels. By using this knowledge in the reconstruction algorithm it is often possible to reduce the number of projections required to compute an accurate reconstruction, compared to algorithms that do not use prior knowledge. This thesis deals with new reconstruction algorithms for discrete tomography. In particular, the first five chapters are about reconstruction algorithms based on network flow methods. These algorithms make use of an elegant correspondence between certain types of tomography problems and network flow problems from the field of Operations Research. Chapter 6 deals with a problem that occurs in the application of discrete tomography to the reconstruction of nanocrystals from projections obtained by electron microscopy.
    The research for this thesis has been financially supported by the Netherlands Organisation for Scientific Research (NWO), project 613.000.112.
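The tomography-to-flow correspondence can be made concrete for the simplest case, a binary image with two orthogonal projections: rows act as sources, columns as sinks, and a placed 1 routes one unit of flow. The greedy Gale-Ryser-style routine below is a standard textbook illustration of that correspondence, not an algorithm from the thesis:

```python
import numpy as np

def reconstruct_binary(row_sums, col_sums):
    """Reconstruct a binary matrix with the given row and column sums, or
    return None if none exists. Rows are processed largest-first and their
    ones are routed to the columns with the most remaining capacity, exactly
    as a max-flow on the bipartite row/column network would saturate arcs."""
    m, n = len(row_sums), len(col_sums)
    A = np.zeros((m, n), dtype=int)
    cap = list(col_sums)                       # remaining column capacities
    for i in sorted(range(m), key=lambda i: -row_sums[i]):
        order = sorted(range(n), key=lambda j: -cap[j])
        for j in order[:row_sums[i]]:
            if cap[j] <= 0:
                return None                    # capacities exhausted: infeasible
            A[i, j] = 1
            cap[j] -= 1
    return A if all(c == 0 for c in cap) else None

R = reconstruct_binary([2, 1], [1, 1, 1])      # a feasible two-row example
```

With more projection directions, or with smoothness priors, the flow networks become more elaborate; that is the territory the first five chapters explore.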