96 research outputs found

    PYRO-NN: Python Reconstruction Operators in Neural Networks

    Full text link
    Purpose: Several attempts have recently been made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding CT reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework that is fully integrated into deep learning environments. As a result, many approaches are forced to use workarounds for problems that are mathematically unambiguously solvable. Methods: PYRO-NN is a generalized framework to embed known operators into the prevalent deep learning framework Tensorflow. The current release includes state-of-the-art parallel-, fan- and cone-beam projectors and back-projectors accelerated with CUDA and provided as Tensorflow layers. On top of this, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems. Results: The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows the layers to be used as simply as standard Tensorflow layers. To demonstrate the capabilities of the layers, the framework comes with three baseline experiments: a cone-beam short-scan FDK reconstruction, a CT reconstruction filter learning setup, and a TV-regularized iterative reconstruction. All algorithms and tools are referenced to a scientific publication and are compared to existing non-deep-learning reconstruction frameworks. The framework is available as open-source software at \url{https://github.com/csyben/PYRO-NN}. Conclusions: PYRO-NN integrates with the prevalent deep learning framework Tensorflow and allows end-to-end trainable neural networks to be set up in the medical image reconstruction context. We believe that the framework will be a step towards reproducible research. Comment: V1: Submitted to Medical Physics, 11 pages, 7 figures
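
    A minimal sketch of the known-operator idea described above, written in plain TensorFlow rather than with the PYRO-NN layers themselves (the layer names, toy dense system matrix, and sizes below are illustrative assumptions, not the framework's API): a fixed back-projection operator is embedded as a non-trainable layer behind a learnable per-detector-bin weighting, loosely mirroring the filter-learning baseline.

        # Known-operator sketch in plain TensorFlow (not the PYRO-NN API):
        # a fixed back-projection matrix wrapped as a non-trainable layer,
        # preceded by a learnable per-detector-bin weighting that stands in
        # for a trainable reconstruction filter.
        import numpy as np
        import tensorflow as tf

        n, n_views, n_det = 32, 48, 32                    # toy problem sizes
        rng = np.random.default_rng(0)
        # Hypothetical system matrix A (rays x pixels); in practice this role
        # is played by the framework's CUDA projector layers.
        A = (rng.random((n_views * n_det, n * n)) < 0.02).astype(np.float32)

        class KnownBackprojection(tf.keras.layers.Layer):
            """Applies the fixed operator A^T; no trainable parameters."""
            def __init__(self, system_matrix):
                super().__init__(trainable=False)
                self.A = tf.constant(system_matrix)

            def call(self, sino):                         # (batch, rays)
                return tf.matmul(sino, self.A)            # (batch, pixels)

        class LearnableFilter(tf.keras.layers.Layer):
            """One trainable weight per detector bin, shared over all views."""
            def build(self, input_shape):
                self.w = self.add_weight(shape=(n_det,), initializer="ones")

            def call(self, sino):
                s = tf.reshape(sino, (-1, n_views, n_det)) * self.w
                return tf.reshape(s, (-1, n_views * n_det))

        model = tf.keras.Sequential([LearnableFilter(), KnownBackprojection(A)])
        recon = model(tf.ones((1, n_views * n_det)))      # (1, n*n) image
        print(recon.shape)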

    Modeling the Distance-Dependent Blurring in Transmission Imaging in the Ordered-Subset Transmission (OSTR) Algorithm by Using an Unmatched Projector/Backprojector Pair

    Full text link
    In SPECT, accurate emission reconstruction requires attenuation compensation with high-quality attenuation maps. Resolution loss in transmission maps could cause blurring and artifacts in emission reconstruction. For a transmission system employing parallel-hole collimators and a sheet source, distance-dependent blurring is caused by the non-ideal source and camera collimations and the finite intrinsic resolution of the detector. These effects can be approximately modeled by an incremental-blurring model. To compensate for this blurring in iterative transmission reconstruction, we incorporated the incremental-blurring model in the forward projector of the OSTR algorithm but did not include the blur in the backprojector. To evaluate our approach, we simulated transmission projections of the MCAT phantom using a ray-tracing projector, in which the rays coming out from a source point form a narrow cone. The geometric blurring due to the non-ideal source and camera collimations was simulated by multiplying the counts along each cone-beam ray with a weight calculated from the overall geometric response function (assumed to be a two-dimensional Gaussian), and then summing the weighted counts into projections. The resulting projections were convolved with the intrinsic response (another two-dimensional Gaussian) to simulate the total system blurring of transmission imaging. Poisson noise was then added to the projection data. We also acquired two sets of transmission measurements of an air-filled Data Spectrum Deluxe SPECT phantom on a Prism 2000 scanning-line-source transmission system. We reconstructed the simulations using the OSTR algorithm, with and without modeling of the incremental blur in the projector. The scaling parameter of the penalty prior was optimized in each case by minimizing the root-mean-square error (RMSE). Reconstructions showed that modeling the incremental blur improved the resolution of the attenuation map and the quantitative accuracy. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85829/1/Fessler211.pd
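
    The unmatched projector/back-projector idea can be sketched as follows; this is an illustrative 2D toy (planes times detector bins) with an assumed Gaussian blur increment, not the paper's parallel-hole/sheet-source implementation: the forward projector accumulates blur plane by plane, while the back-projector deliberately omits it.

        # Illustrative incremental-blur forward projector and plain (unmatched)
        # back-projector; 2D toy with an assumed Gaussian blur increment.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def forward_project_incremental_blur(mu, sigma_inc=0.4):
            """Accumulate attenuation planes towards the detector, blurring
            after each plane so that distant planes are blurred the most."""
            proj = np.zeros(mu.shape[1])
            for z in range(mu.shape[0]):       # z = 0 is farthest from detector
                proj = gaussian_filter1d(proj, sigma_inc) + mu[z]
            return proj

        def back_project_unblurred(proj, n_planes):
            """Unmatched back-projector: spread the projection over all planes
            without modelling any blur."""
            return np.tile(proj, (n_planes, 1))

        mu = np.zeros((64, 64))
        mu[20:30, 28:36] = 0.02                            # toy attenuation slab
        p = forward_project_incremental_blur(mu)
        bp = back_project_unblurred(p, mu.shape[0])
        print(p.shape, bp.shape)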

    Easy implementation of advanced tomography algorithms using the ASTRA toolbox with Spot operators

    Get PDF
    Mathematical scripting languages are commonly used to develop new tomographic reconstruction algorithms. For large experimental datasets, high-performance parallel (GPU) implementations are essential, requiring a re-implementation of the algorithm in a language that is closer to the computing hardware. In this paper, we introduce a new Matlab interface to the ASTRA toolbox, a high-performance toolbox for building tomographic reconstruction algorithms. By exposing the ASTRA linear tomography operators through a standard Matlab matrix syntax, existing and new reconstruction algorithms implemented in Matlab can now be applied directly to large experimental datasets. This is achieved by using the Spot toolbox, which wraps external code for linear operations into Matlab objects that can be used as matrices. We provide a series of examples that demonstrate how this Spot operator can be used in combination with existing algorithms implemented in Matlab and how it can be used for rapid development of new algorithms, resulting in direct applicability to large-scale experimental datasets.
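
    A rough Python analogue of the Spot idea (the paper itself uses Matlab and the Spot toolbox): black-box forward and back-projection routines are wrapped as a matrix-like object so that generic solvers written against standard matrix syntax can use them unchanged. The dense toy matrix and the choice of LSQR below are assumptions for illustration only.

        # Python analogue of the Spot idea: wrap black-box projector calls as a
        # matrix-like object so generic solvers can use standard matrix syntax.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, lsqr

        m, n = 24, 16                                  # toy rays and pixels
        A_dense = np.random.default_rng(1).random((m, n))

        def forward(x):                                # stand-in for a GPU projector
            return A_dense @ x

        def backward(y):                               # stand-in for a GPU back-projector
            return A_dense.T @ y

        A = LinearOperator((m, n), matvec=forward, rmatvec=backward)

        x_true = np.random.default_rng(2).random(n)
        b = A @ x_true                                 # matrix syntax, calls forward()
        x_rec = lsqr(A, b, atol=1e-10, btol=1e-10)[0]  # generic LSQR solver
        print(np.linalg.norm(A @ x_rec - b))           # residual of the solution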

    An FPGA-based 3D backprojector

    Get PDF
    The subject of this thesis is a hardware architecture for X-ray computed tomography. The main aim of the work is the development of scalable, high-performance hardware for the reconstruction of a volume from cone-beam projections. A modified Feldkamp cone-beam reconstruction algorithm (the Cylindrical algorithm) was used. The modifications of the original algorithm, parallelization and pipelining of the reconstruction, were formalized. Special attention was paid to the architecture of the memory system and to the schedule of the memory accesses. The developed architecture contains all steps of the reconstruction from cone-beam projections: filtering of the detector data, weighted backprojection, and on-line geometry computations. The architecture was evaluated for a Xilinx Field Programmable Gate Array (FPGA). The simulations showed that the speed-up of the volume reconstruction is about an order of magnitude compared to the currently available PC implementations.
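
    A software reference, not the thesis' hardware description, for the voxel-driven weighted back-projection inner loop that such an architecture pipelines: every voxel is projected onto the detector, the filtered projection is sampled, weighted by the distance factor, and accumulated. The geometry, nearest-neighbour sampling, and all sizes below are simplifying assumptions.

        # Software reference for one view of voxel-driven weighted back-projection
        # (simplified geometry, nearest-neighbour sampling), i.e. the loop a
        # hardware back-projector pipelines across voxels and projections.
        import numpy as np

        def backproject_one_view(vol, filt_proj, beta, R, vox):
            """Accumulate one filtered cone-beam view into vol (in place).
            The flat detector is assumed to pass through the rotation axis."""
            nz, ny, nx = vol.shape
            nv, nu = filt_proj.shape
            xs = (np.arange(nx) - nx / 2 + 0.5) * vox
            ys = (np.arange(ny) - ny / 2 + 0.5) * vox
            zs = (np.arange(nz) - nz / 2 + 0.5) * vox
            X, Y = np.meshgrid(xs, ys, indexing="xy")
            s = X * np.cos(beta) + Y * np.sin(beta)        # towards the source
            t = -X * np.sin(beta) + Y * np.cos(beta)
            w = R ** 2 / (R - s) ** 2                      # FDK distance weight
            u = R * t / (R - s) / vox + nu / 2             # detector column
            ui = np.clip(np.round(u).astype(int), 0, nu - 1)
            for iz, z in enumerate(zs):
                v = R * z / (R - s) / vox + nv / 2         # detector row
                vi = np.clip(np.round(v).astype(int), 0, nv - 1)
                vol[iz] += w * filt_proj[vi, ui]
            return vol

        vol = np.zeros((32, 32, 32))
        filtered = np.ones((48, 48))                       # one filtered view
        backproject_one_view(vol, filtered, beta=0.3, R=100.0, vox=1.0)
        print(vol.mean())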

    Incorporation of System Resolution Compensation (RC) in the Ordered-Subset Transmission (OSTR) Algorithm for Transmission Imaging in SPECT

    Full text link
    In order to reconstruct attenuation maps with improved spatial resolution and quantitative accuracy, we developed an approximate method of incorporating system resolution compensation (RC) in the ordered-subset transmission (OSTR) algorithm for transmission reconstruction. Our method approximately models the blur caused by the finite intrinsic detector resolution and the non-ideal source and detector collimations. We derived the formulation using the optimization transfer principle, as in the derivation of the OSTR algorithm. The formulation includes one forward-blur step and one back-blur step, which do not severely slow down reconstruction. The formulation could be applicable to various transmission geometries, such as point-source, line-source, and sheet-source systems. Through computer simulations of the MCAT phantom and transmission measurements of the air-filled Data Spectrum Deluxe single photon emission computed tomography (SPECT) phantom on a system that employed a cone-beam geometry and a system that employed a scanning-line-source geometry, we showed that incorporation of RC increased spatial resolution and improved the quantitative accuracy of reconstruction. In simulation studies, attenuation maps reconstructed with RC correction improved the quantitative accuracy of emission reconstruction. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86030/1/Fessler42.pd
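
    The role of a single forward-blur step and a single back-blur step per iteration can be illustrated with the gradient of a blurred Poisson transmission model; note that this is a generic gradient sketch with toy operators and an assumed detector-side blur, not the paper's OSTR surrogate update.

        # Gradient of a blurred Poisson transmission model (toy operators); the
        # single forward-blur and single back-blur per evaluation mirror the
        # structure described above, but this is not the OSTR surrogate update.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_rays = 64, 96
        A = (rng.random((n_rays, n_pix)) < 0.05) * 0.5     # toy system matrix
        B = np.eye(n_rays)                                  # toy detector blur
        for i in range(n_rays - 1):
            B[i, i + 1] = B[i + 1, i] = 0.2
        B /= B.sum(axis=1, keepdims=True)
        b = np.full(n_rays, 1e4)                            # blank-scan counts

        def transmission_nll_gradient(mu, y):
            unblurred = b * np.exp(-(A @ mu))               # expected, no blur
            ybar = B @ unblurred                            # forward-blur step
            ratio = y / np.maximum(ybar, 1e-12) - 1.0
            back = B.T @ ratio                              # back-blur step
            return A.T @ (unblurred * back)                 # back-projection

        mu_true = np.abs(rng.normal(0.01, 0.005, n_pix))
        y = rng.poisson(B @ (b * np.exp(-(A @ mu_true))))
        print(transmission_nll_gradient(np.zeros(n_pix), y)[:4])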

    Fourier-Based Forward and Back-Projectors in Iterative Fan-Beam Tomographic Image Reconstruction

    Full text link
    Fourier-based forward and back-projection methods can reduce computation in iterative tomographic image reconstruction. Recently, an optimized nonuniform fast Fourier transform (NUFFT) approach was shown to yield accurate parallel-beam projections. In this paper, we extend the NUFFT approach to describe an O(N² log N) projector/backprojector pair for fan-beam transmission tomography. Simulations and experiments with real CT data show that fan-beam Fourier-based forward and back-projection methods can reduce computation for iterative reconstruction while still providing accuracy comparable to their O(N³) space-based counterparts. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86013/1/Fessler45.pd
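
    The Fourier-slice principle underlying such projectors can be illustrated crudely as follows: take the 2D FFT of the image, interpolate one radial line of the spectrum per view, and inverse-FFT it to obtain that view's projection. The paper replaces the naive bilinear interpolation used here with an optimized min-max NUFFT and extends the idea to fan-beam geometry; everything below is a parallel-beam toy.

        # Crude Fourier-slice projector: 2D FFT, interpolate one radial line per
        # view, inverse 1D FFT; bilinear interpolation stands in for the NUFFT.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def fourier_slice_projection(img, theta):
            n = img.shape[0]                              # square image, even n
            F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))
            k = np.arange(n) - n // 2                     # radial frequencies
            kx = k * np.cos(theta) + n // 2               # spectrum coordinates
            ky = k * np.sin(theta) + n // 2
            line = (map_coordinates(F.real, [ky, kx], order=1)
                    + 1j * map_coordinates(F.imag, [ky, kx], order=1))
            proj = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(line)))
            return proj.real

        img = np.zeros((64, 64))
        img[24:40, 28:36] = 1.0
        p = fourier_slice_projection(img, theta=np.pi / 6)
        print(p.shape, round(p.sum(), 2))                 # ~ total image mass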

    Accelerated iterative image reconstruction for cone-beam computed tomography through Big Data frameworks

    Get PDF
    One of the latest trends in Computed Tomography (CT) is the reduction of the radiation dose delivered to patients through a decrease in the amount of acquired data. This reduction results in artifacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative algorithms to enhance image quality. Most approaches are built around two main operators, backprojection and projection, which are computationally expensive. In this work, we present an implementation of those operators for iterative reconstruction methods exploiting the Big Data paradigm. We define an architecture based on Apache Spark that supports both Graphics Processing Unit (GPU) and CPU-based architectures. The two operators are parallelized using a partitioning scheme based on dividing the volume and on irregular data structures, in order to reduce the communication and computation costs of producing the final images. Our solution accelerates the execution of the two most computationally expensive components with Apache Spark, improving the programming experience when developing new iterative reconstruction algorithms and the maintainability of the source code by raising the level of abstraction for programmers without high-performance computing experience. Through an experimental evaluation, we show that we can obtain results up to 10× faster for projection and 21× faster for backprojection when using a GPU-based cluster compared to a traditional multi-core version. Although a linear speed-up was not reached, the proposed approach can be a good alternative for porting previous medical image reconstruction applications already implemented in C/C++ or even with CUDA or OpenCL programming models. Our solution enables the automatic detection of GPU devices and the execution of CPU and GPU tasks at the same time under the same system, using all the available resources. This work was supported by the NIH, United States under Grant R01-HL-098686 and Grant U01 EB018753, the Spanish Ministerio de Economia y Competitividad (projects TEC2013-47270-R, RTC-2014-3028 and TIN2016-79637-P), the Spanish Ministerio de Educacion (grant FPU14/03875), and the Spanish Ministerio de Ciencia, Innovacion y Universidades (Instituto de Salud Carlos III, project DTS17/00122; Agencia Estatal de Investigacion, project DPI2016-79075-R-AEI/FEDER, UE), co-funded by the European Regional Development Fund (ERDF), "A way of making Europe". The CNIC is supported by the Ministerio de Ciencia, Innovacion y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505). Finally, this research was partially supported by the Madrid regional Government under the grant "Convergencia Big data-Hpc: de los sensores a las Aplicaciones (CABAHLA-CM)". Ref: S2018/TCS-4423
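
    The volume-partitioning idea can be sketched in PySpark as follows; this is an illustrative 2D parallel-beam toy in which each task back-projects every view into one slab of the image, not the paper's cone-beam GPU implementation, and the slab count, toy back-projector, and Spark configuration are all assumptions.

        # PySpark sketch of slab-wise parallel back-projection (2D parallel-beam
        # toy): projections are broadcast once, each task back-projects all views
        # into its slab of rows, and finished slabs are collected and stacked.
        import numpy as np
        from pyspark import SparkContext

        def backproject_slab(rows, projections, angles, n):
            slab = np.zeros((len(rows), n))
            xs = np.arange(n) - n / 2
            for p, a in zip(projections, angles):
                for i, r in enumerate(rows):
                    t = xs * np.cos(a) + (r - n / 2) * np.sin(a)
                    idx = np.clip(np.round(t + n / 2).astype(int), 0, n - 1)
                    slab[i] += p[idx]                      # nearest-neighbour
            return slab

        n, n_views, n_slabs = 128, 90, 8
        angles = np.linspace(0, np.pi, n_views, endpoint=False)
        projections = np.random.default_rng(0).random((n_views, n))

        sc = SparkContext(appName="backprojection-slabs")
        data = sc.broadcast((projections, angles))         # ship data once
        slabs = (sc.parallelize(np.array_split(np.arange(n), n_slabs), n_slabs)
                   .map(lambda rows: backproject_slab(rows, *data.value, n))
                   .collect())
        volume = np.vstack(slabs)                          # full (n, n) image
        print(volume.shape)
        sc.stop()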

    3D Forward and Back-Projection for X-Ray CT Using Separable Footprints

    Full text link
    Iterative methods for 3D image reconstruction have the potential to improve image quality over conventional filtered back projection (FBP) in X-ray computed tomography (CT). However, the computational burden of 3D cone-beam forward and back-projectors is one of the greatest challenges facing practical adoption of iterative methods for X-ray CT. Moreover, projector accuracy is also important for iterative methods. This paper describes two new separable footprint (SF) projector methods that approximate the voxel footprint functions as 2D separable functions. Because of the separability of these footprint functions, calculating their integrals over a detector cell is greatly simplified and can be implemented efficiently. The SF-TR projector uses trapezoid functions in the transaxial direction and rectangular functions in the axial direction, whereas the SF-TT projector uses trapezoid functions in both directions. Simulations and experiments showed that both SF projector methods are more accurate than the distance-driven (DD) projector, which is a current state-of-the-art method in the field. The SF-TT projector is more accurate than the SF-TR projector for rays associated with large cone angles. The SF-TR projector has a computation speed similar to that of the DD projector, and the SF-TT projector is about two times slower. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85876/1/Fessler5.pd
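
    The separable-footprint principle, i.e. approximating a voxel's detector footprint as an amplitude times a transaxial trapezoid times an axial rectangle and integrating each factor over the detector cell in closed form, can be sketched as follows; the vertex positions and amplitude below are toy numbers, whereas in the SF-TR method they are derived from the projected voxel boundaries.

        # Closed-form integration of a separable (trapezoid x rectangle) footprint
        # over one detector cell; vertex positions and amplitude are toy values.

        def trapezoid_cumint(u, t0, t1, t2, t3):
            """Integral of the unit-height trapezoid with vertices t0<=t1<=t2<=t3
            from t0 up to u, in closed form."""
            u = min(max(u, t0), t3)
            area = (min(u, t1) - t0) ** 2 / (2 * (t1 - t0)) if t1 > t0 else 0.0
            if u > t1:
                area += min(u, t2) - t1                    # flat top
            if u > t2 and t3 > t2:
                area += (t3 - t2) / 2 - (t3 - u) ** 2 / (2 * (t3 - t2))
            return area

        def sf_tr_cell_value(amp, trans_verts, ax_lo, ax_hi, s0, s1, t0, t1):
            """SF-TR style detector-cell value: amplitude times a transaxial
            trapezoid integral over [s0, s1] times an axial rectangle integral
            over [t0, t1]; the two 1-D integrals factorize by separability."""
            trans = (trapezoid_cumint(s1, *trans_verts)
                     - trapezoid_cumint(s0, *trans_verts))
            axial = max(0.0, min(t1, ax_hi) - max(t0, ax_lo))
            return amp * trans * axial

        print(sf_tr_cell_value(amp=1.3, trans_verts=(0.0, 0.2, 0.8, 1.0),
                               ax_lo=-0.5, ax_hi=0.5,
                               s0=0.1, s1=0.6, t0=-0.2, t1=0.3))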

    Compressed Sensing Based Reconstruction Algorithm for X-ray Dose Reduction in Synchrotron Source Micro Computed Tomography

    Get PDF
    Synchrotron computed tomography requires a large number of angular projections to reconstruct tomographic images with high resolution for detailed and accurate diagnosis. However, this exposes the specimen to a large amount of x-ray radiation. Furthermore, it increases scan time and, consequently, the likelihood of involuntary specimen movements. One approach for decreasing the total scan time and radiation dose is to reduce the number of projection views needed to reconstruct the images. However, the aliasing artifacts appearing in the image due to the reduced number of projection data visibly degrade the image quality. According to compressed sensing theory, a signal can be accurately reconstructed from highly undersampled data by solving an optimization problem, provided that the signal can be sparsely represented in a predefined transform domain. Therefore, this thesis is mainly concerned with designing compressed sensing-based reconstruction algorithms that suppress aliasing artifacts while preserving spatial resolution in the resulting reconstructed image. First, the reduced-view synchrotron computed tomography reconstruction is formulated as a total variation regularized compressed sensing problem. The Douglas-Rachford splitting and randomized Kaczmarz methods are utilized to solve the optimization problem of the compressed sensing formulation. In contrast with the first part, where consistent simulated projection data are generated for image reconstruction, reduced-view inconsistent real ex-vivo synchrotron absorption-contrast micro computed tomography bone data are used in the second part. A gradient regularized compressed sensing problem is formulated, and the Douglas-Rachford splitting and preconditioned conjugate gradient methods are utilized to solve the optimization problem. A wavelet image denoising algorithm is used as a post-processing step to attenuate the unwanted staircase artifact generated by the reconstruction algorithm. Finally, noisy and highly reduced-view inconsistent real in-vivo synchrotron phase-contrast computed tomography bone data are used for image reconstruction. A combination of the prior image constrained compressed sensing framework and wavelet regularization is formulated, and the Douglas-Rachford splitting and preconditioned conjugate gradient methods are utilized to solve the resulting optimization problem. The prior image constrained compressed sensing framework takes advantage of the prior image to promote the sparsity of the target image. It may lead to an unwanted staircase artifact when applied to noisy and textured images, so wavelet regularization is used to attenuate the unwanted staircase artifact generated by the prior image constrained compressed sensing reconstruction algorithm. The visual and quantitative performance assessments with reduced-view simulated and real computed tomography data from canine prostate tissue, rat forelimb, and femoral cortical bone samples show that the proposed algorithms produce fewer artifacts and smaller reconstruction errors than other conventional reconstruction algorithms at the same x-ray dose.
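
    One of the building blocks named above, the randomized Kaczmarz method, can be sketched in a few lines; this toy version solves a consistent dense system rather than the regularized reduced-view synchrotron problem treated in the thesis.

        # Minimal randomized Kaczmarz sketch on a consistent dense toy system:
        # each step projects the estimate onto the hyperplane of one randomly
        # chosen measurement row, with rows sampled by squared norm.
        import numpy as np

        def randomized_kaczmarz(A, b, n_iters=20000, seed=0):
            rng = np.random.default_rng(seed)
            row_norms = np.einsum("ij,ij->i", A, A)
            probs = row_norms / row_norms.sum()
            x = np.zeros(A.shape[1])
            for _ in range(n_iters):
                i = rng.choice(A.shape[0], p=probs)
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        rng = np.random.default_rng(1)
        A = rng.random((120, 60))                          # toy overdetermined system
        x_true = rng.random(60)
        b = A @ x_true                                     # consistent data
        x = randomized_kaczmarz(A, b)
        print(np.linalg.norm(x - x_true))                  # error after iterations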