
    System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging

    In the past decade, many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, each faces one or more specific problems that prevent it from being employed effectively or efficiently. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). For each modality, system characteristics are analyzed or optimized reconstruction methods are proposed. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be reconstructed more reliably. We observed that the fringes or boundary enhancement introduced by the phase-contrast effect can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency content of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition.
    It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements achieves a stabilized numerical solution of the decomposition problem, overcoming the main disadvantage of the conventional approach, which is extremely sensitive to noise corruption. In the final part, we describe modified filtered backprojection and iterative image reconstruction algorithms developed specifically for TBCT. Special parallelization strategies are designed to facilitate GPU computing, demonstrating the capability of producing high-quality volumetric reconstructions at very high computational speed. For all the investigations mentioned above, both simulation and experimental studies were conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
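    The stabilizing effect of a total-projection-length constraint can be sketched in a simplified monoenergetic two-material model; this is an illustration of the general idea, not the dissertation's actual algorithm, and the attenuation coefficients, thicknesses, and noise values below are invented for the example:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of two basis
# materials at the low- and high-energy spectra; values are illustrative
# and deliberately chosen so the 2x2 system is ill-conditioned.
MU = np.array([[0.40, 0.25],   # low energy:  [material 1, material 2]
               [0.20, 0.15]])  # high energy: [material 1, material 2]

def decompose_unconstrained(p_lo, p_hi):
    """Invert the 2x2 linear model directly (very noise-sensitive)."""
    return np.linalg.solve(MU, np.array([p_lo, p_hi]))

def decompose_constrained(p_lo, p_hi, T):
    """Least-squares decomposition under t1 + t2 = T (total projection
    length). Substituting t2 = T - t1 leaves a 1-D quadratic in t1."""
    a = MU[:, 0] - MU[:, 1]               # coefficient of t1 after substitution
    b = np.array([p_lo, p_hi]) - MU[:, 1] * T
    t1 = float(a @ b) / float(a @ a)
    return np.array([t1, T - t1])

# Slightly noisy measurements for true thicknesses (2 cm, 3 cm), T = 5 cm.
t_true = np.array([2.0, 3.0])
p = MU @ t_true + np.array([0.01, -0.01])
t_unc = decompose_unconstrained(*p)       # large error from small noise
t_con = decompose_constrained(*p, T=5.0)  # stays close to the truth
```

    With these numbers the unconstrained inverse amplifies the 1% measurement noise into a thickness error of roughly half a centimetre, while the constrained solution remains within a few hundredths, which is the qualitative behaviour the abstract describes.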

    Recent advances in x-ray cone-beam computed laminography

    X-ray computed tomography is a well-established volume imaging technique used routinely in medical diagnosis, industrial non-destructive testing, and a wide range of scientific fields. Traditionally, computed tomography uses scanning geometries with a single axis of rotation, together with reconstruction algorithms specifically designed for this setup. Recently, however, there has been increasing interest in more complex scanning geometries. These include so-called X-ray computed laminography systems capable of imaging specimens with large lateral dimensions or large aspect ratios, neither of which is well suited to conventional CT scanning procedures. Developments throughout this field have been rapid, including the introduction of novel system trajectories, the application and refinement of various reconstruction methods, and the use of recently developed computational hardware and software techniques to accelerate reconstruction times. Here we examine the advances made in the last several years and consider their impact on the state of the art.

    CUDA accelerated cone‐beam reconstruction

    Cone-Beam Computed Tomography (CBCT) is an imaging method that reconstructs a 3D representation of an object from its 2D X-ray images. It is an important diagnostic tool in the medical field, especially dentistry. However, most 3D reconstruction algorithms are computationally intensive and time-consuming; this limitation constrains the use of CBCT. In recent years, high-end graphics cards, such as those powered by NVIDIA graphics processing units (GPUs), have become able to perform general-purpose computation. Due to the highly parallel nature of 3D reconstruction algorithms, it is possible to implement them on the GPU and reduce processing time to a practical level. Two of the most popular 3D cone-beam reconstruction algorithms are the Feldkamp-Davis-Kress (FDK) algorithm and the Algebraic Reconstruction Technique (ART). FDK reconstructs 3D images quickly, but its image quality is lower than that of ART; ART, however, requires significantly more computation. Material ART is a recently developed algorithm that uses beam-hardening correction to eliminate artifacts. In this thesis, these three algorithms were implemented on NVIDIA's CUDA platform. The CUDA-based algorithms were tested on three different graphics cards, using phantom and real data. The test results show significant speedup compared to the CPU software implementation, sufficient to allow a moderately priced personal computer with an NVIDIA graphics card to process CBCT images in real time.
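    As a minimal sketch of the ART family mentioned above (not the thesis's CUDA implementation), the classic Kaczmarz-style update projects the current estimate onto one ray equation at a time; the 2 × 2 "image" and four-ray geometry below are invented for the example:

```python
import numpy as np

def art(A, b, n_iters=50, lam=1.0):
    """Kaczmarz-style ART: sweep the rays, projecting the current
    estimate onto each ray's hyperplane a_i . x = b_i.
    lam is a relaxation factor (1.0 = full projection)."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy 2x2 "image" probed by four rays (its two rows and two columns),
# a stand-in for the much larger cone-beam system matrix.
A = np.array([[1., 1., 0., 0.],   # row sums
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],   # column sums
              [0., 1., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
x_rec = art(A, A @ x_true)
```

    The GPU speedups reported in the thesis come from the fact that the forward projection `A[i] @ x` and the voxel update are embarrassingly parallel across voxels, so each ray's correction can be distributed over many threads.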

    Iterative Reconstruction of Cone-Beam Micro-CT Data

    The use of x-ray computed tomography (CT) scanners has become widespread in both clinical and preclinical contexts. CT scanners can be used to noninvasively test for anatomical anomalies as well as to diagnose and monitor disease progression. However, the data acquired by a CT scanner must be reconstructed prior to use and interpretation. A reconstruction algorithm processes the data and outputs a three-dimensional image representing the x-ray attenuation properties of the scanned object. The algorithms in most widespread use today are based on filtered backprojection (FBP) methods. These algorithms are relatively fast and work well on high-quality data, but cannot easily handle data with missing projections or considerable amounts of noise. On the other hand, iterative reconstruction algorithms may offer benefits in such cases, but the computational burden associated with iterative reconstructions is prohibitive. In this work, we address this computational burden and present methods that make iterative reconstruction of high-resolution CT data possible in a reasonable amount of time. Our proposed techniques include parallelization, ordered subsets, reconstruction region restriction, and a modified version of the SIRT algorithm that reduces the overall run-time. When combining all of these techniques, we can reconstruct a 512 × 512 × 1022 image from acquired micro-CT data in less than thirty minutes.
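    The ordered-subsets idea named above can be sketched on a toy system; this is a generic OS-SIRT illustration under invented data, not the thesis's modified SIRT variant:

```python
import numpy as np

def os_sirt(A, b, n_subsets=2, n_iters=20):
    """SIRT with ordered subsets: each sub-iteration applies the
    row/column-normalized SIRT correction using only a subset of the
    projection rows, which accelerates early convergence."""
    m, n = A.shape
    x = np.zeros(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:
            As, bs = A[idx], b[idx]
            row_sum = As.sum(axis=1); row_sum[row_sum == 0] = 1
            col_sum = As.sum(axis=0); col_sum[col_sum == 0] = 1
            x += (As.T @ ((bs - As @ x) / row_sum)) / col_sum
            np.clip(x, 0, None, out=x)   # nonnegativity constraint
    return x

# Same toy geometry: a 2x2 image probed by its row and column sums.
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
x_rec = os_sirt(A, A @ x_true)
```

    Splitting the rays into subsets means each image update uses only a fraction of the projection data, so the estimate is refreshed several times per pass through the data, which is where the run-time savings come from.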

    Application of constrained optimisation techniques in electrical impedance tomography

    A constrained optimisation technique is described for the reconstruction of temporal resistivity images. The approach solves the inverse problem by optimising a cost function under constraints, in the form of normalised boundary potentials. Mathematical models have been developed for two different data collection methods for the chosen criterion. Both of these models express the reconstructed image in terms of one-dimensional (1-D) Lagrange multiplier functions. The reconstruction problem becomes one of estimating these 1-D functions from the normalised boundary potentials. These models are based on a cost criterion that minimises the variance between the reconstructed resistivity distribution and the true resistivity distribution. The methods presented in this research extend algorithms previously developed for X-ray systems. Computational efficiency is enhanced by exploiting the structure of the associated system matrices. This structure was preserved in the Electrical Impedance Tomography (EIT) implementations by applying a weighting, due to the non-linear current distribution, during the backprojection of the Lagrange multiplier functions. In order to obtain the best possible reconstruction it is important to consider the effects of noise in the boundary data. This is achieved by using a fast algorithm which matches the statistics of the error in the approximate inverse of the associated system matrix with the statistics of the noise error in the boundary data, yielding the optimum solution with the available boundary data. Novel approaches have been developed to produce the Lagrange multiplier functions. Two alternative methods are given for the design of VLSI implementations of hardware accelerators to improve computational efficiency. These accelerators are designed to implement parallel geometries and are modelled using a verification description language to assess their performance capabilities.
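    The core mechanism of constrained optimisation with Lagrange multipliers can be shown in miniature: minimising a quadratic cost subject to a linear equality constraint reduces to one KKT linear system. This is a textbook sketch, not the thesis's EIT formulation, and the matrices in the example are invented:

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d by solving the
    KKT system  [A^T A  C^T; C  0] [x; lambda] = [A^T b; d].
    Returns the estimate x and the Lagrange multipliers lambda."""
    n, p = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((p, p))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Toy example: fit x to b = (1, 2, 3) while forcing the entries of x
# to sum to zero; the answer is b with its mean removed.
x_hat, lam = constrained_lsq(np.eye(3), np.array([1., 2., 3.]),
                             np.array([[1., 1., 1.]]), np.array([0.]))
```

    In the thesis this role is played by 1-D Lagrange multiplier functions estimated from the normalised boundary potentials; the sketch only shows why the multipliers fall out of the same linear solve as the image estimate.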

    Flame front propagation velocity measurement and in-cylinder combustion reconstruction using POET

    The objective of this thesis is to develop an intelligent diagnostic technique, POET (Passive Optical Emission Tomography), for the investigation of in-cylinder combustion chemiluminescence. As a non-intrusive optical system, POET employs 40 fibre optic cables connected to 40 PMTs (photomultiplier tubes) to monitor the combustion process and flame front propagation in a modified commercial OHV (overhead valve) Pro 206 IC engine. The POET approach overcomes several limitations of present combustion research methods by combining fibre optic detection probes, photomultipliers, and tomographic diagnostics. The fibre optic probes are mounted on a specially designed cylinder head gasket for non-invasive insertion into the cylinder. Each independent probe can measure the turbulent chemiluminescence of the combustion flame front at up to 20 kHz. The resultant intensities are then processed tomographically using MART (Multiplicative Algebraic Reconstruction Technique) software to reconstruct an image of the complete flame front. The approach is essentially a lensless imaging technique, which has the advantage of not requiring a specialized engine construction with conventional viewing ports to visualize the combustion image. The fibre optic system, through its 40 thermally isolated, 2 m long fibre optic cables, can withstand combustion temperatures and is immune to electronic noise, typically generated by the spark plug. The POET system uses a MART tomographic methodology to reconstruct the turbulent combustion process. The collected data have been reconstructed to produce temporal and spatial images of the combustion flame front, and the variations of flame turbulence are monitored in sequences of reconstructed images. The POET diagnostic technique therefore reduces the complications of classic flame front propagation measurement systems and successfully demonstrates the in-cylinder combustion process.
    In this thesis, a series of calibration exercises have been performed to ensure that the photomultipliers of the POET system have sufficient temporal and spatial resolution to quantitatively map the flow velocity, turbulence, and chemiluminescence of the flame front. In the results, the flame has been analyzed using UV and blue filters to monitor the modified natural gas fuelled engine. The flame front propagation speed has been evaluated and is, on average, 12 m/s at 2280 rpm. Sequences of images have been used to illustrate the combustion process at different engine speeds.
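    The MART update used by POET rescales each voxel multiplicatively by the ratio of measured to computed ray sums, which keeps the emission field nonnegative by construction. A minimal sketch on an invented four-ray toy geometry (not the 40-probe POET setup):

```python
import numpy as np

def mart(A, b, n_iters=100, lam=1.0):
    """Multiplicative ART: start from a uniform positive field and,
    for each ray, rescale the voxels it crosses by the ratio of the
    measured ray sum b_i to the current computed ray sum."""
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            ray = A[i] @ x
            if ray > 0 and b[i] > 0:
                # Exponent is zero off the ray, so those voxels are untouched.
                x *= (b[i] / ray) ** (lam * A[i])
    return x

# Toy 2x2 emission field probed by its row and column sums.
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
x_rec = mart(A, A @ x_true)
```

    With only four rays the system is underdetermined, so MART converges to a positive solution consistent with all the ray sums rather than to `x_true` itself; in POET the 40 probes play the role of the rows of `A`.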

    Iterative Reconstruction Framework for High-Resolution X-ray CT Data

    Small animal medical imaging has become an important tool for researchers, as it allows noninvasive screening of animal models for pathologies as well as monitoring of disease progression and therapy response. Currently, clinical CT scanners typically use a filtered backprojection (FBP) based method for image reconstruction. This algorithm is fast and generally produces acceptable results, but has several drawbacks. Firstly, it is based upon line integrals, which do not accurately describe the process of X-ray attenuation. Secondly, noise in the projection data is not properly modeled with FBP. On the other hand, iterative algorithms allow the integration of more complicated system models as well as robust scatter and noise correction techniques. Unfortunately, the iterative algorithms also have much greater computational demands than their FBP counterparts. In this thesis, we develop a framework to support iterative reconstructions of high-resolution X-ray CT data. This includes exploring various system models and algorithms as well as developing techniques to manage the significant computational and system storage requirements of the iterative algorithms. Issues related to the development of this framework as well as preliminary results are presented.
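    One common way iterative methods model projection noise, unlike FBP, is to weight each ray by its reliability. A generic penalized-weighted-least-squares-style gradient descent sketch, with an invented toy system and weights (not the specific models explored in the thesis):

```python
import numpy as np

def wls_recon(A, b, weights, n_iters=500, step=None):
    """Gradient descent on the weighted least-squares cost
    0.5 (Ax - b)^T W (Ax - b), where W = diag(weights) downweights
    noisy (e.g. low-count) rays. step defaults to 1/lambda_max(H),
    which guarantees monotone descent for this quadratic cost."""
    W = np.diag(weights)
    H = A.T @ W @ A
    if step is None:
        step = 1.0 / np.linalg.eigvalsh(H).max()
    g0 = A.T @ (weights * b)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x -= step * (H @ x - g0)
    return x

# Toy full-rank system; the third ray is treated as noisier and
# given a quarter of the weight of the others.
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
x_true = np.array([1., 2., 3.])
x_rec = wls_recon(A, A @ x_true, weights=np.array([1., 1., 0.25]))
```

    FBP applies the same filter to every ray regardless of its noise level; the per-ray weights here are the simplest form of the statistical modeling that the iterative framework makes possible.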