    Image reconstruction and processing for stationary digital tomosynthesis systems

    Digital tomosynthesis (DTS) is an emerging x-ray imaging technique for disease and cancer screening. DTS takes a small number of x-ray projections to generate pseudo-3D images; it has a lower radiation dose and a lower cost than computed tomography (CT), and improved diagnostic accuracy compared to 2D radiography. Our research group has developed a carbon nanotube (CNT) based x-ray source. This technology enables packing multiple x-ray sources into a single x-ray source array. Based on this technology, our group built several stationary digital tomosynthesis (s-DTS) systems, which have a faster scanning time and no source motion blur. One critical step in both tomosynthesis and CT is image reconstruction, which generates a 3D image from the 2D measurements. For tomosynthesis, the conventional reconstruction method runs fast but falls short in image quality; better iterative methods exist, but they are too time-consuming for clinical use. The goal of this work is to develop a fast iterative image reconstruction algorithm and other image processing techniques for the stationary digital tomosynthesis system, improving the image quality affected by hardware limitations. A fast iterative reconstruction algorithm, named adapted fan volume reconstruction (AFVR), was developed for s-DTS. AFVR is shown to be an order of magnitude faster than current iterative reconstruction algorithms and produces better images than the classical filtered back projection (FBP) method. AFVR was implemented for the stationary digital breast tomosynthesis (s-DBT), stationary digital chest tomosynthesis (s-DCT) and stationary intraoral dental tomosynthesis (s-IOT) systems. Next, a scatter correction technique for stationary digital tomosynthesis was investigated. A new algorithm for estimating the scatter profile was developed, which has been shown to improve image quality substantially. 
Finally, quantitative imaging was investigated, where the s-DCT system was used to assess the coronary artery calcium score.
    Doctor of Philosophy
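The iterative reconstruction that the abstract contrasts with FBP can be illustrated with the classical ART/Kaczmarz scheme: treat the projections as a linear system and sweep its rows repeatedly. This is a minimal sketch on an invented toy system, not the AFVR algorithm or the s-DTS geometry:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, relax=1.0):
    """ART/Kaczmarz iteration: sweep the rows of A x = b, projecting the
    current estimate onto each row's hyperplane in turn."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy "projection" system: four rays through a three-pixel object.
truth = np.array([1.0, 0.5, 2.0])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
b = A @ truth            # noise-free measurements
recon = kaczmarz(A, b)   # converges toward the true object
```

Each sweep visits every measured ray once; the per-iteration cost of such methods on real detector-sized systems is what makes fast variants like AFVR necessary in practice.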

    Towards quantitative computed tomography

    Computed tomography is introduced along with an overview of its diverse applications in many scientific endeavours. A unified approach to the treatment of scattering from linear scalar wave motion is introduced. The assumptions under which wave motion within a medium can be characterised by concourses of rays are presented, along with comment on the validity of these assumptions. Early and conventional theories applied to modelling the behaviour of rays, within media for which the ray assumptions are valid, are reviewed. A new computerised method is described for the reconstruction of a refractive index distribution from time-of-flight measurements of radiation/waves passing through the distribution and taken on a known boundary surrounding it. The reconstruction method, aimed at solving the bent-ray computed tomography (CT) problem, is based on a novel ray description which does not require the ray paths to be known. This allows the refractive index to be found by iterative solution of a set of linear equations, rather than through the computationally intensive procedure of ray tracing, which normally accompanies iterative solutions to problems of this type. The preliminary results show that this method is capable of handling appreciable spatial refractive index variations in large bodies. A review containing theory and techniques for image reconstruction from projections is presented, along with their historical development. The mathematical derivation of a recently developed reconstruction technique, the method of linograms, is considered. An idea, termed the plethora of views idea, which aims to improve quantitative CT image reconstruction, is introduced. 
The theoretical foundation is the idea that, when presented with a plethora of projections (meaning more than are required to reconstruct the known region of support of an image, so that the permissible reconstruction region can be extended), the intensity of the reconstructed distribution should be negligible throughout the extended region. Any reconstruction within the extended region that departs from what would be termed negligible is deduced to have been caused by imperfections of the projections. The implicit expectation of the novel schemes presented for improving CT image reconstruction is that contributions within the extended region can be utilised to ameliorate the effects of these imperfections on the reconstruction in the region where the distribution is known to be contained. Preliminary experimental results are reported for an iterative algorithm proposed to correct a plethora of X-ray CT projection data containing imperfections. An extended definition is presented for the consistency of projections, termed spatial consistency, that incorporates the region with which the projection data is consistent. Using this definition and an associated definition, spatial inconsistency, an original technique is proposed and reported on for the recovery of inconsistencies that are contained in the projection data over a narrow range of angles.
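The "iterative solution of a set of linear equations" underlying time-of-flight tomography can be sketched as follows. Each measured travel time is the path integral of the slowness s = 1/v, so the measurements form a linear system t = L s with L holding the path lengths through each cell. The geometry and numbers below are invented for illustration, and the rays are kept straight, whereas the thesis's contribution is precisely a description that copes with bent rays without tracing them:

```python
import numpy as np

# Five rays through a 2x2 grid of cells [s0 s1; s2 s3]; entries of L are
# the path length of each ray through each cell.
L = np.array([[1.0, 1.0, 0.0, 0.0],              # ray across the top row
              [0.0, 0.0, 1.0, 1.0],              # ray across the bottom row
              [1.0, 0.0, 1.0, 0.0],              # ray down the left column
              [0.0, 1.0, 0.0, 1.0],              # ray down the right column
              [np.sqrt(2), 0.0, 0.0, np.sqrt(2)]])  # diagonal ray
slowness_true = np.array([1.0, 1.2, 0.9, 1.5])   # assumed 1/velocity values
t = L @ slowness_true                            # simulated travel times

# Least-squares solve of t = L s recovers the slowness (hence the
# refractive index) without ever tracing ray paths through the estimate.
s_hat, *_ = np.linalg.lstsq(L, t, rcond=None)
```

The four axis-aligned rays alone are rank-deficient (their sums coincide); the diagonal ray makes the system uniquely solvable, which is why tomographic scans need views from multiple angles.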

    Latest developments in the improvement and quantification of high resolution X-ray tomography data

    X-ray Computed Tomography (CT) is a powerful tool to visualize the internal structure of objects. Although X-ray CT is often used for medical purposes, it has many applications in the academic and industrial world. X-ray CT is a non-destructive tool which provides the possibility to obtain a three-dimensional (3D) representation of the investigated object. The currently available high resolution systems can achieve resolutions of less than one micrometer, which makes the technique valuable for various scientific and industrial applications. At the Centre for X-ray Tomography of Ghent University (UGCT), research is performed on the improvement and application of high resolution X-ray CT (µCT). Important aspects of this research are the development of state-of-the-art high resolution CT scanners and the development of software for controlling the scanners, reconstruction software and analysis software. UGCT works closely together with researchers from various research fields, each of whom has specific requirements. To obtain the best possible results in any particular case, the scanners are developed in a modular way, which allows for optimizations, modifications and improvements during use. Another way of improving the image quality lies in the optimization of the reconstruction software, which is why the software package Octopus was developed in-house. Once a scanned volume is reconstructed, an important challenge lies in the interpretation of the obtained data. For this interpretation, visualization alone is often insufficient and quantitative information is needed. As researchers from different fields have different needs with respect to the quantification of their data, UGCT developed the 3D analysis software package Morpho+ for analysing all kinds of samples. The research presented in this work focuses on improving the accuracy and extending the amount of quantitative information which can be extracted from µCT data. 
Even if a perfect analysis algorithm existed, it would be impossible to accurately quantify data of which the image quality is insufficient. As image quality can be significantly improved with the aid of adequate reconstruction techniques, the research presented in this work focuses on analysis as well as reconstruction software. As the datasets obtained with µCT at UGCT are of substantial size, the possibility to process large datasets in a limited amount of time is crucial in the development of new algorithms. The contributions of the author can be subdivided into three major aspects of the processing of CT data: the modification of iterative reconstruction algorithms, the extension and optimization of 3D analysis algorithms, and the development of a new algorithm for discrete tomography. These topics are discussed in more detail below. A main aspect in the improvement of image quality is the reduction of artefacts which often occur in µCT, such as noise, cone-beam and beam-hardening artefacts. Cone-beam artefacts are a result of the cone-beam geometry which is often used in laboratory-based µCT, and beam hardening is a consequence of the polychromaticity of the beam. Although analytical reconstruction algorithms based on filtered back projection are still most commonly used for the reconstruction of µCT datasets, another approach is becoming a valuable alternative: iterative reconstruction algorithms. Iterative algorithms are inherently better at coping with the previously mentioned artefacts. Additionally, iterative algorithms can improve image quality when the number of available projections or the angular range is limited. In chapter 3 the possibility of modifying these algorithms to further improve image quality is investigated. It is illustrated that streak artefacts, which can occur when metals are present in a sample, can be significantly reduced by modifying the reconstruction algorithm. 
Additionally, it is demonstrated that the incorporation of an initial solution (if available) allows reducing the required number of projections for a second, slightly modified sample. To reduce beam-hardening artefacts, the physics of the process is modelled and incorporated in the iterative reconstruction algorithm, which results in an easy-to-use and efficient algorithm for the reduction of beam-hardening artefacts that requires no prior knowledge about the sample. In chapter 4 the 3D analysis process is described. In the scope of this work, algorithms of the 3D analysis software package Morpho+ were optimized and new methods were added to the program, focusing on quantifying the connectivity and shape of the phases and elements in the sample, as well as on obtaining accurate segmentation. An essential step in the analysis process is the segmentation of the reconstructed sample: evidently, the different phases in the sample need to be separated from one another. However, a second segmentation step is often needed in order to separate the different elements present in a volume, such as pores in a pore network, or to separate elements which are physically separate but appear connected on the reconstructed images due to the limited resolution and/or limited contrast of the scan. The latter effect often occurs when identifying different grains in a geological sample. Algorithms available for this second segmentation step often result in over-segmentation, i.e. elements are not only separated from one another but separations also occur inside a single element. To overcome this effect, an algorithm is presented to semi-automatically rejoin the separated parts of a single element. Additionally, Morpho+ was extended with tools to extract information about the connectivity of a sample, which is difficult to quantify but important for samples from various research fields. 
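The beam-hardening physics mentioned above comes down to the polychromatic Beer-Lambert law: low-energy photons are absorbed preferentially, so the effective attenuation per unit thickness drops as the beam penetrates deeper. A two-energy-bin toy model (all numbers assumed, not taken from the thesis) makes the effect visible:

```python
import numpy as np

# Toy polychromatic Beer-Lambert model: a two-bin spectrum traverses
# increasing thicknesses of a single material.
weights = np.array([0.5, 0.5])   # assumed spectral weights (low, high bin)
mu = np.array([1.0, 0.3])        # assumed attenuation per unit length

def log_signal(t):
    """Measured -log(I/I0) after thickness t of material."""
    return -np.log(weights @ np.exp(-mu * t))

# Effective attenuation coefficient at three thicknesses: it decreases
# with depth, the beam-hardening (cupping) effect the correction models.
eff = [log_signal(t) / t for t in (0.5, 2.0, 5.0)]
```

A monochromatic beam would give a constant ratio here; the decrease is exactly the nonlinearity that a physics-based correction inside the iterative loop has to account for.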
The connectivity can be described with the aid of the Euler number and the tortuosity. Moreover, the number of neighbouring objects of each object can be determined and the connections between objects can be quantified. It is now also possible to extract a skeleton, which describes the basic structure of the volume. A calculation of several shape parameters was added to the program as well, making it possible to visualize the different objects on a disc-rod diagram. The many possibilities for characterizing reconstructed samples with the aid of Morpho+ are illustrated on several applications. As mentioned in the previous section, an important aspect of correctly quantifying µCT data is the correct segmentation of the different phases present in the sample. Often a sample consists of only one or a limited number of materials (and surrounding air). In this case, this prior knowledge about the sample can be incorporated in the reconstruction algorithm. Such algorithms are referred to as discrete reconstruction algorithms, which are used when only a limited number of projections is available. Chapter 5 deals with discrete reconstruction algorithms. One of these algorithms is the Discrete Algebraic Reconstruction Technique (DART), which combines iterative with discrete reconstruction and has shown excellent results. DART requires knowledge of the attenuation coefficient(s) and segmentation threshold(s) of the material(s). For µCT applications (resulting in large datasets), reconstruction times can increase significantly when DART is used instead of standard iterative reconstruction, as DART requires more iterations. This complicates the practical applicability of DART for routine applications at UGCT. 
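The Euler number used above as a connectivity descriptor has a concrete combinatorial definition: for a binary image regarded as a union of unit squares, it is chi = V - E + F, which in 2D equals the number of connected components minus the number of holes. A small self-contained sketch (not Morpho+ code):

```python
import numpy as np

def euler_number(img):
    """Euler characteristic of a 2D binary image treated as a union of
    unit squares: chi = V - E + F = #components - #holes."""
    verts, edges, faces = set(), set(), 0
    for y, x in zip(*np.nonzero(img)):
        faces += 1
        # the four corner vertices of this pixel's square
        verts.update([(y, x), (y, x + 1), (y + 1, x), (y + 1, x + 1)])
        # the four boundary edges, each keyed by its ordered vertex pair
        edges.update([((y, x), (y, x + 1)), ((y, x), (y + 1, x)),
                      ((y + 1, x), (y + 1, x + 1)),
                      ((y, x + 1), (y + 1, x + 1))])
    return len(verts) - len(edges) + faces

solid = np.ones((3, 3), dtype=int)   # one blob, no holes  -> chi = 1
ring = solid.copy()
ring[1, 1] = 0                       # one blob, one hole  -> chi = 0
```

Sharing of vertices and edges between adjacent pixels is handled by the sets, so the count stays correct for arbitrary shapes; a 3D version would add cube cells to the alternating sum.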
Therefore a modified algorithm (based on the DART algorithm) for the reconstruction of samples consisting of only one material and surrounding air was developed in the scope of this work, referred to as the Experimental Discrete Algebraic Reconstruction Technique (EDART). The goal of this algorithm is to obtain better reconstruction results than standard iterative reconstruction algorithms without significantly increasing the reconstruction time. Moreover, a fast and intuitive technique to estimate the attenuation coefficient and threshold was developed as part of the EDART algorithm. In chapter 5 it is illustrated that EDART provides improved image quality for both phantom and real data, in comparison with standard iterative reconstruction algorithms, when only a limited number of projections is available. The algorithms presented in this work can be applied in sequence but can also be combined with one another. It is, for example, illustrated in chapter 5 that the beam-hardening correction method can also be incorporated in the EDART algorithm. The combination of the introduced methods allows for an improvement in the process of extracting accurate quantitative information from µCT data.
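The leverage that discrete methods such as DART and EDART exploit can be shown on a toy problem: once the grey values are known to be {0, mu}, far fewer projections than unknowns can still determine the image, because reconstruction becomes a search over a finite set. The brute-force enumeration below is purely illustrative (a six-pixel invented example, not the EDART algorithm, which scales to real volumes):

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(0)
mu = 2.0                                         # assumed known grey value
truth = mu * np.array([1, 0, 1, 1, 0, 0], float)  # one material + air
A = rng.random((4, 6))                           # only 4 "projections" for 6 pixels
b = A @ truth                                    # underdetermined as a linear system

# With grey values restricted to {0, mu}, enumerate all 2**6 candidate
# images and keep the one that best explains the measurements.
best, best_res = None, np.inf
for bits in product((0.0, mu), repeat=6):
    x = np.array(bits)
    res = np.linalg.norm(A @ x - b)
    if res < best_res:
        best, best_res = x, res
```

Real discrete algorithms replace this exponential search with alternating iterative updates and segmentation steps on the boundary pixels, which is where DART's extra iterations, and EDART's attempt to avoid them, come in.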

    State of the art: iterative CT reconstruction techniques

    Owing to recent advances in computing power, iterative reconstruction (IR) algorithms have become a clinically viable option in computed tomographic (CT) imaging. Substantial evidence is accumulating about the advantages of IR algorithms over established analytical methods, such as filtered back projection. IR improves image quality through cyclic image processing. Although all available solutions share the common mechanism of artifact reduction and/or potential for radiation dose savings, chiefly due to image noise suppression, the magnitude of these effects depends on the specific IR algorithm. In the first section of this contribution, the technical bases of IR are briefly reviewed and the currently available algorithms released by the major CT manufacturers are described. In the second part, the current status of their clinical implementation is surveyed. Regardless of the applied IR algorithm, the available evidence attests to the substantial potential of IR algorithms for overcoming traditional limitations in CT imaging.
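The noise suppression behind IR's dose savings can be illustrated with the simplest regularised formulation: instead of inverting the system directly, solve x = argmin ||Ax - b||^2 + lam ||x||^2, which damps the noise amplified by small singular values. The matrix and numbers below are an invented toy, not any manufacturer's algorithm:

```python
import numpy as np

A = np.diag([1.0, 0.01])            # ill-conditioned toy system matrix
truth = np.array([1.0, 1.0])
noise = np.array([0.001, 0.002])    # small, fixed measurement error
b = A @ truth + noise

# Direct inversion amplifies the error in the weak channel by 1/0.01.
naive = np.linalg.solve(A, b)

# Tikhonov-regularised solve: (A^T A + lam I) x = A^T b.
lam = 1e-5
reg = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)
```

Clinical IR uses far richer statistical and prior models, but the trade-off is the same: a small bias is accepted in exchange for a large reduction in noise, which is what permits lowering the dose.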