
    The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the reconstruction algorithms. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of the toolbox through a series of experiments based on experimental dual-axis tilt series.
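    As an illustration of the building-block style this abstract describes, the sketch below assembles a simple 2D parallel-beam SIRT reconstruction with the ASTRA Python interface. It is a minimal sketch, not taken from the paper; the phantom, geometry sizes, the 'linear' projector choice and the iteration count are arbitrary assumptions.

```python
import astra
import numpy as np

# Simple disk phantom on a 256x256 grid
phantom = np.zeros((256, 256), dtype=np.float32)
yy, xx = np.mgrid[:256, :256]
phantom[(yy - 128) ** 2 + (xx - 128) ** 2 < 80 ** 2] = 1.0

# Volume and parallel-beam projection geometries (180 angles, 384 detector pixels)
vol_geom = astra.create_vol_geom(256, 256)
angles = np.linspace(0, np.pi, 180, endpoint=False)
proj_geom = astra.create_proj_geom('parallel', 1.0, 384, angles)
proj_id = astra.create_projector('linear', proj_geom, vol_geom)

# Forward project the phantom, then reconstruct with SIRT
sino_id, sino = astra.create_sino(phantom, proj_id)
rec_id = astra.data2d.create('-vol', vol_geom)
cfg = astra.astra_dict('SIRT')
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = sino_id
cfg['ProjectorId'] = proj_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id, 100)
reconstruction = astra.data2d.get(rec_id)

# Free the ASTRA-side objects
astra.algorithm.delete(alg_id)
astra.data2d.delete([rec_id, sino_id])
astra.projector.delete(proj_id)
```

    Swapping the geometry string or the algorithm name in `astra_dict` is all it takes to move to other acquisition models or solvers, which is the flexibility the abstract refers to.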

    Improved compressed sensing algorithm for sparse-view CT

    In computed tomography (CT) there are many situations where reconstruction must be performed from sparse-view data. In sparse-view CT imaging, strong streak artifacts may appear in conventionally reconstructed images due to the limited sampling rate, compromising image quality. Compressed sensing (CS) algorithms have shown the potential to accurately recover images from highly undersampled data. In the past few years, total variation (TV)-based compressed sensing algorithms have been proposed to suppress streak artifacts in CT image reconstruction. In this paper, we formulate the problem of CT imaging under transform-sparsity and sparse-view constraints, and propose a novel compressed sensing-based algorithm for CT image reconstruction from few-view data, in which we simultaneously minimize the ℓ1 norm, the total variation and a least-squares measure. The main feature of our algorithm is the use of two sparsity transforms: the discrete wavelet transform and the discrete gradient transform, both of which are proven to be powerful sparsity transforms. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The reconstructions using the proposed approach have fewer streak artifacts and lower reconstruction errors than those of other conventional methods.
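    The abstract does not give the exact update scheme, but the combination it names (least-squares fidelity, TV, and wavelet-domain ℓ1) is commonly handled by alternating a gradient step with a soft-thresholding step. The sketch below is one such generic scheme, not the paper's algorithm; the forward/back projection callables `A` and `At`, the step sizes and the wavelet settings are illustrative assumptions (even image dimensions are assumed so the wavelet round trip preserves shape).

```python
import numpy as np
import pywt

def soft_threshold_wavelet(x, wavelet='db2', level=2, t=0.01):
    """Soft-threshold the detail coefficients of a 2D wavelet transform."""
    coeffs = pywt.wavedec2(x, wavelet, level=level)
    new_coeffs = [coeffs[0]]  # keep the approximation band untouched
    for band in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, t, mode='soft') for c in band))
    return pywt.waverec2(new_coeffs, wavelet)

def tv_gradient(x, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of image x."""
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    norm = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / norm, dy / norm
    # TV gradient is minus the divergence of the normalized gradient field
    div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
    return -div

def reconstruct(A, At, sino, shape, n_iter=50, step=1e-3, tv_weight=0.1):
    """Alternate a least-squares gradient step, a TV descent step and
    wavelet shrinkage. A / At are hypothetical forward and back projectors."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        x -= step * At(A(x) - sino)             # data-fidelity gradient step
        x -= step * tv_weight * tv_gradient(x)  # TV descent step
        x = soft_threshold_wavelet(x)           # wavelet-domain ℓ1 sparsity
        x = np.clip(x, 0, None)                 # nonnegativity constraint
    return x
```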

    X-ray CT on the GPU

    Nondestructive testing (NDT) is a collection of analysis techniques used by scientists and technologists to examine the interior of an object without damaging it. Because the analysis leaves the object intact, NDT is extremely valuable in many industries for troubleshooting and research. CNDE has a long history of working with a variety of industrial sectors, including aerospace (commercial and military aviation), defense systems (ground vehicles and personnel protection), energy (nuclear, wind, fossil), infrastructure and transportation (bridges, roadways, dams, levees), and petro-chemical (offshore, processing, fuel transport piping), to provide cost-effective tools and solutions. X-ray tomography is the procedure of using X-rays to generate tomographic slices of an object: the object is irradiated with X-rays and the transmitted intensity values are collected on a detector. A significant challenge in X-ray tomography is the amount of data collected, typically on the order of gigabytes, which makes processing demanding. One way to speed up the processing is to run the programs on a cluster; CNDE uses a 64-node Beowulf cluster for image reconstruction. With the advent of the GPU (Graphics Processing Unit), however, there is far more cost- and time-efficient hardware on which to run the reconstruction algorithm. A GPU fits into a single PC, costs 10 times less than the cluster and also has a longer lifetime. This thesis has two major components: the development of new preprocessing and postprocessing techniques (including filters, hot-pixel removal, etc.) to improve the quality of the input data, and the implementation of these techniques, as well as the reconstruction program, on the GPU using CUDA. Speedup on the GPU is not just a matter of porting the developed algorithms onto the hardware in parallel, as on a cluster: GPU architecture is complex and involves many different types of memory, each with its own advantages and disadvantages, as well as many other optimization techniques for accessing and processing the data. These new techniques, together with the introduction of the GPU, are a significant addition to the X-ray program at CNDE.
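    The hot-pixel removal the abstract mentions is a standard preprocessing step. The snippet below is a minimal CPU-side sketch of one common approach (median-based outlier replacement), not the thesis's implementation; the function name and threshold value are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(proj, threshold=5.0):
    """Replace outlier detector pixels with their local median.

    A pixel is flagged as 'hot' when it deviates from the 3x3 median
    by more than `threshold` times the global std of that deviation.
    """
    med = median_filter(proj, size=3)
    diff = proj - med
    hot = np.abs(diff) > threshold * diff.std()
    cleaned = proj.copy()
    cleaned[hot] = med[hot]
    return cleaned
```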

    A computationally efficient reconstruction algorithm for circular cone-beam computed tomography using shallow neural networks

    Circular cone-beam (CCB) Computed Tomography (CT) has become an integral part of industrial quality control, materials science and medical imaging. The need to acquire and process each scan in a short time naturally leads to trade-offs between speed and reconstruction quality, creating a need for fast reconstruction algorithms capable of creating accurate reconstructions from limited data. In this paper we introduce the Neural Network Feldkamp-Davis-Kress (NN-FDK) algorithm. This algorithm adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency. Moreover, the NN-FDK algorithm is designed such that it has low training data requirements and is fast to train. This ensures that the proposed algorithm can be used to improve image quality in high-throughput CT scanning settings, where FDK is currently used to keep pace with the acquisition speed using readily available computational resources. We compare the NN-FDK algorithm to two standard CT reconstruction algorithms and to two popular deep neural networks trained to remove reconstruction artifacts from the 2D slices of an FDK reconstruction. We show that the NN-FDK reconstruction algorithm is substantially faster in computing a reconstruction than all the tested alternative methods except the standard FDK algorithm, and that it can compute accurate CCB CT reconstructions in cases of high noise, a low number of projection angles, or large cone angles. Moreover, we show that the training time of an NN-FDK network is orders of magnitude lower than that of the considered deep neural networks, with only a slight reduction in reconstruction accuracy.
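    The abstract does not spell out the network architecture beyond "shallow", so the sketch below is a generic two-layer perceptron trained by gradient descent on a mean-squared error, as a stand-in for a shallow network of this kind; in the NN-FDK setting the per-voxel inputs would be FDK-derived features, whereas here they are generic vectors. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ShallowNet:
    """Two-layer perceptron: inputs -> small hidden layer -> scalar output."""

    def __init__(self, n_in, n_hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.1, size=n_hidden)
        self.b2 = 0.0

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return self.h @ self.w2 + self.b2

    def train_step(self, X, y, lr=0.1):
        """One gradient-descent step on the mean squared error."""
        pred = self.forward(X)
        err = pred - y
        n = len(y)
        grad_w2 = self.h.T @ err / n
        grad_b2 = err.mean()
        dh = np.outer(err, self.w2) * self.h * (1 - self.h)
        grad_W1 = X.T @ dh / n
        grad_b1 = dh.mean(axis=0)
        self.w2 -= lr * grad_w2
        self.b2 -= lr * grad_b2
        self.W1 -= lr * grad_W1
        self.b1 -= lr * grad_b1

# Toy usage: fit random features to a linear target
X = np.random.default_rng(1).normal(size=(512, 8))
y = X @ np.arange(8) / 8.0
net = ShallowNet(n_in=8)
for _ in range(2000):
    net.train_step(X, y)
```

    A network this small explains the paper's headline numbers: there are only a few dozen weights to fit, so training needs little data and time compared with a deep network.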

    SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows

    When considering the acquisition of experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image-reconstruction part of the experiment, and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example of application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP has been designed for post-beamtime (off-line) use and for new reconstructions of past archived data at the user's home institution, where simple computing resources are available. Releases of the software can be downloaded at the Elettra Scientific Computing group GitHub repository: https://github.com/ElettraSciComp/STP-Gui.
    Brun, Francesco; Massimi, Lorenzo; Fratini, Michela; Dreossi, Diego; Billé, Fulvio; Accardo, Agostino; Pugliese, Roberto; Cedola, Alessia
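    The flat-fielding step named in the abstract is simple to sketch. The snippet below is a minimal illustration, not STP code; the function name and the epsilon guard are assumptions.

```python
import numpy as np

def flat_field_correct(proj, flat, dark, eps=1e-6):
    """Normalize a raw projection with flat (beam-only) and dark (no-beam)
    images, then convert transmission to line integrals via Beer-Lambert."""
    transmission = (proj - dark) / np.maximum(flat - dark, eps)
    return -np.log(np.clip(transmission, eps, None))
```

    The corrected sinogram can then be fed to any FBP implementation (for instance skimage.transform.iradon), with the more refined artifact-compensation steps the abstract discusses inserted in between.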

    Latest developments in the improvement and quantification of high resolution X-ray tomography data

    Get PDF
    X-ray Computed Tomography (CT) is a powerful tool to visualize the internal structure of objects. Although X-ray CT is often used for medical purposes, it has many applications in the academic and industrial world. X-ray CT is a non-destructive tool which provides the possibility to obtain a three-dimensional (3D) representation of the investigated object. The currently available high-resolution systems can achieve resolutions of less than one micrometer, which makes the technique valuable for various scientific and industrial applications. At the Centre for X-ray Tomography of Ghent University (UGCT), research is performed on the improvement and application of high-resolution X-ray CT (µCT). Important aspects of this research are the development of state-of-the-art high-resolution CT scanners and the development of software for controlling the scanners, reconstruction software and analysis software. UGCT works closely together with researchers from various research fields, each of whom has specific requirements. To obtain the best possible results in any particular case, the scanners are developed in a modular way, which allows for optimizations, modifications and improvements during use. Another way of improving image quality lies in the optimization of the reconstruction software, which is why the software package Octopus was developed in house. Once a scanned volume is reconstructed, an important challenge lies in the interpretation of the obtained data. For this interpretation, visualization alone is often insufficient and quantitative information is needed. As researchers from different fields have different needs with respect to quantification of their data, UGCT developed the 3D analysis software package Morpho+ for analysing all kinds of samples.
    The research presented in this work focuses on improving the accuracy and extending the amount of quantitative information which can be extracted from µCT data. Even if a perfect analysis algorithm existed, it would be impossible to accurately quantify data whose image quality is insufficient. As image quality can be significantly improved with the aid of adequate reconstruction techniques, the research presented in this work focuses on analysis as well as reconstruction software. As the datasets obtained with µCT at UGCT are of substantial size, the possibility to process large datasets in a limited amount of time is crucial in the development of new algorithms. The contributions of the author can be subdivided into three major aspects of the processing of CT data: the modification of iterative reconstruction algorithms, the extension and optimization of 3D analysis algorithms, and the development of a new algorithm for discrete tomography. These topics are discussed in more detail below.
    A main aspect in the improvement of image quality is the reduction of artefacts which often occur in µCT, such as noise, cone-beam and beam-hardening artefacts. Cone-beam artefacts are a result of the cone-beam geometry which is often used in laboratory-based µCT, and beam hardening is a consequence of the polychromaticity of the beam. Although analytical reconstruction algorithms based on filtered back projection are still most commonly used for the reconstruction of µCT datasets, another approach is becoming a valuable alternative: iterative reconstruction algorithms. Iterative algorithms are inherently better at coping with the previously mentioned artefacts.
    Additionally, iterative algorithms can improve image quality in cases where the number of available projections or the angular range is limited. In chapter 3 the possibility to modify these algorithms to further improve image quality is investigated. It is illustrated that streak artefacts, which can occur when metals are present in a sample, can be significantly reduced by modifying the reconstruction algorithm. Additionally, it is demonstrated that the incorporation of an initial solution (if available) allows reducing the required number of projections for a second, slightly modified sample. To reduce beam-hardening artefacts, the physics of the process is modelled and incorporated in the iterative reconstruction algorithm, which results in an easy-to-use and efficient algorithm for the reduction of beam-hardening artefacts that requires no prior knowledge about the sample.
    In chapter 4 the 3D analysis process is described. In the scope of this work, algorithms of the 3D analysis software package Morpho+ were optimized and new methods were added to the program, focusing on quantifying the connectivity and shape of the phases and elements in the sample, as well as on obtaining an accurate segmentation, which is an essential step in the analysis process. Evidently, the different phases in the sample need to be separated from one another. However, often a second segmentation step is needed in order to separate the different elements present in a volume, such as pores in a pore network, or to separate elements which are physically separated but appear to be connected on the reconstructed images due to the limited resolution and/or limited contrast of the scan. The latter effect often occurs when identifying different grains in a geological sample. Algorithms which are available for this second segmentation step often result in over-segmentation, i.e. elements are not only separated from one another but separations inside a single element occur as well. To overcome this effect, an algorithm is presented to semi-automatically rejoin the separated parts of a single element. Additionally, Morpho+ was extended with tools to extract information about the connectivity of a sample, which is difficult to quantify but important for samples from various research fields. The connectivity can be described with the aid of the Euler number and the tortuosity. Moreover, the number of neighbouring objects of each object can be determined and the connections between objects can be quantified. It is now also possible to extract a skeleton, which describes the basic structure of the volume. A calculation of several shape parameters was added to the program as well, making it possible to visualize the different objects on a disc-rod diagram. The many possibilities to characterize reconstructed samples with the aid of Morpho+ are illustrated on several applications; a small sketch of such connectivity measurements follows below.
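    As a rough illustration of connectivity measures of this kind (not Morpho+ itself), the sketch below labels a binary volume and computes an Euler number with standard Python imaging libraries; it assumes a recent scikit-image that provides measure.euler_number, and the function name and report format are illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

def connectivity_report(binary_volume):
    """Label a binary 3D volume and report simple connectivity measures."""
    labels, n_objects = ndimage.label(binary_volume)
    # Euler characteristic of the foreground (objects - tunnels + cavities)
    euler = measure.euler_number(binary_volume, connectivity=3)
    # voxel count per labelled object
    sizes = ndimage.sum(binary_volume, labels, index=np.arange(1, n_objects + 1))
    return {'objects': n_objects, 'euler_number': euler, 'voxel_counts': sizes}
```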
    As mentioned above, an important aspect for correctly quantifying µCT data is the correct segmentation of the different phases present in the sample. Often a sample consists of only one or a limited number of materials (plus the surrounding air). In this case this prior knowledge about the sample can be incorporated in the reconstruction algorithm. Such algorithms are referred to as discrete reconstruction algorithms, and they are used when only a limited number of projections is available. Chapter 5 deals with discrete reconstruction algorithms. One of these algorithms is the Discrete Algebraic Reconstruction Technique (DART), which combines iterative with discrete reconstruction and has shown excellent results. DART requires knowledge of the attenuation coefficient(s) and segmentation threshold(s) of the material(s). For µCT applications (resulting in large datasets), reconstruction times can increase significantly when DART is used instead of standard iterative reconstruction, as DART requires more iterations. This complicates the practical applicability of DART for routine applications at UGCT. Therefore a modified algorithm (based on the DART algorithm) for the reconstruction of samples consisting of only one material and surrounding air was developed in the scope of this work, referred to as the Experimental Discrete Algebraic Reconstruction Technique (EDART). The goal of this algorithm is to obtain better reconstruction results than standard iterative reconstruction algorithms without significantly increasing reconstruction time. Moreover, a fast and intuitive technique to estimate the attenuation coefficient and threshold was developed as part of the EDART algorithm. In chapter 5 it is illustrated that EDART provides improved image quality for both phantom and real data, in comparison with standard iterative reconstruction algorithms, when only a limited number of projections is available. The algorithms presented in this work can be applied one after another, but can also be combined with one another. It is for example illustrated in chapter 5 that the beam-hardening correction method can also be incorporated in the EDART algorithm. The combination of the introduced methods allows for an improvement in the process of extracting accurate quantitative information from µCT data.
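    To make the DART idea concrete, here is a minimal sketch of a DART-flavoured loop for the single-material-plus-air case the abstract describes. The forward/back projection callables A and At, the step size and the neighbourhood test are illustrative assumptions; EDART's fast estimation of the attenuation coefficient and threshold is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def dart_like(A, At, sino, shape, mu, threshold,
              n_outer=10, n_inner=20, step=1e-3):
    """DART-flavoured reconstruction for one material plus air (sketch).

    A, At: hypothetical forward/back projection callables.
    mu: attenuation value of the material; threshold: segmentation cut.
    """
    x = np.zeros(shape)
    for _ in range(n_outer):
        for _ in range(n_inner):                # algebraic (Landweber-type) updates
            x -= step * At(A(x) - sino)
        seg = np.where(x > threshold, mu, 0.0)  # discretize to {0, mu}
        # boundary pixels: voxels whose 3x3 neighbourhood is not uniform
        local_min = ndimage.minimum_filter(seg, size=3)
        local_max = ndimage.maximum_filter(seg, size=3)
        boundary = local_min != local_max
        # fix interior pixels to their segmented value, keep boundary pixels free
        x = np.where(boundary, x, seg)
    return x
```

    Fixing the interior pixels shrinks the effective unknowns to the object boundary, which is why a discrete scheme can cope with far fewer projections than a standard iterative reconstruction.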