
    Fast imaging in non-standard X-ray computed tomography geometries


    Algorithm for the reconstruction of dynamic objects in CT-scanning using optical flow

    Computed tomography (CT) is a powerful imaging technique that allows non-destructive visualization of the interior of physical objects in many scientific areas. Traditional reconstruction techniques mostly assume that the object of interest is static, which leads to artefacts if the object moves during data acquisition. In this paper we present a method that, given only the results of multiple successive scans, estimates the motion using optical flow and corrects the CT images for it, under the assumption that the motion field is smooth over the complete domain. The proposed method is validated on simulated scan data. The main contribution is showing that the optical flow technique from image processing can be used to correct CT images for motion.
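
    A minimal sketch of the idea (not the paper's implementation): estimate a dense motion field between two successive reconstructions with a TV-L1 optical flow solver and warp the later image back onto the earlier one. The scikit-image functions and the synthetic frames below are illustrative stand-ins for the authors' data and solver.

# Minimal sketch, assuming scikit-image is available: estimate motion between
# two successive CT reconstructions with TV-L1 optical flow and warp the later
# image back, assuming a smooth 2-D motion field. Not the paper's code.
import numpy as np
from skimage.registration import optical_flow_tvl1
from skimage.transform import warp

def motion_correct(recon_t0, recon_t1):
    """Warp recon_t1 onto recon_t0 using a dense optical-flow field."""
    v, u = optical_flow_tvl1(recon_t0, recon_t1)      # per-pixel displacements
    rows, cols = recon_t0.shape
    row_coords, col_coords = np.meshgrid(np.arange(rows), np.arange(cols),
                                         indexing="ij")
    # Sample recon_t1 at the displaced coordinates to undo the motion.
    return warp(recon_t1, np.array([row_coords + v, col_coords + u]),
                mode="edge")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame0 = rng.random((64, 64))
    frame1 = np.roll(frame0, shift=2, axis=0)          # simulated smooth motion
    corrected = motion_correct(frame0, frame1)
    print("residual:", np.abs(corrected[4:-4, 4:-4] - frame0[4:-4, 4:-4]).mean())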

    Applications in GNSS water vapor tomography

    Algebraic reconstruction algorithms are iterative algorithms used in many areas, including medicine, seismology and meteorology. These algorithms are known to be highly computationally intensive, which may be especially troublesome for real-time applications or when they are processed on conventional low-cost personal computers. One such real-time application is the reconstruction of water vapor images from Global Navigation Satellite System (GNSS) observations. Parallelization of algebraic reconstruction algorithms has the potential to significantly reduce the required resources, permitting valid solutions to be obtained in time for use in nowcasting and forecasting weather models. The main objective of this dissertation was to present and analyse diverse shared-memory libraries and techniques for algebraic reconstruction algorithms on the CPU and GPU. It was concluded that parallelization pays off over sequential implementations. Overall the GPU implementations were found to be only slightly faster than the CPU implementations, depending on the size of the problem being studied. A secondary objective was to develop software to perform GNSS water vapor reconstruction using the implemented parallel algorithms. This software was developed successfully and diverse tests were carried out with synthetic and real data; the preliminary results were satisfactory. This dissertation was written at the Space & Earth Geodetic Analysis Laboratory (SEGAL) and was carried out in the framework of the Structure of Moist convection in high-resolution GNSS observations and models (SMOG) project (PTDC/CTE-ATM/119922/2010) funded by FCT (Fundação para a Ciência e a Tecnologia).
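
    For orientation, the following is a minimal NumPy sketch of one ART (Kaczmarz) sweep, the kind of row-by-row iterative kernel whose parallelization the dissertation studies; the toy matrix, relaxation value and function names are illustrative and not taken from the SEGAL software.

# Minimal sketch of one ART (Kaczmarz) sweep. A is the system (geometry)
# matrix, b the observations (e.g. GNSS slant delays), x the voxel image.
# Illustrative NumPy version, not the dissertation's parallel implementation.
import numpy as np

def art_sweep(A, b, x, relaxation=0.25):
    """One full pass over all rows of A, updating x in place."""
    for i in range(A.shape[0]):
        row = A[i]
        norm2 = row @ row
        if norm2 == 0.0:
            continue                        # ray intersects no voxel
        residual = b[i] - row @ x
        x += relaxation * (residual / norm2) * row
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.random((200, 50))               # toy geometry matrix
    x_true = rng.random(50)
    b = A @ x_true
    x = np.zeros(50)
    for _ in range(20):
        art_sweep(A, b, x)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))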

    Improved compressed sensing algorithm for sparse-view CT

    In computed tomography (CT) there are many situations where reconstruction must be performed with sparse-view data. In sparse-view CT imaging, strong streak artifacts may appear in conventionally reconstructed images due to the limited sampling rate, compromising image quality. Compressed sensing (CS) algorithms have shown potential to accurately recover images from highly undersampled data. In the past few years, total variation (TV)-based compressed sensing algorithms have been proposed to suppress streak artifacts in CT image reconstruction. In this paper, we formulate the problem of CT imaging under transform-sparsity and sparse-view constraints, and propose a novel compressed sensing-based algorithm for CT image reconstruction from few-view data, in which we simultaneously minimize the ℓ1 norm, the total variation and a least-squares measure. The main feature of our algorithm is the use of two sparsity transforms: the discrete wavelet transform and the discrete gradient transform, both of which are proven to be powerful sparsity transforms. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The reconstructions obtained with the proposed approach show fewer streak artifacts and smaller reconstruction errors than those of other conventional methods.
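
    A toy sketch of the kind of objective described above (least-squares data fidelity plus TV and wavelet-ℓ1 sparsity), solved here by simply alternating a gradient step with TV denoising and wavelet soft-thresholding; this is not the authors' algorithm, and the sensing matrix and regularization weights below are illustrative placeholders.

# Toy sketch (not the paper's algorithm) of reconstructing from few-view data
# by alternating a gradient step on ||Ax - b||^2 with a TV denoising step and
# wavelet soft-thresholding, mirroring the three terms of the stated objective.
import numpy as np
import pywt
from skimage.restoration import denoise_tv_chambolle

def cs_reconstruct(A, b, shape, n_iter=100, step=0.3, lam_tv=0.05, lam_w=0.02):
    """Alternate a least-squares gradient step with TV and wavelet shrinkage."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        # Gradient step on the data-fidelity (least-squares) term.
        grad = (A.T @ (A @ x.ravel() - b)).reshape(shape)
        x = x - step * grad
        # Proximal-style step for the discrete gradient (TV) sparsity term.
        x = denoise_tv_chambolle(x, weight=lam_tv)
        # Soft-thresholding of discrete wavelet coefficients (l1 sparsity).
        coeffs = pywt.wavedec2(x, "db2", level=2)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, lam_w, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, "db2")[: shape[0], : shape[1]]
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    phantom = np.zeros((32, 32)); phantom[10:22, 10:22] = 1.0   # toy object
    A = rng.standard_normal((400, 32 * 32)) / 32                # undersampled system
    b = A @ phantom.ravel()
    recon = cs_reconstruct(A, b, phantom.shape)
    print("relative error:", np.linalg.norm(recon - phantom) / np.linalg.norm(phantom))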

    Heuristic 3D Reconstruction of Irregularly Spaced LIDAR

    As more data sources have become abundantly available, an increased interest in 3D reconstruction has emerged in the image processing academic community. Applications of 3D reconstruction of urban and residential buildings include urban planning, network planning for mobile communication, tourism information systems, spatial analysis of air pollution and noise nuisance, microclimate investigations, and Geographical Information Systems (GISs). Previous, classical 3D reconstruction algorithms relied solely on aerial photography. With the advent of LIDAR systems, current algorithms explore using captured LIDAR data as an additional feasible source of information for 3D reconstruction. Preprocessing techniques are proposed for the development of an autonomous 3D reconstruction algorithm. The algorithm is designed for autonomously deriving three-dimensional models of urban and residential buildings from raw LIDAR data. First, a greedy insertion triangulation algorithm, modified with a proposed noise filtering technique, triangulates the raw LIDAR data. The normal vectors of those triangles are then passed to an unsupervised clustering algorithm – Fuzzy Simplified Adaptive Resonance Theory (Fuzzy SART). Fuzzy SART returns a rough grouping of coplanar triangles. A proposed multiple regression algorithm then refines the coplanar grouping by removing outliers and deriving an improved planar segmentation of the raw LIDAR data. Finally, further refinement is achieved by calculating the intersection of the best-fit roof planes and moving nearby points onto that intersection, resulting in straight roof ridges. The aforementioned techniques culminate in a well-defined model approximating the building depicted by the LIDAR data.
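
    The regression-based refinement step can be pictured with a short least-squares sketch: fit a plane z = ax + by + c to a roughly coplanar group of LIDAR points and reject outliers. This is illustrative only; the tolerance and pass count are arbitrary and not taken from the dissertation.

# Illustrative sketch of the plane-refinement idea: fit a least-squares plane
# z = a*x + b*y + c to a roughly coplanar group of LIDAR points and drop
# outliers whose residual exceeds a tolerance. Not the dissertation's algorithm;
# thresholds are arbitrary.
import numpy as np

def fit_plane(points):
    """points: (N, 3) array. Returns (a, b, c) of z = a*x + b*y + c."""
    X = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(X, points[:, 2], rcond=None)
    return coeffs

def refine_coplanar_group(points, tol=0.1, n_passes=3):
    """Alternate plane fitting with outlier rejection, a simple regression-based
    refinement of one coplanar segment."""
    for _ in range(n_passes):
        a, b, c = fit_plane(points)
        residuals = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
        keep = residuals < tol
        if keep.all():
            break
        points = points[keep]
    return points, (a, b, c)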

    Improving Image Reconstruction for Digital Breast Tomosynthesis

    Digital breast tomosynthesis (DBT) has been developed to reduce the issue of overlapping tissue in conventional 2-D mammography for breast cancer screening and diagnosis. In the DBT procedure, the patient’s breast is compressed with a paddle and a sequence of x-ray projections is taken within a small angular range. Tomographic reconstruction algorithms are then applied to these projections, generating tomosynthesized image slices of the breast, such that radiologists can read the breast slice by slice. Studies have shown that DBT can reduce both false-negative diagnoses of breast cancer and false-positive recalls compared to mammography alone. This dissertation focuses on improving image quality for DBT reconstruction. Chapter I briefly introduces the concept of DBT and the inspiration of my study. Chapter II covers the background of my research, including the concept of image reconstruction, the geometry of our experimental DBT system and figures of merit for image quality. Chapter III introduces our study of the segmented separable footprint (SG) projector. By taking into account the finite size of detector elements, the SG projector improves the accuracy of forward projections in iterative image reconstruction. Due to its more efficient memory access, the SG projector is also faster than the traditional ray-tracing (RT) projector. We applied the SG projector to regular and subpixel reconstructions and demonstrated its effectiveness. Chapter IV introduces a new DBT reconstruction method with detector blur and correlated noise modeling, called the SQS-DBCN algorithm. The SQS-DBCN algorithm is able to significantly enhance microcalcifications (MC) in DBT while preserving the appearance of the soft tissue and mass margin. Comparisons between the SQS-DBCN algorithm and several modified versions of it indicate the importance of modeling the different components of the system physics at the same time. Chapter V investigates truncated projection artifact (TPA) removal algorithms. Among the three algorithms we proposed, the pre-reconstruction-based projection view (PV) extrapolation method provides the best performance. Possible improvements of the other two TPA removal algorithms are discussed. Chapter VI of this dissertation examines the effect of source blur on DBT reconstruction. Our analytical calculation demonstrates that the point spread function (PSF) of source blur is highly shift-variant. We used CatSim to simulate digital phantoms. Analysis of the reconstructed images demonstrates that a typical finite-sized focal spot (~ 0.3 mm) will not affect the image quality if the x-ray tube is stationary during the data acquisition. For DBT systems with continuous-motion data acquisition, the motion of the x-ray tube is the main cause of the effective source blur and will cause a loss in the contrast of objects. Therefore, modeling the source blur for these DBT systems could potentially improve the reconstructed image quality. The final chapter of this dissertation discusses a few future studies that are inspired by my PhD research. PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144059/1/jiabei_1.pd
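
    The detector-blur modeling of Chapter IV can be summarized, very loosely, as replacing the forward model y ≈ Ax with y ≈ B(Ax), where B is a blur operator. The sketch below shows that idea with a toy matrix projector and a 1-D Gaussian blur; it is not the SQS-DBCN algorithm, and the operators and parameters are illustrative assumptions.

# Minimal sketch of the idea of modeling detector blur in iterative
# reconstruction: the forward model becomes y ~ B(A x), where A is the
# projector and B a detector blur. Toy matrix A and 1-D Gaussian blur only;
# not the SQS-DBCN implementation.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def forward(A, x, blur_sigma=1.0):
    return gaussian_filter1d(A @ x, sigma=blur_sigma)

def back(A, r, blur_sigma=1.0):
    # Adjoint of (blur o A): a symmetric Gaussian blur is (approximately)
    # self-adjoint, so apply the blur and then A^T.
    return A.T @ gaussian_filter1d(r, sigma=blur_sigma)

def least_squares_recon(A, y, n_iter=200, step=None, blur_sigma=1.0):
    """Gradient-descent least-squares reconstruction with the blurred model."""
    x = np.zeros(A.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step size
    for _ in range(n_iter):
        r = forward(A, x, blur_sigma) - y
        x -= step * back(A, r, blur_sigma)
    return x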

    Latest developments in the improvement and quantification of high resolution X-ray tomography data

    X-ray computed tomography (CT) is a powerful tool to visualize the internal structure of objects. Although X-ray CT is often used for medical purposes, it has many applications in the academic and industrial world. X-ray CT is a non-destructive tool which provides the possibility to obtain a three-dimensional (3D) representation of the investigated object. The currently available high-resolution systems can achieve resolutions of less than one micrometer, which makes X-ray CT a valuable technique for various scientific and industrial applications. At the Centre for X-ray Tomography of Ghent University (UGCT), research is performed on the improvement and application of high-resolution X-ray CT (µCT). Important aspects of this research are the development of state-of-the-art high-resolution CT scanners and the development of software for controlling the scanners, reconstruction software and analysis software. UGCT works closely together with researchers from various research fields, each of whom has specific requirements. To obtain the best possible results in any particular case, the scanners are developed in a modular way, which allows for optimizations, modifications and improvements during use. Another way of improving the image quality lies in optimization of the reconstruction software, which is why the software package Octopus was developed in-house. Once a scanned volume is reconstructed, an important challenge lies in the interpretation of the obtained data. For this interpretation visualization alone is often insufficient and quantitative information is needed. As researchers from different fields have different needs with respect to quantification of their data, UGCT developed the 3D analysis software package Morpho+ for analysing all kinds of samples. The research presented in this work focuses on improving the accuracy and extending the amount of quantitative information that can be extracted from µCT data. Even if a perfect analysis algorithm existed, it would be impossible to accurately quantify data whose image quality is insufficient. As image quality can be significantly improved with the aid of adequate reconstruction techniques, the research presented in this work focuses on analysis as well as reconstruction software. As the datasets obtained with µCT at UGCT are of substantial size, the possibility to process large datasets in a limited amount of time is crucial in the development of new algorithms. The contributions of the author can be subdivided into three major aspects of the processing of CT data: the modification of iterative reconstruction algorithms, the extension and optimization of 3D analysis algorithms, and the development of a new algorithm for discrete tomography. These topics are discussed in more detail below. A main aspect in the improvement of image quality is the reduction of artefacts which often occur in µCT, such as noise, cone beam and beam hardening artefacts. Cone beam artefacts are a result of the cone beam geometry which is often used in laboratory-based µCT, and beam hardening is a consequence of the polychromaticity of the beam. Although analytical reconstruction algorithms based on filtered back projection are still most commonly used for the reconstruction of µCT datasets, there is another approach which is becoming a valuable alternative: iterative reconstruction algorithms. Iterative algorithms are inherently better at coping with the previously mentioned artefacts.
Additionally, iterative algorithms can improve image quality when the number of available projections or the angular range is limited. In chapter 3 the possibility of modifying these algorithms to further improve image quality is investigated. It is illustrated that streak artefacts, which can occur when metals are present in a sample, can be significantly reduced by modifying the reconstruction algorithm. Additionally, it is demonstrated that the incorporation of an initial solution (if available) allows reducing the required number of projections for a second, slightly modified sample. To reduce beam hardening artefacts, the physics of the process is modelled and incorporated in the iterative reconstruction algorithm, which results in an easy-to-use and efficient algorithm for the reduction of beam hardening artefacts that requires no prior knowledge about the sample. In chapter 4 the 3D analysis process is described. In the scope of this work, algorithms of the 3D analysis software package Morpho+ were optimized and new methods were added to the program, focusing on quantifying the connectivity and shape of the phases and elements in the sample, as well as on obtaining an accurate segmentation, which is an essential step in the analysis process. Evidently, the different phases in the sample need to be separated from one another. However, often a second segmentation step is needed in order to separate the different elements present in a volume, such as pores in a pore network, or to separate elements which are physically separated but appear to be connected in the reconstructed images due to the limited resolution and/or limited contrast of the scan. The latter effect often occurs in the process of identifying different grains in a geological sample. Algorithms available for this second segmentation step often result in over-segmentation, i.e. elements are not only separated from one another but separations inside a single element also occur. To overcome this effect an algorithm is presented to semi-automatically rejoin the separated parts of a single element. Additionally, Morpho+ was extended with tools to extract information about the connectivity of a sample, which is difficult to quantify but important for samples from various research fields. The connectivity can be described with the aid of the calculation of the Euler number and tortuosity. Moreover, the number of neighbouring objects of each object can be determined and the connections between objects can be quantified. It is now also possible to extract a skeleton, which describes the basic structure of the volume. A calculation of several shape parameters was added to the program as well, resulting in the possibility to visualize the different objects on a disc-rod diagram. The many possibilities to characterize reconstructed samples with the aid of Morpho+ are illustrated with several applications. As mentioned in the previous section, an important aspect of correctly quantifying µCT data is the correct segmentation of the different phases present in the sample. Often a sample consists of only one or a limited number of materials (and surrounding air). In this case this prior knowledge about the sample can be incorporated in the reconstruction algorithm. These kinds of algorithms are referred to as discrete reconstruction algorithms, which are used when only a limited number of projections is available. Chapter 5 deals with discrete reconstruction algorithms. 
One of these algorithms is the Discrete Algebraic Reconstruction Technique (DART), which combines iterative and discrete reconstruction and has shown excellent results. DART requires knowledge about the attenuation coefficient(s) and segmentation threshold(s) of the material(s). For µCT applications (resulting in large datasets) reconstruction times can increase significantly when DART is used in comparison with standard iterative reconstruction, as DART requires more iterations. This complicates the practical applicability of DART for routine applications at UGCT. Therefore, a modified algorithm (based on the DART algorithm) for the reconstruction of samples consisting of only one material and surrounding air was developed in the scope of this work, referred to as the Experimental Discrete Algebraic Reconstruction Technique (EDART). The goal of this algorithm is to obtain better reconstruction results in comparison with standard iterative reconstruction algorithms, without significantly increasing reconstruction time. Moreover, a fast and intuitive technique to estimate the attenuation coefficient and threshold was developed as part of the EDART algorithm. In chapter 5 it is illustrated that EDART provides improved image quality for both phantom and real data, in comparison with standard iterative reconstruction algorithms, when only a limited number of projections is available. The algorithms presented in this work can be applied sequentially but can also be combined with one another. It is, for example, illustrated in chapter 5 that the beam hardening correction method can also be incorporated in the EDART algorithm. The combination of the introduced methods allows for an improvement in the process of extracting accurate quantitative information from µCT data.
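
    A highly simplified sketch of the DART idea referenced above, for a sample of one material and surrounding air: alternate an algebraic (SIRT-like) update with segmentation to the known grey level, and let only pixels near the material/air interface keep evolving. This is neither the original DART nor the EDART implementation; the grey value, threshold and iteration counts below are placeholders.

# Highly simplified sketch of the DART idea for a one-material sample.
# Illustrative only; not the official DART or EDART code.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def dart_like(A, b, shape, mu=1.0, threshold=0.5, n_outer=10, n_inner=20):
    n = A.shape[1]
    x = np.zeros(n)
    free = np.ones(n, dtype=bool)             # initially every pixel is free
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # conservative SIRT-like step
    for _ in range(n_outer):
        for _ in range(n_inner):
            update = step * (A.T @ (b - A @ x))
            x[free] += update[free]           # only free pixels evolve
        seg = x.reshape(shape) > threshold
        # Fix interior pixels at their discrete grey value; pixels in a thin
        # band around the material/air interface remain free.
        boundary = binary_dilation(seg) & ~binary_erosion(seg)
        x = np.where(boundary, x.reshape(shape), np.where(seg, mu, 0.0)).ravel()
        free = boundary.ravel()
    return np.where(x.reshape(shape) > threshold, mu, 0.0)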

    Tomographic Image Reconstruction: Implementation, Optimization and Comparison in Digital Breast Tomosynthesis

    Conventional 2D mammography was the most effective approach to detecting early-stage breast cancer in past decades. Tomosynthetic breast imaging is a potentially more valuable 3D technique for breast cancer detection. The limitations of current tomosynthesis systems include a longer scanning time than a conventional digital X-ray modality and a low spatial resolution due to the movement of the single X-ray source. Dr. Otto Zhou's group proposed the concept of stationary digital breast tomosynthesis (s-DBT) using a carbon nanotube (CNT) based X-ray source array. Instead of mechanically moving a single X-ray tube, s-DBT applies a stationary X-ray source array, which generates X-ray beams from different view angles by electronically activating the individual source pre-positioned at the corresponding view angle, thereby eliminating focal spot motion blur from the sources. The scanning speed is determined only by the detector readout time and the number of sources, regardless of the angular coverage span, so that blur from the patient's motion can be reduced thanks to the quick scan. s-DBT is potentially a promising modality for improving early breast cancer detection by providing good image quality with a fast scan and low radiation dose. A DBT system acquires a limited number of noisy 2D projections over a limited angular range and then mathematically reconstructs a 3D breast volume. 3D reconstruction faces the challenges of cone-beam and flat-panel geometry, highly incomplete sampling and a huge reconstructed volume. In this research, we investigated several representative reconstruction methods such as filtered backprojection (FBP), the simultaneous algebraic reconstruction technique (SART) and maximum likelihood (ML). We also compared our proposed statistical iterative reconstruction (IR), with its particular prior and computational techniques, to these representative methods. Of all the reconstruction methods considered in this research, our proposed statistical IR appears particularly promising since it provides the flexibility of accurate physical noise modeling and geometric system description. In the following chapters, we present multiple key techniques of statistical IR applied to tomosynthesis imaging data to demonstrate significant image-quality improvement over conventional techniques. These techniques include physical modeling with a local voxel-pair based prior whose parameters can be adjusted to fine-tune image quality, a pre-computed parameter κ incorporated into the prior to remove the data dependence and achieve a predictable resolution property, an effective ray-driven technique to compute the forward and backprojection, and an over-sampled ray-driven method to perform high-resolution reconstruction with a practical region-of-interest (ROI) technique. In addition, to solve the estimation problem with fast computation, we also present a semi-quantitative method to optimize the relaxation parameter in a relaxed ordered-subsets framework and an optimization-transfer based algorithm framework which potentially allows fewer iterations to achieve acceptable convergence. Phantom data were acquired with the s-DBT prototype system to assess the performance of these techniques and to compare our proposed method to the representative ones. The value of IR is demonstrated in improving the detectability of low-contrast objects and tiny microcalcifications, in reducing cross-plane artifacts, and in improving resolution and lowering noise in reconstructed images.
In particular, noise power spectrum (NPS) analysis indicates a superior noise spectral property of our proposed statistical IR, especially in the high-frequency range. With this favorable noise property, statistical IR also provides a remarkable reconstruction MTF, both overall and in different areas within a focus plane. Although the computational load remains a significant challenge for practical deployment, combined with advancing computational techniques such as graphics computing, the superior image quality provided by statistical IR can be realized to benefit diagnostics in real clinical applications.
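
    For context, a relaxed ordered-subsets SART update of the kind referred to above can be sketched in a few lines; the toy system matrix, subset grouping and relaxation value below are illustrative assumptions, not the dissertation's ray-driven implementation.

# Illustrative sketch of a relaxed ordered-subsets SART update. Rows of the
# (non-negative) toy system matrix A are grouped into subsets that stand in
# for projection views; lambda_relax is the relaxation parameter being tuned.
import numpy as np

def sart_os(A, b, n_subsets=10, n_iter=5, lambda_relax=0.5):
    m, n = A.shape
    x = np.zeros(n)
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            Asub = A[idx]
            # Ray-normalized residual, then voxel-normalized backprojection,
            # scaled by the relaxation parameter.
            resid = (b[idx] - Asub @ x) / np.maximum(Asub.sum(axis=1), 1e-12)
            x += lambda_relax * (Asub.T @ resid) / np.maximum(Asub.sum(axis=0), 1e-12)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    A = rng.random((300, 100))            # toy non-negative system matrix
    x_true = rng.random(100)
    b = A @ x_true
    x = sart_os(A, b)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))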

    A framework for advanced processing of dynamic X-ray micro-CT data


    Rapid, Reliable Tissue Fractionation Algorithm for Commercial Scale Biorefineries

    Increasing demand, limited supply, and the impact on the environment raise significant concerns about the consumption of fossil fuels. Because of this, global economies are facing two significant energy challenges: i) securing the supply of reliable and affordable energy, and ii) achieving the transformation to a low-carbon, high-efficiency, and sustainable energy system. Recently, there has been growing interest in developing portable transportation fuels from biomass in order to reduce petroleum consumption in the transportation sector, a major contributor to greenhouse gas emissions. A cost-effective conversion process to produce biofuels from lignocellulosic biomass relies not just on the material quality, but also on the biorefinery’s ability to measure the quality of the source biomass. The quality of the feedstock is crucial for a commercially viable conversion platform. This research mainly focuses on developing sensing techniques using 3D X-ray imaging to study quality factors such as material composition, ash content and moisture content, which affect the conversion efficiency, equipment wear, and product yield in bioethanol production, on a real-time or near real-time basis.