
    Enhanced image reconstruction of electrical impedance tomography using simultaneous algebraic reconstruction technique and K-means clustering

    Electrical impedance tomography (EIT), as a non-ionizing tomography method, has been widely used in various fields of application, such as engineering and medicine. This study applies an iterative process to reconstruct EIT images using the simultaneous algebraic reconstruction technique (SART) combined with K-means clustering. Reconstruction starts by defining the finite element method (FEM) model and filtering the measurement data with a Butterworth low-pass filter. The next step is solving the inverse problem of the EIT case with the SART algorithm. The results of the SART approach are then classified using K-means clustering and thresholding. The reconstructions were evaluated with the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM) and the normalized root mean square error (NRMSE), and compared with the one-step Gauss-Newton (GN) method and total variation regularization based on iteratively reweighted least-squares (TV-IRLS). The evaluation shows that the average PSNR and SSIM of the proposed reconstruction method are the highest among the compared methods, at 24.24 and 0.94 respectively, while its average NRMSE, 0.04, is the lowest. The performance evaluation also shows that the proposed method is faster than the other methods.
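
A minimal sketch of the pipeline described above, assuming a linearized EIT model: a SART solve followed by K-means labelling. The Jacobian J, the voltage vector v and all sizes are illustrative stand-ins rather than the paper's FEM quantities, and the Butterworth pre-filtering step is omitted.

    import numpy as np
    from sklearn.cluster import KMeans

    def sart(J, v, n_iter=50, relax=0.1):
        # SART update: x <- x + relax * C J^T R (v - J x), where R and C hold
        # the inverse row and column sums of |J|.
        row = np.abs(J).sum(axis=1); row[row == 0] = 1.0
        col = np.abs(J).sum(axis=0); col[col == 0] = 1.0
        x = np.zeros(J.shape[1])
        for _ in range(n_iter):
            x += relax * (J.T @ ((v - J @ x) / row)) / col
        return x

    # Toy usage: a random system standing in for the FEM-derived Jacobian.
    rng = np.random.default_rng(0)
    J = rng.standard_normal((208, 400))      # e.g. 16-electrode EIT geometry
    v = J @ (rng.random(400) > 0.8).astype(float)
    x_rec = sart(J, v)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(x_rec.reshape(-1, 1))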

    Latest developments in the improvement and quantification of high resolution X-ray tomography data

    X-ray Computed Tomography (CT) is a powerful tool to visualize the internal structure of objects. Although X-ray CT is often used for medical purposes, it has many applications in the academic and industrial world. X-ray CT is a non-destructive tool which makes it possible to obtain a three-dimensional (3D) representation of the investigated object. Currently available high-resolution systems can achieve resolutions of less than one micrometer, which makes the technique valuable for various scientific and industrial applications. At the Centre for X-ray Tomography of Ghent University (UGCT), research is performed on the improvement and application of high-resolution X-ray CT (µCT). Important aspects of this research are the development of state-of-the-art high-resolution CT scanners and of software for controlling the scanners, for reconstruction and for analysis. UGCT works closely together with researchers from various research fields, each with their own specific requirements. To obtain the best possible results in any particular case, the scanners are developed in a modular way, which allows for optimizations, modifications and improvements during use. Another way of improving image quality lies in optimization of the reconstruction software, which is why the software package Octopus was developed in-house. Once a scanned volume is reconstructed, an important challenge lies in the interpretation of the obtained data. For this interpretation, visualization alone is often insufficient and quantitative information is needed. As researchers from different fields have different needs with respect to quantification of their data, UGCT developed the 3D analysis software package Morpho+ for analysing all kinds of samples.

The research presented in this work focuses on improving the accuracy and extending the amount of quantitative information that can be extracted from µCT data. Even if a perfect analysis algorithm existed, it would be impossible to accurately quantify data whose image quality is insufficient. As image quality can be significantly improved with the aid of adequate reconstruction techniques, this work focuses on analysis as well as reconstruction software. Because the datasets obtained with µCT at UGCT are of substantial size, the ability to process large datasets in a limited amount of time is crucial in the development of new algorithms. The contributions of the author concern three major aspects of the processing of CT data: the modification of iterative reconstruction algorithms, the extension and optimization of 3D analysis algorithms, and the development of a new algorithm for discrete tomography. These topics are discussed in more detail below.

A main aspect in the improvement of image quality is the reduction of artefacts which often occur in µCT, such as noise, cone-beam and beam-hardening artefacts. Cone-beam artefacts result from the cone-beam geometry often used in laboratory-based µCT, and beam hardening is a consequence of the polychromaticity of the beam. Although analytical reconstruction algorithms based on filtered back projection are still most commonly used for the reconstruction of µCT datasets, another approach is becoming a valuable alternative: iterative reconstruction algorithms. Iterative algorithms are inherently better at coping with the previously mentioned artefacts.
Additionally, iterative algorithms can improve image quality when the number of available projections or the angular range is limited. In chapter 3 the possibility of modifying these algorithms to further improve image quality is investigated. It is illustrated that the streak artefacts which can occur when metals are present in a sample can be significantly reduced by modifying the reconstruction algorithm. It is also demonstrated that incorporating an initial solution (if available) allows the number of required projections to be reduced for a second, slightly modified sample. To reduce beam-hardening artefacts, the physics of the process is modelled and incorporated in the iterative reconstruction algorithm, resulting in an easy-to-use and efficient algorithm for the reduction of beam-hardening artefacts that requires no prior knowledge about the sample.

In chapter 4 the 3D analysis process is described. In the scope of this work, algorithms of the 3D analysis software package Morpho+ were optimized and new methods were added to the program, focusing on quantifying the connectivity and shape of the phases and elements in a sample, and on obtaining an accurate segmentation, which is an essential step in the analysis process. Evidently, the different phases in the sample need to be separated from one another. However, a second segmentation step is often needed to separate the different elements present in a volume, such as the pores in a pore network, or to separate elements which are physically separate but appear connected in the reconstructed images due to the limited resolution and/or contrast of the scan. The latter effect often occurs when identifying different grains in a geological sample. Algorithms available for this second segmentation step often result in over-segmentation, i.e. elements are not only separated from one another but separations also occur inside single elements. To overcome this effect, an algorithm is presented to semi-automatically rejoin the separated parts of a single element. Additionally, Morpho+ was extended with tools to extract information about the connectivity of a sample, which is difficult to quantify but important for samples from various research fields. The connectivity can be described with the aid of the Euler number and the tortuosity. Moreover, the number of neighbouring objects of each object can be determined, and the connections between objects can be quantified. It is now also possible to extract a skeleton, which describes the basic structure of the volume. A calculation of several shape parameters was added to the program as well, making it possible to visualize the different objects on a disc-rod diagram. The many possibilities for characterizing reconstructed samples with the aid of Morpho+ are illustrated on several applications; a generic sketch of the neighbour-counting step follows below.
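
Morpho+ itself is not public, so the following is only an illustrative scipy-based stand-in for the neighbour-counting measure described above, assuming a segmented binary volume as input.

    import numpy as np
    from scipy import ndimage

    def neighbour_counts(binary_volume):
        # Label connected elements (e.g. grains or pores), then count, for
        # each element, how many distinct other elements its one-voxel
        # dilation touches.
        labels, n = ndimage.label(binary_volume)
        counts = {}
        for i in range(1, n + 1):
            dilated = ndimage.binary_dilation(labels == i)
            touching = set(np.unique(labels[dilated])) - {0, i}
            counts[i] = len(touching)
        return counts
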
As mentioned above, correctly quantifying µCT data requires a correct segmentation of the different phases present in the sample. Often a sample consists of only one or a limited number of materials (plus the surrounding air), and this prior knowledge can be incorporated in the reconstruction algorithm. Such algorithms are referred to as discrete reconstruction algorithms and are used when only a limited number of projections is available. Chapter 5 deals with discrete reconstruction algorithms. One of these is the Discrete Algebraic Reconstruction Technique (DART), which combines iterative and discrete reconstruction and has shown excellent results. DART requires knowledge of the attenuation coefficient(s) and segmentation threshold(s) of the material(s). For µCT applications (which produce large datasets), reconstruction times can increase significantly when DART is used instead of standard iterative reconstruction, as DART requires more iterations. This complicates the practical applicability of DART for routine applications at UGCT. Therefore, a modified algorithm based on DART was developed in the scope of this work for the reconstruction of samples consisting of only one material and surrounding air, referred to as the Experimental Discrete Algebraic Reconstruction Technique (EDART). The goal of this algorithm is to obtain better reconstruction results than standard iterative reconstruction algorithms without significantly increasing reconstruction time. Moreover, a fast and intuitive technique to estimate the attenuation coefficient and threshold was developed as part of the EDART algorithm. Chapter 5 illustrates that EDART provides improved image quality for both phantom and real data, compared with standard iterative reconstruction algorithms, when only a limited number of projections is available. The algorithms presented in this work can be applied one after another, but can also be combined with one another. For example, chapter 5 illustrates that the beam-hardening correction method can also be incorporated in the EDART algorithm. The combination of the introduced methods allows for an improvement of the process of extracting accurate quantitative information from µCT data.
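
The EDART details are specific to the thesis; purely as an illustration of the one-material-plus-air idea, a DART-flavoured loop that alternates a continuous solver with binary segmentation might look as follows (the threshold estimate and the treatment of boundary pixels are deliberately simplified).

    import numpy as np

    def sirt(A, b, x0, n_iter=10, relax=0.5):
        # Basic SIRT iterations, used here as the continuous sub-solver.
        row = np.abs(A).sum(axis=1); row[row == 0] = 1.0
        col = np.abs(A).sum(axis=0); col[col == 0] = 1.0
        x = x0.copy()
        for _ in range(n_iter):
            x += relax * (A.T @ ((b - A @ x) / row)) / col
        return x

    def discrete_reconstruct(A, b, mu, n_outer=5, tau=None):
        # Alternate continuous updates with segmentation to {0, mu}; the
        # continuous steps then mainly repair mis-segmented boundary pixels.
        tau = 0.5 * mu if tau is None else tau   # naive threshold estimate
        x = sirt(A, b, np.zeros(A.shape[1]))
        for _ in range(n_outer):
            seg = np.where(x > tau, mu, 0.0)     # discretize: material or air
            x = sirt(A, b, seg, n_iter=5)        # restart from segmented image
        return np.where(x > tau, mu, 0.0)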

    Robust inversion and detection techniques for improved imaging performance

    Thesis (Ph.D.)--Boston University. In this thesis we aim to improve the performance of information extraction from imaging systems through three thrusts. First, we develop improved image formation methods for physics-based, complex-valued sensing problems. We propose a regularized inversion method that incorporates prior information about the underlying field into the inversion framework for ultrasound imaging. We use experimental ultrasound data to compute inversion results with the proposed formulation and compare them with conventional inversion techniques to show the robustness of the proposed technique to loss of data. Second, we propose methods that combine inversion and detection in a unified framework to improve imaging performance. This framework is applicable to cases where the underlying field is label-based, such that each pixel can only assume values from a discrete, limited set. We consider this unified framework in the context of combinatorial optimization and propose graph-cut based methods that directly produce label-based images, thereby eliminating the need for a separate detection step. Finally, we propose a robust method for object detection in microscopic nanoparticle images. In particular, we focus on a portable, low-cost interferometric imaging platform and propose robust detection algorithms using tools from computer vision. We model the electromagnetic image formation process and use this model to create an enhanced detection technique. The effectiveness of the proposed technique is demonstrated using manually labeled ground-truth data. In addition, we extend these tools to develop a detection-based autofocusing algorithm tailored for the high numerical aperture interferometric microscope.
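
The thesis' ultrasound operator and regularizer are not given in the abstract; a minimal sketch of generic Tikhonov-regularized inversion for a complex-valued forward model, with all names and shapes illustrative, is:

    import numpy as np

    def regularized_inversion(A, y, lam=1e-2):
        # Solve min_x ||A x - y||^2 + lam ||x||^2 via the normal equations;
        # prior information enters through the regularization term.
        AH = A.conj().T
        return np.linalg.solve(AH @ A + lam * np.eye(A.shape[1]), AH @ y)

    # Toy usage with a random complex system and noisy data.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((120, 80)) + 1j * rng.standard_normal((120, 80))
    y = A @ rng.standard_normal(80) + 0.01 * rng.standard_normal(120)
    x_hat = regularized_inversion(A, y)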

    Applications in GNSS water vapor tomography

    Algebraic reconstruction algorithms are iterative algorithms used in many areas, including medicine, seismology and meteorology. These algorithms are known to be highly computationally intensive, which can be especially troublesome for real-time applications or when the data are processed on conventional low-cost personal computers. One such real-time application is the reconstruction of water vapor images from Global Navigation Satellite System (GNSS) observations. Parallelizing algebraic reconstruction algorithms has the potential to reduce the required resources significantly, permitting valid solutions to be obtained in time to be used in nowcasting and forecasting weather models. The main objective of this dissertation was to present and analyse diverse shared-memory libraries and techniques for algebraic reconstruction algorithms on CPU and GPU. It was concluded that parallelization pays off compared with sequential implementations. Overall, the GPU implementations were found to be only slightly faster than the CPU implementations, depending on the size of the problem being studied. A secondary objective was to develop software that performs GNSS water vapor reconstruction using the implemented parallel algorithms. This software was developed successfully and tested with both synthetic and real data; the preliminary results were satisfactory. This dissertation was written at the Space & Earth Geodetic Analysis Laboratory (SEGAL) and was carried out in the framework of the Structure of Moist convection in high-resolution GNSS observations and models (SMOG) project (PTDC/CTE-ATM/119922/2010), funded by FCT (Fundação para a Ciência e a Tecnologia).
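
For reference, a sequential Kaczmarz (ART) sweep is sketched below; the row-by-row update is inherently serial, which is why shared-memory parallelizations typically target simultaneous variants (SIRT/SART-type) whose per-row residuals can be computed independently. The dense matrix here is an illustrative stand-in for the GNSS voxel-geometry system.

    import numpy as np

    def kaczmarz(A, b, n_sweeps=20, relax=1.0):
        # Project the iterate onto each row's hyperplane in turn.
        x = np.zeros(A.shape[1])
        norms = (A * A).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if norms[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / norms[i] * A[i]
        return x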

    Iterative Reconstruction Framework for High-Resolution X-ray CT Data

    Small animal medical imaging has become an important tool for researchers, as it allows noninvasive screening of animal models for pathologies as well as monitoring of disease progression and therapy response. Clinical CT scanners currently use a Filtered Backprojection (FBP) based method for image reconstruction. This algorithm is fast and generally produces acceptable results, but has several drawbacks. Firstly, it is based upon line integrals, which do not accurately describe the process of X-ray attenuation. Secondly, noise in the projection data is not properly modeled with FBP. Iterative algorithms, on the other hand, allow the integration of more complicated system models as well as robust scatter and noise correction techniques. Unfortunately, iterative algorithms also have much greater computational demands than their FBP counterparts. In this thesis, we develop a framework to support iterative reconstructions of high-resolution X-ray CT data. This includes exploring various system models and algorithms as well as developing techniques to manage the significant computational and storage requirements of the iterative algorithms. Issues related to the development of this framework as well as preliminary results are presented.
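
The framework's own system models are not given in the abstract; as an illustration of the storage problem it must manage, a noise-weighted SIRT-style pass can stream the system matrix in row blocks so the full matrix never resides in memory (all names and shapes below are hypothetical):

    import numpy as np

    def weighted_sirt_pass(row_blocks, b, w, x, relax=0.5):
        # One pass of x <- x + relax * A^T W (b - A x) / diag(A^T W A),
        # with A supplied as an iterable of (row_start, block) chunks so the
        # whole system matrix never has to be held in memory at once.
        num = np.zeros_like(x)
        den = np.zeros_like(x)
        for start, A_blk in row_blocks:
            sl = slice(start, start + A_blk.shape[0])
            num += A_blk.T @ (w[sl] * (b[sl] - A_blk @ x))   # weighted residual
            den += (A_blk * A_blk * w[sl, None]).sum(axis=0)
        den[den == 0] = 1.0
        return x + relax * num / den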

    An inverse problem approach for series alignment in electron tomography

    In the refining industry, morphological measurements of particles have become essential to the characterization of catalyst supports. From these parameters, one can infer the specific physicochemical properties of the studied materials. One of the main acquisition techniques is electron tomography (or nanotomography), in which 3D volumes are reconstructed from sets of projections taken at different angles with a Transmission Electron Microscope (TEM). This technique provides genuinely three-dimensional information at the nanometric scale. A major issue in this method is the misalignment of the projections that contribute to the reconstruction. Current alignment techniques usually employ fiducial markers such as gold particles to align the images correctly. When the use of markers is not possible, the correlation between adjacent projections is used to align them; however, this method sometimes fails. In this paper, we propose a new method based on the inverse problem approach, in which a criterion is minimized using a variant of the Nelder and Mead simplex algorithm. The proposed approach is composed of two steps. The first step is an initial alignment process which relies on minimizing a cost function based on robust statistics, measuring the similarity of a projection to the previous projections in the series. It reduces the strong shifts between successive projections that result from the acquisition. In the second step, the pre-registered projections are used to initialize an iterative alignment-refinement process which alternates between (i) volume reconstructions and (ii) registrations of measured projections onto simulated projections computed from the volume reconstructed in (i). At the end of this process, we obtain a correct reconstruction of the volume, with the projections correctly aligned. Our method is tested on simulated data and shown to accurately estimate the translation, rotation and scale of arbitrary transforms. We have successfully tested our method with real projections of different catalyst supports.
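
The paper's simplex variant and robust statistics are its own; a minimal sketch of the registration step using scipy's stock Nelder-Mead and a plain L1 misfit, restricted to translation for brevity, is:

    import numpy as np
    from scipy.ndimage import shift
    from scipy.optimize import minimize

    def register_shift(measured, simulated):
        # Find the 2D translation of the simulated projection that best
        # matches the measured one under a robust (L1-type) misfit.
        def cost(t):
            return np.abs(measured - shift(simulated, t)).mean()
        return minimize(cost, x0=np.zeros(2), method='Nelder-Mead').x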

    Image reconstruction from limited angle projections collected by multisource interior x-ray imaging systems

    A multi-source interior x-ray imaging system with limited-angle scanning is investigated to study the possibility of building an ultra-fast micro-CT for dynamic small animal imaging. Two methods are employed to perform interior reconstruction from the limited number of projections collected by the multi-source interior x-ray system: total variation minimization with a steepest descent search (TVM-SD) and total difference minimization with soft-threshold filtering (TDM-STF). Comprehensive numerical simulations and animal studies are performed to validate the associated reconstruction methods and to demonstrate the feasibility of the proposed system configuration. The image reconstruction results show that both reconstruction methods significantly improve image quality and that TDM-STF is slightly superior to TVM-SD. Finally, quantitative image analysis shows that it is possible to build an ultra-fast micro-CT using a multi-source interior x-ray system scheme combined with state-of-the-art interior tomography.
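
The paper's exact TVM-SD and TDM-STF schemes are not reproduced in the abstract; two generic building blocks of this family, a smoothed-TV steepest-descent step and the soft-thresholding operator, can be sketched as follows (periodic boundaries are used purely for brevity):

    import numpy as np

    def tv_descent_step(img, step=0.1, eps=1e-8):
        # One steepest-descent step on the smoothed total variation of img:
        # u <- u - step * dTV/du, with dTV/du = -div(grad u / |grad u|).
        gx = np.roll(img, -1, axis=0) - img          # forward differences
        gy = np.roll(img, -1, axis=1) - img
        mag = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        return img + step * div

    def soft_threshold(z, t):
        # Soft-thresholding, the core operation of soft-threshold filtering.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)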

    Reduced projection angles for binary tomography with particle aggregation

    This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART treats pixel values as particles which diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and on how this affects the reconstructed images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections, and compared to filtered back projection, a random search algorithm and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms for small numbers of projections, which potentially makes it attractive in scenarios where collecting less projection data is unavoidable.
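
The published PART rules for diffusion and aggregation are more elaborate than the abstract states; purely as an illustration of the particle picture, a greedy variant in which a binary pixel hops to a neighbouring empty site whenever the move does not worsen the sinogram misfit could look like this (forward is a hypothetical projector supplied by the caller):

    import numpy as np

    def particle_moves(img, sinogram, forward, n_moves=10000, rng=None):
        # Greedy particle hops: move a 1-pixel to a random neighbouring
        # 0-pixel and keep the move if the projection misfit does not grow.
        # `forward` is a caller-supplied forward projector (hypothetical).
        rng = np.random.default_rng() if rng is None else rng
        err = np.abs(forward(img) - sinogram).sum()
        for _ in range(n_moves):
            ones = np.argwhere(img == 1)
            y, x = ones[rng.integers(len(ones))]
            dy, dx = rng.choice([-1, 0, 1], size=2)
            ny, nx = (y + dy) % img.shape[0], (x + dx) % img.shape[1]
            if img[ny, nx] == 1:
                continue
            img[y, x], img[ny, nx] = 0, 1          # tentative hop
            new_err = np.abs(forward(img) - sinogram).sum()
            if new_err <= err:
                err = new_err                      # keep the move
            else:
                img[y, x], img[ny, nx] = 1, 0      # revert
        return img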