    Generic iterative subset algorithms for discrete tomography

    Discrete tomography deals with the reconstruction of images from their projections where the images are assumed to contain only a small number of grey values. In particular, there is a strong focus on the reconstruction of binary images (binary tomography). A variety of binary tomography problems have been considered in the literature, each using different projection models or additional constraints. In this paper, we propose a generic iterative reconstruction algorithm that can be used for many different binary reconstruction problems. In every iteration, a subproblem is solved based on at most two of the available projections. Each of the subproblems can be solved efficiently using network flow methods. We report experimental results for various reconstruction problems. Our results demonstrate that the algorithm is capable of reconstructing complex objects from a small number of projections.
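
    A minimal sketch of the kind of two-projection subproblem the abstract refers to: reconstructing a binary image from one horizontal and one vertical projection as a maximum-flow problem. The row-sum/column-sum projection model, the node naming and the use of networkx are illustrative choices, not the paper's implementation.

import networkx as nx
import numpy as np

def binary_from_two_projections(row_sums, col_sums):
    """Return a 0/1 image with the given row and column sums, or None if none exists."""
    if sum(row_sums) != sum(col_sums):
        return None
    G = nx.DiGraph()
    for i, r in enumerate(row_sums):
        G.add_edge("s", ("row", i), capacity=int(r))
    for j, c in enumerate(col_sums):
        G.add_edge(("col", j), "t", capacity=int(c))
    for i in range(len(row_sums)):
        for j in range(len(col_sums)):
            G.add_edge(("row", i), ("col", j), capacity=1)  # one unit of flow = one white pixel
    value, flow = nx.maximum_flow(G, "s", "t")
    if value != sum(row_sums):
        return None  # the two projections are inconsistent
    return np.array([[flow[("row", i)][("col", j)] for j in range(len(col_sums))]
                     for i in range(len(row_sums))], dtype=int)

# Example: a 3x3 image with row sums (2, 1, 2) and column sums (2, 2, 1)
print(binary_from_two_projections([2, 1, 2], [2, 2, 1]))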

    Network Flow Algorithms for Discrete Tomography

    Tomography is a powerful technique to obtain images of the interior of an object in a nondestructive way. First, a series of projection images (e.g., X-ray images) is acquired, and subsequently a reconstruction of the interior is computed from the available projection data. The algorithms that are used to compute such reconstructions are known as tomographic reconstruction algorithms. Discrete tomography is concerned with the tomographic reconstruction of images that are known to contain only a few different gray levels. By using this knowledge in the reconstruction algorithm it is often possible to reduce the number of projections required to compute an accurate reconstruction, compared to algorithms that do not use prior knowledge. This thesis deals with new reconstruction algorithms for discrete tomography. In particular, the first five chapters are about reconstruction algorithms based on network flow methods. These algorithms make use of an elegant correspondence between certain types of tomography problems and network flow problems from the field of Operations Research. Chapter 6 deals with a problem that occurs in the application of discrete tomography to the reconstruction of nanocrystals from projections obtained by electron microscopy. The research for this thesis has been financially supported by the Netherlands Organisation for Scientific Research (NWO), project 613.000.112.
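
    To make the correspondence concrete, the sketch below casts a two-projection binary reconstruction with a known prior image as a min-cost max-flow problem: pixel arcs that disagree with the prior are penalized, so among all images consistent with the projections one close to the prior is returned. The unit cost per deviating pixel and the use of networkx are illustrative assumptions, not the exact construction used in the thesis.

import networkx as nx
import numpy as np

def reconstruct_with_prior(row_sums, col_sums, prior):
    """Binary image matching the row/column sums that deviates as little
    as possible from `prior`, via min-cost max-flow."""
    m, n = prior.shape
    G = nx.DiGraph()
    for i, r in enumerate(row_sums):
        G.add_edge("s", ("row", i), capacity=int(r), weight=0)
    for j, c in enumerate(col_sums):
        G.add_edge(("col", j), "t", capacity=int(c), weight=0)
    for i in range(m):
        for j in range(n):
            # Setting a pixel to 1 is free where the prior is 1, and costs 1 elsewhere.
            G.add_edge(("row", i), ("col", j), capacity=1,
                       weight=0 if prior[i, j] else 1)
    flow = nx.max_flow_min_cost(G, "s", "t")
    return np.array([[flow[("row", i)][("col", j)] for j in range(n)]
                     for i in range(m)], dtype=int)

prior = np.array([[1, 1, 0],
                  [0, 1, 0],
                  [1, 0, 1]])
print(reconstruct_with_prior(prior.sum(axis=1), prior.sum(axis=0), prior))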

    3D particle tracking velocimetry using dynamic discrete tomography

    Particle tracking velocimetry in 3D is becoming an increasingly important imaging tool in the study of fluid dynamics, combustion, as well as plasmas. We introduce a dynamic discrete tomography algorithm for reconstructing particle trajectories from projections. The algorithm is efficient for data from two projection directions and exact in the sense that it finds a solution consistent with the experimental data. Non-uniqueness of solutions can be detected and solutions can be tracked individually.
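
    The abstract does not spell out how trajectories are linked over time, so the following is only a generic illustration of a tracking step, not the paper's dynamic discrete tomography algorithm: particle positions reconstructed at two consecutive time steps are paired by minimising the total squared displacement with the Hungarian method.

import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(pos_t0, pos_t1):
    """Pair each particle at time t0 with one at time t1 (equal counts assumed)."""
    cost = np.linalg.norm(pos_t0[:, None, :] - pos_t1[None, :, :], axis=2) ** 2
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

pos_t0 = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
pos_t1 = np.array([[1.1, 1.0, 0.1], [0.1, -0.1, 0.0]])
print(link_frames(pos_t0, pos_t1))  # [(0, 1), (1, 0)]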

    EvAC3D: From Event-based Apparent Contours to 3D Models via Continuous Visual Hulls

    3D reconstruction from multiple views is a well-established computer vision field with many deployed applications. The state of the art is based on traditional RGB frames, which enable the optimization of photo-consistency across views. In this paper, we study the problem of 3D reconstruction from event cameras, motivated by the advantages of event-based cameras in terms of low power and latency, as well as by the biological evidence that eyes in nature capture the same data and still perceive 3D shape well. The foundation of our hypothesis that 3D reconstruction is feasible using events lies in the information contained in the occluding contours and in the continuous scene acquisition with events. We propose Apparent Contour Events (ACE), a novel event-based representation that defines the geometry of the apparent contour of an object. We represent ACE by a spatially and temporally continuous implicit function defined in the event x-y-t space. Furthermore, we design a novel continuous Voxel Carving algorithm enabled by the high temporal resolution of the Apparent Contour Events. To evaluate the performance of the method, we collect MOEC-3D, a 3D event dataset of a set of common real-world objects. We demonstrate the ability of EvAC3D to reconstruct high-fidelity mesh surfaces from real event sequences while allowing the refinement of the 3D reconstruction for each individual event. Comment: 16 pages, 8 figures, European Conference on Computer Vision (ECCV) 202
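
    For intuition, here is a minimal frame-based voxel-carving step of the kind that underlies visual-hull methods: a voxel survives only if it projects inside the silhouette in every view. The 3x4 pinhole projection matrices, binary silhouette masks and the function name are assumptions for illustration; EvAC3D itself carves continuously from Apparent Contour Events rather than from frame-based silhouettes.

import numpy as np

def carve(voxels, silhouettes, cameras):
    """Keep the voxels whose projection lies inside every silhouette.

    voxels:      (N, 3) array of voxel centres
    silhouettes: list of binary (H, W) masks, one per view
    cameras:     list of 3x4 projection matrices, one per view
    """
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])   # homogeneous coordinates
    for sil, P in zip(silhouettes, cameras):
        uvw = hom @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)    # image column
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)    # image row
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        keep &= inside                                      # outside the image: carve away
        keep[inside] &= sil[v[inside], u[inside]].astype(bool)
    return voxels[keep]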

    Discrete X-ray tomographic reconstruction for fast mineral liberation spectrum retrieval

    In minerals beneficiation, the mineral liberation spectrum of the plant feed conveys valuable information for adjusting operations, provided it is available in minutes from particulate sampling. X-ray micro-tomography is the only technique available for unbiased measurement of composite particle composition (on a 3D basis). The bottleneck of current micro-tomographic systems is the X-ray scanning time (data acquisition) rather than the slice reconstruction time (data processing). An algorithm capable of reconstructing tomographic slices of composite mineral particles from a limited number of radiographic projections, thus significantly reducing the overall measurement time, is presented and demonstrated with numerical examples. The algorithm is cast around the discrete algebraic reconstruction technique and requires less than one tenth of the projection data needed by the currently used filtered back-projection methods, allowing a dramatic reduction of the scanning time.
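
    A toy sketch of the idea behind discrete algebraic reconstruction: run an algebraic (SIRT-style) iteration on a small system built from a few projections and then discretise the grey values. The 3x3 phantom, the row/column-sum projection model and the plain 0.5 threshold are simplifications for illustration; the paper's algorithm is built around DART with a realistic scanning geometry.

import numpy as np

def projection_matrix(n):
    """System matrix for the horizontal and vertical ray sums of an n x n image."""
    A = np.zeros((2 * n, n * n))
    for i in range(n):
        A[i, i * n:(i + 1) * n] = 1.0   # row sums
        A[n + i, i::n] = 1.0            # column sums
    return A

def sirt(A, b, iterations=200):
    """Simultaneous Iterative Reconstruction Technique for A x = b."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse ray lengths
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse pixel weights
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x += C * (A.T @ (R * (b - A @ x)))
    return x

phantom = np.array([[1, 1, 0],
                    [1, 1, 0],
                    [0, 0, 0]], dtype=float)
A = projection_matrix(3)
b = A @ phantom.ravel()                           # simulated projection data
grey = sirt(A, b)
binary = (grey > 0.5).astype(int).reshape(3, 3)   # the discrete step
print(binary)                                     # recovers the phantom in this toy case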

    Latest developments in the improvement and quantification of high resolution X-ray tomography data

    X-ray Computed Tomography (CT) is a powerful tool to visualize the internal structure of objects. Although X-ray CT is often used for medical purposes, it has many applications in the academic and industrial world. X-ray CT is a non-destructive technique that provides a three-dimensional (3D) representation of the investigated object. Currently available high-resolution systems can achieve resolutions of less than one micrometre, which makes the technique valuable for a wide range of scientific and industrial applications. At the Centre for X-ray Tomography of Ghent University (UGCT), research is performed on the improvement and application of high-resolution X-ray CT (µCT). Important aspects of this research are the development of state-of-the-art high-resolution CT scanners and of software for controlling the scanners, reconstructing the data and analysing the results. UGCT works closely together with researchers from various research fields, each with their own specific requirements. To obtain the best possible results in any particular case, the scanners are developed in a modular way, which allows for optimizations, modifications and improvements during use. Another way of improving image quality lies in optimizing the reconstruction software, which is why the software package Octopus was developed in-house. Once a scanned volume is reconstructed, an important challenge lies in the interpretation of the obtained data. For this interpretation, visualization alone is often insufficient and quantitative information is needed. As researchers from different fields have different needs with respect to quantification of their data, UGCT developed the 3D analysis software package Morpho+ for analysing all kinds of samples.

    The research presented in this work focuses on improving the accuracy and extending the range of quantitative information that can be extracted from µCT data. Even if a perfect analysis algorithm existed, it would be impossible to accurately quantify data whose image quality is insufficient. As image quality can be improved significantly with adequate reconstruction techniques, this work addresses reconstruction as well as analysis software. Because the datasets obtained with µCT at UGCT are of substantial size, the ability to process large datasets in a limited amount of time is crucial in the development of new algorithms. The contributions of the author can be subdivided into three major aspects of the processing of CT data: the modification of iterative reconstruction algorithms, the extension and optimization of 3D analysis algorithms, and the development of a new algorithm for discrete tomography. These topics are discussed in more detail below.

    A main aspect in the improvement of image quality is the reduction of artefacts which often occur in µCT, such as noise, cone-beam and beam-hardening artefacts. Cone-beam artefacts are a result of the cone-beam geometry often used in laboratory-based µCT, and beam hardening is a consequence of the polychromaticity of the beam. Although analytical reconstruction algorithms based on filtered back-projection are still most commonly used for the reconstruction of µCT datasets, iterative reconstruction algorithms are becoming a valuable alternative. Iterative algorithms are inherently better at coping with the previously mentioned artefacts.
Additionally, iterative algorithms can improve image quality when the number of available projections or the angular range is limited. Chapter 3 investigates the possibility of modifying these algorithms to further improve image quality. It is illustrated that streak artefacts, which can occur when metals are present in a sample, can be significantly reduced by modifying the reconstruction algorithm. It is also demonstrated that incorporating an initial solution (if available) makes it possible to reduce the number of projections required for a second, slightly modified sample. To reduce beam-hardening artefacts, the physics of the process is modelled and incorporated in the iterative reconstruction algorithm, resulting in an easy-to-use and efficient correction that requires no prior knowledge about the sample.

    Chapter 4 describes the 3D analysis process. In the scope of this work, algorithms of the 3D analysis software package Morpho+ were optimized and new methods were added to the program, focusing on quantifying the connectivity and shape of the phases and elements in the sample, as well as obtaining an accurate segmentation. An essential step in the analysis process is the segmentation of the reconstructed sample: evidently, the different phases in the sample need to be separated from one another. However, a second segmentation step is often needed to separate the different elements present in a volume, such as the pores in a pore network, or to separate elements which are physically separate but appear connected in the reconstructed images due to the limited resolution and/or contrast of the scan. The latter effect often occurs when identifying different grains in a geological sample. Algorithms available for this second segmentation step often result in over-segmentation, i.e. elements are not only separated from one another but are also split internally. To overcome this effect, an algorithm is presented to semi-automatically rejoin the separated parts of a single element. Additionally, Morpho+ was extended with tools to extract information about the connectivity of a sample, which is difficult to quantify but important for samples from various research fields. Connectivity can be described by calculating the Euler number and the tortuosity. Moreover, the number of neighbouring objects of each object can be determined and the connections between objects can be quantified. It is now also possible to extract a skeleton, which describes the basic structure of the volume. A calculation of several shape parameters was added to the program as well, making it possible to visualize the different objects in a disc-rod diagram. The many possibilities for characterizing reconstructed samples with Morpho+ are illustrated on several applications.

    As mentioned above, an important aspect of correctly quantifying µCT data is the correct segmentation of the different phases present in the sample. Often a sample consists of only one or a limited number of materials (plus the surrounding air), and this prior knowledge can be incorporated in the reconstruction algorithm. Such algorithms are referred to as discrete reconstruction algorithms and are used when only a limited number of projections is available. Chapter 5 deals with discrete reconstruction algorithms.
One of these algorithms is the Discrete Algebraic Reconstruction Technique (DART), which combines iterative and discrete reconstruction and has shown excellent results. DART requires knowledge of the attenuation coefficient(s) and segmentation threshold(s) of the material(s). For µCT applications, which result in large datasets, reconstruction times can increase significantly when DART is used instead of standard iterative reconstruction, as DART requires more iterations. This complicates the practical applicability of DART for routine applications at UGCT. Therefore, a modified algorithm based on DART, referred to as the Experimental Discrete Algebraic Reconstruction Technique (EDART), was developed in the scope of this work for the reconstruction of samples consisting of only one material and surrounding air. The goal of this algorithm is to obtain better reconstruction results than standard iterative reconstruction algorithms without significantly increasing the reconstruction time. Moreover, a fast and intuitive technique to estimate the attenuation coefficient and threshold was developed as part of the EDART algorithm. Chapter 5 illustrates that EDART provides improved image quality for both phantom and real data, in comparison with standard iterative reconstruction algorithms, when only a limited number of projections is available. The algorithms presented in this work can be applied one after another but can also be combined with one another; for example, chapter 5 shows that the beam-hardening correction method can also be incorporated in the EDART algorithm. The combination of the introduced methods improves the process of extracting accurate quantitative information from µCT data.
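
    A sketch of the segmentation step that drives DART-style methods such as EDART for a single material plus air: segment an intermediate grey-value reconstruction, keep pixels well inside a segment fixed at their segmented value, and leave only the boundary pixels free for the next algebraic update. Otsu's method is used here as a generic stand-in for the threshold estimation; the thesis's own attenuation-coefficient and threshold estimation technique is not detailed in the abstract.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_dilation, binary_erosion

def dart_style_step(reconstruction, material_value=1.0):
    """One segment-and-fix step for a single-material-plus-air sample."""
    t = threshold_otsu(reconstruction)                  # grey-value threshold estimate
    mask = reconstruction > t                           # material vs. air
    segmented = np.where(mask, material_value, 0.0)
    # Pixels on either side of the material/air interface remain free and are
    # re-estimated in the next algebraic iterations; all other pixels are fixed.
    free = binary_dilation(mask) & ~binary_erosion(mask)
    fixed = ~free
    return segmented, free, fixed

# Toy usage on a noisy disc standing in for an intermediate reconstruction
yy, xx = np.mgrid[:64, :64]
recon = ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2).astype(float)
recon += 0.2 * np.random.default_rng(0).standard_normal(recon.shape)
segmented, free, fixed = dart_style_step(recon)
print(int(free.sum()), "boundary pixels stay free;", int(fixed.sum()), "pixels are fixed")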