6 research outputs found

    Sparsity-based algorithms for inverse problems

    Inverse problems are problems in which we want to estimate the values of certain parameters of a system from observations of that system. Such problems occur in many areas of science and engineering. Inverse problems are often ill-posed, meaning that the observations do not uniquely determine the parameters we seek to estimate, or that the solution is highly sensitive to small changes in the observations. To solve such problems, we therefore need to use additional knowledge about the system at hand. One such form of prior information is the notion of sparsity. Sparsity refers to the knowledge that the solution to the inverse problem can be expressed as a combination of only a few terms. The sparsity of a solution can be controlled explicitly or implicitly. An explicit way to induce sparsity is to minimize the number of non-zero terms in the solution; sparsity can be used implicitly, e.g., by making adjustments to the algorithm used to arrive at the solution. In this thesis we studied various inverse problems that arise in different application areas, such as tomographic imaging and equation learning for biology, and showed how ideas of sparsity can be used in each case to design effective algorithms to solve such problems. Financial support was provided by the European Union's Horizon 2020 Research and Innovation programme under the Marie Sklodowska-Curie grant agreement no. 765604.
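
    The explicit route mentioned above, minimizing the number of non-zero terms, is in practice usually relaxed to an L1 penalty, which can be minimized with iterative soft-thresholding (ISTA). The sketch below illustrates this idea on a toy problem; the operator A, the data y, the step size and the penalty weight are illustrative assumptions and are not taken from the thesis.

        import numpy as np

        def soft_threshold(x, t):
            """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(A, y, lam, n_iter=200):
            """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding.

            A : (m, n) forward operator, y : (m,) observations. Illustrative only.
            """
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)             # gradient of the data-fit term
                x = soft_threshold(x - step * grad, step * lam)
            return x

        # Toy ill-posed problem: fewer observations than unknowns, sparse ground truth.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 200))
        x_true = np.zeros(200)
        x_true[[3, 70, 150]] = [1.0, -2.0, 0.5]
        y = A @ x_true + 0.01 * rng.standard_normal(50)
        x_hat = ista(A, y, lam=0.1)
        print("non-zeros recovered:", np.flatnonzero(np.abs(x_hat) > 0.05))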

    Improving reproducibility in synchrotron tomography using implementation-adapted filters

    For fast reconstruction of large tomographic datasets, filtered backprojection-type or Fourier-based algorithms are still the method of choice, as they have been for decades. These robust and computationally efficient algorithms have been integrated into a broad range of software packages. The continuous mathematical formulas used for image reconstruction in such algorithms are unambiguous. However, variations in discretization and interpolation lead to quantitative differences between reconstructed images, and between the corresponding segmentations, obtained with different software. This hinders reproducibility of experimental results, making it difficult to ensure that results and conclusions can be reproduced at different facilities or with different software. In this paper, we propose a way to reduce such differences by optimizing the filter used in analytical reconstruction algorithms. These filters can be computed with a wrapper routine around a black-box implementation of a reconstruction algorithm, and they lead to quantitatively similar reconstructions. We demonstrate use cases for this approach by computing implementation-adapted filters for several open-source implementations and applying them to simulated phantoms and real-world data acquired at a synchrotron. Our contribution to a reproducible reconstruction step forms a building block towards a fully reproducible synchrotron tomography data processing pipeline.
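
    As a rough illustration of the wrapper idea (not the paper's actual code), the sketch below treats a reconstruction routine fbp(sinogram, filter_coeffs) as a black box that is linear in the filter: one reconstruction per basis filter yields a linear system whose least-squares solution gives an implementation-adapted filter. The function name, basis and reference image are assumptions for illustration; a small binned frequency basis would keep the number of black-box reconstructions low.

        import numpy as np

        def implementation_adapted_filter(fbp, sinogram, target, basis):
            """Fit a reconstruction filter for a black-box FBP implementation.

            fbp(sinogram, filter_coeffs) -> image is treated as a black box that is
            linear in the filter; `basis` is a list of basis filters (e.g. a binned
            frequency basis); `target` is a reference image to match. All names here
            are illustrative placeholders, not a specific package's API.
            """
            # Reconstruct once per basis filter; linearity lets us superpose results.
            columns = [fbp(sinogram, b).ravel() for b in basis]
            M = np.stack(columns, axis=1)                      # (n_pixels, n_basis)
            coeffs, *_ = np.linalg.lstsq(M, target.ravel(), rcond=None)
            # The fitted filter is the corresponding combination of the basis filters.
            return sum(c * b for c, b in zip(coeffs, basis))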

    Parallel-beam X-ray CT datasets of apples with internal defects and label balancing for machine learning

    We present three parallel-beam tomographic datasets of 94 apples with internal defects, along with defect label files. The datasets are prepared for the development and testing of data-driven, learning-based image reconstruction, segmentation and post-processing methods. The three versions are a noiseless simulation, a simulation with added Gaussian noise, and a simulation with added scattering noise. The datasets are based on real 3D X-ray CT data and their subsequent volume reconstructions. The ground truth images, based on the volume reconstructions, are also available through this project. Apples contain various defects, which naturally introduce a label bias. We tackle this by formulating the bias as an optimization problem.
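
    As a loose illustration of label balancing (the paper's exact optimization formulation is not reproduced here), the greedy sketch below selects a test subset whose per-defect counts track the proportions of the full dataset; the array layout and the function itself are hypothetical.

        import numpy as np

        def greedy_balanced_split(label_counts, n_test):
            """Pick a test subset whose per-defect label counts track the full set.

            label_counts : (n_samples, n_labels) array of defect counts per sample.
            A simple greedy stand-in for optimization-based label balancing; the
            actual formulation used for the apple datasets may differ.
            """
            total = label_counts.sum(axis=0)
            target = total * (n_test / len(label_counts))      # desired test-set counts
            chosen, current = [], np.zeros_like(target, dtype=float)
            remaining = set(range(len(label_counts)))
            for _ in range(n_test):
                # Add the sample that brings the running counts closest to the target.
                best = min(remaining,
                           key=lambda i: np.abs(current + label_counts[i] - target).sum())
                chosen.append(best)
                remaining.remove(best)
                current += label_counts[best]
            return chosen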

    Quantitative comparison of deep learning-based image reconstruction methods for low-dose and sparse-angle CT applications

    The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on the quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely low-dose CT and sparse-angle CT. This enables us to fairly compare the different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, knowledge of the physical measurement model and the reconstruction speed.
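
    For reference, the two reported metrics can be computed as in the sketch below; the PSNR definition follows the standard formula, and SSIM delegates to scikit-image, which is an assumption about a reasonable implementation rather than the challenge's exact evaluation code.

        import numpy as np
        from skimage.metrics import structural_similarity

        def psnr(reference, reconstruction, data_range=None):
            """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
            if data_range is None:
                data_range = reference.max() - reference.min()
            mse = np.mean((reference - reconstruction) ** 2)
            return 10.0 * np.log10(data_range ** 2 / mse)

        def ssim(reference, reconstruction, data_range=None):
            """Structural similarity index, delegating to scikit-image's implementation."""
            if data_range is None:
                data_range = reference.max() - reference.min()
            return structural_similarity(reference, reconstruction, data_range=data_range)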

    Apple CT Data: Ground truth reconstructions - 1 of 6

    This submission is supplementary material to the article [Coban 2020b]. As part of the manuscript, we release three simulated parallel-beam tomographic datasets of 94 apples with internal defects, the ground truth reconstructions, and two defect label files.

    Apple CT Data: Simulated parallel-beam tomographic datasets

    This submission is supplementary material to the article [Coban 2020b]. As part of the manuscript, we release three simulated parallel-beam tomographic datasets of 94 apples with internal defects, the ground truth reconstructions, and two defect label files.