
    Variational Methods for Discrete Tomography

    Image reconstruction from tomographically sampled data has emerged as a stand-alone research area with applications in many practical domains, such as medical imaging, seismology, astronomy, flow analysis, industrial inspection and many more. Existing (continuous) reconstruction algorithms fail to adequately model the analysed object. In this thesis, we study discrete tomographic approaches that enable the addition of constraints in order to better fit the description of the analysed object and improve the end result. A particular focus is placed on assumptions regarding the signal's sampling methodology, at which point we turn to the recently introduced Compressive Sensing (CS) framework, which has been shown to return remarkable results depending on how sparse a given signal is. However, research in the CS field does not accurately relate to real-world applications: the objects that usually surround us are piecewise constant (not sparse on their own), and the properties of the sensing matrices studied in CS do not reflect real acquisition processes. Motivated by these shortcomings, we study signals that are sparse in a given representation, e.g. under the forward-difference operator (total variation), and develop reconstruction diagrams (phase transitions) with the help of linear programming, convex analysis and duality that enable the user to pinpoint the type of objects (with regard to their sparsity) which can be reconstructed, given an ensemble of acquisition directions. Moreover, a closer look is taken at handling large data volumes by adding different perturbations (entropic, quadratic) to the constrained linear program. In empirical assessments, perturbation has led to an increased reconstruction rate. The topic of this thesis is motivated by industrial applications in which the acquisition process is restricted to a maximum of nine cameras, thus yielding a severely undersampled inverse problem.
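
    As a rough sketch of the kind of constrained program described above (the notation is chosen here for illustration and need not match the thesis): with A the undersampled projection matrix, b the measured data, D the forward-difference (gradient) operator and u the image, the basic model and its quadratic perturbation can be written as

        % Schematic formulation only; symbols are illustrative assumptions.
        \begin{align*}
          &\min_{u}\ \|Du\|_1
            \quad \text{s.t.}\quad Au = b,\ \ 0 \le u \le 1
            && \text{(TV-constrained linear program)} \\
          &\min_{u}\ \|Du\|_1 + \tfrac{\mu}{2}\,\|u\|_2^2
            \quad \text{s.t.}\quad Au = b,\ \ 0 \le u \le 1
            && \text{(quadratic perturbation, } \mu > 0\text{)}
        \end{align*}

    The entropic variant mentioned above would replace the quadratic term by a weighted entropy term of the form \mu \sum_i u_i \log u_i; in both cases the perturbation smooths the linear program, which is what makes larger data volumes tractable.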

    Theoretical and Numerical Approaches to Co-/Sparse Recovery in Discrete Tomography

    We investigate theoretical and numerical results that guarantee the exact reconstruction of piecewise constant images from insufficient projections in Discrete Tomography. This is often the case in non-destructive quality inspection of industrial objects, made of a few homogeneous materials, where fast scanning times do not allow for full sampling. As a consequence, the low number of projections presents us with an underdetermined linear system of equations. We restrict the solution space by requiring that solutions (a) must possess a sparse image gradient, and (b) have constrained pixel values. To that end, we develop a lower bound, using compressed sensing theory, on the number of measurements required to uniquely recover, by convex programming, an image in our constrained setting. We also develop a second bound, in the non-convex setting, whose novelty is to use the number of connected components when bounding the number of linear measurements needed for unique reconstruction. Having established theoretical lower bounds on the number of required measurements, we then examine several optimization models that enforce sparse gradients or restrict the image domain. We provide a novel convex relaxation that is provably tighter than existing models, assuming the target image to be gradient-sparse and integer-valued. Given that the number of connected components in an image is critical for unique reconstruction, we provide an integer programming model that restricts the maximum number of connected components in the reconstructed image. When solving the convex models, we view the image domain as a manifold and use tools from differential geometry and optimization on manifolds to develop a first-order multilevel optimization algorithm. The developed multilevel algorithm exhibits fast convergence and enables us to recover images of higher resolution.
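
    A minimal sketch of a box-constrained, gradient-sparse recovery model of the kind discussed above, written with cvxpy; the library choice, the random stand-in for the projection matrix and all names are illustrative assumptions, not the thesis code:

        # Minimal sketch (not the thesis code): recover a piecewise-constant image
        # by minimizing the l1 norm of its discrete gradient under box constraints.
        # The random matrix A is a stand-in for a tomographic projection matrix.
        import numpy as np
        import cvxpy as cp

        def grad_operator(n):
            """Stacked forward-difference operator on an n x n image (row-major)."""
            d = np.zeros((n - 1, n))
            for i in range(n - 1):
                d[i, i], d[i, i + 1] = -1.0, 1.0
            I = np.eye(n)
            return np.vstack([np.kron(I, d),   # horizontal differences
                              np.kron(d, I)])  # vertical differences

        n, m = 16, 120
        rng = np.random.default_rng(0)
        A = rng.standard_normal((m, n * n))    # stand-in projection matrix
        x_true = np.zeros((n, n))
        x_true[4:12, 4:12] = 1.0               # gradient-sparse test phantom
        b = A @ x_true.ravel()

        D = grad_operator(n)
        x = cp.Variable(n * n)
        problem = cp.Problem(cp.Minimize(cp.norm1(D @ x)),
                             [A @ x == b, x >= 0, x <= 1])  # constrained pixel values
        problem.solve()
        print("relative error:",
              np.linalg.norm(x.value - x_true.ravel()) / np.linalg.norm(x_true))

    The integer-program and multilevel manifold methods described in the abstract go well beyond this convex baseline; the sketch only illustrates the shape of the constrained convex model.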

    Testable uniqueness conditions for empirical assessment of undersampling levels in total variation-regularized X-ray CT

    We study recoverability in fan-beam computed tomography (CT) with sparsity and total variation priors: how many underdetermined linear measurements suffice for recovering images of given sparsity? Results from compressed sensing (CS) establish such conditions for, e.g., random measurements, but not for CT. Recoverability is typically tested by checking whether a computed solution recovers the original. This approach cannot guarantee solution uniqueness, and the recoverability decision therefore depends on the optimization algorithm. We propose new computational methods to test recoverability by verifying solution uniqueness conditions. Using both reconstruction and uniqueness testing, we empirically study the number of CT measurements sufficient for recovery on new classes of sparse test images. We demonstrate an average-case relation between sparsity and sufficient sampling and observe a sharp phase transition as known from CS, but never established for CT. In addition to assessing recoverability more reliably, we show that uniqueness tests are often the faster option.
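
    For illustration only, here is a sketch of the "reconstruct and compare" testing strategy that the abstract contrasts with uniqueness testing; the paper's actual uniqueness-condition tests are not reproduced, and the helper reconstruct_tv and the tolerance are hypothetical:

        # Sketch of the empirical "reconstruct and compare" recoverability test.
        # reconstruct_tv(A, b) is a hypothetical solver for min ||Dx||_1 s.t. Ax = b,
        # e.g. the convex model sketched earlier. As noted in the abstract, this
        # test cannot certify that the recovered minimizer is unique.
        import numpy as np

        def empirical_recovery_rate(A, phantoms, reconstruct_tv, tol=1e-3):
            """Fraction of test images recovered from noiseless data b = A @ x."""
            hits = 0
            for x_true in phantoms:
                b = A @ x_true
                x_rec = reconstruct_tv(A, b)
                rel_err = (np.linalg.norm(x_rec - x_true)
                           / max(np.linalg.norm(x_true), 1.0))
                if rel_err <= tol:
                    hits += 1
            return hits / len(phantoms)

    Sweeping such a rate over sparsity levels and numbers of projections is what produces the empirical phase-transition diagrams mentioned above.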

    ℓ¹-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?

    This paper investigates the problem of signal estimation from undersampled noisy sub-Gaussian measurements under the assumption of a cosparse model. Based on generalized notions of sparsity, we derive novel recovery guarantees for ℓ¹-analysis basis pursuit, enabling highly accurate predictions of its sample complexity. The corresponding bounds on the number of required measurements do explicitly depend on the Gram matrix of the analysis operator and therefore particularly account for its mutual coherence structure. Our findings defy conventional wisdom, which promotes the sparsity of analysis coefficients as the crucial quantity to study. In fact, this common paradigm breaks down completely in many situations of practical interest, for instance, when applying a redundant (multilevel) frame as analysis prior. By extensive numerical experiments, we demonstrate that, in contrast, our theoretical sampling-rate bounds reliably capture the recovery capability of various examples, such as redundant Haar wavelet systems, total variation, or random frames. The proofs of our main results build upon recent achievements in the convex geometry of data mining problems. More precisely, we establish a sophisticated upper bound on the conic Gaussian mean width that is associated with the underlying ℓ¹-analysis polytope. Due to a novel localization argument, it turns out that the presented framework naturally extends to stable recovery, allowing us to incorporate compressible coefficient sequences as well.
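
    For reference, the ℓ¹-analysis basis pursuit discussed above can be written in generic notation (symbols chosen here for illustration): Ψ is the analysis operator, A the sub-Gaussian measurement matrix, y = Ax₀ + e the noisy data, and η an assumed bound on the noise level ‖e‖₂:

        % Generic l1-analysis basis pursuit; notation is illustrative.
        \begin{equation*}
          \hat{x} \in \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \; \|\Psi x\|_1
          \quad \text{subject to} \quad \|A x - y\|_2 \le \eta .
        \end{equation*}

    The recovery guarantees summarized above bound how many rows of A suffice for \hat{x} to approximate x₀, with the bound depending on the Gram matrix Ψ*Ψ rather than on the sparsity of Ψx₀ alone.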