
    Detection of near-singularity in Cholesky and LDLᵀ factorizations

    In sparse matrix applications it is often important to implement the Cholesky and LDLᵀ factorization methods without pivoting in order to avoid excess fill-in. We consider methods for detecting a nearly singular matrix by means of these factorizations without pivoting, and we demonstrate that a technique based on estimating the smallest eigenvalue via inverse iteration will always reveal a nearly singular matrix.
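    The detection idea fits in a few lines. Below is a minimal sketch (ours, not the paper's code) for the symmetric positive definite case: factor A once without pivoting, then run inverse iteration with repeated triangular solves; a tiny Rayleigh quotient relative to ‖A‖ flags near-singularity. The threshold 1e-10 is an arbitrary illustration value.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def smallest_eig_estimate(A, iters=20, seed=0):
    """Estimate the smallest eigenvalue of an SPD matrix A by inverse
    iteration, reusing one pivot-free Cholesky factorization per solve."""
    rng = np.random.default_rng(seed)
    c, low = cho_factor(A)             # A = L L^T, no pivoting
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = cho_solve((c, low), x)     # y = A^{-1} x via two triangular solves
        x = y / np.linalg.norm(y)
    return x @ A @ x                   # Rayleigh quotient ~= lambda_min

# Flag near-singularity when lambda_min is tiny relative to ||A||_2.
A = np.diag([1.0, 1.0, 1e-12]) + 1e-13 * np.ones((3, 3))
lam = smallest_eig_estimate(A)
print("near-singular:", lam < 1e-10 * np.linalg.norm(A, 2))
```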

    The 2-norm of random matrices

    Numerical experiments show that it is possible to derive simple estimates for the expected 2-norm of random matrices A with elements from a normal distribution with zero mean and standard deviation σ, and from a Poisson distribution with mean value λ. These estimates are σ√max(m,n) < E{‖A‖₂} < 2σ√max(m,n) and E{‖A‖₂} ≈ λ√(mn), respectively, where m and n are the dimensions of A.
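    The two estimates are easy to check numerically; the following sketch (ours, with arbitrary sizes and parameters) draws one sample of each kind and compares its 2-norm against the stated expressions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, sigma, lam = 200, 120, 0.5, 3.0

A = sigma * rng.standard_normal((m, n))       # N(0, sigma^2) entries
lo, hi = sigma * np.sqrt(max(m, n)), 2 * sigma * np.sqrt(max(m, n))
print(f"Gaussian: {lo:.2f} < {np.linalg.norm(A, 2):.2f} < {hi:.2f}")

P = rng.poisson(lam, (m, n)).astype(float)    # Poisson(lambda) entries
print(f"Poisson: ||A||_2 = {np.linalg.norm(P, 2):.2f} vs "
      f"lambda*sqrt(mn) = {lam * np.sqrt(m * n):.2f}")
```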

    "Plug-and-Play" Edge-Preserving Regularization

    In many inverse problems it is essential to use regularization methods that preserve edges in the reconstructions, and many reconstruction models have been developed for this task, such as the Total Variation (TV) approach. The associated algorithms are complex and require a good knowledge of large-scale optimization algorithms, and they involve certain tolerances that the user must choose. We present a simpler approach that relies only on standard computational building blocks in matrix computations, such as orthogonal transformations, preconditioned iterative solvers, Kronecker products, and the discrete cosine transform -- hence the term "plug-and-play." We do not attempt to improve on TV reconstructions, but rather provide an easy-to-use approach to computing reconstructions with similar properties.
    Comment: 14 pages, 7 figures, 3 tables
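    To illustrate one of the named building blocks (our illustration, not the paper's algorithm): the 2-D Laplacian with Neumann boundary conditions has Kronecker structure and is diagonalized by the discrete cosine transform, so a smoothing solve with it costs just two DCTs.

```python
import numpy as np
from scipy.fft import dctn, idctn

def smooth_solve(B, alpha):
    """Solve (I + alpha*L) X = B, where L = I (x) L1 + L1 (x) I is the
    Kronecker-structured Neumann Laplacian, using the orthonormal 2-D DCT."""
    m, n = B.shape
    lam_m = 2 - 2 * np.cos(np.pi * np.arange(m) / m)   # eigenvalues of L1
    lam_n = 2 - 2 * np.cos(np.pi * np.arange(n) / n)
    D = 1 + alpha * (lam_m[:, None] + lam_n[None, :])  # spectrum of I + alpha*L
    return idctn(dctn(B, norm="ortho") / D, norm="ortho")

# A solve like this can serve, e.g., as a preconditioner inside an
# iterative solver for an edge-preserving reconstruction model.
X = smooth_solve(np.random.default_rng(2).standard_normal((64, 64)), alpha=10.0)
```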

    A Tensor-Based Dictionary Learning Approach to Tomographic Image Reconstruction

    We consider tomographic reconstruction using priors in the form of a dictionary learned from training images. The reconstruction has two stages: first we construct a tensor dictionary prior from our training data, and then we pose the reconstruction problem in terms of recovering the expansion coefficients in that dictionary. Our approach differs from past approaches in that a) we use a third-order tensor representation for our images and b) we recast the reconstruction problem using the tensor formulation. The dictionary learning problem is presented as a non-negative tensor factorization problem with sparsity constraints. The reconstruction problem is formulated in a convex optimization framework by looking for a solution with a sparse representation in the tensor dictionary. Numerical results show that our tensor formulation leads to very sparse representations of both the training images and the reconstructions, owing to the dictionary's ability to represent repeated features compactly.
    Comment: 29 pages
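    The second stage is a sparse-coding problem. As a rough sketch of that stage only (in plain matrix form, our simplification of the paper's tensor formulation, with hypothetical stand-ins for the projector A and dictionary D), the coefficients can be recovered by l1-regularized least squares, solved here with plain iterative soft-thresholding (ISTA):

```python
import numpy as np

def ista(M, b, reg, iters=500):
    """Minimize 0.5*||M c - b||^2 + reg*||c||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(M, 2) ** 2       # 1 / Lipschitz constant
    c = np.zeros(M.shape[1])
    for _ in range(iters):
        g = c - step * (M.T @ (M @ c - b))       # gradient step on the quadratic
        c = np.sign(g) * np.maximum(np.abs(g) - step * reg, 0.0)  # soft-threshold
    return c

# Tiny synthetic stand-ins (hypothetical) for projector and dictionary.
rng = np.random.default_rng(3)
A = rng.standard_normal((80, 100))    # "projector"
D = rng.standard_normal((100, 150))   # "dictionary"
c_true = np.zeros(150)
c_true[rng.choice(150, size=5, replace=False)] = 1.0
b = A @ D @ c_true                    # simulated projection data
x = D @ ista(A @ D, b, reg=0.1)       # reconstruction in image space
```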

    Unmatched Projector/Backprojector Pairs: Perturbation and Convergence Analysis
