
    Accelerating Random Kaczmarz Algorithm Based on Clustering Information

    The Kaczmarz algorithm is an efficient iterative method for solving overdetermined consistent systems of linear equations. At each update step, it selects a hyperplane defined by an individual equation and projects the current estimate of the exact solution onto that hyperplane to obtain a new estimate. Many variants of the Kaczmarz algorithm have been proposed that choose better hyperplanes. Using the properties of randomly sampled data in high-dimensional space, we propose an accelerated algorithm based on clustering information to improve the block Kaczmarz algorithm and Kaczmarz via the Johnson-Lindenstrauss lemma. Additionally, we theoretically demonstrate the convergence improvement of the block Kaczmarz algorithm.
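
    For reference, here is a minimal sketch of the baseline randomized Kaczmarz update that such variants build on: rows are sampled proportionally to their squared norms and the estimate is projected onto the selected hyperplane. The clustering- and Johnson-Lindenstrauss-based acceleration itself is not reproduced; the function name and toy system are illustrative assumptions.

```python
# Baseline randomized Kaczmarz for a consistent overdetermined system A x = b.
# Rows are sampled with probability proportional to ||a_i||^2; each step projects
# the current estimate onto the hyperplane {x : a_i^T x = b_i}.
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    m, n = A.shape
    rng = np.random.default_rng(seed)
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a_i = A[i]
        x += (b[i] - a_i @ x) / row_norms_sq[i] * a_i   # projection onto the i-th hyperplane
    return x

# Toy consistent system (illustrative only).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
print(np.linalg.norm(randomized_kaczmarz(A, b) - x_true))
```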

    A Novel Partitioning Method for Accelerating the Block Cimmino Algorithm

    We propose a novel block-row partitioning method to improve the convergence rate of the block Cimmino algorithm for solving general sparse linear systems of equations. The convergence rate of the block Cimmino algorithm depends on the orthogonality among the block rows produced by the partitioning method. The proposed method takes numerical orthogonality among block rows into account through a row inner-product graph model of the coefficient matrix. In the graph partitioning formulation defined on this model, the partitioning objective of minimizing the cutsize directly corresponds to minimizing the sum of inter-block inner products between block rows, thus improving the eigenvalue spectrum of the iteration matrix. This in turn leads to a significant reduction in the number of iterations required for convergence. Extensive experiments conducted on a large set of matrices confirm the validity of the proposed method against a state-of-the-art method.
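
    As a rough illustration of the quantity this partitioning objective targets, the sketch below scores a given block-row partition by the total magnitude of inter-block row inner products (lower means more nearly orthogonal blocks). The dense computation and the naive split are illustrative assumptions, not the paper's graph partitioner.

```python
# Score a block-row partition by its inter-block row coupling: the sum of
# |<a_i, a_j>| over row pairs assigned to different blocks. This is the
# quantity that minimizing the cutsize in a row inner-product graph reduces.
import numpy as np

def inter_block_coupling(A, labels):
    G = np.abs(A @ A.T)                          # pairwise row inner products (dense, for illustration)
    labels = np.asarray(labels)
    different = labels[:, None] != labels[None, :]
    return G[different].sum() / 2.0              # each unordered pair counted once

A = np.random.default_rng(0).standard_normal((8, 5))
print(inter_block_coupling(A, [0, 0, 0, 0, 1, 1, 1, 1]))   # naive contiguous split
```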

    On a general extending and constraining procedure for linear iterative methods

    Algebraic Reconstruction Techniques (ART), in both their successive and simultaneous formulations, have been developed since the early 1970s as efficient "row action methods" for solving the image reconstruction problem in Computerized Tomography. Two important development directions have been, first, their extension to the inconsistent case of the reconstruction problem and, second, their combination with constraining strategies imposed by the particularities of the reconstructed image. In the first part of our paper we introduce extending and constraining procedures for a general iterative method of ART type and propose a set of sufficient assumptions that ensure the convergence of the corresponding algorithms. As an application of this approach, we prove that Cimmino's simultaneous reflections method satisfies these assumptions, and we derive extended and constrained versions of it. Numerical experiments with all these versions are presented on a head phantom widely used in the image reconstruction literature. We also consider the hard thresholding constraining used in sparse approximation problems and apply it successfully to a 3D particle image reconstruction problem.
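
    A minimal sketch of the general pattern described above, assuming a simplified simultaneous (Cimmino-type) averaged-projection sweep followed by a hard-thresholding constraining step. This illustrates the idea only; it is not the paper's extended and constrained algorithms, and the sparsity level and toy data are assumptions.

```python
# Simultaneous (Cimmino-type) sweep followed by a hard-thresholding constraining step.
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def constrained_cimmino(A, b, k, iters=500, relax=1.0):
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    x = np.zeros(n)
    for _ in range(iters):
        scaled_residual = (b - A @ x) / row_norms_sq    # per-row residuals scaled by ||a_i||^2
        x = x + relax * (A.T @ scaled_residual) / m     # average of projections onto all hyperplanes
        x = hard_threshold(x, k)                        # constraining step (sparse image)
    return x

A = np.random.default_rng(2).standard_normal((120, 60))
x_true = np.zeros(60); x_true[:5] = 1.0                 # sparse toy "image"
b = A @ x_true
print(np.linalg.norm(constrained_cimmino(A, b, k=5) - x_true))
```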

    Nanometer-scale Tomographic Reconstruction of 3D Electrostatic Potentials in GaAs/AlGaAs Core-Shell Nanowires

    We report on the development of Electron Holographic Tomography into a versatile potential measurement technique, overcoming several limitations, such as a limited tilt range, that previously hampered reproducible and accurate electrostatic potential reconstruction in three dimensions. Most notably, tomographic reconstruction is performed on optimally sampled polar grids, taking into account symmetry and other spatial constraints of the nanostructure. Furthermore, holographic tilt series acquisition and alignment have been automated and adapted to three dimensions. We demonstrate 6 nm spatial and 0.2 V signal resolution by reconstructing various previously hidden potential details of a GaAs/AlGaAs core-shell nanowire. The improved tomographic reconstruction opens pathways towards the detection of minute potentials in nanostructures and increased speed and accuracy in related techniques such as X-ray tomography.

    New measurement techniques: Optical methods for characterizing sound fields


    The Practicality of Stochastic Optimization in Imaging Inverse Problems

    In this work we investigate the practicality of stochastic gradient descent and recently introduced variance-reduced variants in imaging inverse problems. Such algorithms have been shown in the machine learning literature to have optimal complexities in theory and to provide great empirical improvement over deterministic gradient methods. Surprisingly, in some tasks such as image deblurring, many such methods fail to converge faster than accelerated deterministic gradient methods, even in terms of epoch counts. We investigate this phenomenon and propose a theory-inspired mechanism that lets practitioners efficiently characterize whether an inverse problem benefits from stochastic optimization techniques. Using standard tools in numerical linear algebra, we derive conditions on the spectral structure of the inverse problem under which it is a suitable application of stochastic gradient methods. In particular, we show that, for an imaging inverse problem, stochastic gradient methods are more advantageous than deterministic methods if and only if the Hessian matrix of the problem has a fast-decaying eigenspectrum. Our results also provide guidance on choosing appropriate minibatch partition schemes, showing that a good minibatch scheme typically has relatively low correlation within each of the minibatches. Finally, we propose an accelerated primal-dual SGD algorithm to tackle another key bottleneck of stochastic optimization, the heavy computation of proximal operators. The proposed method has a fast convergence rate in practice and can efficiently handle non-smooth regularization terms coupled with linear operators.
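
    As a quick illustration of the spectral criterion above: for a least-squares data-fit term 0.5*||A x - b||^2 the Hessian is A^T A, so its eigenvalue decay can be read off the squared singular values of A. The sketch below compares a blur-like operator (fast decay) with a well-conditioned random operator (flat spectrum); the operators, sizes, and the 10% cutoff are illustrative assumptions, not the paper's experimental setup.

```python
# Diagnostic sketch: how much of the Hessian's spectral mass sits in its top
# eigenvalues. Fast decay suggests stochastic gradient methods may pay off;
# a flat spectrum suggests deterministic methods remain competitive.
import numpy as np

def spectral_decay_profile(A, fraction=0.1):
    """Share of the spectral mass of A^T A captured by the top `fraction` eigenvalues."""
    eigs = np.sort(np.linalg.svd(A, compute_uv=False) ** 2)[::-1]   # eigenvalues of A^T A
    k = max(1, int(fraction * len(eigs)))
    return eigs[:k].sum() / eigs.sum()

rng = np.random.default_rng(0)
n = 200
i = np.arange(n)
blur = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * 4.0 ** 2))   # 1-D Gaussian-blur-like operator
flat = rng.standard_normal((n, n)) / np.sqrt(n)                     # well-conditioned random operator

print("blur  :", spectral_decay_profile(blur))   # close to 1 -> fast-decaying spectrum
print("random:", spectral_decay_profile(flat))   # much smaller -> flat spectrum
```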