
    On the Regularizing Property of Stochastic Gradient Descent

    Stochastic gradient descent is one of the most successful approaches for solving large-scale problems, especially in machine learning and statistics. At each iteration, it employs an unbiased estimator of the full gradient computed from a single randomly selected data point. Hence, it scales well with problem size, is very attractive for truly massive datasets, and holds significant potential for solving large-scale inverse problems. In the recent machine learning literature, it has been empirically observed that, when equipped with early stopping, the method has a regularizing property. In this work, we rigorously establish this regularizing property (under an \textit{a priori} early stopping rule) and prove convergence rates under the canonical sourcewise condition, for minimizing the quadratic functional arising in linear inverse problems. This is achieved by combining tools from classical regularization theory and stochastic analysis. Further, we analyze the preasymptotic weak and strong convergence behavior of the algorithm. The theoretical findings shed light on the performance of the algorithm and are complemented with illustrative numerical experiments.
    Comment: 22 pages, better presentation
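
    The sketch below illustrates the kind of method the abstract describes: single-sample SGD on the quadratic functional $\frac{1}{2n}\|Ax - b\|^2$, stopped a priori after a fixed number of iterations. It is a minimal illustration only; the function name, the polynomial step-size schedule $\eta_k = c_0 k^{-\alpha}$, and the particular stopping index are assumptions for the example, not the paper's exact choices.

    ```python
    import numpy as np

    def sgd_early_stopping(A, b, n_iters, c0=1.0, alpha=0.5, seed=None):
        """Single-sample SGD for min_x (1/2n)*||A x - b||^2, stopped
        a priori after n_iters steps (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        n, p = A.shape
        x = np.zeros(p)
        for k in range(1, n_iters + 1):
            i = rng.integers(n)                  # one randomly selected data point
            g = (A[i] @ x - b[i]) * A[i]         # unbiased estimator of the full gradient
            x -= c0 * k ** (-alpha) * g          # decaying step size (assumed schedule)
        return x                                 # early stopping: return the last iterate
    ```

    Here the a priori rule simply fixes n_iters in advance (e.g., as a function of the noise level); running the loop longer would eventually overfit the noise, which is what the early stopping regularizes against.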

    An Analysis of Finite Element Approximation in Electrical Impedance Tomography

    We present a finite element analysis of electrical impedance tomography for reconstructing the conductivity distribution from electrode voltage measurements by means of Tikhonov regularization. Two popular choices of the penalty term, i.e., the $H^1(\Omega)$-norm smoothness penalty and the total variation seminorm penalty, are considered. A piecewise linear finite element method is employed for discretizing the forward model, i.e., the complete electrode model, the conductivity, and the penalty functional. The convergence of the finite element approximations for the Tikhonov model on both polyhedral and smooth curved domains is established. This provides rigorous justification for the ad hoc discretization procedures in the literature.
    Comment: 20 pages
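
    To make the two penalty choices concrete, here is a minimal sketch of the discretized Tikhonov functional. The `forward` callable stands in for a FEM discretization of the complete electrode model (not implemented here), and `L` is a discrete gradient matrix; all names and the smoothing parameter `eps` are assumptions for illustration.

    ```python
    import numpy as np

    def tikhonov_objective(sigma, forward, U_meas, alpha, L):
        """J(sigma) = ||forward(sigma) - U_meas||^2 + alpha * ||L sigma||^2,
        where ||L sigma||^2 mimics the H^1(Omega) smoothness penalty
        on the piecewise linear conductivity coefficients sigma."""
        residual = forward(sigma) - U_meas      # electrode voltage misfit
        smoothness = np.sum((L @ sigma) ** 2)   # discrete H^1 seminorm penalty
        return residual @ residual + alpha * smoothness

    def tv_penalty(sigma, L, eps=1e-8):
        """Smoothed total variation seminorm, the second penalty choice;
        eps regularizes the nondifferentiability at zero."""
        return np.sum(np.sqrt((L @ sigma) ** 2 + eps))
    ```

    Swapping `tv_penalty` for the quadratic term in `tikhonov_objective` gives the total variation variant; the paper's analysis covers the convergence of such finite element discretizations, not any particular optimizer.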

    Iterative Soft/Hard Thresholding with Homotopy Continuation for Sparse Recovery

    In this note, we analyze an iterative soft/hard thresholding algorithm with homotopy continuation for recovering a sparse signal $x^\dag$ from noisy data of noise level $\epsilon$. Under suitable regularity and sparsity conditions, we design a path along which the algorithm can find a solution $x^*$ which admits a sharp reconstruction error $\|x^* - x^\dag\|_{\ell^\infty} = O(\epsilon)$ with an iteration complexity $O(\frac{\ln \epsilon}{\ln \gamma}\, np)$, where $n$ and $p$ are the problem dimensions and $\gamma \in (0,1)$ controls the length of the path. Numerical examples are given to illustrate its performance.
    Comment: 5 pages, 4 figures
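
    The following sketch shows the soft-thresholding variant of this continuation idea: solve a sequence of $\ell^1$ problems with geometrically decreasing thresholds $\lambda_t = \lambda_0 \gamma^t$, warm-starting each stage from the previous solution. The function names, step size, and stage/inner iteration counts are assumptions for the example; the paper's specific path design is not reproduced here.

    ```python
    import numpy as np

    def soft_threshold(z, lam):
        """Componentwise soft-thresholding operator."""
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def ist_homotopy(A, y, lam0, gamma=0.9, n_stages=20, inner_iters=50):
        """Iterative soft thresholding with homotopy continuation:
        shrink the threshold by gamma in (0,1) at each stage, so gamma
        controls the length of the continuation path."""
        n, p = A.shape
        x = np.zeros(p)
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz constant
        lam = lam0
        for _ in range(n_stages):
            for _ in range(inner_iters):         # thresholded gradient steps
                x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
            lam *= gamma                          # move one step along the path
        return x
    ```

    Each inner sweep costs $O(np)$ per iteration, which is where the $np$ factor in the stated complexity comes from; replacing `soft_threshold` with a hard-thresholding rule gives the hard variant.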