An Arrow-Hurwicz-Uzawa type flow as least squares solver for network linear equations
We study distributed algorithms for obtaining least squares solutions to systems of linear algebraic equations over networks. Each node has access to one of the linear equations and holds a dynamic state. The aim is for the node states to reach consensus at a least squares solution of the linear equations by exchanging states with neighbors over an underlying interaction graph. A continuous-time distributed least squares solver over networks is developed in the form of the famous Arrow–Hurwicz–Uzawa flow. A necessary and sufficient condition on the graph Laplacian is established for the continuous-time distributed algorithm to converge to the least squares solution, with an exponentially fast convergence rate. The feasibility of different fundamental graphs is discussed, including path graphs and random graphs. Moreover, a discrete-time distributed algorithm is obtained by Euler's method, converging exponentially to the least squares solution at the node states under suitable step size and graph conditions. This work was supported by the DAAD with funds of the German Federal Ministry of Education and Research (BMBF), and by the Australian Research Council (ARC) under grants DP-130103610 and DP-160104500.
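To give a concrete feel for this kind of scheme, here is a minimal sketch, not the paper's exact flow or notation: an Euler-discretized primal-dual consensus dynamic in which node i uses only its own equation h_i^T x = z_i and its graph neighbors. The function name, step size, iteration count, and the 4-node path-graph example are illustrative assumptions.

```python
import numpy as np

def distributed_lsq(H, z, L, step=0.02, iters=50000):
    """H: (n, m) with rows h_i^T, z: (n,) data, L: (n, n) graph Laplacian."""
    n, m = H.shape
    X = np.zeros((n, m))   # primal state held at each node
    V = np.zeros((n, m))   # auxiliary (dual) state held at each node
    for _ in range(iters):
        r = (H * X).sum(axis=1) - z                   # local residuals h_i^T x_i - z_i
        grad = r[:, None] * H                         # local gradients
        X_next = X - step * (grad + L @ X + L @ V)    # local descent + consensus coupling
        V = V + step * (L @ X)                        # dual ascent on the disagreement
        X = X_next
    return X   # every row should approach a common least squares solution

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H, z = rng.normal(size=(4, 2)), rng.normal(size=4)
    A = np.diag([1.0, 1.0, 1.0], 1) + np.diag([1.0, 1.0, 1.0], -1)  # 4-node path graph
    L = np.diag(A.sum(axis=1)) - A
    print(distributed_lsq(H, z, L)[0])                # one node's state
    print(np.linalg.lstsq(H, z, rcond=None)[0])       # centralized reference
```

Each node's update touches only its own data and the Laplacian rows of its neighbors, which is what makes the iteration distributable; the step size must be small enough for the Euler discretization to remain stable.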
First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches on tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term also depending on some input data $f$ and measuring the deviation of $Ku$ from such, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is a (often linear) forward operator modeling the dependence of data on an underlying image $u$, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, the current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques uses nonsmooth and convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived the interest in techniques like operator splittings or augmented Lagrangians. Here we shall provide an overview of methods currently developed and recent results, as well as some computational studies providing a comparison of different methods and also illustrating their success in applications. Comment: 60 pages, 33 figures
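As a small illustration of the splitting structure just described, the following is a minimal sketch, not code from the survey, of forward-backward splitting (proximal gradient, ISTA) applied to $\tfrac{1}{2}\|Ku - f\|^2 + \alpha\|u\|_1$; the dense random $K$, the parameter values, and the sparse test signal are assumptions chosen only for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, applied componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, f, alpha, iters=500):
    """Minimize 1/2 ||K u - f||^2 + alpha ||u||_1 by proximal gradient steps."""
    u = np.zeros(K.shape[1])
    tau = 1.0 / np.linalg.norm(K, 2) ** 2          # step size 1 / ||K||^2
    for _ in range(iters):
        grad = K.T @ (K @ u - f)                   # gradient of the smooth fidelity term
        u = soft_threshold(u - tau * grad, tau * alpha)   # proximal (backward) step
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    K = rng.normal(size=(60, 100))                 # illustrative forward operator
    u_true = np.zeros(100); u_true[:5] = 1.0       # sparse ground truth
    f = K @ u_true + 0.01 * rng.normal(size=60)    # noisy data
    u_hat = ista(K, f, alpha=0.1)
    print(np.count_nonzero(np.abs(u_hat) > 1e-3))  # number of active coefficients
```

The smooth fidelity term is handled by an explicit gradient step and the nonsmooth $\ell_1$ term by its proximal map, which is exactly the two-term splitting the overview is concerned with; total-variation regularizers require a different proximal computation but fit the same template.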
GMRES-Accelerated ADMM for Quadratic Objectives
We consider the sequence acceleration problem for the alternating direction method-of-multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations, for an order-of-magnitude reduction in iterations, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^2)$, instead of $O(1/k)$, where $k$ is the iteration index. Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
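To make the acceleration mechanism concrete, here is a minimal sketch on a toy equality-constrained quadratic program with an assumed consensus-style splitting and penalty, not the paper's formulation or its SeDuMi embedding: because one ADMM sweep is an affine map $w \mapsto Mw + b$ when the objective is quadratic, its fixed point can be computed by handing the matrix-free system $(I - M)w = b$ to GMRES instead of iterating the map.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy problem: min 1/2 x^T P x + q^T x  s.t.  A x = c   (sizes and rho are assumptions)
rng = np.random.default_rng(2)
n, p, rho = 30, 5, 1.0
P = rng.normal(size=(n, n)); P = P @ P.T + n * np.eye(n)   # strongly convex quadratic
q = rng.normal(size=n)
A = rng.normal(size=(p, n)); c = rng.normal(size=p)

x_from = lambda z, u: np.linalg.solve(P + rho * np.eye(n), rho * (z - u) - q)
proj = lambda v: v - A.T @ np.linalg.solve(A @ A.T, A @ v - c)   # projection onto A z = c

def admm_sweep(w):
    """One ADMM iteration on the stacked iterate w = (z, u); affine for quadratic objectives."""
    z, u = w[:n], w[n:]
    x = x_from(z, u)            # x-update: quadratic minimization
    z_new = proj(x + u)         # z-update: projection onto the equality constraint
    u_new = u + x - z_new       # scaled dual update
    return np.concatenate([z_new, u_new])

b = admm_sweep(np.zeros(2 * n))                      # b = T(0) for the affine map T(w) = M w + b
I_minus_M = LinearOperator((2 * n, 2 * n),
                           matvec=lambda w: w - (admm_sweep(w) - b))   # w -> (I - M) w
w_star, info = gmres(I_minus_M, b, restart=2 * n)    # solve for the ADMM fixed point
x_star = x_from(w_star[:n], w_star[n:])

# Reference solution via the KKT system of the equality-constrained QP.
KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
ref = np.linalg.solve(KKT, np.concatenate([-q, c]))[:n]
print(np.linalg.norm(x_star - ref), info)
```

Each GMRES matrix-vector product costs exactly one ADMM sweep, so the acceleration reuses the existing solver machinery; the Krylov method simply chooses better combinations of those sweeps than plain fixed-point iteration does.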