Towards Fast-Convergence, Low-Delay and Low-Complexity Network Optimization
Distributed network optimization has been studied for well over a decade.
However, we still do not have a good idea of how to design schemes that can
simultaneously provide good performance across the dimensions of utility
optimality, convergence speed, and delay. To address these challenges, in this
paper, we propose a new algorithmic framework with all these metrics
approaching optimality. The salient features of our new algorithm are
three-fold: (i) fast convergence: it converges within $O(\log(1/\epsilon))$ iterations, which is the fastest rate among all existing algorithms; (ii)
low delay: it guarantees optimal utility with finite queue length; (iii) simple
implementation: the control variables of this algorithm are based on virtual
queues that do not require maintaining per-flow information. The new technique
builds on a kind of inexact Uzawa method within the Alternating Direction Method of Multipliers (ADMM), and provides a new theoretical path to proving the global and linear convergence rate of such a method without requiring the constraint matrix to have full rank.
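As a toy illustration of the Uzawa-style multiplier update underlying such methods (not the paper's algorithm, whose virtual-queue construction is more involved), consider an augmented-Lagrangian iteration for an equality-constrained quadratic program:

```python
import numpy as np

# Minimal sketch, assuming a simple quadratic program:
#   min 0.5*||x - c||^2  subject to  A x = b.
# The dual (multiplier) update plays the role of the queue-like updates
# described above; A, b, c below are invented for illustration.
def uzawa(A, b, c, rho=1.0, iters=200):
    n = A.shape[1]
    y = np.zeros(A.shape[0])          # dual variable ("virtual queue")
    M = np.eye(n) + rho * A.T @ A     # Hessian of the augmented Lagrangian
    x = np.zeros(n)
    for _ in range(iters):
        # Primal step: exact minimizer of the augmented Lagrangian in x
        x = np.linalg.solve(M, c - A.T @ y + rho * A.T @ b)
        # Dual ascent step on the constraint residual
        y = y + rho * (A @ x - b)
    return x, y

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])
x, y = uzawa(A, b, c)
# x converges to the projection of c onto {A x = b}, here [0.5, 0.5]
```

For this strongly convex objective the dual error contracts geometrically, which is the linear-rate behavior the abstract refers to.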
Social welfare and profit maximization from revealed preferences
Consider the seller's problem of finding optimal prices for her
(divisible) goods when faced with a set of consumers, given that she can
only observe their purchased bundles at posted prices, i.e., revealed
preferences. We study both social welfare and profit maximization with revealed
preferences. Although social welfare maximization is a seemingly non-convex
optimization problem in prices, we show that (i) it can be reduced to a dual
convex optimization problem in prices, and (ii) the revealed preferences can be
interpreted as supergradients of the concave conjugate of valuation, with which
subgradients of the dual function can be computed. We thereby obtain a simple
subgradient-based algorithm for strongly concave valuations and convex costs, with query complexity polynomial in $1/\varepsilon$, where $\varepsilon$ is the additive difference between the social welfare induced by our algorithm and the optimum social welfare. We also study social welfare maximization in the online
setting, specifically the random permutation model, where consumers arrive
one-by-one in a random order. For the case where consumer valuations can be
arbitrary continuous functions, we propose a price-posting mechanism that achieves expected social welfare within an additive factor of the maximum social welfare. Finally, for profit maximization (which may be
non-convex in simple cases), we give nearly matching upper and lower bounds on
the query complexity for separable valuations and cost (i.e., each good can be
treated independently).
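The dual subgradient scheme can be sketched on a hypothetical one-good market (valuation and cost below are invented for illustration): the purchased bundle at each posted price is the revealed preference, and the supply/demand gap is a subgradient of the dual in prices.

```python
import numpy as np

# Illustrative sketch, not the paper's exact algorithm: subgradient descent
# on the dual (in prices) for a single divisible good, using only the
# bundle purchased at each posted price.
# Assumed quadratic valuation v(x) = a*x - x**2/2 (strongly concave)
# and cost c(x) = b*x**2/2 (convex).
a, b = 2.0, 1.0

def demand(p):
    # Consumer's revealed preference: argmax_x v(x) - p*x, truncated at 0
    return max(a - p, 0.0)

def supply(p):
    # Seller's cost-minimizing quantity: argmax_x p*x - c(x)
    return p / b

p = 0.0
for t in range(1, 2001):
    # Subgradient of the dual at p is supply(p) - demand(p)
    g = supply(p) - demand(p)
    p -= 0.5 / np.sqrt(t) * g        # diminishing step size
# At the optimum, supply meets demand: here p* = 1 and x* = 1.
```

At the fixed point the market clears, and the cleared quantity maximizes welfare $v(x) - c(x)$, mirroring the Fenchel-duality reduction described in the abstract.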
First order algorithms in variational image processing
Variational methods in imaging have nowadays developed into a quite universal and flexible tool, allowing for highly successful approaches to tasks
like denoising, deblurring, inpainting, segmentation, super-resolution,
disparity, and optical flow estimation. The overall structure of such approaches is of the form $\min_u F(Ku; f) + \alpha R(u)$, where the functional $F$ is a data fidelity term depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $R$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data $f$ on an underlying image $u$, and $\alpha$ is a positive regularization parameter. While $F$ is often smooth and (strictly) convex, the current practice almost exclusively uses
nonsmooth regularization functionals. The majority of successful techniques uses nonsmooth and convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the
specific structure as a sum of two very different terms to be minimized,
splitting algorithms are a quite canonical choice. Consequently this field has
revived the interest in techniques like operator splittings or augmented
Lagrangians. Here we shall provide an overview of methods currently developed
and recent results as well as some computational studies providing a comparison
of different methods and also illustrating their success in applications. Comment: 60 pages, 33 figures
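A minimal forward-backward (proximal gradient) sketch of such a splitting, for a smooth data term plus a nonsmooth $\ell_1$ regularizer; the operator, data, and parameter below are synthetic stand-ins, not from the text:

```python
import numpy as np

# Forward-backward splitting (ISTA) for the generic model
#   min_u 0.5*||K u - f||^2 + alpha*||u||_1,
# alternating a gradient step on the smooth data term with the proximal
# operator of the nonsmooth regularizer. K, f, alpha are synthetic.
rng = np.random.default_rng(0)
K = rng.standard_normal((20, 10))
u_true = np.zeros(10)
u_true[[2, 7]] = [1.5, -2.0]          # sparse ground-truth signal
f = K @ u_true
alpha = 0.1

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(K, 2) ** 2         # Lipschitz constant of the gradient
u = np.zeros(10)
for _ in range(500):
    grad = K.T @ (K @ u - f)          # gradient of the smooth fidelity term
    u = soft_threshold(u - grad / L, alpha / L)
```

The two terms are treated by entirely different means (an explicit gradient step versus a closed-form proximal map), which is exactly why splitting methods are the canonical choice for this problem class.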
Transformed Primal-Dual Methods For Nonlinear Saddle Point Systems
A transformed primal-dual (TPD) flow is developed for a class of nonlinear smooth saddle point systems. The flow for the dual variable contains a Schur
complement which is strongly convex. Exponential stability of the saddle point
is obtained by showing the strong Lyapunov property. Several TPD iterations are
derived by implicit Euler, explicit Euler, and implicit-explicit methods of the
TPD flow. When generalized to symmetric TPD iterations, the linear convergence rate is preserved for convex-concave saddle point systems under the assumption that the regularized functions are strongly convex. The effectiveness of augmented Lagrangian methods can be explained as a regularization of the lack of strong convexity and as a preconditioning of the Schur complement. The algorithm and convergence analysis depend crucially on appropriate inner products of the spaces for the primal and dual variables. A clear convergence analysis with nonlinear inexact inner solvers is also developed.
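The flow viewpoint can be illustrated with a plain (untransformed) primal-dual gradient flow for a Lagrangian saddle point, discretized by explicit Euler; this is only a simplified sketch, and the paper's transformation and Schur-complement preconditioning are omitted.

```python
import numpy as np

# Explicit-Euler discretization of the primal-dual gradient flow
#   x' = -grad_x L(x, y),  y' = +grad_y L(x, y)
# for the saddle point of L(x, y) = 0.5*||x||^2 + y @ (A @ x - b).
# A, b are invented example data; tau must be small enough for stability.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.zeros(2)
y = np.zeros(1)
tau = 0.2
for _ in range(500):
    gx = x + A.T @ y       # grad_x L: descent direction for the primal
    gy = A @ x - b         # grad_y L: ascent direction for the dual
    x, y = x - tau * gx, y + tau * gy
# The iterates converge to the saddle point x* = [0.5, 0.5], y* = [-0.5].
```

Strong convexity in the primal variable makes this equilibrium exponentially stable, which is the continuous-time property the TPD construction strengthens on the dual side via the Schur complement.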