Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems
Optimization methods are at the core of many problems in signal/image
processing, computer vision, and machine learning. For a long time, it has been
recognized that looking at the dual of an optimization problem may drastically
simplify its solution. Deriving efficient strategies that jointly bring the
primal and the dual problems into play is, however, a more recent idea, one
that has generated many important contributions in recent years. These novel
developments are grounded on recent advances in convex analysis, discrete
optimization, parallel processing, and non-smooth optimization with emphasis on
sparsity issues. In this paper, we aim to present the principles of
primal-dual approaches, while giving an overview of the numerical methods that
have been proposed in different contexts. We show the benefits that can be
drawn from primal-dual algorithms for solving both large-scale convex
optimization problems and discrete ones, and we provide various application
examples to illustrate their usefulness.
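
To make the flavor of these methods concrete, the following is a minimal Python sketch (our own illustration, not code from the survey) of one widely used primal-dual scheme, a Chambolle-Pock-style iteration applied to one-dimensional total-variation denoising; the finite-difference operator D, the step sizes tau and sigma, and the test signal are illustrative assumptions.

# Minimal primal-dual (Chambolle-Pock style) sketch for
#   min_x  0.5*||x - b||^2 + lam*||D x||_1,
# where D is the 1-D forward-difference operator.  Illustrative only.
import numpy as np

def tv_denoise_primal_dual(b, lam=1.0, tau=0.25, sigma=0.25, iters=500):
    n = len(b)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n finite-difference matrix
    x = b.copy()                          # primal variable
    x_bar = x.copy()                      # extrapolated primal point
    y = np.zeros(n - 1)                   # dual variable
    for _ in range(iters):
        # dual ascent step followed by projection onto the l_inf ball of radius lam
        y = np.clip(y + sigma * D @ x_bar, -lam, lam)
        # primal descent step followed by the prox of 0.5*||. - b||^2
        x_new = (x - tau * D.T @ y + tau * b) / (1.0 + tau)
        # extrapolation (over-relaxation) step
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.repeat([0.0, 2.0, -1.0], 50) + 0.3 * rng.standard_normal(150)
    print(tv_denoise_primal_dual(noisy, lam=1.0)[:5])

Each iteration alternates a proximal ascent step in the dual variable with a proximal descent step in the primal variable, linked by an extrapolation step; convergence requires the step sizes to satisfy tau*sigma*||D||^2 < 1, which holds here since ||D||^2 <= 4 for the finite-difference operator.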
Approximately Truthful Multi-Agent Optimization Using Cloud-Enforced Joint Differential Privacy
Multi-agent coordination problems often require agents to exchange state
information in order to reach some collective goal, such as agreement on a
final state value. In some cases, it is feasible that opportunistic agents may
deceptively report false state values for their own benefit, e.g., to claim a
larger portion of shared resources. Motivated by such cases, this paper
presents a multi-agent coordination framework which disincentivizes
opportunistic misreporting of state information. This paper focuses on
multi-agent coordination problems that can be stated as nonlinear programs,
with non-separable constraints coupling the agents. In this setting, an
opportunistic agent may be tempted to skew the problem's constraints in its
favor to reduce its local cost, and this is exactly the behavior we seek to
disincentivize. The framework presented uses a primal-dual approach wherein the
agents compute primal updates and a centralized cloud computer computes dual
updates. All computations performed by the cloud are carried out in a way that
enforces joint differential privacy, which adds noise in order to dilute any
agent's influence upon the value of its cost function in the problem. We show
that this dilution deters agents from intentionally misreporting their states
to the cloud, and present bounds on the possible cost reduction an agent can
attain through misreporting its state. This work extends our earlier work on
incorporating ordinary differential privacy into multi-agent optimization, and
we show how that approach can be modified to provide a disincentive for
misreporting states to the cloud. Numerical results are presented to
demonstrate convergence of the optimization algorithm under joint differential
privacy.

Comment: 17 pages, 3 figures
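
The following Python sketch (our own illustration, not the paper's algorithm) captures the general pattern described in the abstract: agents take local primal gradient steps on their Lagrangians, while a central "cloud" updates the shared dual variable after perturbing the measured constraint violation with Laplace noise, standing in here for the joint-differential-privacy mechanism. The cost functions, resource budget, step sizes, and noise scale are all assumptions made for illustration.

# Toy coupled problem: each agent i solves min (x_i - a_i)^2, subject to the
# shared-resource constraint sum_i x_i <= c, enforced through a dual price mu.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([4.0, 2.0, 5.0])      # each agent i wants x_i close to a_i
c = 6.0                            # shared-resource budget: sum_i x_i <= c
x = np.zeros_like(a)               # agents' primal variables
mu = 0.0                           # cloud's dual variable (price of the constraint)
alpha, gamma = 0.1, 0.05           # primal / dual step sizes
noise_scale = 0.1                  # Laplace scale standing in for the privacy mechanism

for _ in range(2000):
    # each agent takes one gradient step on f_i(x_i) + mu * x_i locally
    x = x - alpha * (2.0 * (x - a) + mu)
    # the cloud measures the constraint violation, adds noise, and updates mu
    violation = x.sum() - c + rng.laplace(scale=noise_scale)
    mu = max(0.0, mu + gamma * violation)

print("primal solution:", np.round(x, 3), " dual price:", round(mu, 3))

The noise added to the dual update dilutes the effect of any single agent's report on the price mu, which is the intuition behind the misreporting disincentive discussed above.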
Bounding Duality Gap for Separable Problems with Linear Constraints
We consider the problem of minimizing a sum of non-convex functions over a
compact domain, subject to linear inequality and equality constraints.
Approximate solutions can be found by solving a convexified version of the
problem, in which each function in the objective is replaced by its convex
envelope. We propose a randomized algorithm to solve the convexified problem
which finds an $\epsilon$-suboptimal solution to the original problem. With
probability one, $\epsilon$ is bounded by a term proportional to the maximal
number of active constraints in the problem. The bound does not depend on the
number of variables in the problem or the number of terms in the objective. In
contrast to previous related work, our proof is constructive, self-contained,
and gives a bound that is tight.
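
In notation of our own choosing (the paper's symbols may differ), the setup above can be summarized by the original problem and its convexified relaxation:

\begin{align*}
\text{(P)}\quad &\min_{x_1,\dots,x_N}\ \sum_{i=1}^{N} f_i(x_i)
  \quad\text{s.t.}\quad A x \le b,\ \ C x = d,\ \ x_i \in X_i,\\
\text{(P}_{\mathrm{cvx}}\text{)}\quad &\min_{x_1,\dots,x_N}\ \sum_{i=1}^{N} \hat f_i(x_i)
  \quad\text{s.t.}\quad A x \le b,\ \ C x = d,\ \ x_i \in X_i,
\end{align*}

where each $X_i$ is compact, each $f_i$ may be non-convex, and $\hat f_i$ denotes the convex envelope of $f_i$ on $X_i$. The claim is that a point recovered from (P$_{\mathrm{cvx}}$) by the randomized algorithm is, with probability one, $\epsilon$-suboptimal for (P), with $\epsilon$ proportional to the maximal number of active constraints and independent of $N$.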
A distributed primal-dual interior-point method for loosely coupled problems using ADMM
In this paper we propose an efficient distributed algorithm for solving
loosely coupled convex optimization problems. The algorithm is based on a
primal-dual interior-point method in which we use the alternating direction
method of multipliers (ADMM) to compute the primal-dual directions at each
iteration of the method. This enables us to combine the exceptional convergence
properties of primal-dual interior-point methods with the remarkable
parallelizability of ADMM. The resulting algorithm has superior computational
properties compared to ADMM applied directly to our problem: the amount of
computation that each computing agent must perform is far smaller. In
particular, the updates for all variables can be expressed in closed form,
irrespective of the type of optimization problem. The most expensive
computations of the algorithm occur in the updates of the primal
variables and can be precomputed in each iteration of the interior-point
method. We verify our method and compare it to ADMM in numerical experiments.

Comment: extended version, 50 pages, 9 figures
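
For orientation, here is a compact Python sketch (our own illustrative code, not the paper's implementation) of a standard primal-dual interior-point iteration on a small inequality-constrained quadratic program; the centralized Newton solve marked in the comments is the step that the paper instead carries out distributively with ADMM for loosely coupled problems. The toy problem data are assumptions.

# Primal-dual interior-point sketch for  min 0.5*x'Qx + q'x  s.t.  A x <= b.
import numpy as np

def pd_interior_point(Q, q, A, b, iters=30):
    m, n = A.shape
    x = np.zeros(n)
    s = np.ones(m)                       # slacks: A x + s = b, s > 0
    z = np.ones(m)                       # dual variables, z > 0
    for _ in range(iters):
        mu = 0.1 * (s @ z) / m           # barrier parameter, shrunk each iteration
        # residuals of the perturbed KKT conditions
        r_dual = Q @ x + q + A.T @ z
        r_cent = s * z - mu
        r_prim = A @ x + s - b
        # assemble and solve the Newton system for the primal-dual direction;
        # in the paper, this centralized solve is replaced by ADMM across agents
        KKT = np.block([
            [Q,                np.zeros((n, m)), A.T             ],
            [np.zeros((m, n)), np.diag(z),       np.diag(s)      ],
            [A,                np.eye(m),        np.zeros((m, m))],
        ])
        rhs = -np.concatenate([r_dual, r_cent, r_prim])
        dx, ds, dz = np.split(np.linalg.solve(KKT, rhs), [n, n + m])
        # step length chosen to keep s and z strictly positive
        alpha = 1.0
        for v, dv in ((s, ds), (z, dz)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, s, z = x + alpha * dx, s + alpha * ds, z + alpha * dz
    return x

if __name__ == "__main__":
    # toy QP: min 0.5*||x||^2 - x_1  subject to  x_1 + x_2 <= 1  and  -x_1 <= 0
    Q = np.eye(2); q = np.array([-1.0, 0.0])
    A = np.array([[1.0, 1.0], [-1.0, 0.0]]); b = np.array([1.0, 0.0])
    print(np.round(pd_interior_point(Q, q, A, b), 4))

In the loosely coupled setting of the paper, the KKT matrix inherits a block structure that follows the coupling between agents, which is what makes an ADMM-based distributed solve of this system attractive.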