Regularized Jacobi iteration for decentralized convex optimization with separable constraints
We consider multi-agent, convex optimization programs subject to separable
constraints, where the constraint function of each agent involves only its
local decision vector, while the decision vectors of all agents are coupled via
a common objective function. We focus on a regularized variant of the so-called
Jacobi algorithm for decentralized computation in such problems. We first
consider the case where the objective function is quadratic, and provide a
fixed-point theoretic analysis showing that the algorithm converges to a
minimizer of the centralized problem. Moreover, we quantify the potential
benefits of such an iterative scheme by comparing it against a scaled projected
gradient algorithm. We then consider the general case and show that all limit
points of the proposed iteration are optimal solutions of the centralized
problem. The efficacy of the proposed algorithm is illustrated by applying it
to the problem of optimal charging of electric vehicles, where, as opposed to
earlier approaches, we show convergence to an optimal charging scheme for a
finite, possibly large, number of vehicles.
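A minimal numerical sketch of such a regularized Jacobi step for the quadratic case, with scalar agents and separable box constraints (the data Q, c, the regularization weight alpha, and all names are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def regularized_jacobi(Q, c, lo, hi, alpha=1.0, iters=500):
    """Sketch: min 0.5 x'Qx + c'x subject to lo_i <= x_i <= hi_i,
    where each scalar agent i updates its own coordinate in parallel."""
    n = len(c)
    x = np.clip(np.zeros(n), lo, hi)
    for _ in range(iters):
        # coupling term each agent sees from the others' current iterates
        r = Q @ x - np.diag(Q) * x + c
        # closed-form solution of the regularized local subproblem,
        # followed by projection onto the agent's own box constraint
        x = np.clip((2 * alpha * x - r) / (np.diag(Q) + 2 * alpha), lo, hi)
    return x
```

The regularization term alpha * ||x_i - x_i^k||^2 is what damps the plain Jacobi update enough for the fixed-point iteration to converge.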
Index Information Algorithm with Local Tuning for Solving Multidimensional Global Optimization Problems with Multiextremal Constraints
Multidimensional optimization problems where the objective function and the
constraints are multiextremal non-differentiable Lipschitz functions (with
unknown Lipschitz constants) and the feasible region is a finite collection of
robust nonconvex subregions are considered. Both the objective function and the
constraints may be partially defined. To solve such problems, an algorithm is
proposed that uses Peano space-filling curves and the index scheme to reduce
the original problem to a one-dimensional H\"{o}lder one. Local tuning on the
behaviour of the objective function and constraints is used during the work of
the global optimization procedure in order to accelerate the search. The method
neither uses penalty coefficients nor additional variables. Convergence
conditions are established. Numerical experiments confirm the good performance
of the technique.
Comment: 29 pages, 5 figures
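The idea of reducing a constrained multidimensional search to a one-dimensional scan along a space-filling curve can be illustrated with a toy sketch (a brute-force pass along a discrete Hilbert curve with one constraint handled without penalties; the adaptive index scheme and local tuning of the paper are not reproduced, and all names are assumptions):

```python
def hilbert_d2xy(order, d):
    """Standard Hilbert curve index-to-(x, y) mapping on a 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:             # rotate quadrant so segments stay adjacent
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def minimize_along_curve(f, g, order=6):
    """Scan the curve, keeping the best point satisfying g(x, y) <= 0;
    infeasible points are simply skipped rather than penalized."""
    n = 2 ** order
    best = None
    for d in range(n * n):
        i, j = hilbert_d2xy(order, d)
        x, y = i / (n - 1), j / (n - 1)   # map grid cell to the unit square
        if g(x, y) <= 0 and (best is None or f(x, y) < best[0]):
            best = (f(x, y), x, y)
    return best
```

Consecutive curve points are neighbouring grid cells, which is what lets a Lipschitz property of the original function translate into a H\"{o}lder property of the reduced one-dimensional function.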
A Non-Convex Relaxation for Fixed-Rank Approximation
This paper considers the problem of finding a low rank matrix from
observations of linear combinations of its elements. It is well known that if
the problem fulfills a restricted isometry property (RIP), convex relaxations
using the nuclear norm typically work well and come with theoretical
performance guarantees. On the other hand these formulations suffer from a
shrinking bias that can severely degrade the solution in the presence of noise.
In this theoretical paper we study an alternative non-convex relaxation that
in contrast to the nuclear norm does not penalize the leading singular values
and thereby avoids this bias. We show that despite its non-convexity the
proposed formulation will in many cases have a single local minimizer if a RIP
holds. Our numerical tests show that our approach typically converges to a
better solution than nuclear norm based alternatives even in cases when the RIP
does not hold.
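The shrinking bias of the nuclear norm can be illustrated by comparing singular value soft-thresholding (the nuclear-norm proximal operator) with a simple non-convex rule that leaves singular values above the threshold untouched (a stand-in for the paper's relaxation, not its exact formulation):

```python
import numpy as np

def svd_shrink(M, tau, soft=True):
    """soft=True: subtract tau from every singular value (nuclear-norm prox,
    biases the leading singular values downward).
    soft=False: a non-convex rule that zeroes small singular values but
    leaves those above tau unchanged, avoiding the bias."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_new = np.maximum(s - tau, 0.0) if soft else np.where(s > tau, s, 0.0)
    return U @ np.diag(s_new) @ Vt
```

On a rank-one matrix with leading singular value 5 and threshold 1, the soft rule returns a matrix of norm 4 while the non-convex rule preserves the norm 5, which is the bias the abstract refers to.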
Applying a phase field approach for shape optimization of a stationary Navier-Stokes flow
We apply a phase field approach for a general shape optimization problem of a
stationary Navier-Stokes flow. To be precise we add a multiple of the
Ginzburg--Landau energy as a regularization to the objective functional and
relax the non-permeability of the medium outside the fluid region. The
resulting diffuse interface problem can be shown to be well-posed and
optimality conditions are derived. We state suitable assumptions on the problem
in order to derive a sharp interface limit for the minimizers and the
optimality conditions. Additionally, we can derive a necessary optimality
system for the sharp interface problem by geometric variations without stating
additional regularity assumptions on the minimizing set.
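In commonly used phase field notation (assumed here, not necessarily the paper's symbols), the regularized diffuse interface problem has the form

```latex
\min_{(u,\varphi)} \; J(u,\varphi)
  \;+\; \frac{1}{2}\int_\Omega \alpha_\varepsilon(\varphi)\,|u|^{2}\,\mathrm{d}x
  \;+\; \gamma \int_\Omega \left( \frac{\varepsilon}{2}\,|\nabla\varphi|^{2}
        \;+\; \frac{1}{\varepsilon}\,\psi(\varphi) \right) \mathrm{d}x ,
```

where $u$ is the velocity of the stationary Navier-Stokes flow, $\varphi$ is the phase field, $\psi$ is a double-well potential so that the second integral is the Ginzburg--Landau energy, and $\alpha_\varepsilon(\varphi)$ relaxes the non-permeability of the medium outside the fluid region; the sharp interface limit corresponds to $\varepsilon \to 0$.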
Deterministic global optimization using space-filling curves and multiple estimates of Lipschitz and Holder constants
In this paper, the global optimization problem $\min_{x \in D} f(x)$, with $D$
being a hyperinterval in $\mathbb{R}^N$ and $f(x)$ satisfying the Lipschitz condition
with an unknown Lipschitz constant, is considered. It is supposed that the
function can be multiextremal, non-differentiable, and given as a
`black-box'. To attack the problem, a new global optimization algorithm based
on the following two ideas is proposed and studied both theoretically and
numerically. First, the new algorithm uses numerical approximations to
space-filling curves to reduce the original Lipschitz multi-dimensional problem
to a univariate one satisfying the H\"{o}lder condition. Second, the algorithm
at each iteration applies a new geometric technique working with a number of
possible H\"{o}lder constants chosen from a set of values varying from zero to
infinity, so that ideas introduced in the popular DIRECT method can be used in
H\"{o}lder global optimization. Convergence conditions of the
resulting deterministic global optimization method are established. Numerical
experiments carried out on several hundreds of test functions show quite a
promising performance of the new algorithm in comparison with its direct
competitors.
Comment: 26 pages, 10 figures, 4 tables
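A heavily simplified univariate sketch of the second idea, selecting intervals to subdivide under several candidate H\"{o}lder constants at once rather than fixing a single constant (the grid of constants, the exponent, and the bisection rule are illustrative assumptions, not the paper's geometric technique):

```python
def holder_direct_1d(f, a, b, exponent=0.5, iters=60):
    """For each candidate constant H, pick the interval with the smallest
    Hoelder lower bound  min(f(left), f(right)) - H * (half-width)**exponent,
    then bisect the union of picked intervals (a DIRECT-style idea)."""
    intervals = [(a, b, f(a), f(b))]          # (left, right, f(left), f(right))
    best = min(intervals[0][2], intervals[0][3])
    H_grid = [10.0 ** k for k in range(-2, 3)]  # coarse stand-in for "zero to infinity"
    for _ in range(iters):
        picks = set()
        for H in H_grid:
            bounds = [min(fa, fb) - H * ((bb - aa) / 2) ** exponent
                      for (aa, bb, fa, fb) in intervals]
            picks.add(min(range(len(intervals)), key=bounds.__getitem__))
        new_intervals = []
        for i, (aa, bb, fa, fb) in enumerate(intervals):
            if i in picks:                    # bisect every picked interval
                m = (aa + bb) / 2
                fm = f(m)
                best = min(best, fm)
                new_intervals += [(aa, m, fa, fm), (m, bb, fm, fb)]
            else:
                new_intervals.append((aa, bb, fa, fb))
        intervals = new_intervals
    return best
```

Small constants in the grid drive local refinement near the incumbent, while large constants force wide intervals to be explored, which is the balance the multiple-estimates scheme is after.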