Repulsion dynamics for uniform Pareto front approximation in multi-objective optimization problems
Scalarization makes it possible to solve a multi-objective optimization problem by
solving many single-objective sub-problems, each uniquely determined by some
parameters. In this work, we propose several adaptive strategies to select such
parameters in order to obtain a uniform approximation of the Pareto front. This
is done by introducing a heuristic dynamics where the parameters interact
through a binary repulsive potential. The approach aims to minimize the
associated potential energy, which is used to quantify the diversity of the
computed solutions. A stochastic component is also added to overcome
non-optimal energy configurations. Numerical experiments show the validity of
the proposed approach for bi- and tri-objective problems with different Pareto
front geometries.
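
The idea can be illustrated with a minimal sketch (not the authors' implementation: the bi-objective test problem, the inverse-distance binary potential, the finite-difference descent, and all parameter values are illustrative assumptions):

```python
import numpy as np

def pareto_point(w):
    # Toy convex bi-objective front: f1 = t^2, f2 = (1-t)^2 for t in [0, 1].
    # The weighted-sum scalarization w*f1 + (1-w)*f2 is minimized at t = 1 - w.
    t = 1.0 - w
    return np.array([t**2, (1.0 - t)**2])

def repulsion_energy(ws):
    # Binary repulsive potential: sum of inverse pairwise distances
    # between the computed front points; large when points cluster.
    pts = np.array([pareto_point(w) for w in ws])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(ws), k=1)
    return np.sum(1.0 / (d[iu] + 1e-12))

def repulsion_step(ws, step=1e-5, noise=0.0, rng=None):
    # One heuristic update: finite-difference descent on the energy,
    # plus an optional stochastic kick to escape non-optimal configurations.
    rng = rng or np.random.default_rng(0)
    h = 1e-6
    grad = np.zeros_like(ws)
    for i in range(len(ws)):
        wp = ws.copy()
        wp[i] += h
        grad[i] = (repulsion_energy(wp) - repulsion_energy(ws)) / h
    ws = ws - step * grad + noise * rng.standard_normal(len(ws))
    return np.clip(ws, 1e-3, 1.0 - 1e-3)
```

Iterating `repulsion_step` from a clustered initialization spreads the scalarization parameters so that the corresponding front points become more evenly distributed.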
Stochastic optimization methods for the simultaneous control of parameter-dependent systems
We address the application of stochastic optimization methods for the
simultaneous control of parameter-dependent systems. In particular, we focus on
the classical Stochastic Gradient Descent (SGD) approach of Robbins and Monro,
and on the recently developed Continuous Stochastic Gradient (CSG) algorithm.
We consider the problem of computing simultaneous controls through the
minimization of a cost functional defined as the superposition of individual
costs for each realization of the system. We compare the performance of these
stochastic approaches, in terms of their computational complexity, with that
of the more classical Gradient Descent (GD) and Conjugate Gradient (CG)
algorithms, and we discuss the advantages and disadvantages of each
methodology. In agreement with well-established results in the machine learning
context, we show how the SGD and CSG algorithms can significantly reduce the
computational burden when treating control problems that depend on a large
number of parameters. This is corroborated by numerical experiments.
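
The contrast between full-gradient and stochastic updates can be sketched on a toy quadratic surrogate of the averaged cost (the linear-quadratic model, dimensions, and step-size schedule below are illustrative assumptions, not the paper's control setting):

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 50, 5                       # parameter realizations, control dimension
A = rng.standard_normal((K, n, n))  # one system per realization
b = rng.standard_normal((K, n))     # one target per realization

def cost(u):
    # Superposition of individual costs: J(u) = (1/K) sum_k 0.5*||A_k u - b_k||^2
    return np.mean([0.5 * np.sum((A[k] @ u - b[k])**2) for k in range(K)])

def full_grad(u):
    # GD uses all K realizations at every iteration: cost grows with K.
    return np.mean([A[k].T @ (A[k] @ u - b[k]) for k in range(K)], axis=0)

def sgd(u, iters=2000, lr0=0.02):
    # Robbins-Monro SGD: one randomly sampled realization per step,
    # with a decreasing step size, so the per-iteration cost is O(1) in K.
    for t in range(iters):
        k = rng.integers(K)
        g = A[k].T @ (A[k] @ u - b[k])
        u = u - lr0 / (1.0 + 0.01 * t) * g
    return u
```

Each SGD step touches a single realization, which is where the computational savings over GD come from when K is large.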
A consensus-based global optimization method for high dimensional machine learning problems
We improve the recently introduced consensus-based optimization method, proposed
in [R. Pinnau, C. Totzeck, O. Tse and S. Martin, Math. Models Methods Appl.
Sci., 27(01):183--204, 2017], which is a gradient-free optimization method for
general non-convex functions. We first replace the isotropic geometric Brownian
motion by the component-wise one, thus removing the dimensionality dependence
of the drift rate, making the method more competitive for high dimensional
optimization problems. Secondly, we utilize random mini-batch ideas to
reduce the computational cost of calculating the weighted average toward
which the individual particles tend to relax. For its mean-field limit--a
nonlinear Fokker-Planck equation--we prove, in both time continuous and
semi-discrete settings, that the convergence of the method, which is
exponential in time, is guaranteed with parameter constraints {\it independent}
of the dimensionality. We also conduct numerical tests on high dimensional
problems to check the success rate of the method.
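
A minimal sketch of the two modifications, component-wise noise and a mini-batch consensus point, might look as follows (particle counts, step size, and all other parameter values are illustrative assumptions, not the paper's tuned settings):

```python
import numpy as np

def cbo(f, dim, n_particles=100, batch=20, iters=500,
        lam=1.0, sigma=0.7, alpha=30.0, dt=0.05, seed=0):
    # Simplified consensus-based optimization sketch with component-wise
    # noise and a random mini-batch for the weighted average.
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    for _ in range(iters):
        # Mini-batch weighted average: only `batch` particles enter the
        # Gibbs-weighted consensus point, reducing its per-step cost.
        idx = rng.choice(n_particles, size=batch, replace=False)
        fx = f(X[idx])
        w = np.exp(-alpha * (fx - fx.min()))   # shifted for numerical stability
        x_bar = (w[:, None] * X[idx]).sum(axis=0) / w.sum()
        drift = X - x_bar
        # Component-wise geometric Brownian motion: each coordinate is
        # perturbed in proportion to its own distance from the consensus
        # point, rather than the isotropic norm ||X - x_bar||, which
        # removes the dimensionality dependence of the drift rate.
        noise = sigma * np.abs(drift) * rng.standard_normal(X.shape)
        X = X - lam * dt * drift + np.sqrt(dt) * noise
    fx = f(X)
    w = np.exp(-alpha * (fx - fx.min()))
    return (w[:, None] * X).sum(axis=0) / w.sum()
```

On a smooth test function the particles relax toward the weighted average, which concentrates near the global minimizer as alpha grows.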