The proximal point method for locally Lipschitz functions in multiobjective optimization with application to the compromise problem
This paper studies the constrained multiobjective optimization problem of finding Pareto critical points of vector-valued functions. The proximal point method considered by Bonnel, Iusem, and Svaiter [SIAM J. Optim., 15 (2005), pp. 953–970] is extended to locally Lipschitz functions in the finite-dimensional multiobjective setting. To this end, a new (scalarization-free) approach to the convergence analysis of the method is proposed, in which the first-order optimality condition of the scalarized problem is replaced by a necessary condition for weak Pareto points of a multiobjective problem. As a consequence, the method can be analyzed without any convexity assumption on the constraint sets that determine the vectorial improvement steps. This is important for applications, for example to extend the well-known compromise problem in management sciences and game theory to a dynamic setting.
Funding: Fundação de Amparo à Pesquisa do Estado de Goiás; Conselho Nacional de Desenvolvimento Científico e Tecnológico; Coordenação de Aperfeiçoamento de Pessoal de Nível Superior; Ministerio de Economía y Competitividad; Agence nationale de la recherche
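The classical (scalar) proximal point iteration that the paper generalizes can be sketched in a few lines. This is a minimal illustration on a nonsmooth, locally Lipschitz scalar function, not the paper's multiobjective method; all names and the brute-force grid minimization are illustrative choices:

```python
# Minimal sketch of the classical proximal point iteration,
# x_{k+1} = argmin_x f(x) + (alpha/2) * (x - x_k)^2,
# which the paper extends to vector-valued, locally Lipschitz objectives.
# The grid-search subproblem solver is for illustration only.

def prox_step(f, x_k, alpha, lo=-10.0, hi=10.0, n=20001):
    """One proximal step: argmin over a uniform grid of
    f(x) + (alpha/2) * (x - x_k)**2."""
    best_x, best_v = x_k, f(x_k)  # prox term vanishes at x = x_k
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        v = f(x) + 0.5 * alpha * (x - x_k) ** 2
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# A nonsmooth, locally Lipschitz objective with minimizer at the kink x* = 1
f = lambda x: abs(x - 1.0) + 0.1 * x * x
x = 5.0
for _ in range(30):
    x = prox_step(f, x, alpha=1.0)
print(round(x, 2))  # iterates settle at the minimizer 1.0
```

Each step only needs to solve a regularized (strongly convex near `x_k`) subproblem, which is what makes the iteration attractive for nonsmooth objectives.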
Multiobjective Reinforcement Learning for Reconfigurable Adaptive Optimal Control of Manufacturing Processes
In industrial applications of adaptive optimal control, multiple conflicting objectives often have to be considered. The weights (relative importance) of the objectives are often not known at control design time and can change with changing production conditions and requirements. In this work, a novel model-free multiobjective reinforcement learning approach for adaptive optimal control of manufacturing processes is proposed. The approach enables sample-efficient learning across sequences of control configurations, each given by a particular set of objective weights.Comment: Conference preprint, 978-1-5386-5925-0/18/$31.00 © 2018 IEEE
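One common way to realize model-free multiobjective reinforcement learning with changeable objective weights is to learn a *vector* of action-values, one entry per objective, and scalarize only at action-selection time, so the learned values can be reused when the weights change. The sketch below is a generic tabular illustration of that idea, not the paper's specific algorithm; all sizes and names are mine:

```python
# Illustrative sketch (not the paper's algorithm): tabular Q-learning with a
# vector of rewards, scalarized by the current objective weights only when
# choosing actions. Changing the weights does not discard the learned Q.

n_states, n_actions, n_obj = 4, 2, 2
Q = [[[0.0] * n_obj for _ in range(n_actions)] for _ in range(n_states)]

def select_action(s, weights):
    # greedy w.r.t. the weighted sum of the objective-wise Q-values
    return max(range(n_actions),
               key=lambda a: sum(w * q for w, q in zip(weights, Q[s][a])))

def update(s, a, reward_vec, s_next, weights, alpha=0.1, gamma=0.9):
    # one TD update per objective, toward the action greedy under `weights`
    a_next = select_action(s_next, weights)
    for i in range(n_obj):
        target = reward_vec[i] + gamma * Q[s_next][a_next][i]
        Q[s][a][i] += alpha * (target - Q[s][a][i])

# toy transition: state 0, action 1 yields reward (1.0, 0.0), moves to state 1
update(0, 1, (1.0, 0.0), 1, weights=(0.5, 0.5))
print(Q[0][1])  # first objective's value moved toward 1.0: [0.1, 0.0]
```

Reusing the vector-valued Q across weight configurations is one plausible reading of the "sequences of control configurations" mentioned in the abstract.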
Methods for many-objective optimization: an analysis
Decomposition-based methods are often cited as the solution to problems arising in many-objective optimization. These methods employ a scalarizing function to reduce a many-objective problem to a set of single-objective problems, whose solutions yield a good approximation of the set of optimal solutions, commonly referred to as the Pareto front. In this work we explore the implications of using decomposition-based methods over Pareto-based methods from a probabilistic point of view. Namely, we investigate whether there is an advantage to using a decomposition-based method, for example one using the Chebyshev scalarizing function, over Pareto-based methods.
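The weighted Chebyshev scalarizing function mentioned above takes the form max_i w_i |f_i(x) - z*_i| with weights w and ideal point z*; minimizing it for many weight vectors traces out the Pareto front, including concave regions that linear scalarization misses. A minimal sketch (all names illustrative):

```python
# Hedged sketch of the weighted Chebyshev scalarizing function used by
# decomposition-based methods; smaller values are better (minimization).

def chebyshev(objs, weights, ideal):
    """max_i w_i * |f_i(x) - z*_i| for one objective vector."""
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

# Two candidate solutions of a bi-objective minimization problem
a = (0.2, 0.8)   # strong on objective 1, weak on objective 2
b = (0.5, 0.5)   # balanced
ideal = (0.0, 0.0)

# Weights emphasizing objective 1 prefer a...
print(chebyshev(a, (0.9, 0.1), ideal) < chebyshev(b, (0.9, 0.1), ideal))
# ...while equal weights prefer the balanced solution b.
print(chebyshev(b, (0.5, 0.5), ideal) < chebyshev(a, (0.5, 0.5), ideal))
```

Each weight vector thus defines one single-objective subproblem, which is exactly the decomposition the abstract describes.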
Optimal Scalarizations for Sublinear Hypervolume Regret
Scalarization is a general technique that can be deployed in any multiobjective setting to reduce multiple objectives into one, as done recently in RLHF for training reward models that align with human preferences. Yet some have dismissed this classical approach because linear scalarizations are known to miss concave regions of the Pareto frontier. To that end, we aim to find simple non-linear scalarizations that can explore a diverse set of objectives on the Pareto frontier, as measured by the dominated hypervolume. We show that hypervolume scalarizations with uniformly random weights are surprisingly optimal for provably minimizing the hypervolume regret, achieving an optimal sublinear regret bound, with matching lower bounds that preclude any algorithm from doing better asymptotically. As a theoretical case study, we consider the multiobjective stochastic linear bandits problem and demonstrate that, by exploiting the sublinear regret bounds of the hypervolume scalarizations, we can derive a novel non-Euclidean analysis that produces improved hypervolume regret bounds. We support our theory with the strong empirical performance of simple hypervolume scalarizations, which consistently outperform both the linear and Chebyshev scalarizations, as well as standard multiobjective algorithms in Bayesian optimization, such as EHVI.Comment: ICML 2023 Workshop
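In standard presentations, the hypervolume scalarization for a maximization problem with reference point at the origin is s_λ(y) = min_i (y_i/λ_i)^k; averaging max_{y∈Y} s_λ(y) over weights λ drawn uniformly from the positive unit sphere recovers the dominated hypervolume up to the known constant c_k = π^{k/2} / (2^k Γ(k/2 + 1)). A Monte Carlo sketch of this identity (function names are mine, not from the paper):

```python
import math
import random

def hv_scalarization(ys, lam, k):
    """max_{y in Y} min_i (y_i / lam_i)^k  (maximization convention,
    reference point at the origin, all coordinates positive)."""
    return max(min(y[i] / lam[i] for i in range(k)) for y in ys) ** k

def estimate_hypervolume(ys, k, n_samples=100000, seed=0):
    """Monte Carlo estimate: HV = c_k * E_lam[ s_lam(Y) ] with lam uniform
    on the positive unit sphere (normalized absolute Gaussians)."""
    rng = random.Random(seed)
    c_k = math.pi ** (k / 2) / (2 ** k * math.gamma(k / 2 + 1))
    total = 0.0
    for _ in range(n_samples):
        g = [abs(rng.gauss(0, 1)) for _ in range(k)]
        norm = math.sqrt(sum(v * v for v in g))
        lam = [v / norm for v in g]
        total += hv_scalarization(ys, lam, k)
    return c_k * total / n_samples

# Single point (1, 1): its dominated hypervolume w.r.t. the origin is exactly 1
print(round(estimate_hypervolume([(1.0, 1.0)], k=2), 2))  # ~1.0
```

Because each draw of λ turns the set-valued hypervolume objective into an ordinary scalar maximization, uniformly random weights let single-objective machinery (e.g. bandit or Bayesian-optimization acquisitions) drive down hypervolume regret, which is the mechanism the abstract exploits.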