
    Risk Minimization, Regret Minimization and Progressive Hedging Algorithms

    This paper begins with a study of the dual representations of risk and regret measures and their impact on modeling multistage decision making under uncertainty. A relationship between risk envelopes and regret envelopes is established using Lagrangian duality theory. This relationship opens the door to a decomposition scheme, called progressive hedging, for solving multistage risk minimization and regret minimization problems. In particular, the classical progressive hedging algorithm is modified to handle a new class of linkage constraints that arises from reformulations and other applications of risk and regret minimization problems. Numerical results are provided to show the efficiency of the progressive hedging algorithms.
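
    As a hedged illustration of the decomposition, the sketch below applies the classical progressive hedging iteration (scenario subproblem, aggregation, multiplier update) to a toy two-stage quadratic problem. The scenario data, penalty parameter and stopping rule are illustrative assumptions, not taken from the paper, and the modified linkage constraints introduced there are not reproduced.

```python
import numpy as np

# A minimal progressive hedging sketch for the toy problem
#   minimize E_s[ 0.5 * (x - a_s)^2 ]
# subject to the nonanticipativity constraint that every scenario copy of x
# agrees.  Scenario data `a`, penalty `rho`, and tolerance are illustrative.

def progressive_hedging(a, prob, rho=1.0, tol=1e-8, max_iter=500):
    a = np.asarray(a, dtype=float)
    prob = np.asarray(prob, dtype=float)
    x = a.copy()                 # scenario copies of the decision variable
    w = np.zeros_like(a)         # multipliers for the nonanticipativity constraint
    x_bar = prob @ x             # implementable (aggregated) decision
    for _ in range(max_iter):
        # Scenario subproblem  min_x 0.5*(x - a_s)^2 + w_s*x + rho/2*(x - x_bar)^2
        # has the closed-form minimizer below for this quadratic toy model.
        x = (a - w + rho * x_bar) / (1.0 + rho)
        x_bar_new = prob @ x
        w = w + rho * (x - x_bar_new)          # multiplier update
        if np.max(np.abs(x - x_bar_new)) < tol:
            return x_bar_new
        x_bar = x_bar_new
    return x_bar

print(progressive_hedging(a=[1.0, 2.0, 4.0], prob=[0.5, 0.3, 0.2]))
```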

    Global convergence of block proximal iteratively reweighted algorithm with extrapolation

    In this paper, we propose a proximal iteratively reweighted algorithm with extrapolation based on block coordinate updates, aimed at solving a class of optimization problems whose objective is the sum of a smooth, possibly nonconvex loss function and a general nonconvex regularizer with a special structure. The proposed algorithm can be used to solve the $\ell_p$ $(0<p<1)$ regularization problem by employing an updating strategy for the smoothing parameter. It is proved that there exists a nonzero extrapolation parameter such that the objective function is nonincreasing. Moreover, global convergence and a local convergence rate are obtained by using the Kurdyka-Łojasiewicz (KL) property of the objective function. Numerical experiments are given to indicate the efficiency of the proposed algorithm.
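
    The following sketch shows what one proximal iteratively reweighted step with extrapolation can look like for the smoothed $\ell_p$ $(0<p<1)$ problem, assuming a least-squares loss and a single block; the block-coordinate structure, the extrapolation-parameter choice and the smoothing-parameter schedule of the paper are replaced by simple illustrative defaults.

```python
import numpy as np

# Rough sketch of proximal iteratively reweighted steps with extrapolation for
#   min  0.5*||A x - b||^2 + lam * sum_i (|x_i| + eps)^p ,  0 < p < 1.
# The extrapolation weight `beta`, the smoothing schedule for `eps`, and all
# problem data are illustrative assumptions.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_irl1_extrapolated(A, b, lam=0.1, p=0.5, beta=0.3, eps=1.0, n_iter=300):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)            # extrapolation step
        grad = A.T @ (A @ y - b)
        w = lam * p * (np.abs(y) + eps) ** (p - 1.0)   # reweighting of the |x_i| terms
        x_prev, x = x, soft_threshold(y - grad / L, w / L)   # weighted prox step
        eps = max(0.5 * eps, 1e-6)             # shrink the smoothing parameter
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.round(prox_irl1_extrapolated(A, b)[:8], 3))
```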

    Obtaining properly Pareto optimal solutions of multiobjective optimization problems via the branch and bound method

    In multiobjective optimization, most branch and bound algorithms provide the decision maker with the whole Pareto front, from which the decision maker then selects a single solution. However, if the number of objectives is large, the number of candidate solutions may also be large, and it may be difficult for the decision maker to select the most interesting solution. As we argue in this paper, the most interesting solutions are the ones whose trade-offs are bounded. These solutions are usually known as the properly Pareto optimal solutions. We propose a branch-and-bound-based algorithm to provide the decision maker with so-called $\epsilon$-properly Pareto optimal solutions. The discarding test of the algorithm adopts a dominance relation induced by a convex polyhedral cone instead of the commonly used Pareto dominance relation. In this way, the proposed algorithm excludes from further exploration the subboxes which do not contain $\epsilon$-properly Pareto optimal solutions. We establish global convergence results for the proposed algorithm. Finally, the algorithm is applied to benchmark problems as well as to two real-world optimization problems.
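
    To make the cone-induced discarding idea concrete, the sketch below implements a dominance check with respect to one common polyhedral enlargement of the nonnegative orthant, $K_\epsilon = \{ d : d_i + \epsilon \sum_{j \neq i} d_j \ge 0 \text{ for all } i \}$, which bounds the admissible trade-offs; whether this is the exact cone used in the paper is an assumption.

```python
import numpy as np

# Hedged sketch of a dominance test induced by a convex polyhedral cone.
# K_eps = { d : d_i + eps * sum_{j != i} d_j >= 0 for all i } is one standard
# enlargement of the nonnegative orthant tied to epsilon-proper Pareto
# optimality; it is assumed here for illustration.

def cone_matrix(m, eps):
    # Rows of M encode the inequalities M d >= 0 that define K_eps.
    return np.eye(m) + eps * (np.ones((m, m)) - np.eye(m))

def cone_dominates(fu, fv, eps=0.1, tol=1e-12):
    """True if the point with objective vector fu dominates fv w.r.t. K_eps,
    i.e. fv - fu lies in K_eps and is nonzero."""
    d = np.asarray(fv, float) - np.asarray(fu, float)
    M = cone_matrix(d.size, eps)
    return bool(np.all(M @ d >= -tol) and np.linalg.norm(d) > tol)

# With eps > 0 a point can dominate another even if it is slightly worse in one
# objective, provided the gain in the others is large enough:
print(cone_dominates([0.0, 0.0], [1.0, -0.05]))   # True for eps = 0.1
print(cone_dominates([0.0, 0.0], [1.0, -0.5]))    # False: trade-off too large
```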

    The convergence rate of the accelerated proximal gradient algorithm for multiobjective optimization is faster than $O(1/k^2)$

    In this paper, we propose a fast proximal gradient algorithm for multiobjective optimization. It is proved that the convergence rate of the accelerated algorithm for multiobjective optimization developed by Tanabe et al. can be improved from $O(1/k^2)$ to $o(1/k^2)$ by introducing a different extrapolation term $\frac{k-1}{k+\alpha-1}$ with $\alpha>3$. Further, we establish an inexact version of the proposed algorithm for the case where the error term is additive, which attains the same convergence rate. Finally, the efficiency of the proposed algorithm is verified through numerical experiments.
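
    A minimal single-objective illustration of the key ingredient, the extrapolation term $\frac{k-1}{k+\alpha-1}$ with $\alpha>3$, is sketched below; the multiobjective subproblem of Tanabe et al.'s method is deliberately replaced by an ordinary proximal gradient step for an $\ell_1$-regularized least-squares problem, so only the momentum coefficient reflects the paper.

```python
import numpy as np

# Simplified single-objective sketch of accelerated proximal gradient with the
# extrapolation term (k - 1) / (k + alpha - 1), alpha > 3, applied to
#   min 0.5*||A x - b||^2 + lam*||x||_1.
# Problem data and parameter values are illustrative.

def accelerated_prox_grad(A, b, lam=0.1, alpha=4.0, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = x_prev = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        beta = (k - 1.0) / (k + alpha - 1.0)       # extrapolation term, alpha > 3
        y = x + beta * (x - x_prev)
        z = y - A.T @ (A @ y - b) / L              # gradient step at the extrapolated point
        x_prev, x = x, np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
b = A @ np.r_[np.ones(3), np.zeros(57)]
print(np.round(accelerated_prox_grad(A, b)[:5], 3))
```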

    Convergence properties of nonmonotone spectral projected gradient methods

    In a recent paper, a nonmonotone spectral projected gradient (SPG) method was introduced by Birgin et al. for the minimization of differentiable functions on closed convex sets, and the extensive results presented there showed that this method is very efficient. In this paper, we give a more comprehensive theoretical analysis of the SPG method. In doing so, we remove various boundedness conditions that are assumed in existing results, such as boundedness from below of $f$, boundedness of $\{x_k\}$, or the existence of accumulation points of $\{x_k\}$. If $\nabla f(\cdot)$ is uniformly continuous, we establish the convergence theory of this method and prove that the SPG method forces the sequence of projected gradients to zero. Moreover, we show under appropriate conditions that the SPG method has some encouraging convergence properties, such as global convergence of the sequence of iterates generated by the method and finite termination. These results show that the SPG method is attractive in theory.
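
    For reference, a compact sketch of the SPG iteration analyzed here (Barzilai-Borwein spectral steplength, projection onto the feasible set, and a nonmonotone line search over the last M function values) is given below for a box-constrained toy problem; the parameter values and safeguards are simplified assumptions.

```python
import numpy as np

# Sketch of the nonmonotone spectral projected gradient (SPG) method of
# Birgin et al. for min f(x) s.t. x in a box.  Objective, box, and parameter
# values are illustrative.

def spg_box(f, grad, lo, hi, x0, M=10, gamma=1e-4, n_iter=200):
    proj = lambda z: np.clip(z, lo, hi)
    x = proj(np.asarray(x0, float))
    g = grad(x)
    lam = 1.0                                   # initial spectral steplength
    history = [f(x)]                            # recent function values
    for _ in range(n_iter):
        d = proj(x - lam * g) - x               # projected gradient direction
        if np.linalg.norm(d) < 1e-10:
            break
        f_ref = max(history[-M:])               # nonmonotone reference value
        t = 1.0
        while f(x + t * d) > f_ref + gamma * t * (g @ d):
            t *= 0.5                            # backtracking line search
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        lam = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0   # BB steplength
        x, g = x_new, g_new
        history.append(f(x))
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^3 with c outside the box.
c = np.array([2.0, -1.0, 0.5])
f = lambda x: np.sum((x - c) ** 2)
grad = lambda x: 2.0 * (x - c)
print(spg_box(f, grad, lo=0.0, hi=1.0, x0=np.zeros(3)))
```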

    Improvements to steepest descent method for multi-objective optimization

    In this paper, we propose a simple yet efficient strategy for improving the multi-objective steepest descent method proposed by Fliege and Svaiter (Math Methods Oper Res, 2000, 51(3): 479--494). The core idea behind this strategy is to incorporate a positive modification parameter into the iterative formulation of the multi-objective steepest descent algorithm in a multiplicative manner. This modification parameter captures certain second-order information associated with the objective functions. We provide two distinct methods for calculating this modification parameter, leading to two improved multi-objective steepest descent algorithms tailored for solving multi-objective optimization problems. Under reasonable assumptions, we demonstrate the convergence of sequences generated by the first algorithm toward a critical point. Moreover, for strongly convex multi-objective optimization problems, we establish linear convergence to Pareto optimality of the sequence of generated points. The performance of the new algorithms is empirically evaluated through a computational comparison on a set of multi-objective test instances. The numerical results underscore that the proposed algorithms consistently outperform the original multi-objective steepest descent algorithm.
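
    The sketch below shows the bi-objective steepest descent iteration of Fliege and Svaiter with a multiplicative placeholder parameter theta at the point where, according to the abstract, the proposed modification enters the update; how theta is actually computed from second-order information is not reproduced here, and theta = 1 recovers the classical method.

```python
import numpy as np

# Bi-objective steepest descent sketch: the descent direction is the negative
# of the minimum-norm convex combination of the two gradients, scaled by a
# placeholder modification parameter `theta` (an assumption; theta = 1 gives
# the classical Fliege-Svaiter iteration).

def min_norm_combination(g1, g2):
    # Solve min_{lam in [0,1]} || lam*g1 + (1-lam)*g2 ||^2 in closed form.
    diff = g1 - g2
    denom = diff @ diff
    lam = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0) if denom > 1e-16 else 0.5
    return lam * g1 + (1.0 - lam) * g2

def biobjective_steepest_descent(fs, grads, x0, theta=1.0, n_iter=100):
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        g = min_norm_combination(grads[0](x), grads[1](x))
        if np.linalg.norm(g) < 1e-8:            # Pareto critical point reached
            break
        d = -theta * g                          # (modified) descent direction
        t, fx = 1.0, [f(x) for f in fs]
        # Armijo backtracking enforced simultaneously on both objectives.
        while any(f(x + t * d) > fi + 1e-4 * t * (g @ d) for f, fi in zip(fs, fx)):
            t *= 0.5
        x = x + t * d
    return x

# Toy instance: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2; the Pareto set is the
# segment between a and b.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
fs = [lambda x: np.sum((x - a) ** 2), lambda x: np.sum((x - b) ** 2)]
grads = [lambda x: 2 * (x - a), lambda x: 2 * (x - b)]
print(np.round(biobjective_steepest_descent(fs, grads, x0=[2.0, 2.0]), 3))
```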