
    Convergence properties of nonmonotone spectral projected gradient methods

    In a recent paper, a nonmonotone spectral projected gradient (SPG) method was introduced by Birgin et al. for the minimization of differentiable functions on closed convex sets, and the extensive numerical results presented there showed that the method is very efficient. In this paper, we give a more comprehensive theoretical analysis of the SPG method. In doing so, we remove various boundedness conditions assumed in existing results, such as boundedness from below of f, boundedness of {x_k}, or existence of an accumulation point of {x_k}. If ∇f(·) is uniformly continuous, we establish the convergence theory of this method and prove that the SPG method forces the sequence of projected gradients to zero. Moreover, we show under appropriate conditions that the SPG method has further encouraging convergence properties, such as global convergence of the sequence of iterates and finite termination. These results show that the SPG method is attractive in theory.
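
    As a concrete reference for the method under analysis, the following is a minimal sketch of the nonmonotone SPG iteration in Python, specialized to box constraints so that the projection is a simple clip. The function names, the safeguard bounds, and the memory length M = 10 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spg(f, grad, x0, lo, hi, M=10, gamma=1e-4, tol=1e-8, max_iter=1000):
    """Sketch of nonmonotone SPG on the box [lo, hi] (illustrative only)."""
    proj = lambda z: np.clip(z, lo, hi)          # projection onto the box
    x = proj(np.asarray(x0, dtype=float))
    hist = [f(x)]                                # last M values for the nonmonotone test
    alpha = 1.0                                  # initial spectral step length
    for _ in range(max_iter):
        g = grad(x)
        d = proj(x - alpha * g) - x              # spectral projected gradient direction
        if np.linalg.norm(d, np.inf) <= tol:     # projected gradient near zero: stop
            break
        lam, fref = 1.0, max(hist)               # nonmonotone (GLL) reference value
        while f(x + lam * d) > fref + gamma * lam * g.dot(d):
            lam *= 0.5                           # backtracking line search
        s = lam * d
        x_new = x + s
        y = grad(x_new) - g
        sy = s.dot(y)
        alpha = s.dot(s) / sy if sy > 0 else 1e10  # Barzilai-Borwein spectral step
        alpha = min(max(alpha, 1e-10), 1e10)     # safeguard against extreme steps
        x = x_new
        hist = (hist + [f(x)])[-M:]
    return x
```

    The max over the last M function values is exactly what makes the line search nonmonotone: individual steps may increase f, which is what the convergence analysis above must accommodate without boundedness assumptions.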

    Variable Metric Method for Unconstrained Multiobjective Optimization Problems

    In this paper, we propose a variable metric method for unconstrained multiobjective optimization problems (MOPs). First, a sequence of points is generated using different positive definite matrices in a generic framework. It is proved that accumulation points of the sequence are Pareto critical points. Then, without any convexity assumption, strong convergence is established for the proposed method. Moreover, we use a common matrix to approximate the Hessian matrices of all objective functions, and along this direction a new nonmonotone line search technique is proposed to achieve a local superlinear convergence rate. Finally, several numerical results demonstrate the effectiveness of the proposed method.
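
    The common-matrix idea can be made concrete through the standard variable-metric direction-finding subproblem d(x) = argmin_d max_i ∇f_i(x)ᵀd + ½ dᵀBd, solved here through its dual over the unit simplex. This is a hedged sketch under that standard formulation; the helper name multiobjective_direction and the use of scipy's general-purpose solver are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def multiobjective_direction(grads, B):
    """grads: list of objective gradients; B: common positive definite metric."""
    G = np.array(grads)                       # m x n matrix of objective gradients
    B_inv = np.linalg.inv(B)
    m = len(grads)

    def dual(lam):                            # dual objective: 0.5 ||sum_i lam_i g_i||^2 in B^{-1}
        g = lam @ G
        return 0.5 * g @ B_inv @ g

    cons = {"type": "eq", "fun": lambda lam: lam.sum() - 1.0}
    res = minimize(dual, np.full(m, 1.0 / m), bounds=[(0, 1)] * m, constraints=cons)
    return -B_inv @ (res.x @ G)               # common descent direction d(x)
```

    If d(x) = 0 the current point is Pareto critical; otherwise d(x) is a descent direction for every objective simultaneously, which is the quantity driving the convergence results above.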

    Barzilai-Borwein Descent Methods for Multiobjective Optimization Problems with Variable Trade-off Metrics

    The imbalances and conditioning of the objective functions influence the performance of first-order methods for multiobjective optimization problems (MOPs). The latter is related to the metric selected in the direction-finding subproblems. Unlike in single-objective optimization, it is impossible to capture the curvature of all objective functions with a single Hessian matrix. On the other hand, second-order methods for MOPs use a different metric for each objective in the direction-finding subproblems, leading to a high per-iteration cost. To balance per-iteration cost and curvature exploration, we propose a Barzilai-Borwein descent method with variable metrics (BBDMO_VM). In the direction-finding subproblems, we employ a variable metric to explore the curvature of all objectives. Subsequently, the Barzilai-Borwein method relative to the variable metric is applied to tune the objectives, which mitigates the effect of imbalances. We investigate the convergence behaviour of BBDMO_VM, confirming fast linear convergence for well-conditioned problems relative to the variable metric. In particular, we establish linear convergence for problems that involve some linear objectives. These convergence results emphasize the importance of metric selection, motivating us to approximate the trade-off of the Hessian matrices to better capture the geometry of the problem. Comparative numerical results confirm the efficiency of the proposed method, even when applied to large-scale and ill-conditioned problems.
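
    To illustrate the tuning step the abstract describes, here is a hedged sketch of per-objective Barzilai-Borwein step sizes measured in a metric B, computed from the displacement s = x_new − x_old and the per-objective gradient differences y_i. The exact formula used by BBDMO_VM may differ; the function name bb_steps and the safeguard bounds are illustrative assumptions.

```python
import numpy as np

def bb_steps(s, y_list, B, lo=1e-10, hi=1e10):
    """One BB (type 1) step size per objective, relative to the metric B."""
    Bs = B @ s
    alphas = []
    for y in y_list:
        sy = s @ y
        alpha = (s @ Bs) / sy if sy > 0 else 1.0   # alpha_i = <s, s>_B / <s, y_i>
        alphas.append(min(max(alpha, lo), hi))     # safeguard against extreme values
    return np.array(alphas)
```

    Rescaling each gradient as g_i / alpha_i in the direction-finding subproblem is one way to tune the objectives so that a single variable metric can serve all of them, mitigating the imbalances discussed above.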