    Some Adaptive First-order Methods for Variational Inequalities with Relatively Strongly Monotone Operators and Generalized Smoothness

    In this paper, we introduce some adaptive methods for solving variational inequalities with relatively strongly monotone operators. First, we focus on a modification of the adaptive numerical method recently proposed for the smooth case [1] to generalized smooth (Hölder-continuous) saddle point problems; the method has convergence rate estimates similar to those of accelerated methods. We motivate this approach and obtain theoretical results for the proposed method. Our second focus is the adaptation of widely used, recently proposed methods for solving variational inequalities with relatively strongly monotone operators. The key idea of our approach is to dispense with the well-known restart technique, which in some cases complicates the implementation of such algorithms for applied problems. Nevertheless, our algorithms achieve a convergence rate comparable to that of algorithms based on the restart technique. We also present numerical experiments demonstrating the effectiveness of the proposed methods. [1] Jin, Y., Sidford, A., & Tian, K. (2022). Sharper rates for separable minimax and finite sum optimization via primal-dual extragradient methods. arXiv preprint arXiv:2202.04640.
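    The abstract describes the restart-free adaptive schemes only at a high level. As a rough illustration of the kind of iteration involved, below is a minimal sketch of an extragradient-type method for a monotone operator with a backtracking estimate of the local operator constant; the names (adaptive_extragradient, F, proj) and the specific adaptivity rule are illustrative assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    def adaptive_extragradient(F, proj, x0, n_iters=500, L0=1.0):
        # Extragradient (Korpelevich-style) iteration with a simple
        # backtracking estimate L of the operator's local Lipschitz constant.
        # Illustrative sketch only; not the paper's method.
        x = np.asarray(x0, dtype=float).copy()
        L = L0
        for _ in range(n_iters):
            g = F(x)
            while True:
                step = 1.0 / L
                y = proj(x - step * g)        # extrapolation point
                # accept the step if F varies no faster than the estimate L
                if np.linalg.norm(F(y) - g) <= L * np.linalg.norm(y - x) + 1e-12:
                    break
                L *= 2.0                      # otherwise increase the estimate
            x = proj(x - step * F(y))         # corrected update
            L = max(L / 1.5, 1e-8)            # let the estimate decrease again
        return x

    # Example: the bilinear saddle point min_u max_v u^T A v, written as a
    # variational inequality with F(z) = (A v, -A^T u) over the unit ball.
    A = np.array([[2.0, 1.0], [0.0, 1.0]])

    def F(z):
        u, v = z[:2], z[2:]
        return np.concatenate([A @ v, -A.T @ u])

    def proj(z):                              # projection onto the unit ball
        n = np.linalg.norm(z)
        return z if n <= 1.0 else z / n

    z = adaptive_extragradient(F, proj, np.ones(4))
    ```

    The backtracking loop plays the role of the adaptivity: the step-size is driven by locally observed operator variation rather than by a global constant, which is the same motivation the abstract gives for avoiding restarts.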

    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum

    In this paper we propose a generalized sharp minimum condition, somewhat similar to the inexact oracle recently proposed by Devolder, Glineur, and Nesterov. The proposed approach makes it possible to extend the applicability of subgradient methods with the Polyak step-size to situations with inexact information about the minimum value, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogues of global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show that the proposed approach applies to strongly convex non-smooth problems, and we make an experimental comparison with the known optimal subgradient method for this class of problems. We also obtain results on the applicability of the proposed technique to some types of problems with relaxed convexity assumptions: the recently proposed notion of weak β-quasi-convexity and ordinary quasi-convexity. In addition, we study a generalization of the described technique to the setting where a δ-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, we find conditions under which the projection of the iterative sequence onto the feasible set can, in practice, be omitted. Comment: in Russian.
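    To make the Polyak step-size rule concrete, here is a minimal sketch of a projected subgradient method driven by an estimate of the minimum value. The names (polyak_subgradient, subgrad, proj, f_star_est) are illustrative assumptions, and the plain estimate of the minimum stands in for the paper's more refined analysis under a relaxed sharp-minimum condition.

    ```python
    import numpy as np

    def polyak_subgradient(f, subgrad, proj, x0, f_star_est, n_iters=1000):
        # Projected subgradient method with the Polyak step-size
        #   h_k = (f(x_k) - f*) / ||g_k||^2,
        # using a (possibly inexact) estimate f_star_est of the minimum value.
        # Illustrative sketch; not the paper's exact method.
        x = np.asarray(x0, dtype=float).copy()
        best_x, best_f = x.copy(), f(x)
        for _ in range(n_iters):
            g = subgrad(x)
            gap = f(x) - f_star_est
            gn2 = float(np.dot(g, g))
            if gap <= 0.0 or gn2 == 0.0:
                break                         # estimated optimality reached
            x = proj(x - (gap / gn2) * g)     # Polyak step
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        return best_x

    # Example: minimize the non-smooth f(x) = ||x - c||_1 over the box [0, 1]^n,
    # with an underestimate of f* illustrating the inexact-minimum setting.
    c = np.array([0.3, 1.5, -0.2])
    f = lambda x: np.sum(np.abs(x - c))
    subgrad = lambda x: np.sign(x - c)        # a subgradient of the l1 objective
    proj = lambda x: np.clip(x, 0.0, 1.0)
    x = polyak_subgradient(f, subgrad, proj, np.zeros(3), f_star_est=0.0)
    ```

    With an inexact (here, underestimated) value of the minimum, the step-size never vanishes exactly at the solution, so the iterates only reach a neighborhood of it; characterizing that neighborhood under relaxed sharp-minimum conditions is the kind of question the abstract addresses.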