57 research outputs found

    ADMM for monotone operators: convergence analysis and rates

    We propose in this paper a unifying scheme for several algorithms from the literature dedicated to solving monotone inclusion problems involving compositions with continuous linear operators in infinite dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions, as well as the classical ADMM numerical scheme for convex optimization problems and some of its variants, can be embedded in this unifying scheme. While the first part of the paper reports convergence results for the iterates, the second part is devoted to deriving convergence rates, obtained by combining variable metric techniques with strategies based on a suitable choice of dynamical step sizes.
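
    As a concrete illustration of the classical ADMM scheme mentioned above, here is a minimal, hedged sketch for the lasso problem; the example problem, the function name admm_lasso and the parameters rho and lam are my own illustrative assumptions and are not taken from the paper.

    # Minimal ADMM sketch for the (illustrative) lasso problem
    #   min_x 0.5*||A x - b||^2 + lam*||z||_1   subject to   x - z = 0
    import numpy as np

    def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u is the scaled dual variable
        AtA, Atb = A.T @ A, A.T @ b
        M = np.linalg.inv(AtA + rho * np.eye(n))          # cached factor for the x-update
        for _ in range(iters):
            x = M @ (Atb + rho * (z - u))                 # x-update: ridge-type solve
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # z-update: soft threshold
            u = u + x - z                                 # dual update on the constraint x = z
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(admm_lasso(A, b)[:5])        # recovers an approximately sparse solution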

    Continuous dynamics related to monotone inclusions and non-smooth optimization problems

    The aim of this survey is to present the most important techniques and tools from variational analysis used in first and second order dynamical systems of implicit type for solving monotone inclusions and non-smooth optimization problems. The differential equations are expressed by means of the resolvent (in the case of a maximally monotone set-valued operator) or the proximal operator (for non-smooth functions). The asymptotic analysis of the generated trajectories relies on Lyapunov theory, where an appropriate energy functional plays a decisive role. While most of the paper is devoted to monotone inclusions and convex optimization problems, we also present results for dynamical systems for solving non-convex optimization problems, where the Kurdyka-Lojasiewicz property is used. (Survey, 26 pages, to appear in Set-Valued and Variational Analysis.)
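
    As a small concrete companion to the resolvent/proximal building blocks described in the survey, the sketch below integrates, with an explicit Euler step, a first order proximal flow x'(t) = prox_{lam f}(x(t)) - x(t) for the toy function f = |.|; the toy function, the step size and the names prox_abs and proximal_flow are illustrative assumptions rather than material from the survey.

    import numpy as np

    def prox_abs(v, lam):
        # proximal operator of lam*|.| (soft-thresholding), i.e. the resolvent of lam*d|.|
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def proximal_flow(x0, lam=0.5, step=0.1, iters=100):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = x + step * (prox_abs(x, lam) - x)   # explicit Euler step on the flow
        return x

    print(proximal_flow([3.0, -0.2, 1.5]))          # the trajectory approaches 0, the minimizer of |.|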

    Second order forward-backward dynamical systems for monotone inclusion problems

    We begin by considering second order dynamical systems of the form $\ddot x(t) + \gamma(t)\dot x(t) + \lambda(t)B(x(t)) = 0$, where $B: {\cal H}\rightarrow{\cal H}$ is a cocoercive operator defined on a real Hilbert space ${\cal H}$, $\lambda:[0,+\infty)\rightarrow [0,+\infty)$ is a relaxation function and $\gamma:[0,+\infty)\rightarrow [0,+\infty)$ a damping function, both depending on time. We show existence and uniqueness of the generated trajectories as well as their weak asymptotic convergence to a zero of the operator $B$. The framework allows us to address from a similar perspective second order dynamical systems associated with the problem of finding zeros of the sum of a maximally monotone operator and a cocoercive one. This captures as a particular case the minimization of the sum of a nonsmooth convex function and a smooth convex one. Furthermore, we prove that when $B$ is the gradient of a smooth convex function, the value of the latter converges along the ergodic trajectory to its minimal value with a rate of ${\cal O}(1/t)$.
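
    Under stated assumptions, the sketch below discretizes the second order system above with B chosen as the gradient of a smooth convex quadratic (hence cocoercive); the quadratic, the constant damping and relaxation functions, the step size and the function name second_order_flow are illustrative choices, not the paper's setting.

    import numpy as np

    def second_order_flow(B, x0, gamma=lambda t: 3.0, lam=lambda t: 1.0, h=0.01, steps=5000):
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)                        # velocity x'(t)
        for k in range(steps):
            t = k * h
            a = -gamma(t) * v - lam(t) * B(x)       # acceleration prescribed by the ODE
            v = v + h * a                           # semi-implicit Euler update
            x = x + h * v
        return x

    Q = np.diag([1.0, 10.0])
    B = lambda x: Q @ x                             # gradient of the convex quadratic 0.5*x^T Q x
    print(second_order_flow(B, [5.0, -3.0]))        # the trajectory approaches the zero of B at the origin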

    Proximal-gradient algorithms for fractional programming

    In this paper we propose two proximal-gradient algorithms for fractional programming problems in real Hilbert spaces, where the numerator is a proper, convex and lower semicontinuous function and the denominator is a smooth function, either concave or convex. In the iterative schemes, we perform a proximal step with respect to the nonsmooth numerator and a gradient step with respect to the smooth denominator. In the case of a concave denominator, the algorithm has the particularity that it generates sequences which approach both the set of (global) optimal solutions and the optimal objective value of the underlying fractional programming problem. In the case of a convex denominator, the numerical scheme approaches the set of critical points of the objective function, provided the latter satisfies the Kurdyka-\L{}ojasiewicz property.
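
    The sketch below shows one possible proximal-gradient step of this flavor on a one-dimensional toy problem: a gradient step with respect to the smooth concave denominator followed by a proximal step with respect to the nonsmooth numerator, built around the stationarity condition (f(x)/g(x)) * grad g(x) belonging to the subdifferential of f at x. The toy functions f(x) = |x| + 1 and g(x) = 10 - 0.5*x^2, the constant step size and the function names are illustrative assumptions; the paper's precise scheme and parameter rules are not reproduced here.

    import numpy as np

    def prox_abs(v, gamma):
        return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)   # prox of gamma*|.|

    def fractional_prox_grad(x0, gamma=0.5, iters=50):
        x = float(x0)
        for _ in range(iters):
            f, g, grad_g = abs(x) + 1.0, 10.0 - 0.5 * x**2, -x
            theta = f / g                               # current value of the fractional objective
            y = x + gamma * theta * grad_g              # gradient step on the (concave) denominator
            x = float(prox_abs(y, gamma))               # proximal step on the nonsmooth numerator
        return x

    print(fractional_prox_grad(1.5))   # approaches 0, the minimizer of (|x|+1)/(10-0.5*x^2)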

    Approaching the solving of constrained variational inequalities via penalty term-based dynamical systems

    We investigate the existence and uniqueness of (locally) absolutely continuous trajectories of a penalty term-based dynamical system associated with a constrained variational inequality expressed as a monotone inclusion problem. Relying on Lyapunov analysis and on the continuous ergodic version of the celebrated Opial Lemma, we prove weak ergodic convergence of the orbits to a solution of the constrained variational inequality under investigation. If one of the operators involved satisfies stronger monotonicity properties, then strong convergence of the trajectories can be shown.
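
    In the spirit of the penalty term-based dynamical system discussed above, the sketch below integrates, with an explicit Euler scheme, a flow of the form x'(t) = -A(x(t)) - beta(t) * grad Psi(x(t)) with an increasing penalty parameter, where the zeros of grad Psi form the constraint set; the operator A, the penalty Psi, the schedule for beta and the step size are illustrative assumptions, not the paper's setting.

    import numpy as np

    a = np.array([2.0, 0.0])
    A = lambda x: x - a                                      # strongly monotone operator
    grad_Psi = lambda x: (x[0] + x[1] - 1.0) * np.ones(2)    # penalizes C = {x : x1 + x2 = 1}

    def penalty_flow(x0, h=1e-3, steps=20000):
        x = np.asarray(x0, dtype=float)
        for k in range(steps):
            beta = 1.0 + k * h                               # slowly increasing penalty weight
            x = x + h * (-A(x) - beta * grad_Psi(x))         # explicit Euler step
        return x

    print(penalty_flow([0.0, 0.0]))   # approaches (1.5, -0.5), the solution P_C(a) of the toy VI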

    Forward-Backward and Tseng's Type Penalty Schemes for Monotone Inclusion Problems

    We deal with monotone inclusion problems of the form $0\in Ax+Dx+N_C(x)$ in real Hilbert spaces, where $A$ is a maximally monotone operator, $D$ a cocoercive operator and $C$ the nonempty set of zeros of another cocoercive operator. We propose a forward-backward penalty algorithm for solving this problem which extends the one proposed by H. Attouch, M.-O. Czarnecki and J. Peypouquet in [3]. The condition guaranteeing the weak ergodic convergence of the sequence of iterates generated by the proposed scheme is formulated by means of the Fitzpatrick function associated with the maximally monotone operator that describes the set $C$. In the second part we introduce a forward-backward-forward algorithm for monotone inclusion problems having the same structure, but this time replacing the cocoercivity hypotheses with Lipschitz continuity conditions. The latter penalty-type algorithm opens the gate to handling monotone inclusion problems with more complicated structures, for instance those involving compositions of maximally monotone operators with linear continuous ones.
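
    A hedged sketch of a forward-backward penalty iteration of the general shape x_{n+1} = J_{lam_n A}(x_n - lam_n D(x_n) - lam_n beta_n B(x_n)) is given below for a one-dimensional toy choice A = N_[0,inf) (resolvent = projection), D(x) = x - 3 and B(x) = x - 1, so that C = zer B = {1}; these operators and the parameter schedules are illustrative assumptions and do not reproduce the Fitzpatrick-function-based conditions of the paper.

    def fb_penalty(x0=5.0, iters=2000):
        x = x0
        for n in range(1, iters + 1):
            beta = 1.0 + 0.1 * n                              # penalty parameter growing to infinity
            lam = 0.5 / (1.0 + beta)                          # step size kept small relative to beta
            y = x - lam * (x - 3.0) - lam * beta * (x - 1.0)  # two forward (explicit) steps
            x = max(y, 0.0)                                   # backward step: resolvent of N_[0,inf)
        return x

    print(fb_penalty())   # approaches 1, the unique solution of the toy inclusion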

    A Dynamical Approach to Two-Block Separable Convex Optimization Problems with Linear Constraints

    The aim of this manuscript is to approach, by means of first order differential equations/inclusions, convex programming problems with two-block separable linear constraints and objectives, where at least one component of the objective is assumed to be strongly convex. Each block of the objective contains a further smooth convex function. We investigate the proposed dynamical system and prove that its trajectories asymptotically converge to a saddle point of the Lagrangian of the convex optimization problem. Time discretization of the dynamical system leads to the alternating minimization algorithm AMA as well as to its proximal variant recently introduced in the literature.
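
    The following is a hedged sketch of the alternating minimization algorithm AMA mentioned above, applied to the illustrative two-block problem min 0.5*||x - c||^2 + lam*||z||_1 subject to x - z = 0, whose first block is strongly convex; the concrete problem, the step size rho and the helper names are my own assumptions, not taken from the manuscript.

    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ama(c, lam=1.0, rho=1.0, iters=200):
        c = np.asarray(c, dtype=float)
        p = np.zeros_like(c)                     # multiplier for the constraint x = z
        for _ in range(iters):
            x = c - p                            # minimize the strongly convex block plus <p, x>
            z = soft(x + p / rho, lam / rho)     # augmented minimization in z
            p = p + rho * (x - z)                # dual update
        return x, z

    x, z = ama([3.0, -0.4, 1.2])
    print(x, z)   # both blocks approach the soft-thresholding of c with threshold lam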

    An inertial Tseng's type proximal algorithm for nonsmooth and nonconvex optimization problems

    We investigate the convergence of a forward-backward-forward proximal-type algorithm with inertial and memory effects when minimizing the sum of a nonsmooth function and a smooth one in the absence of convexity. Convergence is obtained provided that an appropriate regularization of the objective satisfies the Kurdyka-\L{}ojasiewicz inequality, which is fulfilled, for instance, by semi-algebraic functions.
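
    The sketch below combines a Tseng-type forward-backward-forward step with a simple heavy-ball inertial extrapolation on a small nonconvex, semi-algebraic toy objective g(x) + h(x) with g(x) = 0.1*|x| and h(x) = 0.25*(x^2 - 1)^2; the placement of the inertial term, the objective and all parameters are illustrative assumptions and do not reproduce the paper's exact algorithm or its convergence conditions.

    import numpy as np

    grad_h = lambda x: x**3 - x                               # gradient of the smooth nonconvex part
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def inertial_fbf(x0=2.0, lam=0.05, alpha=0.3, iters=500):
        x_prev, x = x0, x0
        for _ in range(iters):
            y = x + alpha * (x - x_prev)                      # inertial extrapolation
            p = soft(y - lam * grad_h(y), lam * 0.1)          # forward-backward step
            x_prev, x = x, p - lam * (grad_h(p) - grad_h(y))  # forward correction step
        return x

    print(inertial_fbf())   # approaches a critical point of g + h near 0.95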

    A hybrid proximal-extragradient algorithm with inertial effects

    We incorporate inertial terms into the hybrid proximal-extragradient algorithm and investigate the convergence properties of the resulting iterative scheme, which is designed for finding the zeros of a maximally monotone operator in real Hilbert spaces. The convergence analysis relies on extended Fej\'er monotonicity techniques combined with the celebrated Opial Lemma. We also show that the classical hybrid proximal-extragradient algorithm and the inertial versions of the proximal point, forward-backward and forward-backward-forward algorithms can be embedded in the framework of the proposed iterative scheme.
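
    As a concrete instance of one of the special cases named above, the sketch below implements an inertial proximal point iteration w_n = x_n + alpha*(x_n - x_{n-1}), x_{n+1} = (I + lam*A)^{-1} w_n for a monotone linear operator; the operator (a linear map with positive semidefinite symmetric part), the step size and the inertial parameter are illustrative assumptions.

    import numpy as np

    M = np.array([[0.1, 1.0],
                  [-1.0, 0.1]])                  # maximally monotone linear operator
    lam, alpha = 1.0, 0.2
    J = np.linalg.inv(np.eye(2) + lam * M)       # resolvent of lam*M

    def inertial_prox_point(x0, iters=200):
        x_prev = x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            w = x + alpha * (x - x_prev)         # inertial extrapolation
            x_prev, x = x, J @ w                 # resolvent (backward) step
        return x

    print(inertial_prox_point([4.0, -2.0]))      # approaches (0, 0), the unique zero of M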

    A dynamical system associated with the fixed points set of a nonexpansive operator

    We study the existence and uniqueness of (locally) absolutely continuous trajectories of a dynamical system governed by a nonexpansive operator. The weak convergence of the orbits to a fixed point of the operator is investigated by relying on Lyapunov analysis. We also show a convergence rate of $o(1/\sqrt{t})$ for the fixed point residual of the trajectory of the dynamical system. We apply the results to dynamical systems associated with the problem of finding the zeros of the sum of a maximally monotone operator and a cocoercive one. Several dynamical systems from the literature turn out to be particular instances of this general approach.
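
    An explicit Euler discretization of the flow x'(t) = lam(t)*(T(x(t)) - x(t)) yields a Krasnoselskii-Mann-type iteration, sketched below with T chosen as the projection onto the closed unit ball (so that its fixed point set is exactly that ball); this choice of T, the step size and the constant relaxation function are illustrative assumptions rather than the paper's setting.

    import numpy as np

    def T(x):
        n = np.linalg.norm(x)
        return x if n <= 1.0 else x / n          # nonexpansive: projection onto the unit ball

    def km_flow(x0, h=0.1, lam=lambda t: 1.0, steps=500):
        x = np.asarray(x0, dtype=float)
        for k in range(steps):
            x = x + h * lam(k * h) * (T(x) - x)  # explicit Euler / Krasnoselskii-Mann step
        return x

    print(km_flow([3.0, 4.0]))   # approaches a fixed point of T on the unit sphere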