
    Variational Principles for Monotone and Maximal Bifunctions

    2000 Mathematics Subject Classification: 49J40, 49J35, 58E30, 47H05. We establish variational principles for monotone and maximal bifunctions of BrÞndsted-Rockafellar type by using our characterization of the maximality of bifunctions in reflexive Banach spaces. As applications, we give an existence result for saddle points of convex-concave functions and solve an approximate inclusion governed by a maximal monotone operator.
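    The abstract does not define these notions; as a reading aid, the standard setup is sketched below in LaTeX (an assumption on our part: the paper's precise setting may differ).

```latex
% Standard notions (assumed here; the paper's precise setting may differ).
% C is a nonempty closed convex subset of a Banach space X.
\[
  F : C \times C \to \mathbb{R}, \qquad
  F \ \text{is monotone} \iff F(x,y) + F(y,x) \le 0 \quad \forall\, x, y \in C .
\]
% F is maximal monotone when it admits no proper monotone extension.
% The associated equilibrium problem: find \bar{x} \in C such that
\[
  F(\bar{x}, y) \ge 0 \quad \forall\, y \in C .
\]
```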

    Combining Strong Convergence, Values Fast Convergence and Vanishing of Gradients for a Proximal Point Algorithm Using Tikhonov Regularization in a Hilbert Space

    In a real Hilbert space H, given any convex differentiable function f whose solution set argmin_H f is nonempty, we consider the proximal algorithm x_{k+1} = prox_{ÎČ_k f}(d x_k), where 0 < d < 1 and (ÎČ_k) is a nondecreasing sequence. Under suitable assumptions on (ÎČ_k), we show that the objective values along the generated sequence converge to the global minimum at the rate O(1/ÎČ_k), and that the sequence converges strongly to the minimum-norm element of argmin_H f; we also obtain a convergence rate for the gradient toward zero. Afterward, we extend these results to nonsmooth convex functions with extended real values.
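    As a rough illustration of the scheme above, here is a minimal Python sketch. The test function f(x) = 0.5 (a·x − b)², the schedule ÎČ_k = 1 + 0.1 k, and the value d = 0.99 are our own choices (not from the paper), picked so that the prox has a closed form and argmin f is a whole hyperplane, which makes strong convergence to the minimum-norm minimizer visible.

```python
import numpy as np

# Sketch of x_{k+1} = prox_{beta_k f}(d * x_k) with 0 < d < 1 and a
# nondecreasing sequence (beta_k), as in the abstract. The test function
# f(x) = 0.5 * (a @ x - b)**2 is our own choice: argmin f is a hyperplane,
# whose minimum-norm element is x* = b * a / ||a||^2.

def prox_quadratic(v, a, b, beta):
    """Closed-form prox of beta*f for f(x) = 0.5 * (a @ x - b)**2."""
    return v - beta * (a @ v - b) / (1.0 + beta * (a @ a)) * a

rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal(n)
b = 1.0
x = rng.standard_normal(n)

d = 0.99                        # 0 < d < 1, as in the abstract
x_star = b * a / (a @ a)        # minimum-norm minimizer

for k in range(1, 5001):
    beta_k = 1.0 + 0.1 * k      # nondecreasing (assumed schedule)
    x = prox_quadratic(d * x, a, b, beta_k)

print("f(x_k)       =", 0.5 * (a @ x - b) ** 2)          # -> min f = 0
print("||x_k - x*|| =", np.linalg.norm(x - x_star))      # -> 0
```

    As ÎČ_k → ∞ the prox approaches the projection onto argmin f, while the factor d < 1 plays the role of the Tikhonov shrinkage; together they single out the minimum-norm solution as the fixed point of the iteration.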

    Fast Convex Optimization via a Third-Order in Time Evolution Equation

    In a Hilbert space H, we develop fast convex optimization methods based on a third-order in time evolution system. The function to minimize f : H → R is convex and continuously differentiable, with argmin f ≠ ∅, and enters the dynamic via its gradient. Using Lyapunov analysis and temporal scaling techniques, we show convergence of the values at the rate 1/t³, and obtain convergence of the trajectories towards optimal solutions. When f is strongly convex, an exponential rate of convergence is obtained. We complete the study of the continuous dynamic by introducing a damping term induced by the Hessian of f, which allows the oscillations to be controlled and attenuated. Then, we analyze the convergence of the proximal-based algorithms obtained by temporal discretization of this system, and obtain similar convergence rates. The algorithmic results are valid for a general convex, lower semicontinuous, and proper function f : H → R âˆȘ {+∞}.
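    The abstract does not write out the third-order system itself, so the sketch below integrates a generic third-order-in-time gradient flow on a strongly convex quadratic, purely to illustrate the kind of dynamic being discretized; the coefficients a1, a2, the step size, and the test function are all assumptions.

```python
import numpy as np

# Illustrative only: a generic third-order-in-time gradient flow
#     x'''(t) + a2 * x''(t) + a1 * x'(t) + grad_f(x(t)) = 0,
# reduced to first order in (x, x', x'') and integrated by explicit Euler.

Q = np.diag([1.0, 10.0])
grad_f = lambda x: Q @ x    # f(x) = 0.5 * x @ Q @ x, with min f = 0 at x = 0

# Damping chosen so the linearization is stable
# (Routh-Hurwitz for s^3 + a2 s^2 + a1 s + lam: need a2 * a1 > lam_max = 10).
a1, a2 = 4.0, 4.0
h = 1e-3

x = np.array([2.0, -1.0])
v = np.zeros(2)             # x'
w = np.zeros(2)             # x''

for _ in range(200_000):    # integrate up to T = 200
    x, v, w = (x + h * v,
               v + h * w,
               w + h * (-a2 * w - a1 * v - grad_f(x)))

print("f(x(T)) =", 0.5 * x @ Q @ x)   # decays toward min f = 0
```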

    Accelerated gradient methods combining Tikhonov regularization with geometric damping driven by the Hessian

    In a Hilbert setting, for convex differentiable optimization, we consider accelerated gradient dynamics combining Tikhonov regularization with Hessian-driven damping. The Tikhonov regularization parameter is assumed to tend to zero as time tends to infinity, which preserves equilibria. The presence of the Tikhonov regularization term induces a strong convexity property which vanishes asymptotically. To take advantage of the exponential convergence rates attached to the heavy ball method in the strongly convex case, we consider the inertial dynamic where the viscous damping coefficient is taken proportional to the square root of the Tikhonov regularization parameter, and therefore also converges towards zero. Moreover, the dynamic involves a geometric damping driven by the Hessian of the function to be minimized, which induces a significant attenuation of the oscillations. Under an appropriate tuning of the parameters, based on Lyapunov analysis, we show that the trajectories simultaneously enjoy several remarkable properties: fast convergence of values, fast convergence of gradients towards zero, and strong convergence to the minimum norm minimizer. This study extends a previous paper by the authors where similar issues were examined but without Hessian-driven damping. Comment: 28 pages, 3 figures.
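    A hedged numerical sketch of dynamics matching this description follows; the exact system, the schedule Δ(t), and the parameters ÎŽ, ÎČ below are our assumptions, not taken from the paper.

```python
import numpy as np

# Assumed form of the dynamic (consistent with the abstract's description):
#   x'' + delta*sqrt(eps(t))*x' + beta*Hess_f(x) x' + grad_f(x) + eps(t)*x = 0,
# with Tikhonov parameter eps(t) -> 0. For f(x) = 0.5*(a @ x - b)**2, argmin f
# is a hyperplane and the vanishing Tikhonov term steers the trajectory toward
# its minimum-norm point x* = b * a / ||a||^2.

a = np.array([1.0, 2.0])
b = 1.0
grad_f = lambda x: (a @ x - b) * a
hess_f_v = lambda v: (a @ v) * a     # Hessian-vector product; Hess f = a a^T

delta, beta = 3.0, 1.0               # assumed damping parameters
eps = lambda t: 1.0 / t              # assumed Tikhonov schedule

h, t = 1e-3, 1.0
x, v = np.array([2.0, 2.0]), np.zeros(2)

for _ in range(1_000_000):           # explicit Euler up to T ~ 1000
    acc = -(delta * np.sqrt(eps(t)) * v + beta * hess_f_v(v)
            + grad_f(x) + eps(t) * x)
    x, v, t = x + h * v, v + h * acc, t + h

x_star = b * a / (a @ a)
print("||x(T) - x*|| =", np.linalg.norm(x - x_star))   # small: strong convergence
```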

    Fast convergence of dynamical ADMM via time scaling of damped inertial dynamics

    In this paper, we propose, in a Hilbertian setting, a second-order time-continuous dynamical system with fast convergence guarantees for solving structured convex minimization problems with an affine constraint. The system is associated with the augmented Lagrangian formulation of the minimization problem. The corresponding dynamics brings into play three general time-varying parameters, each with specific properties, respectively associated with viscous damping, extrapolation, and temporal scaling. By appropriately adjusting these parameters, we develop a Lyapunov analysis which provides fast convergence of the values and of the feasibility gap. These results naturally pave the way for corresponding accelerated ADMM algorithms, obtained by temporal discretization.
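    To make these ingredients concrete, the sketch below discretizes a plain inertial augmented-Lagrangian dynamic for min f(x) subject to Ax = b; the paper's extrapolation and temporal-scaling parameters are omitted, and the test problem, the damping α/t, and the step size are our own choices.

```python
import numpy as np

# Illustrative discretization (not the paper's scheme): for min f(x) s.t. Ax = b,
# with L_rho(x, lam) = f(x) + lam @ (A x - b) + (rho/2) * ||A x - b||^2,
#   primal: x'' + (alpha/t) x' + grad_x L_rho(x, lam) = 0   (viscous damping),
#   dual:   lam' = rho * (A x - b).

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 4))
b = rng.standard_normal(2)
grad_f = lambda x: x                 # test objective f(x) = 0.5 * ||x||^2

rho, alpha, h = 1.0, 3.0, 1e-3
x, v, lam, t = np.zeros(4), np.zeros(4), np.zeros(2), 1.0

for _ in range(1_000_000):           # semi-implicit Euler up to T ~ 1000
    r = A @ x - b                    # feasibility gap
    acc = -(alpha / t) * v - (grad_f(x) + A.T @ (lam + rho * r))
    v = v + h * acc                  # velocity first: more stable on oscillators
    x = x + h * v
    lam = lam + h * rho * r
    t += h

print("||A x - b|| =", np.linalg.norm(A @ x - b))   # feasibility gap, -> 0
print("f(x)        =", 0.5 * x @ x)
```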
    • 

    corecore