2,070 research outputs found

    A new, globally convergent Riemannian conjugate gradient method

    This article deals with the conjugate gradient method on a Riemannian manifold, with a focus on global convergence analysis. Existing conjugate gradient algorithms on a manifold endowed with a vector transport require the assumption that the vector transport does not increase the norm of tangent vectors in order to guarantee that the generated sequences are globally convergent. In this article, the notion of a scaled vector transport is introduced to improve the algorithm so that the generated sequences have a global convergence property under a relaxed assumption. In the proposed algorithm, the transported vector is rescaled whenever its norm has increased during the transport. Global convergence is proved theoretically and observed numerically with examples. In fact, the numerical experiments show that there exist minimization problems for which the existing algorithm generates divergent sequences while the proposed algorithm generates convergent sequences. Comment: 22 pages, 8 figures
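    The rescaling step described in this abstract can be sketched as follows. This is a minimal illustration of the idea only, assuming a generic `transport` callable supplied by the caller; the function name and signature are hypothetical, and the paper's construction is manifold-specific.

```python
import numpy as np

def scaled_transport(transport, x, y, v):
    """Apply a vector transport from T_x M to T_y M, then rescale the
    result so its norm never exceeds that of the original tangent
    vector (hypothetical names; a sketch of the scaled-transport idea)."""
    w = transport(x, y, v)                  # transported tangent vector
    nv, nw = np.linalg.norm(v), np.linalg.norm(w)
    if nw > nv:                             # rescale only if the norm grew
        w = (nv / nw) * w
    return w
```

With this guard, the norm bound that the existing convergence analyses assume holds by construction.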

    A globally convergent matricial algorithm for multivariate spectral estimation

    In this paper, we first describe a matricial Newton-type algorithm designed to solve the multivariable spectrum approximation problem. We then prove its global convergence. Finally, we apply this approximation procedure to multivariate spectral estimation and test its effectiveness through simulation. The simulations show that, in the case of short observation records, this method may provide a valid alternative to standard multivariable identification techniques such as MATLAB's PEM and N4SID.
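    The globalization idea behind Newton-type methods of this kind can be sketched generically. The following is a damped Newton iteration with backtracking on the residual norm, offered only as an illustration of how local Newton steps are made globally convergent; it is not the paper's matricial spectrum-approximation operator, and all names here are assumptions.

```python
import numpy as np

def damped_newton(grad, hess, x0, tol=1e-10, max_iter=100):
    """Generic damped Newton iteration: take the Newton direction,
    then backtrack the step length until the gradient norm shows
    sufficient decrease (a sketch of the globalization idea only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(hess(x), -g)     # Newton direction
        t = 1.0
        while np.linalg.norm(grad(x + t * step)) > (1 - 0.5 * t) * np.linalg.norm(g):
            t *= 0.5                            # backtrack until decrease
            if t < 1e-12:
                break
        x = x + t * step
    return x
```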

    A derivative-free algorithm for bound constrained optimization.

    In this work, we propose a new globally convergent derivative-free algorithm for the minimization of a continuously differentiable function in the case that some (or all) of the variables are bounded. The algorithm investigates the local behaviour of the objective function on the feasible set by sampling it along the coordinate directions. Whenever a "suitable" descent feasible coordinate direction is detected, a new point is produced by performing a linesearch along this direction. The information progressively obtained during the iterations of the algorithm can be used to build an approximation model of the objective function. The minimizer of such a model is accepted if it improves the objective function value. We also derive a bound on the limit accuracy of the algorithm in the minimization of noisy functions. Finally, we report the results of preliminary numerical experiments.
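    The sampling-plus-linesearch loop described above can be sketched as follows. This is a minimal coordinate-search sketch on a box, assuming simple halving/doubling rules for the sampling step and an ad hoc descent threshold; it is not the paper's exact algorithm and omits the model-based acceleration.

```python
import numpy as np

def coordinate_search(f, x0, lb, ub, step=0.5, tol=1e-6, max_iter=500):
    """Derivative-free coordinate search with an expansion linesearch
    on a box [lb, ub] (a minimal sketch of the idea in the abstract)."""
    x = np.clip(np.asarray(x0, float), lb, ub)
    alpha = step
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                d = np.zeros_like(x)
                d[i] = sign
                t = alpha
                y = np.clip(x + t * d, lb, ub)
                if f(y) < f(x) - 1e-8 * t * t:      # "suitable" descent found
                    while True:                      # expansion linesearch
                        z = np.clip(x + 2 * t * d, lb, ub)
                        if f(z) < f(y):
                            t, y = 2 * t, z
                        else:
                            break
                    x, improved = y, True
        if not improved:
            alpha *= 0.5                             # refine the sampling step
            if alpha < tol:
                break
    return x
```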

    Globally convergent block-coordinate techniques for unconstrained optimization.

    In this paper we define new classes of globally convergent block-coordinate techniques for the unconstrained minimization of a continuously differentiable function. More specifically, we first describe conceptual models of decomposition algorithms based on the interconnection of elementary operations performed on the block components of the variable vector. Then we characterize the elementary operations, defined through a suitable line search or a global minimization in a component subspace. Using these models, we establish new results on the convergence of the nonlinear Gauss–Seidel method, and we prove that this method with a two-block decomposition is globally convergent towards stationary points, even in the absence of convexity or uniqueness assumptions. In the general case of a nonconvex objective function and an arbitrary decomposition, we define new globally convergent line-search-based schemes that may also include partial global minimizations with respect to some components. Computational aspects are discussed and, in particular, an application to a learning problem in a Radial Basis Function neural network is illustrated.
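    The two-block nonlinear Gauss–Seidel scheme analyzed in this abstract can be illustrated on a toy coupled quadratic, where each block minimization has a closed form. This is a hand-built example for illustration only, not the paper's general algorithm.

```python
def gauss_seidel_two_block(u0, v0, iters=50):
    """Two-block nonlinear Gauss-Seidel on the toy objective
    f(u, v) = (u - 1)^2 + (v - 2)^2 + u*v.
    Each sweep minimizes f exactly in one block with the other fixed."""
    u, v = u0, v0
    for _ in range(iters):
        u = 1.0 - v / 2.0    # argmin_u f(u, v): solve 2(u - 1) + v = 0
        v = 2.0 - u / 2.0    # argmin_v f(u, v): solve 2(v - 2) + u = 0
    return u, v
```

For this objective the sweeps contract linearly toward the stationary point (u, v) = (0, 2), consistent with the two-block convergence result stated above.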

    A Superlinear Convergence Framework for Kurdyka-Łojasiewicz Optimization

    This work extends the iterative framework proposed by Attouch et al. (Math. Program. 137: 91-129, 2013) for minimizing a nonconvex and nonsmooth function Φ so that the generated sequence possesses a Q-superlinear convergence rate. The framework consists of a monotone decrease condition, a relative error condition and a continuity condition; the first two conditions both involve a parameter p > 0. We show that any sequence conforming to this framework is globally convergent when Φ is a Kurdyka-Łojasiewicz (KL) function, and that the convergence has a Q-superlinear rate of order p/(θ(1+p)) when Φ is a KL function of exponent θ ∈ (0, p/(p+1)). We then show that the iterate sequence generated by an inexact q-order (q ∈ [2,3]) regularization method for composite optimization problems with a nonconvex and nonsmooth term belongs to this framework and, consequently, establish for the first time a Q-superlinear convergence rate of order 4/3 for an inexact cubic regularization method applied to this class of composite problems with the KL property of exponent 1/2.
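    For context, the three conditions in the original framework of Attouch et al. (2013), which this work generalizes, take roughly the following form for constants a, b > 0 (stated here from the cited paper; the extension described above replaces the quadratic exponent in the first two conditions by one driven by the parameter p > 0):

```latex
% H1 (sufficient decrease):
\Phi(x^{k+1}) + a\,\|x^{k+1} - x^k\|^{2} \le \Phi(x^k)
% H2 (relative error): for some w^{k+1} \in \partial\Phi(x^{k+1}),
\|w^{k+1}\| \le b\,\|x^{k+1} - x^k\|
% H3 (continuity): along a subsequence x^{k_j} \to \bar{x},
\Phi(x^{k_j}) \to \Phi(\bar{x})
```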

    A Multigrid Optimization Algorithm for the Numerical Solution of Quasilinear Variational Inequalities Involving the p-Laplacian

    In this paper we propose a multigrid optimization algorithm (MG/OPT) for the numerical solution of a class of quasilinear variational inequalities of the second kind. This approach is enabled by the fact that the solution of the variational inequality is given by the minimizer of a nonsmooth energy functional involving the p-Laplace operator. We propose a Huber regularization of the functional and a finite element discretization of the problem. Further, we analyze the regularity of the discretized energy functional and prove that its Jacobian is slantly differentiable. This regularity property is useful for analyzing the convergence of the MG/OPT algorithm. In fact, we demonstrate that the algorithm is globally convergent by using a mean value theorem for semismooth functions. Finally, we apply the MG/OPT algorithm to the numerical simulation of the viscoplastic flow of Bingham, Casson and Herschel-Bulkley fluids in a pipe. Several experiments are carried out to show the efficiency of the proposed algorithm when solving this kind of fluid mechanics problem.
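    The Huber regularization mentioned in this abstract replaces a nonsmooth absolute-value-type term by a C¹ function that is quadratic near zero and linear beyond a threshold. The following is the standard scalar Huber function, shown as a generic sketch of this kind of smoothing; the parameter name `gamma` is an assumption, and the paper applies the idea to the norm term in its energy functional rather than to a scalar.

```python
import numpy as np

def huber(t, gamma):
    """Standard Huber smoothing of |t|: quadratic for |t| <= gamma,
    shifted linear beyond, so the two pieces join with matching value
    and derivative at |t| = gamma."""
    t = np.asarray(t, float)
    return np.where(np.abs(t) <= gamma,
                    t ** 2 / (2.0 * gamma),
                    np.abs(t) - gamma / 2.0)
```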