
    Regularization of Limited Memory Quasi-Newton Methods for Large-Scale Nonconvex Minimization

    This paper deals with regularized Newton methods, a flexible class of unconstrained optimization algorithms that is competitive with line search and trust region methods and potentially combines attractive elements of both. The particular focus is on combining regularization with limited memory quasi-Newton methods by exploiting the special structure of limited memory algorithms. Global convergence of the regularization methods is shown under mild assumptions, and the details of regularized limited memory quasi-Newton updates are discussed, including their compact representations. Numerical results on all large-scale test problems from the CUTEst collection indicate that our regularized version of L-BFGS is competitive with state-of-the-art line search and trust-region L-BFGS algorithms and with previous attempts at combining L-BFGS with regularization, while potentially outperforming some of them, especially when nonmonotonicity is involved.
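
    As a rough illustration of the mechanism the abstract describes, the sketch below combines a regularized step with a low-rank-plus-scaled-identity Hessian approximation B = gamma*I + U C U^T, a generic stand-in for the L-BFGS compact representation; the function names, the Woodbury-based solve, and the mu-update rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def regularized_step(g, gamma, U, C, mu):
    """Solve (B + mu*I) p = -g for B = gamma*I + U @ C @ U.T
    (a generic stand-in for the L-BFGS compact representation).
    The Woodbury identity reduces the solve to a small system
    whose size is twice the number of stored correction pairs."""
    d = gamma + mu
    # (d*I + U C U^T)^{-1} = (1/d) * (I - U (d*C^{-1} + U^T U)^{-1} U^T)
    small = d * np.linalg.inv(C) + U.T @ U
    return -(g - U @ np.linalg.solve(small, U.T @ g)) / d

def update_mu(rho, mu, eta1=0.25, eta2=0.75):
    """Trust-region-style control of the regularization parameter,
    driven by rho = (actual reduction) / (predicted reduction)."""
    if rho < eta1:
        return 4.0 * mu, False   # reject step, regularize more
    if rho > eta2:
        return 0.5 * mu, True    # accept step, relax regularization
    return mu, True              # accept step, keep mu
```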

    Fast and memory-efficient optimization for large-scale data-driven predictive control

    Recently, data-enabled predictive control (DeePC) schemes based on Willems' fundamental lemma have attracted considerable attention. At their core are computations with Hankel-like matrices and their connection to the concept of persistency of excitation. We propose an iterative solver for the underlying data-driven optimal control problems arising from linear discrete-time systems. To this end, we apply factorizations based on the discrete Fourier transform of the Hankel-like matrices, which enable fast and memory-efficient computations. To exploit this factorization in an optimal control solver and to reduce the effect of the inherent ill-conditioning of the Hankel-like matrices, we propose an augmented Lagrangian L-BFGS method. We illustrate the performance of our method by means of a numerical study.
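
    The key computational primitive behind such DFT-based factorizations is applying a Hankel-like matrix to a vector without ever forming it. A minimal sketch, assuming the standard FFT trick for Hankel matrix-vector products (the function name and interface are illustrative, not the authors' solver):

```python
import numpy as np

def hankel_matvec(w, x, m):
    """Compute H @ x for the m-by-n Hankel matrix H[i, j] = w[i + j]
    built from a data sequence w with len(w) == m + n - 1, without
    forming H. A Hankel matvec is a correlation, so the FFT gives
    O((m + n) log(m + n)) time and O(m + n) memory instead of O(m*n)."""
    n = x.size
    L = m + n - 1
    # correlation with w == convolution of w with the reversed x;
    # a length-L circular convolution is exact for the slice we keep
    c = np.fft.irfft(np.fft.rfft(w, L) * np.fft.rfft(x[::-1], L), L)
    return c[n - 1 : n - 1 + m]

# quick check against an explicitly formed Hankel matrix:
# from scipy.linalg import hankel
# H = hankel(w[:m], w[m - 1:])
# assert np.allclose(H @ x, hankel_matvec(w, x, m))
```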

    PDFO: A Cross-Platform Package for Powell's Derivative-Free Optimization Solvers

    The late Professor M. J. D. Powell devised five trust-region derivative-free optimization methods, namely COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA. He also carefully implemented them into publicly available solvers, which are renowned for their robustness and efficiency. However, the solvers were implemented in Fortran 77 and hence may not be easily accessible to some users. We introduce the PDFO package, which provides user-friendly Python and MATLAB interfaces to Powell's code. With PDFO, users of these languages can call Powell's Fortran solvers easily without dealing with the Fortran code. Moreover, PDFO includes bug fixes and improvements, which are particularly important for handling problems that suffer from ill-conditioning or failures of function evaluations. In addition to the PDFO package, we provide an overview of Powell's methods, sketching them from a uniform perspective, summarizing their main features, and highlighting the similarities and interconnections among them. We also present experiments on PDFO to demonstrate its stability under noise, its tolerance of failures in function evaluations, and its potential in solving certain hyperparameter optimization problems.
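
    For a sense of the interface, a minimal usage sketch, assuming the package's documented scipy-style Python API (the test function and option values here are arbitrary):

```python
import numpy as np
from pdfo import pdfo  # pip install pdfo

def chained_rosenbrock(x):
    """A standard smooth test function; any objective works."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

x0 = np.zeros(4)
# If 'method' is omitted, PDFO selects one of Powell's solvers based
# on the problem type; here we force NEWUOA (unconstrained).
res = pdfo(chained_rosenbrock, x0, method='newuoa',
           options={'maxfev': 5000})
print(res.x, res.fun)
```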

    A variation of Broyden Class methods using Householder adaptive transforms

    In this work we introduce and study novel quasi-Newton minimization methods based on a Broyden Class-type Hessian approximation updating scheme, where a suitable matrix $\tilde{B}_k$ is updated instead of the current Hessian approximation $B_k$. We identify conditions which imply the convergence of the algorithm and, if exact line search is chosen, its quadratic termination. By a remarkable connection between the projection operation and Krylov spaces, such conditions can be ensured using low-complexity matrices $\tilde{B}_k$ obtained by projecting $B_k$ onto algebras of matrices diagonalized by products of two or three Householder matrices chosen adaptively step by step. Extended experimental tests show that the introduction of the adaptive criterion, which theoretically guarantees convergence, considerably improves the robustness of the minimization schemes when compared with a non-adaptive choice; moreover, they show that the proposed methods could be particularly suitable for solving large-scale problems where L-BFGS performs poorly.
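
    For reference, the textbook Broyden Class update that the scheme builds on, with parameter phi (phi = 0 gives BFGS, phi = 1 gives DFP); in the paper's variant the update is applied to the projected matrix $\tilde{B}_k$ rather than directly to $B_k$, which this sketch does not attempt to reproduce:

```python
import numpy as np

def broyden_class_update(B, s, y, phi=0.0):
    """One Broyden Class update of a Hessian approximation B from
    the step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k.
    phi = 0 recovers BFGS, phi = 1 recovers DFP."""
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    if ys <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return B  # skip the update when the curvature condition fails
    v = y / ys - Bs / sBs
    return (B
            - np.outer(Bs, Bs) / sBs
            + np.outer(y, y) / ys
            + phi * sBs * np.outer(v, v))
```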