
    Limited-memory BFGS Systems with Diagonal Updates

    In this paper, we investigate a formula to solve systems of the form (B + \sigma I)x = y, where B is a limited-memory BFGS quasi-Newton matrix and \sigma is a positive constant. Systems of this type arise naturally in large-scale optimization, for example in trust-region methods as well as doubly-augmented Lagrangian methods. We show that, provided a simple condition holds on B_0 and \sigma, the system (B + \sigma I)x = y can be solved via a recursion formula that requires only vector inner products. This formula has complexity O(M^2 n), where M is the number of L-BFGS updates and n >> M is the dimension of x.
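
    The paper's recursion formula is not reproduced in the abstract, but the following minimal sketch shows one standard way to solve such shifted systems: the compact representation of the L-BFGS matrix (with the common choice B_0 = gamma*I) combined with the Sherman-Morrison-Woodbury identity. The function name, argument layout, and the choice of B_0 are illustrative assumptions, not the paper's method.

        import numpy as np

        def solve_shifted_lbfgs(S, Y, gamma, sigma, rhs):
            """Sketch: solve (B + sigma*I) x = rhs, where B is the L-BFGS matrix
            built from curvature pairs (columns of S and Y) with B_0 = gamma*I.
            Uses the compact representation B = gamma*I + Psi K Psi^T and the
            Sherman-Morrison-Woodbury identity; S and Y are n-by-M arrays."""
            delta = gamma + sigma                        # shifted leading diagonal
            StY = S.T @ Y
            L = np.tril(StY, k=-1)                       # strictly lower part of S^T Y
            D = np.diag(np.diag(StY))                    # diagonal of S^T Y
            Psi = np.hstack([gamma * S, Y])              # n x 2M
            W = np.block([[gamma * (S.T @ S), L],
                          [L.T,              -D]])       # K = -inv(W)
            # Woodbury: (delta*I + Psi K Psi^T)^{-1} rhs
            #   = rhs/delta - Psi ((K^{-1} + Psi^T Psi/delta)^{-1} Psi^T rhs)/delta^2
            small = -W + (Psi.T @ Psi) / delta           # small 2M x 2M system
            t = np.linalg.solve(small, Psi.T @ rhs)
            return rhs / delta - (Psi @ t) / delta**2

    The dominant work is the O(M^2 n) worth of inner products in S^T Y, S^T S, and Psi^T Psi, which matches the complexity quoted in the abstract.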

    On Reduced Input-Output Dynamic Mode Decomposition

    The identification of reduced-order models from high-dimensional data is a challenging task, and even more so if the identified system should not only fit a particular data set but also approximate the input-output behavior of the data source in general. In this work, we consider the input-output dynamic mode decomposition method for system identification. We compare excitation approaches for the data-driven identification process and describe an optimization-based stabilization strategy for the identified systems.
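
    As a rough illustration of the underlying input-output dynamic mode decomposition regression (not the paper's excitation or stabilization strategies), one can project state snapshots onto a POD basis and fit the reduced system matrices by least squares. The function name and snapshot layout below are assumptions for the sketch.

        import numpy as np

        def io_dmd(X, U, Y, r):
            """Fit a reduced model x_{k+1} ~ A x_k + B u_k, y_k ~ C x_k + D u_k.
            X: n x (m+1) state snapshots, U: p x m inputs, Y: q x m outputs,
            r: reduced order."""
            X0, X1 = X[:, :-1], X[:, 1:]
            V, _, _ = np.linalg.svd(X0, full_matrices=False)
            V = V[:, :r]                         # rank-r POD basis
            Z = np.vstack([V.T @ X0, U])         # stacked reduced states and inputs
            AB = (V.T @ X1) @ np.linalg.pinv(Z)  # least-squares fit of [A B]
            CD = Y @ np.linalg.pinv(Z)           # least-squares fit of [C D]
            return AB[:, :r], AB[:, r:], CD[:, :r], CD[:, r:]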

    Preconditioned subspace quasi-Newton method for large-scale optimization

    The subspace quasi-Newton (SQN) method has been widely used for large-scale unconstrained optimization. Its popularity is due to the fact that it constructs subproblems in low dimensions, so that both the storage requirement and the computational cost are kept small. The main drawback of the SQN method, however, is that it can be very slow on certain types of nonlinear problems, such as ill-conditioned ones. Hence, we propose a preconditioned SQN method, which is generally more effective than the plain SQN method. To achieve this, we use a diagonal updating matrix, derived from the weak secant relation, in place of the identity matrix as the approximation of the initial inverse Hessian. Our numerical results show that the proposed preconditioned SQN method performs better than the SQN method without preconditioning.
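
    The abstract does not give the diagonal update explicitly; as a sketch, the classical least-change update that enforces the weak secant relation s^T D s = s^T y on a diagonal matrix D looks as follows. The function name and interface are assumptions.

        import numpy as np

        def weak_secant_diag_update(d, s, y):
            """Least-change (Frobenius-norm) update of a diagonal Hessian
            approximation D = diag(d) so that the weak secant relation
            s^T D_new s = s^T y holds: D_new = D + mu * diag(s**2)."""
            s2 = s * s
            mu = (s @ y - s2 @ d) / (s2 @ s2)    # scalar enforcing the relation
            return d + mu * s2

    The inverse of such an updated diagonal would then replace the identity as the initial inverse-Hessian approximation in the preconditioned method.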

    On Quasi-Newton Forward-Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward-backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal ± rank-r symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. Using duality, we derive formulas that relate the proximal mapping in a rank-r modified metric to the proximal mapping in the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. We then apply these results to the acceleration of composite convex minimization problems, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared to a comprehensive list of alternatives from the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against the state of the art. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning, and classification, to name a few.
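
    The rank-r proximal calculus is too involved for a short sketch, but the purely diagonal-metric case already shows why such metrics keep the proximal step cheap: for the l1-norm, the proximal mapping in the metric V = diag(d) is a per-coordinate soft-threshold. The lasso instance, function name, and step rule below are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        def fbs_diag_lasso(A, b, lam, d, iters=500):
            """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1
            in the diagonal metric V = diag(d); the backward (proximal) step is an
            elementwise soft-threshold with per-coordinate thresholds lam/d_i.
            For convergence, d should dominate the curvature, e.g. d_i >= ||A||_2^2."""
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = A.T @ (A @ x - b)                 # gradient of the smooth part
                z = x - g / d                         # forward step in the metric
                x = np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)
            return x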