
    A variation of Broyden Class methods using Householder adaptive transforms

    In this work we introduce and study novel quasi-Newton minimization methods based on a Broyden Class-type Hessian approximation updating scheme, where a suitable matrix B̃_k is updated instead of the current Hessian approximation B_k. We identify conditions which imply the convergence of the algorithm and, if exact line search is chosen, its quadratic termination. By a remarkable connection between the projection operation and Krylov spaces, such conditions can be ensured using low complexity matrices B̃_k obtained by projecting B_k onto algebras of matrices diagonalized by products of two or three Householder matrices adaptively chosen step by step. Extended experimental tests show that the introduction of the adaptive criterion, which theoretically guarantees convergence, considerably improves the robustness of the minimization schemes compared with a non-adaptive choice; moreover, they show that the proposed methods could be particularly suitable for large scale problems where L-BFGS performs poorly.
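    The updating scheme above generalizes the classical Broyden class. As a point of reference only, here is a minimal sketch of the standard BFGS member of that class (the paper's projected B̃_k variant is not reproduced here):

    ```python
    import numpy as np

    def bfgs_update(B, s, y):
        """One BFGS update of a Hessian approximation B.

        s = x_{k+1} - x_k (step), y = g_{k+1} - g_k (gradient change).
        Requires the curvature condition y @ s > 0.
        """
        Bs = B @ s
        return (B
                - np.outer(Bs, Bs) / (s @ Bs)   # remove old curvature along s
                + np.outer(y, y) / (y @ s))     # enforce the secant condition
    ```

    The updated matrix satisfies the secant equation B_new @ s = y, which is the property any Broyden Class-type scheme preserves.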

    Low complexity secant quasi-Newton minimization algorithms for nonconvex functions

    In this work, some interesting relations between results on basic optimization and algorithms for nonconvex functions (such as BFGS and secant methods) are pointed out. In particular, some innovative tools for improving our recent secant BFGS-type and LQN algorithms are described in detail.

    On the best least squares fit to a matrix and its applications

    The best least squares fit L_A to a matrix A in a space L can be used to improve the rate of convergence of the conjugate gradient method in solving systems Ax = b, as well as to define low complexity quasi-Newton algorithms in unconstrained minimization. This is shown in the present paper through new applications and ideas. Moreover, some theoretical results on the representation and computation of L_A are investigated.
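    As a concrete illustration of the best least squares fit L_A, here is a sketch for one standard choice of the space L, the circulant algebra (the paper treats the general case; the choice of L here is an assumption for the example):

    ```python
    import numpy as np

    def best_circulant_fit(A):
        """Best Frobenius-norm fit to A among circulant matrices.

        A circulant matrix C satisfies C[i, j] = c[(i - j) % n]; minimizing
        ||C - A||_F decouples entrywise, so each c[k] is the average of A
        over the cyclic diagonal i - j = k (mod n).
        """
        n = A.shape[0]
        c = np.array([np.mean([A[i, (i - k) % n] for i in range(n)])
                      for k in range(n)])
        # assemble the circulant matrix from its first column c
        return np.array([[c[(i - j) % n] for j in range(n)]
                         for i in range(n)])
    ```

    In the preconditioning application, systems with such a circulant fit can be solved in O(n log n) operations via the FFT, which is what makes low complexity spaces L attractive for accelerating conjugate gradient iterations.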

    Matrix algebras in quasi-newtonian algorithms for optimal learning in multi-layer perceptrons

    In this work the authors implement, in a Multi-Layer Perceptron (MLP) environment, a new class of quasi-Newtonian (QN) methods. The algorithms proposed in the present paper use, in the iterative scheme of a generalized BFGS method, a family of matrix algebras recently introduced for displacement decompositions and for optimal preconditioning. This novel approach allows the construction of methods with O(n log_2 n) complexity. Numerical experiments, compared with the performances of the best QN algorithms known in the literature, confirm the effectiveness of these new optimization techniques.

    Adaptive matrix algebras in unconstrained minimization

    In this paper we study adaptive L(k)QN methods, involving special matrix algebras of low complexity, to solve general (non-structured) unconstrained minimization problems. These methods, which generalize the classical BFGS method, are based on an iterative formula which exploits, at each step, an ad hoc chosen matrix algebra L(k). A global convergence result is obtained under suitable assumptions on f.

    Logos und Zahl


    Eine kurze Geschichte der Unendlichkeit

    The book traces the essential lines of the history of the concept of infinity, from Aristotle to the science of computation of the twentieth century. The emphasis is on the contrast between actual and potential infinity and on the way these two conceptions have been opposed over the centuries. The thesis of the book is that the logic and mathematics of the last century revived a conception of potential infinity similar to that of the Greeks.

    An efficient generalization of Battiti-Shanno's quasi-Newton algorithm for learning in MLP-networks

    This paper presents a novel quasi-Newton method for the minimization of the error function of a feed-forward neural network. The method is a generalization of Battiti's well known OSS algorithm. The aim of the proposed approach is to achieve a significant improvement both in terms of computational effort and in the capability of evaluating the global minimum of the error function. The technique described in this work is founded on the innovative concept of a "convex algorithm" in order to avoid possible entrapments in local minima. Convergence results as well as numerical experiments are presented.
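    At its core, Battiti's OSS is a memoryless BFGS step: the inverse-Hessian approximation is a single (s, y) BFGS correction of the identity. A minimal sketch of that direction computation (the paper's "convex algorithm" generalization is not reproduced here):

    ```python
    import numpy as np

    def oss_direction(g, s, y):
        """Memoryless-BFGS search direction d = -H g, where H is one BFGS
        correction of the identity built from the last step s and the last
        gradient change y (the core of a one-step secant scheme)."""
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ g)
        q = g - alpha * y          # first loop of the two-loop recursion
        r = q                      # H0 = I (memoryless)
        beta = rho * (y @ r)
        return -(r + (alpha - beta) * s)
    ```

    Since H satisfies the inverse secant condition H y = s, calling the function with g = y returns exactly -s; no history beyond the last step is stored, which keeps the per-iteration cost linear in the number of network weights.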

    Sonsuzun Kisa Tarihi
