114 research outputs found

    Implicit algorithms for eigenvector nonlinearities

    We study and derive algorithms for nonlinear eigenvalue problems in which the system matrix depends on the eigenvector, on several eigenvectors, or on their corresponding invariant subspace. The algorithms are derived from an implicit viewpoint: we modify the Newton update equation so that the next iterate appears not only linearly in the update equation. Although these modifications make the methods implicit, we show how the corresponding iterates can be computed explicitly, so steps of the implicit method can be carried out using explicit procedures. In several cases, these procedures involve the solution of standard eigenvalue problems. We propose two modifications; one leads directly to a well-established method (the self-consistent field iteration), whereas the other is, to our knowledge, new and has several attractive properties. Convergence theory is provided along with several simulations that illustrate the properties of the algorithms.
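    The self-consistent field (SCF) iteration mentioned above can be sketched on a toy problem A(v) v = λ v: freeze the eigenvector dependence, solve the resulting standard eigenvalue problem, and repeat. The matrix dependence A(v) = A0 + (vᵀBv)·B and all names below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def scf(A0, B, tol=1e-10, maxit=200):
    """Toy self-consistent field (SCF) iteration for the nonlinear
    eigenvector problem A(v) v = lam * v with the illustrative
    dependence A(v) = A0 + (v^T B v) * B.  Each step solves a
    standard (linear) symmetric eigenproblem for the frozen A(v)."""
    n = A0.shape[0]
    v = np.ones(n) / np.sqrt(n)       # normalized starting guess
    for _ in range(maxit):
        A = A0 + (v @ B @ v) * B      # freeze the nonlinearity at v
        w, V = np.linalg.eigh(A)      # standard eigenvalue solve
        v_new = V[:, 0]               # keep the smallest eigenpair
        if v_new @ v < 0:             # eigh's sign choice is arbitrary
            v_new = -v_new
        converged = np.linalg.norm(v_new - v) < tol
        v = v_new
        if converged:
            break
    return w[0], v

# usage on a small symmetric example (matrices are made up)
A0 = np.diag([1.0, 2.0, 3.0])
B = 0.1 * np.array([[0.0, 1.0, 0.0],
                    [1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])
lam, v = scf(A0, B)
```

    Each sweep of the loop is a step of the implicit method carried out by an explicit eigenvalue solve, mirroring the abstract's point that the implicit iterates can be computed explicitly.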

    Sparsity Constrained Inverse Problems - Application to Vibration-based Structural Health Monitoring

    Vibration-based structural health monitoring (SHM) seeks to detect, quantify, locate, and prognosticate damage by processing vibration signals measured while the structure is operational. The basic premise of vibration-based SHM is that damage will affect the stiffness, mass, or energy dissipation properties of the structure and in turn alter its measured dynamic characteristics. In order to make SHM a practical technology, it is necessary to perform damage assessment using only a minimum number of permanently installed sensors. Deducing damage at unmeasured regions of the structural domain requires solving an inverse problem that is underdetermined and/or ill-conditioned. In addition, the effects of local damage on global vibration response may be overshadowed by the effects of modelling error, environmental changes, sensor noise, and unmeasured excitation. These theoretical and practical challenges render the damage identification inverse problem ill-posed, and in some cases unsolvable with conventional inverse methods. This dissertation proposes and tests a novel interpretation of the damage identification inverse problem. Since damage is inherently local and strictly reduces stiffness and/or mass, the underdetermined inverse problem can be made uniquely solvable by imposing sparsity or non-negativity on the solution space. The goal of this research is to leverage this concept in order to prove that damage identification can be performed in practical applications using significantly fewer measurements than conventional inverse methods require. This dissertation investigates two sparsity-inducing methods, L1-norm optimization and non-negative least squares, in their application to identifying damage from eigenvalues, a minimal sensor-based feature that results in an underdetermined inverse problem. This work presents necessary conditions for solution uniqueness and a method to quantify the bounds on the non-unique solution space.
    The proposed methods are investigated using a wide range of numerical simulations and validated using a four-story lab-scale frame and a full-scale 17 m long aluminum truss. The findings of this study suggest that leveraging the attributes of both L1-norm optimization and non-negative constrained least squares can provide significant improvement over their standalone applications and over other existing methods of damage detection.
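    A minimal sketch of the non-negative least squares idea, assuming SciPy and a made-up sensitivity matrix (in the dissertation's setting, the matrix maps stiffness reductions at candidate locations to measured eigenvalue shifts):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical sensitivity matrix mapping stiffness reductions at 8
# candidate locations to 3 measured eigenvalue shifts: 3 equations
# in 8 unknowns, underdetermined without extra constraints.
rng = np.random.default_rng(0)
S = rng.random((3, 8))

# Simulated damage: sparse and non-negative (damage only reduces stiffness).
theta_true = np.zeros(8)
theta_true[2] = 0.3
d_lam = S @ theta_true            # noise-free simulated eigenvalue shifts

# Non-negative least squares regularizes the underdetermined system and
# tends to return a solution with few nonzeros, though (as the abstract
# notes) uniqueness is not guaranteed in general.
theta_hat, resid = nnls(S, d_lam)
```

    The non-negativity constraint plays the role of sparsity regularization here; the L1-norm alternative mentioned in the abstract would instead minimize ||θ||₁ subject to S θ ≈ Δλ.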

    A survey on numerical methods for unconstrained optimization problems.

    by Chung Shun Shing. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 158-170). Abstracts in English and Chinese.

    List of Figures --- p.x
    Chapter 1  Introduction --- p.1
        1.1  Background and Historical Development --- p.1
        1.2  Practical Problems --- p.3
            1.2.1  Statistics --- p.3
            1.2.2  Aerodynamics --- p.4
            1.2.3  Factory Allocation Problem --- p.5
            1.2.4  Parameter Problem --- p.5
            1.2.5  Chemical Engineering --- p.5
            1.2.6  Operational Research --- p.6
            1.2.7  Economics --- p.6
        1.3  Mathematical Models for Optimization Problems --- p.6
        1.4  Unconstrained Optimization Techniques --- p.8
            1.4.1  Direct Method - Differential Calculus --- p.8
            1.4.2  Iterative Methods --- p.10
        1.5  Main Objectives of the Thesis --- p.11
    Chapter 2  Basic Concepts in Optimization of Smooth Functions --- p.14
        2.1  Notation --- p.14
        2.2  Different Types of Minimizer --- p.16
        2.3  Necessary and Sufficient Conditions for Optimality --- p.18
        2.4  Quadratic Functions --- p.22
        2.5  Convex Functions --- p.24
        2.6  Existence, Uniqueness and Stability of a Minimum --- p.29
            2.6.1  Existence of a Minimum --- p.29
            2.6.2  Uniqueness of a Minimum --- p.30
            2.6.3  Stability of a Minimum --- p.31
        2.7  Types of Convergence --- p.34
        2.8  Minimization of Functionals --- p.35
    Chapter 3  Steepest Descent Method --- p.37
        3.1  Background --- p.37
        3.2  Line Search Method and the Armijo Rule --- p.39
        3.3  Steplength Control with Polynomial Models --- p.43
            3.3.1  Quadratic Polynomial Model --- p.43
            3.3.2  Safeguarding --- p.45
            3.3.3  Cubic Polynomial Model --- p.46
            3.3.4  General Line Search Strategy --- p.49
            3.3.5  Algorithm of Steepest Descent Method --- p.51
        3.4  Advantages of the Armijo Rule --- p.54
        3.5  Convergence Analysis --- p.56
    Chapter 4  Iterative Methods Using Second Derivatives --- p.63
        4.1  Background --- p.63
        4.2  Newton's Method --- p.64
            4.2.1  Basic Concepts --- p.64
            4.2.2  Convergence Analysis of Newton's Method --- p.65
            4.2.3  Newton's Method with Steplength --- p.69
            4.2.4  Convergence Analysis of Newton's Method with Steplength --- p.70
        4.3  Greenstadt's Method --- p.72
        4.4  Marquardt-Levenberg Method --- p.74
        4.5  Fiacco and McCormick Method --- p.76
        4.6  Matthews and Davies Method --- p.79
        4.7  Numerically Stable Modified Newton's Method --- p.80
        4.8  The Role of the Second Derivative Methods --- p.89
    Chapter 5  Multi-step Methods --- p.92
        5.1  Background --- p.93
        5.2  Heavy Ball Method --- p.94
        5.3  Conjugate Gradient Method --- p.99
            5.3.1  Some Types of Conjugate Gradient Method --- p.99
            5.3.2  Convergence Analysis of Conjugate Gradient Method --- p.108
        5.4  Methods of Variable Metric and Methods of Conjugate Directions --- p.111
        5.5  Other Approaches for Constructing the First-order Methods --- p.116
    Chapter 6  Quasi-Newton Methods --- p.121
        6.1  Disadvantages of Newton's Method --- p.122
        6.2  General Idea of Quasi-Newton Method --- p.124
            6.2.1  Quasi-Newton Methods --- p.124
            6.2.2  Convergence of Quasi-Newton Methods --- p.129
        6.3  Properties of Quasi-Newton Methods --- p.131
        6.4  Some Particular Algorithms for Quasi-Newton Methods --- p.137
            6.4.1  Single-Rank Algorithms --- p.137
            6.4.2  Double-Rank Algorithms --- p.144
            6.4.3  Other Applications --- p.149
        6.5  Conclusion --- p.152
    Chapter 7  Choice of Methods in Optimization Problems --- p.154
        7.1  Choice of Methods --- p.154
        7.2  Conclusion --- p.157
    Bibliography --- p.15
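    The steepest descent method with Armijo backtracking surveyed in Chapter 3 can be sketched as follows; the test function, parameters, and stopping rule are illustrative choices, not the thesis's exact algorithm.

```python
import numpy as np

def steepest_descent_armijo(f, grad, x0, alpha0=1.0, beta=0.5,
                            sigma=1e-4, tol=1e-8, maxit=500):
    """Steepest descent with the Armijo rule: shrink the step alpha
    until the sufficient-decrease condition
        f(x - alpha*g) <= f(x) - sigma * alpha * ||g||^2
    holds, then take the step along the negative gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # gradient-norm stopping rule
            break
        alpha = alpha0
        while f(x - alpha * g) > f(x) - sigma * alpha * (g @ g):
            alpha *= beta             # backtrack until Armijo holds
        x = x - alpha * g
    return x

# Usage: minimize the convex quadratic f(x) = 1/2 x^T Q x - b^T x,
# whose unique minimizer solves Q x = b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
x_star = steepest_descent_armijo(f, grad, np.zeros(2))
```

    On a well-conditioned quadratic like this one the iteration converges linearly, which is the behavior established in the thesis's Chapter 3 convergence analysis.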

    Structured Eigenvalue Problems

    Most eigenvalue problems arising in practice are known to be structured. Structure is often introduced by discretization and linearization techniques but may also be a consequence of properties of the original problem. Preserving this structure can help preserve physically relevant symmetries in the eigenvalues of the matrix and may improve the accuracy and efficiency of an eigenvalue computation. The purpose of this brief survey is to highlight these facts for some common matrix structures. This includes a treatment of rather general concepts, such as structured condition numbers and backward errors, as well as an overview of algorithms and applications for several matrix classes, including symmetric, skew-symmetric, persymmetric, block cyclic, Hamiltonian, symplectic, and orthogonal matrices.
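    A small illustration of why exploiting structure pays off, using NumPy's symmetric eigensolver (the choice of the symmetric class and of NumPy is an assumption of this sketch; the survey covers many other structures):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2        # symmetrize: the spectrum is then exactly real

# The structure-exploiting solver eigh guarantees real eigenvalues in
# ascending order and an orthogonal eigenvector basis; the general
# solver eig cannot use the symmetry and offers neither guarantee.
w, V = np.linalg.eigh(A)
```

    Here the physically relevant symmetry (a real spectrum) is preserved exactly by construction rather than only up to rounding, which is the survey's point about structure-preserving algorithms.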

    Model order reduction techniques for circuit simulation

    Includes bibliographical references (p. 156-160). Supported in part by the Semiconductor Research Corporation (SRC 93-SJ-558) and in part by the National Science Foundation / Advanced Research Projects Agency (MIP 91-17724). Author: Luis Miguel Silveira.