4 research outputs found

    Approximate solution of system of equations arising in interior-point methods for bound-constrained optimization

    The focus of this paper is interior-point methods for bound-constrained nonlinear optimization, where the systems of nonlinear equations that arise are solved with Newton's method. There is a trade-off between solving Newton systems directly, which gives high-quality solutions, and solving many approximate Newton systems, which is computationally less expensive but gives lower-quality solutions. We propose partial and full approximate solutions to the Newton systems. The specific approximate solution depends on estimates of the active and inactive constraints at the solution; these sets are estimated at each iteration by basic heuristics. The partial approximate solutions are computationally inexpensive, whereas a system of linear equations must be solved for the full approximate solution, whose size is determined by the estimate of the inactive constraints at the solution. In addition, we motivate and suggest two Newton-like approaches based on an intermediate step that consists of the partial approximate solutions. The theoretical setting is introduced and asymptotic error bounds are given. We also give numerical results to investigate the performance of the approximate solutions within and beyond the theoretical framework.
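The split the abstract describes — a cheap update on estimated active bounds plus a reduced linear solve on the estimated inactive ones — can be sketched as follows. This is an illustrative toy for a quadratic objective with nonnegativity bounds, not the paper's exact algorithm; the heuristic in `estimate_active` and all function names are assumptions made for the example.

```python
import numpy as np

# Hedged sketch: one approximate interior-point Newton step for
#   min 0.5 x'Hx + c'x  subject to  x >= 0,
# with dual variables z and barrier parameter mu. The condensed Newton
# system is (H + X^{-1}Z) dx = -(Hx + c - z) + mu X^{-1}e - z.

def estimate_active(x, z, tol=1e-2):
    """Basic heuristic: a bound looks active when x_i is small relative to z_i."""
    return x < tol * np.maximum(z, 1.0)

def approximate_newton_step(H, c, x, z, mu):
    n = len(x)
    active = estimate_active(x, z)
    inactive = ~active
    rhs = -(H @ x + c - z) + mu / x - z
    dx = np.zeros(n)
    # Partial approximate solution on the estimated active set:
    # push x_i toward mu / z_i using only the complementarity equation.
    dx[active] = mu / z[active] - x[active]
    # Full approximate solution: solve a linear system whose size equals
    # the number of estimated *inactive* bounds.
    M = H + np.diag(z / x)
    if inactive.any():
        rhs_I = rhs[inactive] - M[np.ix_(inactive, active)] @ dx[active]
        dx[inactive] = np.linalg.solve(M[np.ix_(inactive, inactive)], rhs_I)
    # Recover dz from the linearized complementarity condition Z dx + X dz = mu e - XZe.
    dz = (mu - x * z - z * dx) / x
    return dx, dz
```

The payoff is that the dense solve shrinks to the inactive block, which is exactly the size dependence the abstract describes.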

    Model-constrained optimization methods for reduction of parameterized large-scale systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 143-158).

    Most model reduction techniques employ a projection framework that utilizes a reduced-space basis. The basis is usually formed as the span of a set of solutions of the large-scale system, which are computed for selected values (samples) of input parameters and forcing inputs. In existing model reduction techniques, choosing where and how many samples to generate has, in general, been an ad hoc process. A key challenge is therefore how to systematically sample the input space, which is of high dimension for many applications of interest. This thesis proposes and analyzes a model-constrained, greedy adaptive sampling approach in which the parametric input sampling problem is formulated as an optimization problem that targets an error estimate of the reduced model's output prediction. The method solves the optimization problem to find a locally optimal point in parameter space where the error estimator is largest, updates the reduced basis with information at this optimal sample location, forms a new reduced model, and repeats the process. We therefore use a systematic, adaptive error metric, based on the ability of the reduced-order model to capture the outputs of interest, to choose snapshot locations that are locally the worst-case scenarios.

    The state-of-the-art subspace trust-region interior-reflective inexact Newton conjugate-gradient optimization solver is employed to solve the resulting greedy partial-differential-equation-constrained optimization problem, giving a reduction methodology that is efficient for large-scale systems and scales well to high-dimensional input spaces. The model-constrained adaptive sampling approach is applied to a steady thermal fin optimal design problem and to probabilistic analysis of geometric mistuning in turbomachinery. The method leads to reduced models that accurately represent the full large-scale systems over a wide range of parameter values, in parametric spaces of dimension up to 21.

    by Tan Bui-Thanh. Ph.D.
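The greedy loop the abstract outlines — maximize an error indicator over the parameter space, take a full-order snapshot at the worst point, enrich the basis, repeat — can be sketched on a toy parameterized linear system. This is a hedged stand-in: the thesis solves a PDE-constrained optimization problem with a trust-region interior-reflective Newton-CG solver, whereas here the maximization step is a simple search over a candidate grid, and `full_solve` and the residual-based indicator are illustrative assumptions.

```python
import numpy as np

def full_solve(p, n=50):
    """Illustrative parameterized full-order model A(p) u = f."""
    A = np.diag(2.0 + p * np.arange(1, n + 1) / n) + np.eye(n, k=1) * 0.1
    f = np.ones(n)
    return np.linalg.solve(A, f), A, f

def reduced_residual(p, V):
    """Error indicator: full-order residual of the Galerkin reduced solution."""
    _, A, f = full_solve(p)
    Ar, fr = V.T @ A @ V, V.T @ f
    ur = np.linalg.solve(Ar, fr)
    return np.linalg.norm(A @ (V @ ur) - f)

def greedy_sample(candidates, n_samples=4):
    """Greedy adaptive sampling: repeatedly add a snapshot where the
    error indicator over the candidate parameters is largest."""
    V = None
    for _ in range(n_samples):
        if V is None:
            p_star = candidates[0]  # arbitrary initial sample
        else:
            # Stand-in for the model-constrained optimization step.
            p_star = max(candidates, key=lambda p: reduced_residual(p, V))
        u, _, _ = full_solve(p_star)
        snap = u / np.linalg.norm(u)
        if V is None:
            V = snap[:, None]
        else:
            snap -= V @ (V.T @ snap)  # Gram-Schmidt against current basis
            V = np.hstack([V, (snap / np.linalg.norm(snap))[:, None]])
    return V
```

Each iteration pays one full-order solve at the selected sample plus cheap reduced solves at the candidates, which is the trade the greedy approach is designed to exploit.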