Progressive construction of a parametric reduced-order model for PDE-constrained optimization
An adaptive approach to using reduced-order models as surrogates in
PDE-constrained optimization is introduced that breaks the traditional
offline-online framework of model order reduction. A sequence of optimization
problems constrained by a given Reduced-Order Model (ROM) is defined with the
goal of converging to the solution of a given PDE-constrained optimization
problem. For each reduced optimization problem, the constraining ROM is trained
from sampling the High-Dimensional Model (HDM) at the solution of some of the
previous problems in the sequence. The reduced optimization problems are
equipped with a nonlinear trust-region based on a residual error indicator to
keep the optimization trajectory in a region of the parameter space where the
ROM is accurate. A technique for incorporating sensitivities into a Reduced-Order Basis (ROB) is also presented, along with a methodology for computing reduced-order model sensitivities that minimize the distance, in a suitable norm, to the corresponding HDM sensitivities. The proposed reduced
optimization framework is applied to subsonic aerodynamic shape optimization
and shown to reduce the number of queries to the HDM by a factor of 4-5, compared to solving the optimization problem using the HDM alone, with errors in the optimal solution far below 0.1%.
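The trust-region management of the surrogate described above can be sketched as follows. This is a minimal illustration only: the one-dimensional toy objective, the quadratic "ROM", and all function names are stand-ins assumed for demonstration, not the paper's actual ROM/HDM formulation.

```python
# Hedged sketch of a trust-region-managed surrogate optimization loop.
# All names (hdm, build_surrogate, ...) and the 1-D toy problem are
# illustrative stand-ins, not the paper's actual ROM/HDM interface.

def hdm(x):
    """Stand-in 'high-dimensional model': an expensive objective (toy)."""
    return (x - 2.0) ** 2 + 0.1 * x ** 4

def build_surrogate(samples):
    """Fit a quadratic through three (x, f) samples (stand-in 'ROM')."""
    (x0, f0), (x1, f1), (x2, f2) = samples
    def q(x):
        return (f0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + f1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + f2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return q

def minimize_in_region(q, center, radius, n=201):
    """Crude grid minimizer of the surrogate on the trust region."""
    grid = [center - radius + 2.0 * radius * i / (n - 1) for i in range(n)]
    return min(grid, key=q)

def trust_region_surrogate_opt(x, radius=1.0, tol=1e-6, max_iter=50):
    hdm_calls = 1
    fx = hdm(x)
    for _ in range(max_iter):
        h = 0.5 * radius
        samples = [(x - h, hdm(x - h)), (x, fx), (x + h, hdm(x + h))]
        hdm_calls += 2
        q = build_surrogate(samples)
        cand = minimize_in_region(q, x, radius)
        f_cand = hdm(cand)
        hdm_calls += 1
        indicator = abs(q(cand) - f_cand)  # surrogate-error indicator
        if f_cand < fx:                    # accept the step
            x, fx = cand, f_cand
            if indicator < 1e-3:           # surrogate trustworthy: expand
                radius *= 2.0
        else:                              # reject: shrink the region
            radius *= 0.5
        if radius < tol:
            break
    return x, fx, hdm_calls
```

The surrogate absorbs the trial evaluations inside each trust region, and the residual-style indicator plays the role the abstract assigns to the error indicator: it keeps the iterates where the surrogate is accurate.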
Gradient type optimization methods for electronic structure calculations
Density functional theory (DFT) in electronic structure calculations can be formulated either as a nonlinear eigenvalue problem or as a direct minimization problem. The most widely used approach for solving the former is the so-called self-consistent field (SCF) iteration. A common observation is that the convergence of SCF is not well understood theoretically, while approaches with convergence guarantees for solving the latter are often not numerically competitive with SCF.
In this paper, we study gradient type methods for solving the direct
minimization problem by constructing new iterations along the gradient on the
Stiefel manifold. Global convergence (i.e., convergence to a stationary point from any initial solution) as well as the local convergence rate follows directly from the standard theory of optimization on manifolds. A major computational
advantage is that the computation of linear eigenvalue problems is no longer
needed. The main costs of our approaches arise from the assembling of the total
energy functional and its gradient and the projection onto the manifold. These
tasks are cheaper than eigenvalue computation and they are often more suitable
for parallelization as long as the evaluation of the total energy functional
and its gradient is efficient. Numerical results show that they can consistently outperform SCF on many practically relevant large systems.
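A gradient iteration of this kind can be sketched as follows. The diagonal test matrix, step size, and QR-based retraction are illustrative assumptions for a minimal trace-minimization instance, not the paper's exact scheme.

```python
# Sketch of gradient descent on the Stiefel manifold {X : X^T X = I} for
# minimizing trace(X^T A X); the test matrix, step size, and QR-based
# retraction are illustrative assumptions, not the paper's exact scheme.
import numpy as np

def stiefel_gradient_descent(A, p, step=0.05, iters=2000, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # feasible start
    for _ in range(iters):
        G = 2.0 * A @ X                                # Euclidean gradient
        XtG = X.T @ G
        grad = G - X @ (XtG + XtG.T) / 2.0             # tangent-space projection
        X, _ = np.linalg.qr(X - step * grad)           # step + retraction
    return X

# At the minimizer, X spans the eigenspace of the p smallest eigenvalues,
# so the energy approaches 1 + 2 = 3 for this diagonal test matrix.
A = np.diag([1.0, 2.0, 5.0])
X = stiefel_gradient_descent(A, p=2)
energy = float(np.trace(X.T @ A @ X))
```

Note that no linear eigenvalue problem is solved anywhere: each iteration needs only the gradient of the energy and a QR factorization to project back onto the manifold, which is the computational advantage the abstract highlights.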
On an adaptive regularization for ill-posed nonlinear systems and its trust-region implementation
In this paper we address the stable numerical solution of nonlinear ill-posed
systems by a trust-region method. We show that an appropriate choice of the
trust-region radius gives rise to a procedure that has the potential to
approach a solution of the unperturbed system. This regularizing property is
shown theoretically and validated numerically.
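The mechanics of such a trust-region iteration for a nonlinear system can be sketched as a damped Gauss-Newton (Levenberg-Marquardt-style) step whose length is capped by the radius. The toy system, damping schedule, and radius update below are illustrative assumptions, not the paper's actual method or its regularizing radius choice.

```python
# Sketch of a trust-region (Levenberg-Marquardt-style) iteration for a
# nonlinear system F(x) = 0. The toy system, damping schedule, and radius
# update are illustrative assumptions, not the paper's exact method.
import numpy as np

def trust_region_solve(F, J, x0, delta0=1.0, iters=30):
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        lam = 1e-8
        # Increase damping until the damped Gauss-Newton step
        # (J^T J + lam I) s = -J^T F respects the trust-region radius.
        while True:
            s = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ Fx)
            if np.linalg.norm(s) <= delta:
                break
            lam *= 10.0
        if np.linalg.norm(F(x + s)) < np.linalg.norm(Fx):
            x = x + s
            delta *= 2.0       # successful step: relax the radius
        else:
            delta *= 0.25      # failed step: tighten the radius
    return x

# Toy nonlinear system with solution x = (1, 1).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, np.exp(x[0] - 1.0) - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                        [np.exp(x[0] - 1.0), -1.0]])
x = trust_region_solve(F, J, [1.5, 0.3])
```

The damping parameter and the radius play equivalent roles here: bounding the step is what stabilizes the iteration, which is the mechanism behind the regularizing property the abstract establishes for noisy, ill-posed data.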
Data Driven Surrogate Based Optimization in the Problem Solving Environment WBCSim
Large-scale, multidisciplinary engineering designs are challenging because of the complexity and dimensionality of the underlying problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming due to the complexity of the underlying simulation codes. One way to tackle this problem is to construct computationally cheaper approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data-driven, surrogate-based optimization algorithm that uses a trust-region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, that provide a collection of approximation algorithms to build the surrogates, and three different DOE techniques (full factorial (FF), Latin hypercube sampling (LHS), and central composite design (CCD)) are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using an SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating it grows rapidly. A data-driven approach is proposed to tackle this situation, in which the expensive simulation is run if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
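The "run the expensive simulation only if no nearby point exists" idea can be sketched as a thin caching layer around the simulator. The distance threshold, the storage scheme, and the toy response surface below are illustrative assumptions, not the WBCSim implementation.

```python
# Sketch of the nearby-point database reuse idea from the abstract. The
# distance threshold, storage scheme, and toy response surface are
# illustrative assumptions, not the actual WBCSim machinery.
import math

def make_cached_simulator(simulate, tol=1e-2):
    database = []                 # cumulatively growing (point, result) pairs
    calls = {"expensive": 0}

    def cached(x):
        x = tuple(x)
        # Reuse a stored result if some evaluated point is close enough.
        for point, result in database:
            if math.dist(x, point) <= tol:
                return result
        result = simulate(x)      # no nearby point: pay for a real run
        calls["expensive"] += 1
        database.append((x, result))
        return result

    return cached, calls

# Toy "expensive" simulation: a smooth response surface.
sim = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
cached_sim, calls = make_cached_simulator(sim, tol=0.05)
y1 = cached_sim((0.0, 0.0))      # miss: runs the simulation
y2 = cached_sim((0.01, 0.01))    # hit: reuses the stored value
y3 = cached_sim((3.0, 3.0))      # miss: a genuinely new design point
```

A cache hit returns the neighbor's stored value rather than a fresh evaluation, trading a small, tolerance-controlled accuracy loss for a large reduction in simulation calls; as the database matures across optimization runs, the hit rate, and hence the savings, grows.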