
    Simultaneous single-step one-shot optimization with unsteady PDEs

    The single-step one-shot method has proven to be very efficient for PDE-constrained optimization where the partial differential equation (PDE) is solved by an iterative fixed-point solver. In this approach, the simulation and optimization tasks are performed simultaneously in a single iteration. If the PDE is unsteady, finding an appropriate fixed-point iteration is non-trivial. In this paper, we provide a framework that makes the single-step one-shot method applicable to unsteady PDEs that are solved by classical time-marching schemes. The one-shot method is applied to an optimal control problem with the unsteady incompressible Navier-Stokes equations, solved by an industry-standard simulation code. With the Van der Pol oscillator as a generic model problem, the modified simulation scheme is further improved using adaptive time scales. Finally, numerical results for the advection-diffusion equation are presented. Keywords: Simultaneous optimization; One-shot method; PDE-constrained optimization; Unsteady PDE; Adaptive time scales
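    The coupling described here (one simulation step, one adjoint step, one design step per sweep) can be sketched generically. Below is a minimal NumPy sketch for an abstract fixed-point equation u = G(u, c) with a tracking objective; the map G, its Jacobians, and the linear test instance are illustrative stand-ins, not the paper's Navier-Stokes or advection-diffusion setup.

```python
import numpy as np

# One-shot iteration sketch: minimize J(u, c) = 0.5 * ||u - u_t||^2 subject to
# the fixed-point equation u = G(u, c). All problem data below are illustrative.

def one_shot(u, lam, c, G, dGdu, dGdc, dJdu, dJdc, step=1e-2, iters=500):
    """Interleave one simulation, one adjoint, and one design update per pass."""
    for _ in range(iters):
        u = G(u, c)                             # one fixed-point (simulation) step
        lam = dGdu(u, c).T @ lam + dJdu(u, c)   # one adjoint step: lam = G_u^T lam + J_u
        c = c - step * (dJdc(u, c) + dGdc(u, c).T @ lam)  # approximate reduced-gradient step
    return u, lam, c

# Tiny linear instance: G(u, c) = A @ u + B @ c with a contractive A.
rng = np.random.default_rng(0)
A, B = 0.5 * np.eye(3), np.eye(3)
u_t = rng.normal(size=3)
u, lam, c = one_shot(
    np.zeros(3), np.zeros(3), np.zeros(3),
    G=lambda u, c: A @ u + B @ c,
    dGdu=lambda u, c: A, dGdc=lambda u, c: B,
    dJdu=lambda u, c: u - u_t, dJdc=lambda u, c: np.zeros(3),
)
print(np.linalg.norm(u - u_t))  # approaches 0 as the coupled iteration settles
```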

    Towards Reduced-order Model Accelerated Optimization for Aerodynamic Design

    The adoption of mathematically formal simulation-based optimization approaches within aerodynamic design depends upon a delicate balance of affordability and accessibility. Techniques are needed to accelerate the simulation-based optimization process, but they must remain approachable enough that the implementation time does not eliminate the cost savings or act as a barrier to adoption. This dissertation introduces a reduced-order model technique for accelerating fixed-point iterative solvers (e.g., those employed to solve primal equations, sensitivity equations, design equations, and their combination). The reduced-order-model-based acceleration technique collects snapshots of early-iteration (pre-convergent) solutions and residuals and then uses them to project to significantly more accurate solutions, i.e., solutions with smaller residuals. The technique can be combined with other convergence schemes such as multigrid and adaptive timestepping. The technique is generalizable and is demonstrated in this work to accelerate steady and unsteady flow solutions; continuous and discrete adjoint sensitivity solutions; and one-shot design optimization solutions. This final application, the reduced-order-model-accelerated one-shot optimization approach, in particular represents a step towards more efficient aerodynamic design optimization. Through this series of applications, different basis vectors were considered and best practices for snapshot-collection procedures were outlined. The major outcome of this dissertation is the development and demonstration of this reduced-order-model acceleration technique. This work includes the first application of the reduced-order-model-based acceleration method to an explicit one-shot iterative optimization process.
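    The snapshot-and-project idea can be illustrated on a generic fixed-point iteration u <- G(u). The sketch below collects a few pre-convergent iterates and residuals, then returns the affine combination of the snapshots that minimizes the residual norm in a least-squares sense (in the spirit of Anderson/DIIS mixing); it is a stand-in for the general idea, not the dissertation's specific reduced-order model.

```python
import numpy as np

# Snapshot-based acceleration sketch for a fixed-point iteration u <- G(u):
# collect early iterates and residuals, then project to a smaller-residual
# affine combination. Illustrative only; not the dissertation's exact method.

def snapshot_accelerate(G, u0, n_snap=8):
    snaps, resids = [], []
    u = u0
    for _ in range(n_snap):
        r = G(u) - u                        # residual of the equation u = G(u)
        snaps.append(u); resids.append(r)
        u = u + r                           # plain (unaccelerated) update
    U, R = np.column_stack(snaps), np.column_stack(resids)
    # minimize ||R @ a|| subject to sum(a) = 1, via a = e_k + sum_i g_i (e_i - e_k)
    D = R[:, :-1] - R[:, -1:]
    g, *_ = np.linalg.lstsq(D, -R[:, -1], rcond=None)
    a = np.append(g, 1.0 - g.sum())
    # for an affine G this equals G(U @ a), i.e. one extra step at no cost
    return U @ a + R @ a

# Toy contraction: G(u) = 0.9 * Q @ u + f with Q orthogonal.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
f = rng.normal(size=50)
G = lambda u: 0.9 * (Q @ u) + f
u_star = np.linalg.solve(np.eye(50) - 0.9 * Q, f)
print(np.linalg.norm(snapshot_accelerate(G, np.zeros(50)) - u_star))
```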

    PDE-constrained Models with Neural Network Terms: Optimization and Global Convergence

    Recent research has used deep learning to develop partial differential equation (PDE) models in science and engineering. The functional form of the PDE is determined by a neural network, and the neural network parameters are calibrated to available data. Calibration of the embedded neural network can be performed by optimizing over the PDE. Motivated by these applications, we rigorously study the optimization of a class of linear elliptic PDEs with neural network terms. The neural network parameters in the PDE are optimized using gradient descent, where the gradient is evaluated using an adjoint PDE. As the number of parameters becomes large, the PDE and adjoint PDE converge to a non-local PDE system. Using this limit PDE system, we are able to prove convergence of the neural network-PDE model to a global minimum during the optimization. The limit PDE system contains a non-local linear operator whose eigenvalues are positive but become arbitrarily small. The lack of a spectral gap for the eigenvalues poses the main challenge for the global convergence proof; careful analysis of the spectral decomposition of the coupled PDE and adjoint PDE system is required. Finally, we use this adjoint method to train a neural network model for an application in fluid mechanics, in which the neural network functions as a closure model for the Reynolds-averaged Navier-Stokes (RANS) equations. The RANS neural network model is trained on several datasets for turbulent channel flow and is evaluated out-of-sample at different Reynolds numbers.
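    The loop described here (solve the PDE, solve the adjoint PDE, step the parameters by gradient descent) can be sketched on a small discretized problem. In the sketch below the neural network source term is replaced by a linear-in-parameters feature model so the adjoint mechanics stay visible in a few lines; the 1D Poisson problem, tanh features, and target state are all illustrative assumptions.

```python
import numpy as np

# Adjoint-gradient sketch for a discretized elliptic problem
#   -u'' = f_theta(x) on (0, 1), u(0) = u(1) = 0,
# with f_theta a parametric source standing in for the paper's neural network
# (linear in theta over fixed tanh features, to keep the sketch short).

n = 99
x = np.linspace(0.0, 1.0, n + 2)[1:-1]          # interior grid points
h = x[1] - x[0]
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -u'' stencil

centers = np.linspace(0.0, 1.0, 10)
Phi = np.tanh(5.0 * (x[:, None] - centers[None, :]))   # features: f = Phi @ theta
u_data = np.sin(np.pi * x)                             # synthetic target state

theta = np.zeros(centers.size)
for _ in range(300):
    u = np.linalg.solve(A, Phi @ theta)       # forward (state) PDE
    lam = np.linalg.solve(A.T, u - u_data)    # adjoint PDE for J = 0.5*||u - u_data||^2
    theta -= 0.05 * (Phi.T @ lam)             # dJ/dtheta = (df/dtheta)^T lam
print(0.5 * np.sum((u - u_data) ** 2))        # misfit decreases over the iterations
```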

    Stochastic optimization methods for the simultaneous control of parameter-dependent systems

    We address the application of stochastic optimization methods to the simultaneous control of parameter-dependent systems. In particular, we focus on the classical Stochastic Gradient Descent (SGD) approach of Robbins and Monro, and on the recently developed Continuous Stochastic Gradient (CSG) algorithm. We consider the problem of computing simultaneous controls through the minimization of a cost functional defined as the superposition of individual costs for each realization of the system. We compare the performance of these stochastic approaches, in terms of their computational complexity, with that of the more classical Gradient Descent (GD) and Conjugate Gradient (CG) algorithms, and we discuss the advantages and disadvantages of each methodology. In agreement with well-established results in the machine learning context, we show how the SGD and CSG algorithms can significantly reduce the computational burden when treating control problems that depend on a large number of parameters. This is corroborated by numerical experiments.
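    A minimal version of the comparison: K realizations of a linear system share a single control, the cost is the average misfit over realizations, and SGD with Robbins-Monro step sizes estimates the gradient from one sampled realization per step, where GD would require all K solves. The systems, control operator, and target below are synthetic assumptions.

```python
import numpy as np

# SGD sketch for simultaneous control: one control c for K realizations
#   A_k u_k = B c,   cost F(c) = (1/K) * sum_k 0.5 * ||u_k - target||^2.
# Each SGD step solves a single sampled realization (plus its adjoint),
# instead of all K systems as Gradient Descent would.

rng = np.random.default_rng(0)
K, n = 200, 20
A = [np.eye(n) + 0.05 * rng.normal(size=(n, n)) for _ in range(K)]
B = rng.normal(size=(n, n)) / n
target = rng.normal(size=n)

def grad_k(c, k):
    u = np.linalg.solve(A[k], B @ c)            # state of realization k
    lam = np.linalg.solve(A[k].T, u - target)   # its adjoint
    return B.T @ lam                            # gradient of the k-th cost

c = np.zeros(n)
for it in range(1, 2001):
    k = rng.integers(K)                         # sample one realization
    c -= grad_k(c, k) / it**0.7                 # Robbins-Monro step size

costs = [0.5 * np.sum((np.linalg.solve(A[k], B @ c) - target) ** 2) for k in range(K)]
print(np.mean(costs))                           # average misfit after SGD
```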

    Layer-Parallel Training with GPU Concurrency of Deep Residual Neural Networks via Nonlinear Multigrid

    A Multigrid Full Approximation Storage algorithm for solving deep residual networks is developed to enable layer-parallel training of neural networks and concurrent computational kernel execution on GPUs. This work demonstrates a 10.2x speedup over traditional layer-wise model parallelism techniques using the same number of compute units. Comment: 7 pages, 6 figures, 27 citations. Accepted to the 2020 IEEE High Performance Extreme Computing Conference - Outstanding Paper Award.
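    The premise behind coarsening layers is the identification of a ResNet forward pass with forward-Euler time stepping on an ODE, so depth can be coarsened like a time grid. The sketch below shows only that identification (a fine propagation versus a coarse level that takes every other layer with a doubled step), not the Full Approximation Storage cycle itself; the architecture and smoothly varying weights are assumptions made for illustration.

```python
import numpy as np

# ResNet layers as forward-Euler steps on u' = tanh(W(t) @ u): the premise
# that lets multigrid-in-layers work. Coarse level = every other layer with
# a doubled step. (The FAS cycle itself is not shown here.)

def resnet_forward(u, weights, h):
    """u_{l+1} = u_l + h * tanh(W_l @ u_l), i.e. one Euler step per layer."""
    for W in weights:
        u = u + h * np.tanh(W @ u)
    return u

rng = np.random.default_rng(0)
L, n = 32, 8
Wa, Wb = 0.1 * rng.normal(size=(2, n, n))
# weights vary smoothly with depth: the ODE regime where coarsening is accurate
weights = [Wa + (l / (L - 1)) * (Wb - Wa) for l in range(L)]
u0 = rng.normal(size=n)

u_fine = resnet_forward(u0, weights, h=1.0 / L)
u_coarse = resnet_forward(u0, weights[::2], h=2.0 / L)  # half the layers
print(np.linalg.norm(u_fine - u_coarse))  # small: the coarse level tracks the fine one
```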

    Online adjoint methods for optimization of PDEs

    We present and mathematically analyze an online adjoint algorithm for the optimization of partial differential equations (PDEs). Traditional adjoint algorithms would typically solve a new adjoint PDE at each optimization iteration, which can be computationally costly. In contrast, an online adjoint algorithm updates the design variables in continuous time and thus constantly makes progress towards minimizing the objective function. The online adjoint algorithm we consider is similar in spirit to the pseudo-time-stepping, one-shot method which has been previously proposed. Motivated by the application of such methods to engineering problems, we mathematically study the convergence of the online adjoint algorithm. The online adjoint algorithm relies upon a time-relaxed adjoint PDE which provides an estimate of the direction of steepest descent. The algorithm updates this estimate continuously in time, and it asymptotically converges to the exact direction of steepest descent as t → ∞. We rigorously prove that the online adjoint algorithm converges to a critical point of the objective function for optimizing the PDE. Under appropriate technical conditions, we also prove a convergence rate for the algorithm. A crucial step in the convergence proof is a multi-scale analysis of the coupled system for the forward PDE, adjoint PDE, and the gradient descent ODE for the design variables.
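    The continuous-time coupling can be sketched with explicit Euler on a toy steady problem: the state and adjoint relax in pseudo-time while the design follows a slower gradient flow, so no adjoint solve is ever run to completion. The linear model, objective, and time scales below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Online-adjoint / pseudo-time-stepping sketch: evolve state, adjoint, and
# design together. Toy steady problem A u = B c with J = 0.5 * ||u - u_t||^2.

rng = np.random.default_rng(0)
n = 10
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) / n
u_t = rng.normal(size=n)

u, lam, c = np.zeros(n), np.zeros(n), np.zeros(n)
dt, eps = 0.1, 0.05                 # eps < 1: the design moves on the slower scale
for _ in range(5000):
    u += dt * (B @ c - A @ u)               # time-relaxed forward equation
    lam += dt * (u - u_t - A.T @ lam)       # time-relaxed adjoint equation
    c -= eps * dt * (B.T @ lam)             # slow gradient flow for the design
print(np.linalg.norm(B.T @ lam))   # stationarity measure: decays as the flow settles
```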