    Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks

    We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO), for robust and efficient numerical approximation of PDE-constrained optimization problems. The algorithm is based on deep neural networks, and its key feature is the iterative selection of training data through a feedback loop between the deep neural networks and any underlying standard optimization algorithm. Under suitable hypotheses, we show that the resulting optimizers converge exponentially fast (and with exponentially decaying variance) with respect to the number of training samples. Numerical examples for optimal control, parameter identification, and shape optimization problems for PDEs are provided to validate the proposed theory and to illustrate that ISMO significantly outperforms a standard deep neural network based surrogate optimization algorithm.
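
    The following sketch illustrates the kind of feedback loop the abstract describes: train a surrogate network on an initial sample set, run a standard optimizer on the cheap surrogate, re-evaluate the candidate optimizers with the expensive PDE model, and add them to the training set. All names (`pde_objective`, a surrogate with `fit`/`predict_one`, the use of SciPy's L-BFGS-B) are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of an ISMO-style active learning loop; assumes a generic
# `pde_objective` (expensive PDE solve + cost functional) and a regression
# surrogate exposing fit(X, y) and predict_one(x).
import numpy as np
from scipy.optimize import minimize

def ismo(pde_objective, surrogate, bounds, n_init=32, n_add=8, n_iters=5, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    # Initial training set from the expensive model.
    X = rng.uniform(lo, hi, size=(n_init, dim))
    y = np.array([pde_objective(x) for x in X])
    for _ in range(n_iters):
        surrogate.fit(X, y)                              # (re)train the DNN surrogate
        candidates = []
        for x0 in rng.uniform(lo, hi, size=(n_add, dim)):
            res = minimize(surrogate.predict_one, x0,    # optimize the cheap surrogate
                           method="L-BFGS-B", bounds=list(zip(lo, hi)))
            candidates.append(res.x)
        # Feedback: evaluate candidates with the true PDE model and augment the data.
        X = np.vstack([X, candidates])
        y = np.concatenate([y, [pde_objective(x) for x in candidates]])
    return X[np.argmin(y)], y.min()
```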

    The ADMM-PINNs Algorithmic Framework for Nonsmooth PDE-Constrained Optimization: A Deep Learning Approach

    We study the combination of the alternating direction method of multipliers (ADMM) with physics-informed neural networks (PINNs) for a general class of nonsmooth partial differential equation (PDE)-constrained optimization problems, where additional regularization can be employed for constraints on the control or design variables. The resulting ADMM-PINNs algorithmic framework substantially enlarges the range of PDE-constrained optimization problems to which PINNs can be applied, extending it to nonsmooth cases. The ADMM makes it possible to decouple the PDE constraints from the nonsmooth regularization terms across iterations. Accordingly, at each iteration, one of the resulting subproblems is a smooth PDE-constrained optimization problem that can be solved efficiently by PINNs, and the other is a simple nonsmooth optimization problem that usually has a closed-form solution or can be solved efficiently by various standard optimization algorithms or pre-trained neural networks. The ADMM-PINNs algorithmic framework does not require solving PDEs repeatedly, and it is mesh-free, easy to implement, and scalable to different PDE settings. We validate the efficiency of the ADMM-PINNs framework on several prototype applications, including inverse potential problems, source identification in elliptic equations, control-constrained optimal control of the Burgers equation, and sparse optimal control of parabolic equations.
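
    As a point of reference for the splitting described above, here is a schematic ADMM loop with an L1 regularizer on the control: the smooth PDE-constrained subproblem is abstracted as a callable `pinn_solve_smooth` (where a PINN would be trained), and the nonsmooth subproblem reduces to soft-thresholding. The function names and the L1 choice are illustrative assumptions, not the authors' API.

```python
# Hedged sketch of an ADMM splitting in the spirit of ADMM-PINNs.
import numpy as np

def soft_threshold(v, tau):
    # Closed-form proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_l1(pinn_solve_smooth, u0, lam=1e-2, rho=1.0, n_iters=20):
    u = np.asarray(u0, dtype=float)
    z = u.copy()
    w = np.zeros_like(u)                       # scaled dual variable
    for _ in range(n_iters):
        u = pinn_solve_smooth(z, w, rho)       # smooth PDE-constrained subproblem (PINN)
        z = soft_threshold(u + w, lam / rho)   # nonsmooth subproblem, closed form
        w = w + u - z                          # dual update
    return z
```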

    Shape optimization in aeronautical applications using neural networks

    An optimization methodology based on neural networks was developed for 2D optimal shape design problems. Neural networks were used as a parameterization scheme to represent the shape function, and an edge-based high-resolution scheme for the solution of the compressible Euler equations was used to model the flow around the shape. The global system incorporates the neural networks and the Euler fluid solver into the C++ Flood optimization framework, which contains a library of optimization algorithms. The optimization scheme was applied to a minimum-drag problem, in an unconstrained case and in a constrained case in hypersonic flow, using evolutionary training algorithms. The results indicate that the minimum-drag problem is solved to a high degree of accuracy but at high computational cost; for more complex shapes, parallel computing methods are required to reduce the computation time.
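
    A minimal sketch of the two ingredients described above, under illustrative assumptions: a small neural network parameterizes the shape function y(x), and a simple evolutionary strategy searches over its weights to minimize drag. The flow solver is abstracted as a callable `euler_drag`; the network size and the search scheme are placeholders, not the paper's Flood setup.

```python
import numpy as np

def mlp_shape(weights, x, hidden=8):
    # Unpack a flat genome into a one-hidden-layer MLP mapping x -> y(x).
    w1 = weights[:hidden].reshape(hidden, 1)
    b1 = weights[hidden:2 * hidden]
    w2 = weights[2 * hidden:3 * hidden].reshape(1, hidden)
    b2 = weights[3 * hidden]
    h = np.tanh(w1 @ np.atleast_2d(x) + b1[:, None])
    return (w2 @ h + b2).ravel()

def evolve_shape(euler_drag, n_params=25, pop=40, gens=100, sigma=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    best, best_drag = rng.normal(size=n_params), np.inf
    for _ in range(gens):
        # Perturb the current best genome and keep the lowest-drag candidate.
        population = best + sigma * rng.normal(size=(pop, n_params))
        drags = [euler_drag(lambda x, w=w: mlp_shape(w, x)) for w in population]
        i = int(np.argmin(drags))
        if drags[i] < best_drag:
            best, best_drag = population[i], drags[i]
    return best, best_drag
```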

    AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks

    In this paper, we develop a new algorithm, Annealed Skewed SGD (AskewSGD), for training deep neural networks (DNNs) with quantized weights. First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems. Then, we propose a new first-order stochastic method, AskewSGD, to solve each constrained optimization subproblem. Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections and optimization over the entire feasible set, and allows iterates that are infeasible. The numerical complexity of AskewSGD is comparable to that of existing approaches for training QNNs, such as the straight-through gradient estimator used in BinaryConnect, or other state-of-the-art methods (ProxQuant, LUQ). We establish convergence guarantees for AskewSGD under general assumptions on the objective function. Experimental results show that AskewSGD performs better than or on par with state-of-the-art methods on classical benchmarks.
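
    The sketch below is not the authors' AskewSGD update; it only illustrates, under stated assumptions, the annealed interval-constrained formulation the abstract refers to: each weight must end up in a shrinking neighbourhood of {-1, +1}, while a plain first-order method takes (possibly infeasible) gradient steps plus a term pulling iterates toward the current feasible set. `grad_loss` is an assumed stochastic-gradient oracle.

```python
import numpy as np

def pull_to_interval_set(w, delta):
    # Gradient of 0.5 * dist(w, S_delta)^2, where S_delta is the set of points
    # within delta of the nearer of -1 or +1.
    target = np.where(w >= 0.0, 1.0, -1.0)
    gap = np.clip(np.abs(w - target) - delta, 0.0, None)
    return np.sign(w - target) * gap

def annealed_quantization_sgd(grad_loss, w0, lr=1e-2, mu=1.0, steps=1000):
    w = np.asarray(w0, dtype=float)
    for t in range(steps):
        delta = 1.0 - t / steps                  # feasible interval shrinks toward {-1, +1}
        w = w - lr * (grad_loss(w) + mu * pull_to_interval_set(w, delta))
    return np.sign(w)                            # final hard quantization
```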

    Constrained methods for Neural Networks and Computer Graphics

    Computer graphics and neural networks are related in that both model natural phenomena. Physically-based models are used by computer graphics researchers to create realistic, natural animation, and neural models are used by neural network researchers to create new algorithms or new circuits. To exploit these graphical and neural models successfully, engineers want models that fulfill designer-specified goals. These goals are converted into mathematical constraints. This thesis presents constraint methods for computer graphics and neural networks. The mathematical constraint methods modify the differential equations that govern the neural or physically-based models, and they gradually enforce the constraints exactly. The thesis also describes applications of constrained models to real problems. The first half of the thesis discusses constrained neural networks. The desired models and goals are often converted into constrained optimization problems, which are solved using first-order differential equations. Several constraint methods are applicable to optimization using differential equations: the Penalty Method adds extra terms to the optimization function that penalize violations of the constraints; the Differential Multiplier Method adds subsidiary differential equations that estimate Lagrange multipliers so as to fulfill the constraints gradually and exactly; and Rate-Controlled Constraints compute extra terms for the differential equation that force the system to fulfill the constraints exponentially. Applications of constrained neural networks include the creation of constrained circuits, error-correcting codes, symmetric edge detection for computer vision, and heuristics for the traveling salesman problem. The second half of the thesis discusses constrained computer graphics models. In computer graphics, the desired models and goals become constrained mechanical systems, which are typically simulated with second-order differential equations. The Penalty Method adds springs to the mechanical system to penalize violations of the constraints, while Rate-Controlled Constraints add forces and impulses to the mechanical system to fulfill the constraints with critically damped motion. Constrained computer graphics models can be used to make deformable physically-based models follow the directives of an animator.
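
    The Differential Multiplier Method mentioned above can be stated very compactly: descend the Lagrangian in the variables while ascending in the multiplier, so the constraint is driven to exact satisfaction. A minimal worked example on a toy problem (minimize x0^2 + x1^2 subject to x0 + x1 = 1, chosen purely for illustration) is:

```python
import numpy as np

def grad_f(x):  return 2.0 * x                   # gradient of f(x) = x0^2 + x1^2
def g(x):       return x[0] + x[1] - 1.0         # constraint g(x) = 0
def grad_g(x):  return np.array([1.0, 1.0])

x, lam, dt = np.array([2.0, -1.0]), 0.0, 1e-2
for _ in range(20000):                           # forward-Euler integration of the ODEs
    x = x + dt * (-grad_f(x) - lam * grad_g(x))  # dx/dt   = -grad f - lambda * grad g
    lam = lam + dt * g(x)                        # dlam/dt = +g(x)

print(x, g(x))                                   # x -> [0.5, 0.5], g(x) -> 0
```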

    Quantum HyperNetworks: Training Binary Neural Networks in Quantum Superposition

    Binary neural networks, i.e., neural networks whose parameters and activations are constrained to only two possible values, offer a compelling avenue for the deployment of deep learning models on energy- and memory-limited devices. However, their training, architectural design, and hyperparameter tuning remain challenging, as these involve multiple computationally expensive combinatorial optimization problems. Here we introduce quantum hypernetworks as a mechanism to train binary neural networks on quantum computers, unifying the search over parameters, hyperparameters, and architectures in a single optimization loop. Through classical simulations, we demonstrate that our approach effectively finds optimal parameters, hyperparameters, and architectural choices with high probability on classification problems, including a two-dimensional Gaussian dataset and a scaled-down version of the MNIST handwritten digits. We represent our quantum hypernetworks as variational quantum circuits and find that an optimal circuit depth maximizes the probability of finding performant binary neural networks. Our unified approach provides an immense scope for other applications in the field of machine learning. Comment: 10 pages, 6 figures. Minimal implementation: https://github.com/carrasqu/binncod
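
    As a purely classical point of reference (an illustration, not the paper's method), the unified search the quantum hypernetwork explores in superposition is a combinatorial optimization over one bit string that jointly encodes binary weights and discrete hyperparameter or architecture choices. For a tiny model that search can be brute-forced:

```python
import itertools
import numpy as np

# Toy linearly separable data (illustrative assumption).
X = np.array([[0.5, 1.0], [1.0, 0.2], [-0.8, -1.0], [-1.0, 0.1]])
y = np.array([1, 1, -1, -1])

def loss(bits):
    *w_bits, act_bit = bits                         # last bit selects the activation
    w = np.where(np.array(w_bits) == 1, 1.0, -1.0)  # binary weights in {-1, +1}
    z = X @ w
    out = np.tanh(z) if act_bit else np.sign(z)     # hyperparameter choice
    return float(np.mean((out - y) ** 2))

# Exhaustive search over the unified bit string (2 weight bits + 1 hyperparameter bit).
best = min(itertools.product([0, 1], repeat=3), key=loss)
print(best, loss(best))
```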

    Neural Fields with Hard Constraints of Arbitrary Differential Order

    While deep learning techniques have become extremely popular for solving a broad range of optimization problems, methods to enforce hard constraints during optimization, particularly on deep neural networks, remain underdeveloped. Inspired by the rich literature on meshless interpolation and its extension to spectral collocation methods in scientific computing, we develop a series of approaches for enforcing hard constraints on neural fields, which we refer to as Constrained Neural Fields (CNF). The constraints can be specified as a linear operator applied to the neural field and its derivatives. We also design specific model representations and training strategies for problems where standard models may encounter difficulties, such as conditioning of the system, memory consumption, and capacity of the network when being constrained. Our approaches are demonstrated in a wide range of real-world applications. Additionally, we develop a framework that enables highly efficient model and constraint specification, which can be readily applied to any downstream task where hard constraints need to be explicitly satisfied during optimization. Comment: 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
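
    One generic way to impose hard constraints of the simplest kind (prescribed values of the field at given points), in the meshless-interpolation spirit the abstract cites, is to add a kernel correction on top of the unconstrained network and solve for its coefficients so the constraints hold exactly. This is a hedged illustration of the idea, not necessarily the paper's construction; the RBF kernel and all names are assumptions.

```python
import numpy as np

def rbf(xa, xb, eps=2.0):
    # Gaussian radial basis kernel between two point sets.
    d2 = np.sum((xa[:, None, :] - xb[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps * d2)

def constrain_field(g, x_con, c_con):
    """Return f(x) = g(x) + sum_j a_j k(x, x_j) with f(x_con) = c_con exactly."""
    K = rbf(x_con, x_con)
    a = np.linalg.solve(K + 1e-10 * np.eye(len(x_con)), c_con - g(x_con))
    return lambda x: g(x) + rbf(np.atleast_2d(x), x_con) @ a

# Usage: an arbitrary "network" g pinned to pass through two prescribed values.
g = lambda x: np.sin(3.0 * x[:, 0])
x_con = np.array([[0.0], [1.0]])
f = constrain_field(g, x_con, c_con=np.array([1.0, -1.0]))
print(f(x_con))   # [ 1., -1.] up to solver tolerance, whatever g predicts
```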

    Bi-level Physics-Informed Neural Networks for PDE Constrained Optimization using Broyden's Hypergradients

    Deep learning based approaches such as physics-informed neural networks (PINNs) and DeepONets have shown promise for solving PDE-constrained optimization (PDECO) problems. However, existing methods are insufficient to handle PDE constraints that have a complicated or nonlinear dependency on the optimization targets. In this paper, we present a novel bi-level optimization framework that addresses this challenge by decoupling the optimization of the targets from that of the constraints. In the inner loop, we adopt PINNs to solve the PDE constraints only. In the outer loop, we design a novel method that uses Broyden's method, based on the Implicit Function Theorem (IFT), to approximate hypergradients efficiently and accurately. We further present theoretical explanations and an error analysis of the hypergradient computation. Extensive experiments on multiple large-scale and nonlinear PDE-constrained optimization problems demonstrate that our method achieves state-of-the-art results compared with strong baselines.
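
    The bi-level structure and the role of Broyden's method can be illustrated on a toy problem in which the "PDE constraint" is a linear-least-squares inner problem with a closed-form solution. The hypergradient of the outer objective is assembled via the Implicit Function Theorem, and the linear solve it requires is handled by Broyden's method (scipy.optimize.broyden1) rather than an explicit Hessian inverse. All problem data below are illustrative assumptions, not the paper's benchmarks.

```python
import numpy as np
from scipy.optimize import broyden1

rng = np.random.default_rng(0)
m, n, lam = 4, 3, 1e-2
A = rng.normal(size=(m, n))
w_target = rng.normal(size=n)
theta = rng.normal(size=m)

def inner_solution(theta):
    # Stand-in for the inner PINN solve: argmin_w 0.5 * ||A w - theta||^2.
    return np.linalg.lstsq(A, theta, rcond=None)[0]

def hypergradient(theta):
    # Outer objective J = 0.5*||w*(theta) - w_target||^2 + 0.5*lam*||theta||^2.
    w_star = inner_solution(theta)
    H = A.T @ A                                  # d^2 L / dw^2 at the inner optimum
    gw = w_star - w_target                       # dJ/dw
    v = broyden1(lambda v: H @ v - gw, np.zeros(n), f_tol=1e-8)  # solve H v = gw
    # IFT: dw*/dtheta = H^{-1} A^T, so the chain-rule term (dw*/dtheta)^T gw = A v.
    return lam * theta + A @ v

print(hypergradient(theta))
```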