
    Modifications of the Limited Memory BFGS Algorithm for Large-scale Nonlinear Optimization

    In this paper we present two new numerical methods for unconstrained large-scale optimization. These methods apply update formulae that are derived by considering different techniques of approximating the objective function. Theoretical analysis is given to show the advantages of using these update formulae. It is observed that these update formulae can be employed within the framework of a limited memory strategy with only a modest increase in the linear algebra cost. Comparative results with the limited memory BFGS (L-BFGS) method are presented.
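
    For context, the following is a minimal NumPy sketch of the standard L-BFGS two-loop recursion that the limited memory framework above builds on; it is not the paper's modified update formulae, and the function name and interface are illustrative only.

        import numpy as np

        def lbfgs_direction(g, s_list, y_list):
            """Two-loop recursion: apply the implicit inverse-Hessian estimate to the gradient g.

            s_list and y_list hold the stored curvature pairs s_i = x_{i+1} - x_i and
            y_i = g_{i+1} - g_i, ordered oldest to newest; at least one pair is assumed.
            """
            rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
            q = g.copy()
            alphas = []
            # First loop: newest pair to oldest.
            for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
                alpha = rho * np.dot(s, q)
                alphas.append(alpha)
                q -= alpha * y
            # Scale by the usual initial Hessian estimate gamma = s^T y / y^T y (newest pair).
            s, y = s_list[-1], y_list[-1]
            r = (np.dot(s, y) / np.dot(y, y)) * q
            # Second loop: oldest pair to newest.
            for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
                beta = rho * np.dot(y, r)
                r += (alpha - beta) * s
            return -r  # quasi-Newton search direction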

    Second order adjoints for solving PDE-constrained optimization problems

    Inverse problems are of utmost importance in many fields of science and engineering. In the variational approach inverse problems are formulated as PDE-constrained optimization problems, where the optimal estimate of the uncertain parameters is the minimizer of a certain cost functional subject to the constraints posed by the model equations. The numerical solution of such optimization problems requires the computation of derivatives of the model output with respect to model parameters. The first order derivatives of a cost functional (defined on the model output) with respect to a large number of model parameters can be calculated efficiently through first order adjoint sensitivity analysis. Second order adjoint models give second derivative information in the form of matrix-vector products between the Hessian of the cost functional and user defined vectors. Traditionally, the construction of second order derivatives for large scale models has been considered too costly. Consequently, data assimilation applications employ optimization algorithms that use only first order derivative information, like nonlinear conjugate gradients and quasi-Newton methods. In this paper we discuss the mathematical foundations of second order adjoint sensitivity analysis and show that it provides an efficient approach to obtain Hessian-vector products. We study the benefits of using second order information in the numerical optimization process for data assimilation applications. The numerical studies are performed in a twin experiment setting with a two-dimensional shallow water model. Different scenarios are considered with different discretization approaches, observation sets, and noise levels. Optimization algorithms that employ second order derivatives are tested against widely used methods that require only first order derivatives. Conclusions are drawn regarding the potential benefits and the limitations of using high-order information in large scale data assimilation problems.
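
    As a toy illustration of how such Hessian-vector products enter the optimization (not the paper's shallow water setup), SciPy's Newton-CG method accepts a hessp callback; here the cost is a small synthetic least-squares functional whose Hessian-vector product can be written by hand, standing in for what a second order adjoint model would supply.

        import numpy as np
        from scipy.optimize import minimize

        A = np.array([[3.0, 1.0], [1.0, 2.0]])   # stand-in "model operator"
        d = np.array([1.0, -1.0])                # stand-in "observations"

        def cost(x):
            r = A @ x - d
            return 0.5 * r @ r                   # data-misfit cost functional

        def grad(x):
            return A.T @ (A @ x - d)             # what a first order adjoint would provide

        def hessp(x, v):
            return A.T @ (A @ v)                 # Hessian-vector product (exact here since A is linear)

        res = minimize(cost, np.zeros(2), jac=grad, hessp=hessp, method="Newton-CG")
        print(res.x)                             # recovered parameters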

    The LBFGS Quasi-Newtonian Method for Molecular Modeling Prion AGAAAAGA Amyloid Fibrils

    Experimental X-ray crystallography, NMR (Nuclear Magnetic Resonance) spectroscopy, dual polarization interferometry, etc. are indeed very powerful tools for determining the 3-dimensional structure of a protein (including the membrane protein); theoretical mathematical and physical computational approaches can also allow us to obtain a description of the protein 3D structure at a submicroscopic level for some unstable, noncrystalline and insoluble proteins. X-ray crystallography yields the final X-ray structure of a protein, which usually needs refinement using theoretical protocols in order to produce a better structure. This means theoretical methods are also important in the determination of protein structures. Optimization is always needed in computer-aided drug design, structure-based drug design, molecular dynamics, and quantum and molecular mechanics. This paper introduces some optimization algorithms used in these research fields and presents a new theoretical computational method - an improved LBFGS Quasi-Newtonian mathematical optimization method - to produce 3D structures of Prion AGAAAAGA amyloid fibrils (which are unstable, noncrystalline and insoluble), from the potential energy minimization point of view. Because the NMR or X-ray structure of the hydrophobic region AGAAAAGA of prion proteins has not yet been determined, the model constructed by this paper can be used as a reference for experimental studies on this region, and may be useful in furthering the goals of medicinal chemistry in this field.
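
    The energy minimization step can be pictured with a small stand-in problem: the sketch below minimizes a toy Lennard-Jones cluster energy with SciPy's off-the-shelf L-BFGS-B routine. Neither the paper's improved LBFGS method nor its amyloid-fibril energy function is reproduced; the potential, atom count, and starting coordinates are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def lj_energy(flat_coords, n_atoms=4):
            """Total Lennard-Jones energy of n_atoms particles in 3D (epsilon = sigma = 1)."""
            x = flat_coords.reshape(n_atoms, 3)
            energy = 0.0
            for i in range(n_atoms):
                for j in range(i + 1, n_atoms):
                    r = np.linalg.norm(x[i] - x[j])
                    energy += 4.0 * (r**-12 - r**-6)
            return energy

        rng = np.random.default_rng(0)
        x0 = 1.5 * rng.normal(size=12)                    # random start for 4 atoms
        res = minimize(lj_energy, x0, method="L-BFGS-B")  # gradient approximated by finite differences
        print(res.fun)                                    # minimized potential energy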

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as a solution of an optimization problem. In this light, the mere non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and the machine learning communities, we hope that this survey can serve as a bridge in bringing together these two communities and encourage cross-fertilization of ideas.
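
    To make the generic formulation concrete, the sketch below casts a small linear inverse problem as the minimization of a data-misfit plus Tikhonov regularization term and solves it with L-BFGS-B; the forward operator, noise level, and regularization weight are all illustrative assumptions, not taken from the survey.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        F = rng.normal(size=(20, 5))                  # forward operator
        m_true = rng.normal(size=5)                   # "true" parameters
        d = F @ m_true + 0.01 * rng.normal(size=20)   # noisy observations
        alpha = 1e-2                                  # Tikhonov regularization weight

        def objective(m):
            r = F @ m - d
            return 0.5 * r @ r + 0.5 * alpha * m @ m  # misfit + regularization

        def gradient(m):
            return F.T @ (F @ m - d) + alpha * m

        res = minimize(objective, np.zeros(5), jac=gradient, method="L-BFGS-B")
        print(np.linalg.norm(res.x - m_true))         # parameter recovery error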