35,400 research outputs found

    Development of the adjoint of GEOS-Chem

    We present the adjoint of the global chemical transport model GEOS-Chem, focusing on the chemical and thermodynamic relationships between sulfate–ammonium–nitrate aerosols and their gas-phase precursors. The adjoint model is constructed from a combination of manually and automatically derived discrete adjoint algorithms and numerical solutions to continuous adjoint equations. Explicit inclusion of the processes that govern secondary formation of inorganic aerosol is shown to afford efficient calculation of model sensitivities such as the dependence of sulfate and nitrate aerosol concentrations on emissions of SOx, NOx, and NH3. The adjoint model is extensively validated by comparing adjoint to finite-difference sensitivities, which are shown to agree within acceptable tolerances; most sets of comparisons have a nearly 1:1 correlation and R2 > 0.9. We explore the robustness of these results, noting how insufficient observations or nonlinearities in the advection routine can degrade the adjoint model performance. The potential for inverse modeling using the adjoint of GEOS-Chem is assessed in a data assimilation framework through a series of tests using simulated observations, demonstrating the feasibility of exploiting gas- and aerosol-phase measurements for optimizing emission inventories of aerosol precursors.
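    The validation strategy described in this abstract can be illustrated with a toy example. The sketch below (plain Python/NumPy, not GEOS-Chem) compares the gradient of a scalar cost obtained from a manually derived discrete adjoint against central finite differences; the model `forward`, the cost, and all values are hypothetical stand-ins.

    ```python
    import numpy as np

    def forward(x):
        # toy nonlinear "model", a stand-in for one chemistry/transport step
        return np.array([x[0] * x[1], np.sin(x[0]) + x[1] ** 2])

    def cost(x):
        # scalar cost whose sensitivities we want
        y = forward(x)
        return 0.5 * float(np.dot(y, y))

    def adjoint_gradient(x):
        # manually derived discrete adjoint: grad = J^T y, with J the Jacobian of `forward`
        y = forward(x)
        J = np.array([[x[1], x[0]],
                      [np.cos(x[0]), 2.0 * x[1]]])
        return J.T @ y

    x = np.array([0.7, 1.3])
    g_adj = adjoint_gradient(x)
    eps = 1e-6
    g_fd = np.array([(cost(x + eps * e) - cost(x - eps * e)) / (2.0 * eps)
                     for e in np.eye(2)])
    print(np.allclose(g_adj, g_fd, rtol=1e-5))  # adjoint and finite-difference sensitivities agree
    ```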

    Using Automatic Differentiation for Adjoint CFD Code Development

    This paper addresses the concerns of CFD code developers who are facing the task of creating a discrete adjoint CFD code for design optimisation. It discusses how the development of such a code can be greatly eased through the selective use of Automatic Differentiation, and how the software development can be subjected to a sequence of checks to ensure the correctness of the final software.
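    One standard correctness check in such a development sequence is the dot-product (duality) test between the tangent-linear and adjoint codes. The sketch below is a minimal, hypothetical illustration in Python; the `residual` and its hand-coded Jacobian stand in for an actual CFD residual and the derivative code an AD tool would generate.

    ```python
    import numpy as np

    def residual(q):
        # toy nonlinear "flow residual", not an actual CFD discretization
        return np.array([q[0] ** 2 - q[1], q[0] * q[1] + q[2], np.exp(q[2]) - 1.0])

    def jacobian(q):
        # hand-derived linearization of `residual`; in practice an AD tool
        # provides this in tangent (forward) and adjoint (reverse) form
        return np.array([[2.0 * q[0], -1.0, 0.0],
                         [q[1], q[0], 1.0],
                         [0.0, 0.0, np.exp(q[2])]])

    rng = np.random.default_rng(0)
    q, u, w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
    J = jacobian(q)
    tangent = J @ u    # tangent-linear (forward) product
    adjoint = J.T @ w  # adjoint (reverse) product
    # duality check: <J u, w> must equal <u, J^T w> to round-off
    print(np.isclose(tangent @ w, u @ adjoint))
    ```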

    Automatic Differentiation of Algorithms for Machine Learning

    Automatic differentiation---the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately---dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience, our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra. Comment: 7 pages, 1 figure.
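    As a concrete illustration of the two modes, the sketch below differentiates f(x1, x2) = x1*x2 + sin(x1) both ways in plain Python: forward mode propagates directional derivatives alongside values, while reverse mode (the generalization of backpropagation) replays the computation backwards to obtain the whole gradient in one sweep. The function and values are arbitrary examples, not taken from the paper.

    ```python
    import math

    def f_forward(x1, x2, dx1, dx2):
        # forward (tangent) mode: each intermediate carries its directional derivative
        v1, dv1 = x1 * x2, dx1 * x2 + x1 * dx2
        v2, dv2 = math.sin(x1), math.cos(x1) * dx1
        return v1 + v2, dv1 + dv2

    def f_reverse(x1, x2):
        # forward sweep: record the intermediates
        v1 = x1 * x2
        v2 = math.sin(x1)
        y = v1 + v2
        # reverse sweep: propagate the output adjoint back to the inputs
        y_bar = 1.0
        v1_bar, v2_bar = y_bar, y_bar
        x1_bar = v1_bar * x2 + v2_bar * math.cos(x1)
        x2_bar = v1_bar * x1
        return y, (x1_bar, x2_bar)

    x1, x2 = 0.5, 2.0
    _, dfdx1 = f_forward(x1, x2, 1.0, 0.0)  # one forward pass per input direction
    _, dfdx2 = f_forward(x1, x2, 0.0, 1.0)
    _, grads = f_reverse(x1, x2)            # one reverse pass yields the whole gradient
    print((dfdx1, dfdx2), grads)            # both give (x2 + cos(x1), x1)
    ```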

    Status of the differential transformation method

    Further to a recent controversy on whether the differential transformation method (DTM) for solving a differential equation is purely and solely the traditional Taylor series method, it is emphasized that the DTM is currently used, often exclusively, as a technique for (analytically) calculating the power series of the solution (in terms of the initial value parameters). Sometimes, a piecewise analytic continuation process is implemented either in a numerical routine (e.g., within a shooting method) or in a semi-analytical procedure (e.g., to solve a boundary value problem). It is also emphasized that, at the time of its invention, the currently used basic ingredients of the DTM (which transform a differential equation into a difference equation of the same order that is iteratively solvable) had already long been known to "traditional"-Taylor-method users (notably in the elaboration of software packages, i.e., numerical routines for automatically solving ordinary differential equations). Even now, the proponents of the DTM still ignore the much better developed studies of the "traditional"-Taylor-method users who, in turn, seem similarly unaware of the existence of the DTM. The DTM has been given an apparently strong formalization (set on the same footing as the Fourier, Laplace or Mellin transformations). Though often used trivially, it is easily attainable and easily adaptable to different kinds of differentiation procedures, which has made it very attractive. Hence, applications to various problems of the Taylor method, and more generally of the power series method (including noninteger powers), have been sketched. It seems that its potential has not been exploited as fully as it could be. After a discussion of the reasons for the "misunderstandings" which have caused the controversy, the preceding topics are concretely illustrated. Comment: To appear in Applied Mathematics and Computation; 29 pages; references and further considerations added.
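    The "basic ingredients" mentioned above amount to a recursion over Taylor coefficients. The sketch below applies it to the hypothetical example y' = y^2, y(0) = 1, whose exact expansion 1/(1 - t) = 1 + t + t^2 + ... makes the recursion easy to check; it is not taken from the paper.

    ```python
    def taylor_coeffs(n, a0=1.0):
        # y' = y^2 becomes (k + 1) * a_{k+1} = sum_{j=0}^{k} a_j * a_{k-j}
        a = [a0]
        for k in range(n):
            a.append(sum(a[j] * a[k - j] for j in range(k + 1)) / (k + 1))
        return a

    print(taylor_coeffs(6))  # all 1.0: the exact solution is 1/(1 - t) = 1 + t + t^2 + ...
    # Piecewise analytic continuation re-expands around a new point before t reaches
    # the radius of convergence (here t = 1), as in the shooting/BVP uses mentioned above.
    ```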

    Automating embedded analysis capabilities and managing software complexity in multiphysics simulation part I: template-based generic programming

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
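    The core idea is that the physics kernel is written once, generically over the scalar type, and then evaluated with overloaded types that carry extra quantities (derivatives, polynomial coefficients, etc.). The paper does this with C++ templates inside Trilinos; the sketch below only mimics the pattern in Python with duck typing and a hypothetical one-variable residual.

    ```python
    class Dual:
        """Overloaded scalar carrying a value and a derivative (forward mode)."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def _wrap(self, other):
            return other if isinstance(other, Dual) else Dual(other)
        def __add__(self, other):
            other = self._wrap(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__
        def __sub__(self, other):
            other = self._wrap(other)
            return Dual(self.val - other.val, self.der - other.der)
        def __mul__(self, other):
            other = self._wrap(other)
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def residual(u, source):
        # generic "physics" kernel: works unchanged for floats and for Dual
        return u * u + 3.0 * u - source

    print(residual(2.0, 1.0))          # plain evaluation: 9.0
    r = residual(Dual(2.0, 1.0), 5.0)  # seeded evaluation also yields d(residual)/du
    print(r.val, r.der)                # 5.0 and 2*u + 3 = 7.0
    ```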

    Automatic Differentiation Variational Inference

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to efficiently cycle through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist only provides a probabilistic model and a dataset, nothing else. ADVI automatically derives an efficient variational inference algorithm, freeing the scientist to refine and explore many models. ADVI supports a broad class of models; no conjugacy assumptions are required. We study ADVI across ten different models and apply it to a dataset with millions of observations. ADVI is integrated into Stan, a probabilistic programming system; it is available for immediate use.
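    The sketch below illustrates the reparameterization-gradient recipe that underlies this kind of black-box variational inference, on a toy conjugate model (mu ~ N(0,1), y_i ~ N(mu,1)) whose posterior is known in closed form. Unlike ADVI proper, the gradient of the log joint is written out by hand rather than obtained by automatic differentiation, and all data values are made up.

    ```python
    import math, random

    y = [1.2, 0.8, 1.5, 0.9, 1.1]  # made-up observations
    n = len(y)

    def dlogp_dmu(mu):
        # d/dmu [ log N(mu | 0, 1) + sum_i log N(y_i | mu, 1) ]  (hand-written here;
        # ADVI would obtain this by automatic differentiation of the model's log joint)
        return -mu + sum(yi - mu for yi in y)

    m, s = 0.0, 0.0        # variational mean and log-std of q(mu) = N(m, exp(s)^2)
    lr, draws = 0.05, 20
    random.seed(0)
    for _ in range(2000):
        gm, gs = 0.0, 0.0
        for _ in range(draws):
            eps = random.gauss(0.0, 1.0)
            mu = m + math.exp(s) * eps      # reparameterized sample
            g = dlogp_dmu(mu)
            gm += g
            gs += g * math.exp(s) * eps
        gm, gs = gm / draws, gs / draws + 1.0   # +1 is the Gaussian entropy gradient
        m, s = m + lr * gm, s + lr * gs         # stochastic gradient ascent on the ELBO

    # exact posterior: mean sum(y)/(n+1) ~= 0.917, std 1/sqrt(n+1) ~= 0.408
    print(m, math.exp(s))
    ```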