
    09061 Abstracts Collection -- Combinatorial Scientific Computing

    From 01.02.2009 to 06.02.2009, the Dagstuhl Seminar 09061 “Combinatorial Scientific Computing” was held in Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Short Tutorial: Getting Started With Ipopt in 90 Minutes

    Ipopt is an open-source software package for large-scale nonlinear optimization. This tutorial gives a short introduction that should allow the reader to install and test the package on a UNIX-like system and to run simple examples in a short period of time.
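
    As a flavour of what "running simple examples" can look like, here is a minimal sketch using the cyipopt Python bindings for Ipopt; this is an assumption on our part (the tutorial itself covers the C++/Fortran interfaces on a UNIX-like system), and the toy problem below is invented for illustration.

```python
# Hedged sketch, assuming the cyipopt Python bindings are installed
# (e.g. `pip install cyipopt`). Toy problem, invented for illustration:
#   minimize (x0 - 1)^2 + (x1 - 2)^2  subject to  x0 + x1 = 1
import numpy as np
import cyipopt

class Toy:
    def objective(self, x):
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    def gradient(self, x):
        return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

    def constraints(self, x):
        return np.array([x[0] + x[1]])

    def jacobian(self, x):
        return np.array([1.0, 1.0])  # d(x0 + x1)/dx, constant here

nlp = cyipopt.Problem(n=2, m=1, problem_obj=Toy(),
                      lb=[-10.0, -10.0], ub=[10.0, 10.0],
                      cl=[1.0], cu=[1.0])
# let Ipopt approximate the Hessian so no second derivatives are needed
nlp.add_option('hessian_approximation', 'limited-memory')
x_opt, info = nlp.solve(np.array([0.0, 0.0]))
print(x_opt)  # expected near [0.0, 1.0]
```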

    Automatic implementation of material laws: Jacobian calculation in a finite element code with TAPENADE

    In an effort to increase the versatility of finite element codes, we explore the possibility of automatically creating the Jacobian matrix necessary for the gradient-based solution of nonlinear systems of equations. In particular, we aim to assess the feasibility of employing the automatic differentiation tool TAPENADE for this purpose on a large Fortran codebase that is the result of many years of continuous development. As a starting point, we describe the special structure of finite element codes and the implications that this code design carries for an efficient calculation of the Jacobian matrix. We also propose a first approach towards improving the efficiency of such a method. Finally, we present a functioning method for the automatic implementation of the Jacobian calculation in finite element software, but also point out important shortcomings that will have to be addressed in the future. (Comment: 17 pages, 9 figures)
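
    To make the pattern concrete, here is an illustrative Python sketch (not TAPENADE output, which would be Fortran) of the tangent-mode idea the paper builds on: seed one local unknown at a time and read off one column of the element Jacobian per sweep. The `element_residual` below is a made-up two-node toy; a real code would loop this over all elements and assemble the results into the sparse global Jacobian.

```python
# Minimal forward-mode AD sketch; Dual and element_residual are invented
# stand-ins for the differentiated element routines a tool would generate.
import numpy as np

class Dual:
    """A number paired with its tangent (directional derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def element_residual(u):
    return [u[0] * u[0] * u[0] - u[1],   # r0 = u0^3 - u1
            u[1] * u[1] + u[0]]          # r1 = u1^2 + u0

def element_jacobian(u_e):
    n = len(u_e)
    J = np.empty((n, n))
    for j in range(n):  # one tangent sweep per local unknown
        duals = [Dual(u_e[i], 1.0 if i == j else 0.0) for i in range(n)]
        J[:, j] = [r.dot for r in element_residual(duals)]
    return J

print(element_jacobian([2.0, 3.0]))  # analytic: [[12, -1], [1, 6]]
```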

    Algorithmic differentiation applied to the optimal calibration of a shallow water model

    The sensitivity information provided by derivatives is indispensable in many fields of science. In numerical analysis, computing the value of the derivatives of a function accurately can be a challenge. The classical Finite Differences (FD) method is simple to implement for estimating the value of a derivative. However, it remains numerically sensitive and costly in computation time. Conversely, Algorithmic Differentiation (AD) is a powerful tool for calculating the derivatives of a function described by a computer program. Whatever the complexity of the algorithms implemented in the expression of a function, AD calculates its derivative accurately and with little development effort. This article presents the contribution of AD, in comparison to FD, on the problem of calibrating an industrial-class 1D shallow water model. Model calibration is performed by a deterministic mathematical optimiser requiring accurate calculation of the sensitivity of the water surface profile with respect to the friction on the river bed. Two comparative real test cases are presented. They validate the better performance expected from AD as a tool for obtaining an optimal calibration.
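
    The FD pitfall the abstract alludes to is easy to reproduce. The short sketch below is a generic illustration, not taken from the paper: the forward-difference estimate of d/dx sin(x) loses accuracy both for large steps (truncation error) and for tiny steps (floating-point round-off), while AD yields cos(1) to machine precision.

```python
# Forward differences vs. the exact derivative of sin at x = 1:
# accuracy first improves as h shrinks, then degrades from round-off.
import math

x, exact = 1.0, math.cos(1.0)
for h in (1e-2, 1e-5, 1e-8, 1e-12):
    fd = (math.sin(x + h) - math.sin(x)) / h
    print(f"h={h:.0e}  |fd - exact| = {abs(fd - exact):.2e}")
```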

    Sparse Automatic Differentiation for Large-Scale Computations Using Abstract Elementary Algebra

    Most numerical solvers and libraries nowadays are implemented to use mathematical models created with language-specific built-in data types (e.g. real in Fortran or double in C) and their respective elementary algebra implementations. However, built-in elementary algebra typically has limited functionality and often restricts the flexibility of mathematical models and the analysis types that can be applied to those models. To overcome this limitation, a number of domain-specific languages with more feature-rich built-in data types have been proposed. In this paper, we argue that if numerical libraries and solvers are designed to use abstract elementary algebra rather than language-specific built-in algebra, modern mainstream languages can be as effective as any domain-specific language. We illustrate our ideas using the example of sparse Jacobian matrix computation. We implement an automatic differentiation method that takes advantage of sparse system structures and is straightforward to parallelize in an MPI setting. Furthermore, we show that the computational cost scales linearly with the size of the system. (Comment: Submitted to ACM Transactions on Mathematical Software)
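
    A minimal sketch of the idea, assuming nothing of the paper's actual implementation: a value type carrying a sparse gradient (an index-to-partial map) whose overloaded arithmetic gives solver code sparse Jacobian rows as a by-product. The class name SparseAD is hypothetical.

```python
# Abstract elementary algebra for sparse AD: arithmetic on SparseAD values
# propagates only the nonzero partials actually present in each operand.
class SparseAD:
    def __init__(self, val, grad=None):
        self.val = val
        self.grad = grad or {}  # {independent-variable index: partial}

    def __add__(self, o):
        o = o if isinstance(o, SparseAD) else SparseAD(o)
        g = dict(self.grad)
        for i, d in o.grad.items():
            g[i] = g.get(i, 0.0) + d
        return SparseAD(self.val + o.val, g)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, SparseAD) else SparseAD(o)
        g = {i: d * o.val for i, d in self.grad.items()}
        for i, d in o.grad.items():
            g[i] = g.get(i, 0.0) + self.val * d
        return SparseAD(self.val * o.val, g)
    __rmul__ = __mul__

# only two of a large system's unknowns enter this residual, so only
# their indices ever appear in the gradient dictionary
x0 = SparseAD(2.0, {0: 1.0})
x2 = SparseAD(5.0, {2: 1.0})
r = x0 * x2 + x0
print(r.val, r.grad)  # 12.0 {0: 6.0, 2: 2.0}
```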

    Elimination Techniques for Algorithmic Differentiation Revisited

    All known elimination techniques for (first-order) algorithmic differentiation (AD) rely on Jacobians being given for a set of relevant elemental functions. Realistically, elemental tangents and adjoints are given instead. They can be obtained by applying software tools for AD to the parts of a given modular numerical simulation. The novel generalized face elimination rule proposed in this article facilitates the rigorous exploitation of the associativity of the chain rule of differentiation at arbitrary levels of granularity, ranging from elemental scalars (the state of the art) to multivariate vector functions with given elemental tangents and adjoints. The implied combinatorial Generalized Face Elimination problem asks for a face elimination sequence of minimal computational cost. Simple branch-and-bound and greedy heuristic methods are employed as a baseline for further research into more powerful algorithms, motivated by promising first test results. The latter can be reproduced with the help of an open-source reference implementation.
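
    The combinatorial flavour of the problem shows up already in the simplest setting, a plain chain of dense Jacobians: the chain rule is associative, so every bracketing yields the same matrix, but not at the same cost. The dimensions in the sketch below are invented, and face elimination generalizes this bracketing choice far beyond matrix chains.

```python
# J = J3 @ J2 @ J1 is the same matrix either way; the cost is not.
def cost(m, k, n):
    return m * k * n  # multiplications for a dense (m x k) @ (k x n)

n, p, q, m = 1000, 2, 1000, 3           # J1: p x n, J2: q x p, J3: m x q
left  = cost(m, q, p) + cost(m, p, n)   # (J3 @ J2) @ J1
right = cost(q, p, n) + cost(m, q, n)   # J3 @ (J2 @ J1)
print(left, right)  # 12000 vs 5000000: same result, very different cost
```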

    Some highlights on Source-to-Source Adjoint AD

    Algorithmic Differentiation (AD) provides the analytic derivatives of functions given as programs. Adjoint AD, which computes gradients, is similar to Back Propagation in Machine Learning. AD researchers study strategies to overcome the difficulties of adjoint AD and to get closer to its theoretical efficiency. To promote fruitful exchanges between Back Propagation and adjoint AD, we present three of these strategies and give our view of their interest and current status.
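
    For readers coming from Back Propagation, the following hand-written sketch shows the shape of what a source-to-source adjoint transformation produces for a two-statement program: a forward sweep that records intermediates, then a reverse sweep that propagates adjoints through the statements in reverse order. This is an illustration, not the output of any particular tool.

```python
# Hand-written adjoint of y = sin(x1 * x2), mimicking generated code.
import math

def f_adjoint(x1, x2, y_bar=1.0):
    # forward sweep (primal computation, keeping the intermediate t)
    t = x1 * x2
    y = math.sin(t)
    # reverse sweep: one adjoint statement per primal statement, reversed
    t_bar = math.cos(t) * y_bar
    x1_bar = x2 * t_bar
    x2_bar = x1 * t_bar
    return y, (x1_bar, x2_bar)

print(f_adjoint(2.0, 3.0))  # gradient of sin(x1*x2): (x2*cos(t), x1*cos(t))
```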