13 research outputs found
A Micro-Macro Parareal Implementation for the Ocean-Circulation Model FESOM2
A micro-macro variant of the parallel-in-time algorithm Parareal has been
applied to the ocean-circulation and sea-ice model FESOM2. This
state-of-the-art software in climate research has been developed by the
Alfred-Wegener-Institut (AWI) in Bremerhaven, Germany. The algorithm requires
two meshes of low and high spatial resolution to define the coarse and fine
propagators. As a first assessment, we refined the PI mesh, increasing its
resolution by a factor of 4. The main objective of this study was to demonstrate
that micro-macro Parareal can provide convergence in diagnostic variables in
complex climate research problems. After an introduction to FESOM2, we show
how to generate the refined mesh and which interpolation methods were chosen.
Based on the presented convergence results, we discuss the success of this
attempt and the steps that have to be taken to extend the approach to current
research problems.
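The correction at the heart of the method can be made concrete. Below is a minimal, hedged Python sketch of the micro-macro Parareal iteration; the propagator and mesh-transfer callables (fine, coarse, restrict, lift) are illustrative placeholders, not FESOM2 interfaces.

```python
import math

def micro_macro_parareal(u0, n_windows, fine, coarse, restrict, lift, n_iter):
    """Micro-macro Parareal sketch (illustrative names, not FESOM2 interfaces).

    fine(u):     expensive propagator over one time window (fine mesh)
    coarse(v):   cheap propagator over one time window (coarse mesh)
    restrict(u): interpolate a fine-mesh state onto the coarse mesh
    lift(v):     interpolate a coarse-mesh state back onto the fine mesh
    """
    # Initial guess: one serial coarse sweep, lifted to the fine mesh
    u = [u0]
    for _ in range(n_windows):
        u.append(lift(coarse(restrict(u[-1]))))
    for _ in range(n_iter):
        # The fine solves are mutually independent -> parallel in time
        f = [fine(u[n]) for n in range(n_windows)]
        g_old = [lift(coarse(restrict(u[n]))) for n in range(n_windows)]
        # Serial correction sweep using the already-updated states
        for n in range(n_windows):
            g_new = lift(coarse(restrict(u[n])))
            u[n + 1] = g_new + f[n] - g_old[n]
    return u

# Toy check on u' = -u over unit windows; identity transfers reduce the
# scheme to plain Parareal
ident = lambda x: x
u = micro_macro_parareal(1.0, 4, lambda x: x * math.exp(-1.0),
                         lambda x: x * 0.5, ident, ident, n_iter=3)
```

The fine propagations inside each iteration are independent across time windows, which is where the parallel-in-time speedup comes from.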
Assessing the benefits of approximately exact step sizes for Picard and Newton solvers in simulating ice flow (FEniCS-full-Stokes v.1.3.2)
Solving the momentum balance is the computationally expensive part of simulating the evolution of ice sheets. The momentum balance is described by the nonlinear full-Stokes equations, which are solved iteratively. We solve these equations using the Picard iteration and Newton's method, combining the latter with either Armijo step sizes or approximately exact step sizes. The Picard iteration uses either no step size control or the approximately exact step sizes. We compare the variants of Newton's method and the Picard iteration in the benchmark experiments ISMIP-HOM A, B, E1, and E2. The ISMIP-HOM experiments feature more realistic domains and are designed to test the quality of ice models. For an even more realistic test case, we simulate experiments E1 and E2 with a time-dependent surface. We find that approximately exact step sizes greatly reduce the necessary number of iterations for the Picard iteration and Newton's method, with nearly no increase in the computation time per iteration.
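As a rough illustration of the step-size idea (not the paper's FEniCS implementation, and showing the Armijo rule rather than the approximately exact one), here is a generic Newton iteration with backtracking on the residual norm:

```python
import numpy as np

def newton_armijo(F, J, u0, tol=1e-10, max_iter=50, alpha=1e-4, beta=0.5):
    """Newton's method with an Armijo backtracking line search."""
    u = u0.astype(float).copy()
    for _ in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        du = np.linalg.solve(J(u), -r)        # Newton direction
        t = 1.0
        # Shrink t until the sufficient-decrease condition holds
        while (np.linalg.norm(F(u + t * du))
               > (1.0 - alpha * t) * np.linalg.norm(r) and t > 1e-12):
            t *= beta
        u = u + t * du
    return u

# Small illustrative system: u0**2 + u1 = 3 and u0 = u1
F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] - u[1]])
J = lambda u: np.array([[2.0 * u[0], 1.0], [1.0, -1.0]])
print(newton_armijo(F, J, np.array([1.0, 1.0])))  # approx (1.3028, 1.3028)
```

An approximately exact step size would instead (approximately) minimize the residual along the Newton or Picard direction, which is what reduces the iteration counts reported above.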
Porting marine ecosystem model spin-up using transport matrices to GPUs
We have ported an implementation of the spin-up for marine ecosystem models based on transport matrices to graphics processing units (GPUs). The original implementation was designed for distributed-memory architectures and uses the Portable, Extensible Toolkit for Scientific Computation (PETSc) library, which is based on the Message Passing Interface (MPI) standard. The spin-up computes a steady seasonal cycle of ecosystem tracers with climatological ocean circulation data as forcing. Since the transport is linear with respect to the tracers, the resulting operator is represented by matrices. Each iteration of the spin-up involves two matrix-vector multiplications and the evaluation of the biogeochemical model. The original code was written in C and Fortran. On the GPU, we use the Compute Unified Device Architecture (CUDA) standard, a customized version of PETSc, and a commercial CUDA Fortran compiler. We describe the necessary extensions to PETSc and the modifications of the original C and Fortran codes, making use of freely available GPU libraries. We analyze the computational effort of the main parts of the spin-up for two exemplary ecosystem models and compare the overall computation time with that required on different CPUs. The results show that a consumer GPU can compete with a significant number of cluster CPUs without further code optimization.
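Schematically, each spin-up step combines an explicit transport matrix, the biogeochemical source-minus-sink term, and an implicit transport matrix. A hedged Python/SciPy sketch (the real code uses monthly, time-interpolated matrices and PETSc/CUDA data structures):

```python
import numpy as np
import scipy.sparse as sp

def spinup(A_exp, A_imp, y0, q, n_steps):
    """Transport-matrix time stepping: two sparse matrix-vector products
    per step around the biogeochemical term q. A single matrix pair is
    used here for brevity; in practice the matrices vary over the year."""
    y = y0
    for _ in range(n_steps):
        y = A_imp @ (A_exp @ y + q(y))
    return y

# Toy usage: trivial transport and a linear decay as "biogeochemistry"
n = 1_000
I = sp.identity(n, format="csr")
y = spinup(I, I, np.ones(n), lambda y: -0.01 * y, n_steps=100)
```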
Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes
The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, especially because large amounts of data are collected.
In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called locally optimal experimental designs, is described together with a lesser-known approach that takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem.
All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental designs were optimized using this toolbox. Two existing models of different complexity for sediment concentration in seawater and sediment accretion on salt marshes served as application examples. The advantages and disadvantages of the approaches were compared based on these models.
Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements than with unoptimized experimental designs. The chosen optimization approach played only a minor role in the achieved accuracy; therefore, the approach with the least computational effort is recommended.
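To make the two ingredients concrete, here is a hedged Python sketch (the toolbox itself is MATLAB, and all names are illustrative): the weighted least squares misfit, and a D-type design criterion built on the Fisher information matrix, one common choice for scoring candidate measurement subsets.

```python
import numpy as np

def wls_misfit(theta, model, t, y, w):
    """Weighted least squares misfit between data y and model output."""
    r = y - model(t, theta)
    return np.sum(w * r**2)

def fisher_information(model, theta, t, w, h=1e-6):
    """F = J^T W J, with sensitivities J from central finite differences."""
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):
        d = np.zeros(len(theta)); d[i] = h
        J[:, i] = (model(t, theta + d) - model(t, theta - d)) / (2 * h)
    return J.T @ (w[:, None] * J)

def d_criterion(model, theta, t_all, w_all, selected):
    """D-type quality of a candidate measurement subset: a larger det(F)
    means (asymptotically) smaller parameter confidence regions."""
    F = fisher_information(model, theta, t_all[selected], w_all[selected])
    return np.linalg.det(F)
```

A discrete design optimization then searches over the `selected` subsets, which is the combinatorial problem the two solution methods in the paper address.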
Generating efficient derivative code with TAF: Adjoint and tangent linear Euler flow around an airfoil
FastOpt's new automatic differentiation tool TAF is applied to the two-dimensional Navier-Stokes solver NSC2KE. For a configuration that simulates the Euler flow around a NACA airfoil, TAF has generated the tangent linear and adjoint models as well as the second derivative (Hessian) code. Owing to TAF's capability of generating efficient adjoints of iterative solvers, the derivative code has a high performance: running both the solver and its adjoint takes 3.4 times as long as running the solver alone. Further examples of highly efficient tangent linear, adjoint, and Hessian codes for large and complex three-dimensional Fortran 77-90 climate models are listed. These examples suggest that the performance of the NSC2KE adjoint may well generalise to more complex three-dimensional CFD codes. We also sketch how TAF can improve the adjoint's performance by exploiting self-adjointness, which is a common feature of CFD codes.
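For readers unfamiliar with the two derivative modes, here are hand-written Python stand-ins for what an AD tool like TAF generates from a model code: tangent-linear code pushes one direction through the Jacobian, adjoint code pulls one output weight back through its transpose. This is purely illustrative, not TAF output.

```python
import numpy as np

def f(x):
    # Toy model: y = sin(x0) * x1 + x1**2
    return np.sin(x[0]) * x[1] + x[1] ** 2

def f_tangent(x, dx):
    """Tangent-linear code: directional derivative (df/dx) . dx."""
    return (np.cos(x[0]) * x[1]) * dx[0] + (np.sin(x[0]) + 2 * x[1]) * dx[1]

def f_adjoint(x, ybar):
    """Adjoint code: xbar = (df/dx)^T * ybar; a scalar cost therefore
    needs only a single adjoint sweep for its full gradient."""
    return np.array([np.cos(x[0]) * x[1], np.sin(x[0]) + 2 * x[1]]) * ybar

x = np.array([0.3, 2.0])
print(f_tangent(x, np.array([1.0, 0.0])))  # one Jacobian column
print(f_adjoint(x, 1.0))                   # full gradient in one sweep
```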
Coupling distributed FORTRAN applications using C++ wrappers and the CORBA sequence type
A diffusion-based kernel density estimator (diffKDE, version 1) with optimal bandwidth approximation for the analysis of data in geoscience and ecological research
Probability density functions (PDFs) provide information about the probability of a random variable taking on a specific value. In geoscience, data distributions are often expressed by a parametric estimation of their PDF, such as a Gaussian distribution. There is growing attention towards the non-parametric estimation of PDFs, where no prior assumptions about the type of PDF are required. A common tool for such non-parametric estimation is the kernel density estimator (KDE). Existing KDEs are valuable but problematic because of the difficulty of objectively specifying optimal bandwidths for the individual kernels. In this study, we designed and developed a new implementation of a diffusion-based KDE as an open-source Python tool to make diffusion-based KDEs accessible for general use. Our new diffusion-based KDE provides (1) consistency at the boundaries, (2) better resolution of multimodal data, and (3) a family of KDEs with different smoothing intensities. We demonstrate our tool on artificial data with multiple and boundary-close modes and on real marine biogeochemical data, and compare our results against other popular KDE methods. We also provide an example of how our approach can be efficiently utilized for the derivation of plankton size spectra in ecological research. Our estimator is able to detect relevant multiple modes and resolves modes located close to a boundary of the observed data interval. Furthermore, our approach produces a smooth graph that is robust to noise and outliers. The convergence rate is comparable to that of the Gaussian estimator, but with a generally smaller error, most notably for small data sets with up to around 5000 data points. We discuss the general applicability and advantages of such KDEs for data–model comparison in geoscience.
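The core idea can be sketched in a few lines: evolve the empirical density under the heat equation, with the diffusion pseudo-time acting as the squared bandwidth and zero-flux boundaries providing the boundary consistency. The sketch below is a simplified illustration of this idea, not the diffKDE API; the grid, time stepping, and fixed bandwidth are assumptions (the actual tool approximates an optimal stopping time).

```python
import numpy as np

def diffusion_kde(data, grid, t_final, n_steps=2000):
    """Evolve the binned empirical density under the heat equation; the
    diffusion pseudo-time plays the role of the squared bandwidth, and
    zero-flux boundaries keep probability mass inside the data interval."""
    dx = grid[1] - grid[0]
    # Initial condition: delta peaks binned onto the grid, normalized
    u, _ = np.histogram(data, bins=len(grid),
                        range=(grid[0] - dx / 2, grid[-1] + dx / 2))
    u = u / (u.sum() * dx)
    dt = t_final / n_steps
    assert dt <= 0.5 * dx**2, "explicit scheme needs dt <= dx^2 / 2"
    for _ in range(n_steps):
        u_pad = np.pad(u, 1, mode="edge")  # reflecting (Neumann) boundaries
        u = u + dt / dx**2 * (u_pad[2:] - 2 * u + u_pad[:-2])
    return u

# Bimodal toy data on [0, 1]; pseudo-time 0.05**2 ~ bandwidth 0.05
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 256)
data = np.clip(np.concatenate([rng.normal(0.25, 0.05, 400),
                               rng.normal(0.75, 0.08, 600)]), 0.0, 1.0)
density = diffusion_kde(data, x, t_final=0.05**2)
```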
Parameter optimization and uncertainty analysis in a model of oceanic CO2 uptake using a hybrid algorithm and algorithmic differentiation
Methods and results for parameter optimization and uncertainty analysis for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Schartau and Oschlies, simulates the distribution of nitrogen, phytoplankton, zooplankton, and detritus in a water column and is driven by ocean circulation data. Our aim is to identify parameters and fit the model output to given observational data. For this model, it has been shown that a satisfactory fit could not be obtained and that parameters yielding comparable fits can vary significantly. Since these results were obtained by evolutionary algorithms (EAs), we used a wider range of optimization methods: a special type of EA (called quantum-EA) with coordinate line search, and a quasi-Newton SQP method where exact gradients were generated by automatic/algorithmic differentiation. Both methods are parallelized and can be viewed as instances of a hybrid, mixed evolutionary and deterministic optimization algorithm that we present in detail. This algorithm provides a flexible and robust tool for parameter identification and model validation. We show how the obtained parameters depend on data sparsity and the given data error. We present an uncertainty analysis of the optimized parameters with respect to Gaussian-perturbed data. We show that the model is well suited for parameter identification if the data are attainable. On the other hand, the result that it cannot be fitted to the real observational data without extension or modification is confirmed.
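In outline, such a hybrid couples a stochastic global search with a deterministic local refinement. A hedged Python sketch, using a plain evolution strategy and L-BFGS-B in place of the paper's quantum-EA and SQP, and finite differences in place of algorithmic differentiation:

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_optimize(cost, bounds, pop_size=20, generations=30, seed=0):
    """Stochastic global search followed by deterministic local polish."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fitness = np.array([cost(p) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # keep best half
        children = parents + rng.normal(scale=0.05 * (hi - lo),
                                        size=parents.shape)  # mutate
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    best = pop[np.argmin([cost(p) for p in pop])]
    # Gradient-based refinement; the paper uses exact gradients from
    # algorithmic differentiation instead of finite differences
    res = minimize(cost, best, method="L-BFGS-B", bounds=list(zip(lo, hi)))
    return res.x
```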
Description of a global marine particulate organic carbon-13 isotope data set
Marine particulate organic carbon stable isotope ratios (δ13CPOC) provide insights into carbon cycling through the atmosphere, ocean, and biosphere. They have, for example, been used to trace the input of anthropogenic carbon into the marine ecosystem due to the distinct isotopically light signature of anthropogenic emissions. However, δ13CPOC is also significantly altered during photosynthesis by phytoplankton, which complicates its interpretation. For such purposes, robust spatio-temporal coverage of δ13CPOC observations is essential. We collected all such available data sets and merged and homogenized them to provide the largest available marine δ13CPOC data set (https://doi.org/10.1594/PANGAEA.929931; Verwega et al., 2021). The data set consists of 4732 data points covering all major ocean basins, beginning in the 1960s. We describe the compiled raw data, compare the different observational methods, and provide key insights into the temporal and spatial distribution, which is consistent with previously observed large-scale patterns. The main sample collection methods (bottle, intake, net, trap) are generally consistent with each other when compared within regions. An analysis of 1990s median δ13CPOC values in a meridional section across the best-covered Atlantic Ocean shows relatively high values (≥ -22 ‰) at low latitudes (<30°), trending towards lower values in the Arctic Ocean (about -24 ‰) and the Southern Ocean (≤ -28 ‰). The temporal trend since the 1960s shows a decrease in median δ13CPOC by more than 3 ‰ in all basins except the Southern Ocean, which shows a weaker trend but has relatively poor multi-decadal coverage.