Multi-scale dynamics and rheology of mantle flow with plates
Fundamental issues in our understanding of plate and mantle dynamics remain unresolved, including the rheology and state of stress of plates and slabs; the coupling between plates, slabs, and mantle; and the flow around slabs. To address these questions, models of global mantle flow with plates are computed using adaptive finite elements and compared to a variety of observational constraints. The dynamically consistent instantaneous models include a composite rheology with yielding and incorporate details of the thermal buoyancy field. Around plate boundaries, the local resolution is 1 km, which allows us to study highly detailed features in a globally consistent framework. Models that best fit plateness criteria and plate motion data have strong slabs with high stresses. We find a strong dependence of global plate motions, trench rollback, net rotation, plateness, and strain rate on the stress exponent in the nonlinear viscosity; the yield stress is found to be important only if it is smaller than the ambient convective stress. Because of the strong coupling between plates, slabs, and the surrounding mantle, the presence of lower mantle anomalies affects plate motions. The flow in and around slabs, microplate motion, and trench rollback are intimately linked to the amount of yielding in the subducting slab hinge, slab morphology, and the presence of high-viscosity structures in the lower mantle beneath the slab.
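The interplay between power-law creep and plastic yielding in a composite rheology can be illustrated with a minimal sketch. All parameter values here (reference viscosity, reference strain rate, stress exponent n = 3, 100 MPa yield stress) are illustrative assumptions, not the values used in the models above:

```python
import numpy as np

def effective_viscosity(strain_rate, eta_ref=1e21, edot_ref=1e-15,
                        n=3.0, yield_stress=100e6):
    """Composite viscosity: power-law creep capped by plastic yielding.

    Parameters are illustrative stand-ins (Pa s, 1/s, Pa), not the
    values used in the models described above.
    """
    # Power-law creep: viscosity weakens as strain rate grows (n > 1)
    eta_powerlaw = eta_ref * (strain_rate / edot_ref) ** (1.0 / n - 1.0)
    # Yielding caps the stress at yield_stress, since sigma = 2 * eta * edot
    eta_yield = yield_stress / (2.0 * strain_rate)
    return np.minimum(eta_powerlaw, eta_yield)

# Slow background deformation: the power-law branch controls (~1e21 Pa s)
eta_slow = float(effective_viscosity(np.array([1e-15]))[0])
# Fast deformation, e.g. in a subducting slab hinge: yielding caps the stress
eta_fast = float(effective_viscosity(np.array([1e-9]))[0])
```

This reproduces the behavior noted in the abstract: the yield stress only matters where the convective stress would otherwise exceed it, i.e. at high strain rates.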
Slab stress and strain rate as constraints on global mantle flow
Dynamically consistent global models of mantle convection with plates are developed that are consistent with detailed constraints on the state of stress and strain rate from deep focus earthquakes. Models that best fit plateness criteria and plate motion data have strong slabs with high stresses. The regions containing the M_W 8.3 Bolivia and M_W 7.6 Tonga 1994 events are considered in detail. Modeled stress orientations match stress patterns from earthquake focal mechanisms. A yield stress of at least 100 MPa is required to fit plate motions and matches the minimum stress requirement obtained from the stress drop for the Bolivia 1994 deep focus event. The minimum strain rate determined from seismic moment release in the Tonga slab provides an upper limit of ~200 MPa on the strength in the slab.
Large-scale adaptive mantle convection simulation
A new-generation, parallel adaptive-mesh mantle convection code, Rhea, is described and benchmarked. Rhea targets large-scale mantle convection simulations on parallel computers, and thus has been developed with a strong focus on computational efficiency and parallel scalability of both mesh handling and numerical solvers. Rhea builds mantle convection solvers on a collection of parallel octree-based adaptive finite element libraries that support new distributed data structures and parallel algorithms for dynamic coarsening, refinement, rebalancing, and repartitioning of the mesh. In this study we demonstrate scalability to 122,880 compute cores and verify correctness of the implementation. We present the numerical approximation and convergence properties using 3-D benchmark problems and other tests for variable-viscosity Stokes flow and thermal convection.
Scalable Adaptive Mantle Convection Simulation on Petascale Supercomputers
Mantle convection is the principal control on the thermal and geological evolution of the Earth. Mantle convection modeling involves solution of the mass, momentum, and energy equations for a viscous, creeping, incompressible non-Newtonian fluid at high Rayleigh and Peclet numbers. Our goal is to conduct global mantle convection simulations that can resolve faulted plate boundaries, down to 1 km scales. However, uniform resolution at these scales would result in meshes with a trillion elements, which would elude even sustained petaflops supercomputers. Thus parallel adaptive mesh refinement and coarsening (AMR) is essential. We present RHEA, a new-generation mantle convection code designed to scale to hundreds of thousands of cores. RHEA is built on ALPS, a parallel octree-based adaptive mesh finite element library that provides new distributed data structures and parallel algorithms for dynamic coarsening, refinement, rebalancing, and repartitioning of the mesh. ALPS currently supports low-order continuous Lagrange elements, and arbitrary-order discontinuous Galerkin spectral elements, on octree meshes. A forest-of-octrees implementation permits nearly arbitrary geometries to be accommodated. Using TACC's 579-teraflops Ranger supercomputer, we demonstrate excellent weak and strong scalability of parallel AMR on up to 62,464 cores for problems with up to 12.4 billion elements. With RHEA's adaptive capabilities, we have been able to reduce the number of elements by over three orders of magnitude, thus enabling us to simulate large-scale mantle convection with a finest local resolution of 1.5 km.
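The trillion-element figure for uniform 1 km resolution is easy to check with back-of-envelope arithmetic, assuming a spherical-shell mantle between the core-mantle boundary (~3480 km radius) and the surface (~6371 km); the radii are standard Earth values, not taken from the paper:

```python
import math

R_SURFACE_KM = 6371.0  # Earth's mean radius (assumed standard value)
R_CMB_KM = 3480.0      # core-mantle boundary radius (assumed standard value)
DX_KM = 1.0            # uniform target resolution

# Volume of the mantle shell, in km^3
mantle_volume = 4.0 / 3.0 * math.pi * (R_SURFACE_KM**3 - R_CMB_KM**3)

# Number of 1 km^3 elements needed at uniform resolution:
# roughly 9e11, i.e. on the order of a trillion, as the abstract states
n_uniform = mantle_volume / DX_KM**3
```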
An extreme-scale implicit solver for complex PDEs: highly heterogeneous flow in earth's mantle
Mantle convection is the fundamental physical process within Earth's interior responsible for the thermal and geological evolution of the planet, including plate tectonics. The mantle is modeled as a viscous, incompressible, non-Newtonian fluid. The wide range of spatial scales, extreme variability and anisotropy in material properties, and severely nonlinear rheology have made global mantle convection modeling with realistic parameters prohibitive. Here we present a new implicit solver that exhibits optimal algorithmic performance and is capable of extreme scaling for hard PDE problems, such as mantle convection. To maximize accuracy and minimize runtime, the solver incorporates a number of advances, including aggressive multi-octree adaptivity, mixed continuous-discontinuous discretization, arbitrarily high-order accuracy, hybrid spectral/geometric/algebraic multigrid, and novel Schur-complement preconditioning. These features present enormous challenges for extreme scalability. We demonstrate that---contrary to conventional wisdom---algorithmically optimal implicit solvers can be designed that scale out to 1.5 million cores for severely nonlinear, ill-conditioned, heterogeneous, and anisotropic PDEs.
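Schur-complement preconditioning exploits the block structure of the discrete Stokes saddle-point system. The dense toy sketch below shows the underlying block elimination only; it is not the paper's preconditioner, which operates matrix-free with multigrid, and all matrices are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3  # toy velocity and pressure dimensions

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # SPD stand-in for the viscous block
B = rng.standard_normal((m, n))  # stand-in for the divergence operator
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Monolithic solve of the saddle-point system [A B^T; B 0] [u; p] = [f; g]
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
monolithic = np.linalg.solve(K, np.concatenate([f, g]))

# Block elimination via the Schur complement S = B A^{-1} B^T:
#   S p = B A^{-1} f - g,  then  A u = f - B^T p
S = B @ np.linalg.solve(A, B.T)
p = np.linalg.solve(S, B @ np.linalg.solve(A, f) - g)
u = np.linalg.solve(A, f - B.T @ p)
```

At scale, S is never formed explicitly; Krylov iterations on S use approximate inner solves with A, which is where the multigrid and preconditioning advances above come in.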
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade. (Comment: major revision, to appear in SIAM Review.)
The effects of iron fortification on the gut microbiota in African children: a randomized controlled trial in Cote d'Ivoire.
BACKGROUND: Iron is essential for the growth and virulence of many pathogenic enterobacteria, whereas beneficial barrier bacteria, such as lactobacilli, do not require iron. Thus, increasing colonic iron could select for a gut microbiota that is unfavorable to the host. OBJECTIVE: The objective was to determine the effect of iron fortification on gut microbiota and gut inflammation in African children. DESIGN: In a 6-mo, randomized, double-blind, controlled trial, 6-14-y-old Ivorian children (n = 139) received iron-fortified biscuits, which contained 20 mg Fe/d, 4 times/wk as electrolytic iron, or nonfortified biscuits. We measured changes in hemoglobin concentrations, inflammation, iron status, helminths, diarrhea, fecal calprotectin concentrations, and microbiota diversity and composition (n = 60) and the prevalence of selected enteropathogens. RESULTS: At baseline, there were greater numbers of fecal enterobacteria than of lactobacilli and bifidobacteria (P < 0.02). Iron fortification was ineffective; there were no differences in iron status, anemia, or hookworm prevalence at 6 mo. The fecal microbiota was modified by iron fortification as shown by a significant increase in profile dissimilarity (P < 0.0001) in the iron group as compared with the control group. There was a significant increase in the number of enterobacteria (P < 0.005) and a decrease in lactobacilli (P < 0.0001) in the iron group after 6 mo. In the iron group, there was an increase in the mean fecal calprotectin concentration (P < 0.01), which is a marker of gut inflammation, that correlated with the increase in fecal enterobacteria (P < 0.05). CONCLUSIONS: Anemic African children carry an unfavorable ratio of fecal enterobacteria to bifidobacteria and lactobacilli, which is increased by iron fortification. Thus, iron fortification in this population produces a potentially more pathogenic gut microbiota profile, and this profile is associated with increased gut inflammation.
This trial was registered at controlled-trials.com as ISRCTN21782274.
Sensitivity technologies for large scale simulation.
Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations.
The hybrid automatic differentiation method was applied to a first-order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and h-p adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed in which different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large-scale optimization. The implementation of our optimization libraries into multiple production simulation codes, in which each code has its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework, and the goal is to promote the use of these interfaces, especially with new developments. Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations, and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates.
Error estimation is usually conducted with continuous adjoints, but if discrete adjoints are available it may be possible to reuse them for error estimation. We investigate the advantages and disadvantages of continuous and discrete adjoints through a simple example.
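The cost asymmetry between direct and adjoint sensitivities discussed above can be sketched for a linear model A u = B p with objective J = c^T u: the direct method needs one forward solve per parameter, while the adjoint method needs a single transposed solve regardless of the parameter count. The matrices below are random stand-ins, not any of the report's applications:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4  # state dimension, number of parameters

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # nonsingular forward operator
B = rng.standard_normal((n, m))  # parameter-to-source map: A u = B p
c = rng.standard_normal(n)       # objective J(p) = c^T u(p)

# Direct method: one forward solve per parameter (m solves total)
grad_direct = np.array([c @ np.linalg.solve(A, B[:, i]) for i in range(m)])

# Adjoint method: a single solve with A^T, independent of m
lam = np.linalg.solve(A.T, c)    # adjoint solve: A^T lam = c
grad_adjoint = lam @ B           # dJ/dp_i = lam^T B[:, i]
```

The two gradients agree to machine precision; the adjoint route wins whenever there are many parameters and few objectives, which is the regime of source inversion and shape optimization described above.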
Global Carbon Budget 2023
Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere in a changing climate is critical to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe and synthesize data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFOS) are based on energy statistics and cement production data, while emissions from land-use change (ELUC), mainly deforestation, are based on land-use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly, and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) is estimated with global ocean biogeochemistry models and observation-based fCO2 products. The terrestrial CO2 sink (SLAND) is estimated with dynamic global vegetation models. Additional lines of evidence on land and ocean sinks are provided by atmospheric inversions, atmospheric oxygen measurements, and Earth system models. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and incomplete understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the year 2022, EFOS increased by 0.9 % relative to 2021, with fossil emissions at 9.9 ± 0.5 Gt C yr−1 (10.2 ± 0.5 Gt C yr−1 when the cement carbonation sink is not included), and ELUC was 1.2 ± 0.7 Gt C yr−1, for a total anthropogenic CO2 emission (including the cement carbonation sink) of 11.1 ± 0.8 Gt C yr−1 (40.7 ± 3.2 Gt CO2 yr−1). Also for 2022, GATM was 4.6 ± 0.2 Gt C yr−1 (2.18 ± 0.1 ppm yr−1; ppm denotes parts per million), SOCEAN was 2.8 ± 0.4 Gt C yr−1, and SLAND was 3.8 ± 0.8 Gt C yr−1, with a BIM of −0.1 Gt C yr−1 (i.e. total estimated sources marginally too low or sinks marginally too high). The global atmospheric CO2 concentration averaged over 2022 reached 417.1 ± 0.1 ppm. Preliminary data for 2023 suggest an increase in EFOS relative to 2022 of +1.1 % (0.0 % to 2.1 %) globally and an atmospheric CO2 concentration reaching 419.3 ppm, 51 % above the pre-industrial level (around 278 ppm in 1750). Overall, the mean of and trend in the components of the global carbon budget are consistently estimated over the period 1959–2022, with a near-zero overall budget imbalance, although discrepancies of up to around 1 Gt C yr−1 persist in the representation of annual to semi-decadal variability in CO2 fluxes. Comparison of estimates from multiple approaches and observations shows the following: (1) a persistent large uncertainty in the estimate of land-use change emissions, (2) low agreement between the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) a discrepancy between the different methods on the strength of the ocean sink over the last decade. This living-data update documents changes in methods and data sets applied to this most recent global carbon budget as well as evolving community understanding of the global carbon cycle. The data presented in this work are available at https://doi.org/10.18160/GCP-2023 (Friedlingstein et al., 2023).
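The quoted 2022 central values close the budget identity BIM = (EFOS + ELUC) − (GATM + SOCEAN + SLAND), as a quick arithmetic check confirms:

```python
# 2022 central values from the abstract, in Gt C / yr
E_FOS, E_LUC = 9.9, 1.2                  # fossil and land-use-change emissions
G_ATM, S_OCEAN, S_LAND = 4.6, 2.8, 3.8   # atmospheric growth, ocean and land sinks

# Budget imbalance: estimated sources minus estimated reservoir changes
B_IM = (E_FOS + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
# B_IM comes out to about -0.1 Gt C / yr, matching the reported value
```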