
    Utilization, release, and long-term fate of ancient carbon from eroding permafrost coastlines

    About 34% of global coastlines are underlain by permafrost. Rising temperatures accelerate erosion, with coastlines retreating by up to tens of meters annually and exporting increasing amounts of carbon and nutrients to the coastal ocean. The degradation of ancient organic carbon (OC) from permafrost is an important potential feedback mechanism in a warming climate. However, little is known about permafrost OC degradation after it enters the ocean and about its long-term fate after redeposition on the sea floor. Some recent studies have revealed that CO2 is released when ancient permafrost materials are incubated with seawater. However, despite its importance for carbon feedback mechanisms, no study has directly assessed whether this CO2 release is indeed derived from respiration of ancient permafrost OC. We used a multi-disciplinary approach, incubating Yedoma permafrost from the Lena Delta in natural coastal seawater from the south-eastern Kara Sea. By combining biogeochemical analyses, DNA sequencing, ramped oxidation, pyrolysis, and stable and radiocarbon isotope analysis, we were able to: 1) quantify CO2 emissions from permafrost utilization; 2) demonstrate, for the first time, the amount of ancient OC contributing to CO2 emissions; 3) link the processes to specific microbial communities; and 4) characterize and assess the lability of permafrost OC after redeposition on the sea floor. Our data clearly indicate high bioavailability of permafrost OC and rapid utilization after thawed material enters the water column, while we observed only minor changes in permafrost OC composition over time. Microbial communities are distinctly different in suspended Yedoma particles and in the water. Overall, our results suggest that under anthropogenic Arctic warming, enhanced coastal erosion will result in increased greenhouse gas emissions, as formerly freeze-locked ancient permafrost OC is remineralized by microbial communities when released to the coastal ocean.
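    Attributing respired CO2 to ancient carbon via radiocarbon rests on a simple isotope mass balance. As an illustration of the underlying arithmetic only (a minimal sketch, not the authors' actual implementation; the endmember values below are assumptions), a two-endmember Δ14C mixing model partitions the respired CO2 between a radiocarbon-dead permafrost endmember and a modern marine endmember:

```python
def ancient_fraction(d14c_co2, d14c_ancient=-1000.0, d14c_modern=0.0):
    """Two-endmember radiocarbon mixing model (illustrative sketch).

    d14c_co2     : measured Delta-14C of respired CO2 (permil)
    d14c_ancient : Delta-14C of the ancient permafrost OC endmember
                   (-1000 permil = radiocarbon-dead; an assumption)
    d14c_modern  : Delta-14C of the modern marine OC endmember (an assumption)

    Returns the fraction of respired CO2 derived from ancient OC.
    """
    return (d14c_co2 - d14c_modern) / (d14c_ancient - d14c_modern)

# Example: respired CO2 measured at -750 permil implies ~75% ancient carbon
print(f"{ancient_fraction(-750.0):.2f}")
```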

    How to verify the precision of density-functional-theory implementations via reproducible and universal workflows

    In the past decades, many density-functional theory methods and codes adopting periodic boundary conditions have been developed and are now extensively used in condensed matter physics and materials science research. Only in 2016, however, was their precision (i.e., the extent to which properties computed with different codes agree with each other) systematically assessed on elemental crystals: a first crucial step toward evaluating the reliability of such computations. We discuss here general recommendations for verification studies aiming to further test the precision and transferability of density-functional theory computational approaches and codes. We illustrate these recommendations using a greatly expanded protocol covering the whole periodic table from Z=1 to 96 and characterizing 10 prototypical cubic compounds for each element: 4 unaries and 6 oxides, spanning a wide range of coordination numbers and oxidation states. The primary outcome is a reference dataset of 960 equations of state cross-checked between two all-electron codes, then used to verify and improve nine pseudopotential-based approaches. This effort is facilitated by deploying AiiDA common workflows that perform automatic input parameter selection, provide identical input/output interfaces across codes, and ensure full reproducibility. Finally, we discuss the extent to which the current results for total energies can be reused for different goals (e.g., obtaining formation energies).
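    Each entry in such a reference dataset is an equation-of-state fit to computed energy-volume points, commonly using the Birch-Murnaghan form. A minimal sketch of that fitting step follows; the sample data and starting parameters are invented for demonstration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(v, e0, v0, b0, b0p):
    """Third-order Birch-Murnaghan equation of state E(V)."""
    eta = (v0 / v) ** (2.0 / 3.0)
    return e0 + 9.0 * v0 * b0 / 16.0 * (
        (eta - 1.0) ** 3 * b0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Illustrative energy-volume points from one DFT code (eV, Angstrom^3 per atom)
volumes = np.array([14.0, 15.0, 16.0, 17.0, 18.0])
energies = np.array([-3.60, -3.72, -3.76, -3.74, -3.68])

popt, _ = curve_fit(birch_murnaghan, volumes, energies,
                    p0=[energies.min(), volumes[np.argmin(energies)], 1.0, 4.0])
e0, v0, b0, b0p = popt
print(f"V0 = {v0:.2f} A^3, B0 = {b0:.3f} eV/A^3, B0' = {b0p:.2f}")
```

    Two codes can then be compared through the differences between their fitted equilibrium volumes, bulk moduli, and pressure derivatives, rather than through raw total energies.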

    High-throughput All-Electron Density Functional Theory Simulations for a Data-driven Chemical Interpretation of X-ray Photoelectron Spectra

    Enabling computer-driven materials design to find and create materials with advanced properties from the enormous haystack of material phase space is a worthy goal for humanity. Most high technologies, for example in the energy or health sector, strongly depend on advanced tailored materials. Since conventional research and screening of materials is rather slow and expensive, being able to determine material properties on the computer represents a paradigm shift. For the calculation of properties of pure materials on the nanoscale, ab initio methods based on the theory of quantum mechanics are well established. Density Functional Theory (DFT) is such a widely applied first-principles method with high predictive power. To screen larger sets of atomic configurations, physical property calculation processes need to be robust and automated. Automation is achieved through the deployment of advanced frameworks which manage many workflows while tracking the provenance of data and calculations. Through workflows, which are in essence property-calculation procedures, a high-level automation environment is achievable and accumulated knowledge can be reused by others. Workflows can be complex and include multiple programs solving problems over several physical length scales. In this work, the open-source all-electron DFT program FLEUR, implementing the highly accurate Full-potential Linearized Augmented Plane Wave (FLAPW) method, is connected to and deployed through the open-source Automated Interactive Infrastructure and Database for Computational Science (AiiDA) framework to achieve automation. AiiDA is a Python framework capable of provenance tracking for millions of high-throughput simulations and their data. Basic and advanced workflows are implemented in the open-source Python package AiiDA-FLEUR, especially to calculate properties for the chemical analysis of X-ray photoemission spectra. These workflows are applied to a wide range of materials, in particular to most known metallic binary compounds. The chemical-phase composition and other material properties of a surface region can be understood through careful chemical analysis of high-resolution X-ray photoemission spectra. The spectra evaluation process is improved through the development of a fitting method driven by data from ab initio simulations. For complex multi-phase spectra this proposed evaluation process is expected to have advantages over the widely applied conventional methods. The spectra evaluation process is successfully deployed on well-behaved spectra of materials relevant for the inner-wall (blanket and divertor) plasma-facing components of a nuclear fusion reactor. In particular, the binary beryllium systems Be-Ti, Be-W, and Be-Ta are investigated. Furthermore, different approaches to calculate spectral properties like chemical shifts and binding energies are studied and benchmarked against the experimental literature and data from the NIST X-ray photoelectron spectroscopy database.
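    The core idea of an ab-initio-driven fit is that computed chemical shifts fix the component positions in a measured spectrum, leaving only the intensities free. A minimal illustrative sketch of that constraint follows; all binding energies, widths, and data below are invented for demonstration and are not taken from the thesis:

```python
import numpy as np
from scipy.optimize import nnls

def gaussian(x, center, sigma):
    """Simple Gaussian line shape (illustrative stand-in for a Voigt profile)."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Binding-energy axis (eV); values below are assumptions for demonstration
be = np.linspace(110.0, 116.0, 300)
computed_centers = [111.8, 112.9, 114.1]  # DFT chemical shifts fix the positions
sigma = 0.35                              # fixed line width (assumption)

# Design matrix: one fixed-position component per computed binding energy
components = np.column_stack([gaussian(be, c, sigma) for c in computed_centers])

# Synthetic "measured" spectrum: a mixture of the components plus noise
rng = np.random.default_rng(0)
measured = components @ np.array([1.0, 0.4, 0.7]) + 0.02 * rng.standard_normal(be.size)

# Non-negative least squares recovers the per-phase intensities
intensities, _ = nnls(components, measured)
print("fitted intensities:", np.round(intensities, 2))
```

    Because the peak positions come from simulation rather than being free parameters, the fit stays well-conditioned even when phases overlap, which is where conventional unconstrained peak fitting tends to become ambiguous.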

    The Helmholtz Knowledge Graph & the unified Helmholtz information and data exchange (unHIDE)

    Poster about the Helmholtz Knowledge Graph and the unified Helmholtz information and data exchange (unHIDE) initiative.

    Helmholtz (meta)data is siloed in data infrastructures located at the respective Helmholtz centres. UnHIDE connects this data to the Helmholtz Knowledge Graph. The underlying technology is co-developed with external stakeholders. Data in the graph is provided for re-use by humans (UI, API) and machines (API, SPARQL). The graph improves visibility for researchers and centres, provides a single point of access to information in the Helmholtz Association, and allows queries and re-use of data across infrastructures, centres, and research fields. With the Helmholtz Knowledge Graph, unHIDE reveals the current state of (meta)data across Helmholtz data holdings: data connectivity and connection quality are low, and structured areas of the graph do not reflect Helmholtz entities (centres, repositories, or infrastructures). We improve Helmholtz metadata records by uplifting the provided documents and feeding the uplifted data back to the data source. For well-known entities (i.e., Helmholtz research centres) we harmonize records through SPARQL updates. We resolve entities by assigning types and IDs; where possible, we infer entity types and IDs from the assembled data.
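    Machine access via SPARQL could look like the following sketch. The endpoint URL is hypothetical and the schema.org-based query is an assumption about the graph's vocabulary; both would need to be checked against the unHIDE documentation:

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# Hypothetical endpoint URL -- consult the unHIDE documentation for the real one
sparql = SPARQLWrapper("https://sparql.unhide.example.org")
sparql.setReturnFormat(JSON)

# Count datasets per publisher, assuming schema.org vocabulary in the graph
sparql.setQuery("""
PREFIX schema: <https://schema.org/>
SELECT ?publisher (COUNT(?ds) AS ?n)
WHERE {
  ?ds a schema:Dataset ;
      schema:publisher ?publisher .
}
GROUP BY ?publisher
ORDER BY DESC(?n)
LIMIT 10
""")

for row in sparql.queryAndConvert()["results"]["bindings"]:
    print(row["publisher"]["value"], row["n"]["value"])
```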