Modular System for Shelves and Coasts (MOSSCO v1.0) - a flexible and multi-component framework for coupled coastal ocean ecosystem modelling
Shelf and coastal sea processes extend from the atmosphere through the water
column and into the sea bed. These processes are driven by physical, chemical,
and biological interactions at local scales, and they are influenced by
transport across strong spatial gradients. The linkages between domains and
many different processes are not adequately described in current model systems.
Their limited level of integration partly reflects a lack of modularity and
flexibility; this shortcoming hinders the exchange of data and model components
and has historically entrenched the dominance of specific physical driver
models. We here present the Modular System for Shelves and Coasts (MOSSCO,
http://www.mossco.de), a novel domain and process coupling system
tailored, but not limited, to the coupling challenges of and applications in
the coastal ocean. MOSSCO builds on the existing coupling technology Earth
System Modeling Framework and on the Framework for Aquatic Biogeochemical
Models, thereby creating a unique level of modularity in both domain and
process coupling; the new framework adds rich metadata, flexible scheduling,
configurations that allow several tens of models to be coupled, and tested
setups for coastal coupled applications. That way, MOSSCO addresses the
technology needs of a growing marine coastal Earth System community that
encompasses very different disciplines, numerical tools, and research
questions.
Comment: 30 pages, 6 figures, submitted to Geoscientific Model Development Discussions
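The modular domain and process coupling that MOSSCO describes can be illustrated with a toy sketch. This is not the actual MOSSCO or ESMF API (the real framework builds on the Earth System Modeling Framework and FABM); the `Component` class, field names, and scheduler below are hypothetical stand-ins for the idea of independent model components exchanging named fields through a coupler.

```python
# Illustrative sketch only: a toy component-coupling loop in the spirit of
# MOSSCO's modular design. All names here are hypothetical, not MOSSCO's API.

class Component:
    """A model component declaring which fields it reads and writes."""
    def __init__(self, name, reads, writes, step):
        self.name = name
        self.reads = reads    # field names consumed each step
        self.writes = writes  # field names produced each step
        self.step = step      # callable: dict of inputs -> dict of outputs

def couple(components, state, n_steps):
    """Run components in sequence, exchanging fields via a shared state."""
    for _ in range(n_steps):
        for c in components:
            inputs = {f: state[f] for f in c.reads}
            state.update(c.step(inputs))
    return state

# A physical driver produces temperature; a biogeochemical module consumes it.
physics = Component("hydrodynamics", [], ["temperature"],
                    lambda ins: {"temperature": 10.0})
bgc = Component("biogeochemistry", ["temperature"], ["growth_rate"],
               lambda ins: {"growth_rate": 0.05 * ins["temperature"]})

final = couple([physics, bgc], {"temperature": 0.0}, n_steps=2)
```

Because components declare their inputs and outputs rather than calling each other directly, any physical driver that writes `temperature` could be swapped in, which is the modularity point the abstract makes.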
Towards Exascale Scientific Metadata Management
Advances in technology and computing hardware are enabling scientists from
all areas of science to produce massive amounts of data using large-scale
simulations or observational facilities. In this era of data deluge, effective
coordination between the data production and the analysis phases hinges on the
availability of metadata that describe the scientific datasets. Existing
workflow engines have been capturing a limited form of metadata to provide
provenance information about the identity and lineage of the data. However,
much of the data produced by simulations, experiments, and analyses still needs
to be annotated manually and in an ad hoc manner by domain scientists. Systematic
and transparent acquisition of rich metadata becomes a crucial prerequisite to
sustain and accelerate the pace of scientific innovation. Yet, ubiquitous and
domain-agnostic metadata management infrastructure that can meet the demands of
extreme-scale science remains conspicuous by its absence.
To address this gap in scientific data management research and practice, we
present our vision for an integrated approach that (1) automatically captures
and manipulates information-rich metadata while the data is being produced or
analyzed and (2) stores metadata within each dataset to permeate
metadata-oblivious processes and to query metadata through established and
standardized data access interfaces. We motivate the need for the proposed
integrated approach using applications from plasma physics, climate modeling
and neuroscience, and then discuss research challenges and possible solutions.
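The second part of the proposed approach, storing metadata within each dataset so that metadata-oblivious processes still work while metadata-aware tools can query annotations through the same access path, can be sketched minimally. Real systems would use a self-describing format such as HDF5 or NetCDF; this stand-in uses JSON, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: metadata embedded in the same container as the
# data, readable through one shared interface. JSON stands in for a
# self-describing scientific format such as HDF5 or NetCDF.
import json

def write_dataset(values, metadata):
    """Serialize the data and its provenance metadata into one container."""
    return json.dumps({"data": values, "metadata": metadata})

def read_data(container):
    """A metadata-oblivious reader: sees only the data payload."""
    return json.loads(container)["data"]

def query_metadata(container, key):
    """A metadata-aware reader: queries annotations via the same container."""
    return json.loads(container)["metadata"].get(key)

blob = write_dataset([1.2, 3.4],
                     {"producer": "plasma-sim", "units": "keV"})
```

The design point is that both readers open the same container: embedding the metadata means it travels with the data through pipelines that never look at it, instead of living in a separate catalog that can drift out of sync.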
Enabling quantitative data analysis through e-infrastructures
This paper discusses how quantitative data analysis in the social sciences can engage with and exploit an e-Infrastructure. We highlight how a number of activities which are central to quantitative data analysis, referred to as ‘data management’, can benefit from e-Infrastructure support. We conclude by discussing how these issues are relevant to the DAMES (Data Management through e-Social Science) research Node, an ongoing project that aims to develop e-Infrastructural resources for quantitative data analysis in the social sciences.
Cactus: Issues for Sustainable Simulation Software
The Cactus Framework is an open-source, modular, portable programming
environment for the collaborative development and deployment of scientific
applications using high-performance computing. Its roots reach back to 1996 at
the National Center for Supercomputing Applications and the Albert Einstein
Institute in Germany, where its development was jumpstarted. Since then, the Cactus
framework has witnessed major changes in hardware infrastructure as well as its
own community. This paper describes its endurance through these changes and,
drawing upon lessons from its past, also discusses its future.
Comment: submitted to the Workshop on Sustainable Software for Science: Practice and Experiences 201