
    An unstructured-grid, finite-volume sea ice model : development, validation, and application

    Author Posting. © American Geophysical Union, 2011. This article is posted here by permission of American Geophysical Union for personal use, not for redistribution. The definitive version was published in Journal of Geophysical Research 116 (2011): C00D04, doi:10.1029/2010JC006688.

    A sea ice model was developed by converting the Community Ice Code (CICE) into an unstructured-grid, finite-volume version (named UG-CICE). The governing equations were discretized in flux form over control volumes in a computational domain configured with non-overlapping triangular meshes in the horizontal and were solved using a second-order accurate finite-volume solver. Implementing UG-CICE in the Arctic Ocean finite-volume community ocean model provides a new unstructured-grid, MPI-parallelized model system to resolve the ice-ocean interaction dynamics that frequently occur over complex, irregular coastal geometries and steep bottom slopes. UG-CICE was first validated for three benchmark test problems to ensure its capability of reproducing the ice dynamics features found in CICE, and then for sea ice simulation in the Arctic Ocean under climatological forcing conditions. The model-data comparison demonstrates that UG-CICE is robust enough to simulate the seasonal variability of sea ice concentration, ice coverage, and ice drift in the Arctic Ocean and adjacent coastal regions.

    This work was supported by the NSF Arctic Program under grants ARC0712903, ARC0732084, and ARC0804029. The Arctic Ocean Model Intercomparison Project (AOMIP) provided important guidance for model improvements and ocean studies through coordinated experiment activities. We would like to thank AOMIP PI Proshutinsky for his valuable suggestions and comments on the ice dynamics; his contribution is supported by ARC0800400 and ARC0712848. The development of FVCOM was supported by the Massachusetts Marine Fisheries Institute NOAA grants DOC/NOAA/NA04NMF4720332 and DOC/NOAA/NA05NMF4721131; the NSF Ocean Science Program projects OCE-0234545, OCE-0227679, OCE-0606928, OCE-0712903, OCE-0726851, and OCE-0814505; MIT Sea Grant funds (2006-RC-103 and 2010-R/RC-116); and the NOAA NERACOOS Program for the UMASS team. G. Gao was also supported by the Chinese NSF Arctic Ocean grant under contract 40476007. C. Chen's contribution was also supported by the Shanghai Ocean University International Cooperation Program (A-2302-10-0003), the Program of the Science and Technology Commission of Shanghai Municipality (09320503700), the Leading Academic Discipline Project of the Shanghai Municipal Education Commission (J50702), and Zhijiang Scholar and 111 Project funds of the State Key Laboratory for Estuarine and Coastal Research, East China Normal University (ECNU).
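    The flux-form, control-volume update described above can be illustrated with a short sketch. The tiny two-triangle mesh, the variable names, and the first-order upwind edge flux below are assumptions made for brevity (UG-CICE itself uses a second-order accurate solver on the FVCOM unstructured grid); the sketch only shows the shape of a conservative finite-volume update on non-overlapping triangular control volumes.

```python
# Minimal sketch (not UG-CICE code) of a flux-form finite-volume step for a
# cell-averaged quantity such as ice concentration on triangular control
# volumes: a_i(n+1) = a_i(n) - dt/A_i * sum_over_edges(F_e * L_e).
import numpy as np

def fv_advect_step(a, cell_area, edges, edge_len, un_edge, dt):
    """a: cell values; edges: (left, right) cell indices, right = -1 on the
    open boundary; un_edge: edge-normal velocity, positive from left to right."""
    tend = np.zeros_like(a)
    for (il, ir), length, un in zip(edges, edge_len, un_edge):
        # First-order upwind edge value (illustrative; UG-CICE is second order).
        upwind = a[il] if un >= 0.0 else (a[ir] if ir >= 0 else 0.0)
        flux = un * upwind * length
        tend[il] -= flux          # flux leaves the left cell ...
        if ir >= 0:
            tend[ir] += flux      # ... and enters the right cell: conservative.
    return a + dt * tend / cell_area

# Two triangles sharing one interior edge, plus two open-boundary edges.
a = np.array([1.0, 0.0])
area = np.array([0.5, 0.5])
edges = [(0, 1), (0, -1), (1, -1)]
print(fv_advect_step(a, area, edges, edge_len=[1.0, 1.0, 1.0],
                     un_edge=[0.2, 0.0, 0.0], dt=0.1))   # -> [0.96 0.04]
```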

    Machine dependence and reproducibility for coupled climate simulations: the HadGEM3-GC3.1 CMIP Preindustrial simulation

    When the same weather or climate simulation is run on different high-performance computing (HPC) platforms, model outputs may not be identical for a given initial condition. While the role of HPC platforms in delivering better climate projections is to some extent discussed in the literature, attention is mainly focused on scalability and performance rather than on the impact of machine-dependent processes on the numerical solution. Here we investigate the behaviour of the Preindustrial (PI) simulation prepared by the UK Met Office for the forthcoming CMIP6 (Coupled Model Intercomparison Project Phase 6) under different computing environments. Discrepancies between the means of key climate variables were analysed at different timescales, from decadal to centennial. We found that for the two simulations to be statistically indistinguishable, a 200-year averaging period must be used for the analysis of the results. Thus, constant-forcing climate simulations using the HadGEM3-GC3.1 model are reproducible on different HPC platforms provided that a sufficiently long simulation is used. In regions where El Niño–Southern Oscillation (ENSO) teleconnection patterns were detected, we found large sea surface temperature and sea ice concentration differences on centennial timescales. This indicates that a 100-year constant-forcing climate simulation may not be long enough to adequately capture the internal variability of the HadGEM3-GC3.1 model, despite this being the minimum simulation length recommended by CMIP6 protocols for many MIP (Model Intercomparison Project) experiments. On the basis of our findings, we recommend a minimum simulation length of 200 years whenever possible.
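    As an illustration of the reproducibility check described above, the sketch below compares long-term means of a single variable from two machines. The synthetic annual-mean series, the variable name and the use of Welch's t-test are assumptions for illustration, not the paper's actual analysis; real annual means are also autocorrelated, so an effective-sample-size correction would be needed in practice.

```python
# Hedged sketch: are time means from two HPC platforms statistically
# indistinguishable once a long enough averaging period is used?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = 200  # the averaging period the paper recommends

# Stand-ins for annual-mean global SST from the same PI run on two machines.
sst_machine_a = 18.0 + 0.3 * rng.standard_normal(years)
sst_machine_b = 18.0 + 0.3 * rng.standard_normal(years)

t_stat, p_value = stats.ttest_ind(sst_machine_a, sst_machine_b, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
print("means indistinguishable at the 5 % level" if p_value > 0.05
      else "means differ at the 5 % level")
```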

    GEOS-5 Seasonal Forecast System: ENSO Prediction Skill and Bias

    The GEOS-5 AOGCM known as S2S-1.0 was in service from June 2012 through January 2018 (Borovikov et al. 2017). The atmospheric component of S2S-1.0 is Fortuna-2.5, the same version that was used for the Modern-Era Retrospective Analysis for Research and Applications (MERRA), but with adjusted parameterizations of moist processes and turbulence. The ocean component is the Modular Ocean Model version 4 (MOM4). The sea ice component is the Community Ice CodE, version 4 (CICE). The land surface model is a catchment-based hydrological model coupled to a multi-layer snow model. The AGCM uses a Cartesian grid with a 1 deg x 1.25 deg horizontal resolution and 72 hybrid vertical levels, with the uppermost level at 0.01 hPa. The nominal resolution of the OGCM's tripolar grid is 1/2 deg, with meridional refinement to 1/4 deg near the equator. In the coupled model initialization, selected atmospheric variables are constrained with MERRA. The Goddard Earth Observing System integrated Ocean Data Assimilation System (GEOS-iODAS) is used for both ocean state and sea ice initialization; sea surface temperature (SST), temperature and salinity profiles, and sea ice concentration were assimilated.

    The COSMO-CLM 4.8 regional climate model coupled to regional ocean, land surface and global earth system models using OASIS3-MCT: description and performance

    We developed a coupled regional climate system model based on the CCLM regional climate model. Within this model system, using OASIS3-MCT as a coupler, CCLM can be coupled to two land surface models (the Community Land Model (CLM) and VEG3D), the NEMO-MED12 regional ocean model for the Mediterranean Sea, two ocean models for the North and Baltic seas (NEMO-NORDIC and TRIMNP+CICE) and the MPI-ESM Earth system model. We first present the different model components and the unified OASIS3-MCT interface, which handles all couplings in a consistent way, minimising the modifications to the model source code and defining the physical and numerical aspects of the couplings. We also address specific coupling issues such as the handling of different domains, multiple usage of the MCT library and the exchange of 3-D fields. We analyse and compare the computational performance of the different couplings based on real-case simulations over Europe. The LUCIA tool implemented in OASIS3-MCT enables quantification of the contributions of the coupled components to the overall coupling cost. These individual contributions are (1) the cost of the model(s) coupled, (2) the direct cost of coupling, including horizontal interpolation and communication between the components, (3) load imbalance, (4) the cost of different processor usage by CCLM in coupled and stand-alone mode and (5) a residual cost including, among other things, additional CCLM computations. Finally, a procedure for finding an optimum processor configuration for each of the couplings was developed, considering the time to solution, computing cost and parallel efficiency of the simulation. The optimum configurations are presented for sequential, concurrent and mixed (sequential+concurrent) coupling layouts. The procedure applied can be regarded as independent of the specific coupling layout and coupling details. We found that the direct cost of coupling, i.e. communications and horizontal interpolation, in OASIS3-MCT remains below 7 % of the CCLM stand-alone cost for all couplings investigated. This is true in particular for the exchange of 450 2-D fields between CCLM and MPI-ESM. We identify remaining limitations in the coupling strategies and discuss possible future improvements of the computational efficiency.
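    The five-part cost decomposition listed above can be assembled from LUCIA-style timings; the numbers, labels and the stand-alone reference cost in the sketch below are made up for illustration and are not taken from the paper.

```python
# Hedged sketch of the coupling-cost breakdown (all values are invented).
standalone_cclm_cost = 1000.0  # core-hours of CCLM run stand-alone (assumed)

coupled_extra = {
    "component models":  380.0,  # (1) cost of the coupled model(s)
    "direct coupling":    60.0,  # (2) interpolation + communication in OASIS3-MCT
    "load imbalance":     90.0,  # (3) waiting time between components
    "processor usage":    40.0,  # (4) different CCLM processor usage vs stand-alone
    "residual":           30.0,  # (5) e.g. additional CCLM computations
}

for name, cost in coupled_extra.items():
    share = 100.0 * cost / standalone_cclm_cost
    print(f"{name:18s} {cost:7.1f} core-h  ({share:4.1f} % of stand-alone CCLM)")
# The paper reports that item (2) stays below 7 % of the CCLM stand-alone cost.
```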

    Intercomparison of Antarctic ice-shelf, ocean, and sea-ice interactions simulated by MetROMS-iceshelf and FESOM 1.4

    An increasing number of Southern Ocean models now include Antarctic ice-shelf cavities and simulate thermodynamics at the ice-shelf/ocean interface. This adds another level of complexity to Southern Ocean simulations, as ice shelves interact directly with the ocean and indirectly with sea ice. Here, we present the first model intercomparison and evaluation of present-day ocean/sea-ice/ice-shelf interactions, as simulated by two models: a circumpolar Antarctic configuration of MetROMS (ROMS: Regional Ocean Modelling System coupled to CICE: Community Ice CodE) and the global model FESOM (Finite Element Sea-ice Ocean Model), where the latter is run at two different levels of horizontal resolution. From a circumpolar Antarctic perspective, we compare and evaluate simulated ice-shelf basal melting and sub-ice-shelf circulation, as well as sea-ice properties and Southern Ocean water mass characteristics as they influence the sub-ice-shelf processes. Despite their differing numerical methods, the two models produce broadly similar results and share similar biases in many cases. Both models reproduce many key features of observations but struggle to reproduce others, such as the high melt rates observed in the small warm-cavity ice shelves of the Amundsen and Bellingshausen seas. Several differences in model design exert a particular influence on the simulations. For example, FESOM's greater topographic smoothing can alter the geometry of some ice-shelf cavities enough to affect their melt rates; this improves at higher resolution, since less smoothing is required. In the interior Southern Ocean, the vertical coordinate system affects the degree of water mass erosion due to spurious diapycnal mixing, with MetROMS' terrain-following coordinate leading to more erosion than FESOM's z coordinate. Finally, increased horizontal resolution in FESOM leads to higher basal melt rates for small ice shelves, through a combination of stronger circulation and small-scale intrusions of warm water from offshore.

    XScan: An Integrated Tool for Understanding Open Source Community-Based Scientific Code

    Many scientific communities have adopted community-based models that integrate multiple components to simulate whole-system dynamics. The complexity of these community software projects, which stems from the integration of multiple individual software components developed under different application requirements and for various machine architectures, has become a challenge for effective software system understanding and continuous software development. This paper presents an integrated software toolkit called X-ray Software Scanner (abbreviated XScan) for a better understanding of large-scale community-based scientific codes. The tool quickly summarizes the overall characteristics of scientific codes, including the number of lines of code, programming languages, external library dependencies, and architecture-dependent parallel software features. The XScan toolkit also includes a static software analysis component that collects detailed structural information and provides interactive visualization and analysis of the functions. We use a large-scale community-based Earth System Model to demonstrate the workflow, functions and visualization of the toolkit. We also discuss the application of advanced graph analytics techniques to assist software modular design and component refactoring.
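    A first approximation of the code-summary capability described above is easy to sketch: walk the source tree and count lines per language. The extension-to-language map and the use of the current directory are assumptions; XScan itself goes much further, extracting external library dependencies, architecture-dependent parallel features and static call structure, none of which this sketch attempts.

```python
# Minimal sketch of a source-tree summary by language and lines of code.
from collections import Counter
from pathlib import Path

LANG_BY_EXT = {".f90": "Fortran", ".F90": "Fortran", ".c": "C", ".h": "C/C++ header",
               ".cpp": "C++", ".py": "Python", ".sh": "Shell"}

def summarize(root="."):
    loc = Counter()
    for path in Path(root).rglob("*"):
        lang = LANG_BY_EXT.get(path.suffix)
        if lang and path.is_file():
            with open(path, errors="ignore") as handle:
                loc[lang] += sum(1 for _ in handle)   # count physical lines
    return loc

if __name__ == "__main__":
    for lang, lines in summarize(".").most_common():
        print(f"{lang:12s} {lines:10d} lines")
```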

    A fast input/output library for high-resolution climate models

    We describe the design and implementation of climate fast input/output (CFIO), a fast input/output (I/O) library for high-resolution climate models. CFIO provides a simple method for modelers to overlap the I/O phase with the computing phase automatically, so as to shorten the running time of numerical simulations. To minimize the code modifications required for porting, CFIO provides interfaces and features similar to those of the Parallel Network Common Data Form (PnetCDF), one of the most widely used I/O libraries in climate models. We deployed CFIO in three high-resolution climate models: two ocean models (POP and LICOM) and one sea ice model (CICE). The experimental results show that CFIO significantly improves the performance of climate models compared with the original serial I/O approach. When running with CFIO at 0.1° resolution on about 1000 CPU cores, we reduced the running time by factors of 7.9, 4.6 and 2.0 for POP, CICE and LICOM, respectively. We also compared the performance of CFIO against two existing libraries, PnetCDF and parallel I/O (PIO), in different scenarios. For scenarios with both data output and computations, CFIO decreases the I/O overhead compared to PnetCDF and PIO.
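    The central idea of overlapping the I/O phase with the compute phase can be sketched with a background writer. CFIO's real interface mirrors PnetCDF and uses dedicated I/O processes under MPI; the single thread, in-memory queue and .npy output below are simplifying assumptions for illustration only.

```python
# Hedged sketch: the "model" keeps stepping while a writer thread drains output.
import queue
import threading
import numpy as np

out_q = queue.Queue(maxsize=4)          # bounded buffer of pending snapshots

def writer():
    while True:
        item = out_q.get()
        if item is None:                # sentinel: no more output
            return
        step, field = item
        np.save(f"field_{step:04d}.npy", field)  # stand-in for a netCDF write

io_thread = threading.Thread(target=writer)
io_thread.start()

field = np.zeros((360, 180))
for step in range(10):
    field = field + 1.0                 # stand-in for one model time step
    out_q.put((step, field.copy()))     # hand off a snapshot, keep computing

out_q.put(None)                         # signal completion
io_thread.join()                        # wait for the remaining writes
```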

    The sea ice model component of HadGEM3-GC3.1

    A new sea ice configuration, GSI8.1, is implemented in the Met Office global coupled configuration HadGEM3-GC3.1, which will be used for all CMIP6 (Coupled Model Intercomparison Project Phase 6) simulations. The inclusion of multi-layer thermodynamics has required a semi-implicit coupling scheme between the atmosphere and sea ice to ensure the stability of the solver. Here we describe the sea ice model component and show that the simulated Arctic sea ice thickness and extent compare well with observationally based data.
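    The need for a semi-implicit treatment can be illustrated with a single-slab surface heat budget. This is not the HadGEM3-GC3.1 coupling code: the heat capacity, flux values and the linearised flux F(T) = F0 + dF/dT (T - T0) below are assumptions chosen only to contrast an explicit update, which becomes unstable for strong flux sensitivities and long coupling steps, with a semi-implicit one, which stays damped.

```python
# Hedged illustration of explicit vs semi-implicit surface-flux coupling.
C = 2.0e4        # areal heat capacity of the surface layer (J m-2 K-1), assumed
dt = 3600.0      # coupling time step (s)
F0 = -50.0       # net surface flux at the old temperature (W m-2), assumed
dFdT = -20.0     # flux sensitivity dF/dT (W m-2 K-1), assumed
T0 = -10.0       # old surface temperature (deg C)

# Explicit: flux held at its old-time value F(T0) = F0.  Forward Euler on the
# linearised budget is only stable for |dFdT| * dt / C <= 2; here it is 3.6.
T_explicit = T0 + dt * F0 / C

# Semi-implicit: the flux sensitivity is taken at the new time level,
#   C * (T1 - T0) / dt = F0 + dFdT * (T1 - T0)
#   =>  T1 = T0 + dt * F0 / (C - dt * dFdT),
# which stays damped however large |dFdT| * dt / C becomes.
T_implicit = T0 + dt * F0 / (C - dt * dFdT)

print(f"explicit step:      {T_explicit:6.2f} deg C")
print(f"semi-implicit step: {T_implicit:6.2f} deg C")
```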