
    The Oceanographic Multipurpose Software Environment (OMUSE v1.0)

    In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing and newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aid the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.
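    As an illustration of such a high-level script, a minimal OMUSE-style driver might look as follows (the module path, parameter name and grid attribute are assumptions for illustration, not the verbatim OMUSE v1.0 API):

        from omuse.units import units                          # unit-aware quantities
        from omuse.community.qgmodel.interface import QGmodel  # assumed module path

        ocean = QGmodel()                             # launch the solver as a worker
        ocean.parameters.timestep = 1800.0 | units.s  # assumed parameter name
        ocean.evolve_model(10.0 | units.day)          # advance to a target model time

        # Uniform access to the simulation state: grids are exposed as
        # unit-aware arrays, so analysis tools can query a running model.
        ssh = ocean.grid.ssh.value_in(units.m)        # assumed grid attribute
        ocean.stop()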

    Analysis of the North Atlantic climatologies using a combined OGCM/adjoint approach

    An exact adjoint of the full-scale Bryan-Cox primitive equation model is applied to assimilate North Atlantic climatologies. The inverse calculation aims at finding a steady-state oceanic general circulation consistent with observations by controlling the model input parameters (such as the initial state and the upper thermal and haline boundary conditions). Two climatological hydrographies (Levitus (1982) and Fukumori and Wunsch (1991)) are used for the assimilation, which enables an examination of the sensitivity of the assimilated results to data quality. In addition, the consistency between the climatological hydrography and fluxes is discussed by examining the fits between the optimally estimated surface fluxes and the fluxes calculated by Oberhuber (1988). The efforts made in the study are directed toward assessing the effectiveness of the combined OGCM/adjoint approach in estimating the state of the ocean from climatologies and identifying the associated problems. The major findings of the study are as follows. (1) The full OGCM dynamics substantially helps the model better simulate the frontal structure of the Gulf Stream system and the large-scale features of the velocity field, demonstrating the advantage of the full OGCM and its exact adjoint. (2) The optimized temperature field has a systematic error structure in the vertical: the upper ocean is cooler and the deep ocean is warmer compared with the climatology. Our analysis indicates that the cool surface layer is a correction imposed by the optimization to reduce large data misfits in the deep ocean due to the deep warming. This deep warming is an outcome of using the steady-state assumption, the annual-mean climatology and the relaxation boundary condition at the model's northern boundary. The annual-mean hydrography has surface water warmer than the observed winter surface water, and a deep ocean whose properties are determined by the surface water at high latitudes. Due to the imposed northern boundary condition, the modeled deep waters are formed through the artificial sinking of surface waters with annual-mean temperature in the relaxation zone. This process leads to a warm deep ocean and large model-data discrepancies in the vast deep layer. To reduce the misfits as required by the optimal procedure, the surface layer that is the source of the modeled deep water needs to be cooler. The strong and deep vertical mixing formed in the model provides the means for effective cooling. The surface cooling is stronger for the experiment assimilating the Fukumori and Wunsch hydrography because this climatology has even warmer surface water, owing to its summer-dominated data source. (3) The experiments assimilating the Levitus hydrography exhibit two anomalous features: a strong zonally integrated upwelling at midlatitudes and a very noisy flux estimate. The analysis shows that both features are induced by the smeared representation of the Gulf Stream frontal structure in the Levitus hydrography, which indicates that data quality is one of the important factors in obtaining satisfactory results from the assimilation. (4) Although the requirements for a global minimum are only partially satisfied, the experiments show that, compared with the Levitus hydrography, the Fukumori and Wunsch hydrography is dynamically more compatible with the Oberhuber climatological fluxes.
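    The inverse calculation above is variational: the adjoint model supplies the gradient of a misfit cost function with respect to the control parameters. A generic form of such a cost function (illustrative of the setup, not the paper's exact formulation) is

        J(\mathbf{p}) = \tfrac{1}{2}\big[H(\mathbf{x}(\mathbf{p})) - \mathbf{y}\big]^{\mathsf{T}}\mathbf{R}^{-1}\big[H(\mathbf{x}(\mathbf{p})) - \mathbf{y}\big]
                      + \tfrac{1}{2}(\mathbf{p} - \mathbf{p}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{p} - \mathbf{p}_b),

    where \mathbf{p} collects the controls (initial state and surface thermal and haline boundary conditions), \mathbf{x}(\mathbf{p}) is the resulting steady model state, H maps the model state to the observed quantities, \mathbf{y} is the climatological hydrography, and \mathbf{R} and \mathbf{B} are the observation- and background-error covariances. The adjoint yields \nabla_{\mathbf{p}} J at the cost of roughly one additional model integration per descent iteration.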

    NEMO-Med: Optimization and Improvement of Scalability

    The NEMO oceanic model is widely used within the climate community. It is used with different configurations in more than 50 research projects for both long- and short-term simulations. The computational requirements of the model and its implementation limit the exploitation of emerging computational infrastructure at the peta- and exascale, so a deep revision and analysis of the model and its implementation were needed. The paper describes the performance evaluation of the model (v3.2), based on MPI parallelization, on the MareNostrum platform at the Barcelona Supercomputing Centre. The scalability analysis took into account different factors, such as the I/O system available on the platform, the domain decomposition of the model and the level of parallelism. The analysis highlighted different bottlenecks due to communication overhead. The code has been optimized by reducing the communication weight within some frequently called functions, and the parallelization has been improved by introducing a second level of parallelism based on the OpenMP shared-memory paradigm.
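    A minimal sketch of the two-level pattern described above, written in Python with mpi4py purely for illustration (NEMO itself is Fortran with MPI plus OpenMP; the domain shape, file name and stencil below are assumptions):

        # Run with e.g.: mpiexec -n 4 python hybrid_sketch.py  (file name illustrative)
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # First level: MPI decomposes the domain into latitude bands,
        # each with one-row halos at the top and bottom.
        local = np.full((64 + 2, 128), float(rank))
        up = rank - 1 if rank > 0 else MPI.PROC_NULL
        down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # One aggregated halo exchange per step: fewer, larger messages
        # reduce the communication weight of frequently called routines.
        comm.Sendrecv(local[1], dest=up, recvbuf=local[-1], source=down)
        comm.Sendrecv(local[-2], dest=down, recvbuf=local[0], source=up)

        # Second level: this interior update is the kind of loop nest that
        # NEMO threads with OpenMP; here it is a vectorized stencil.
        local[1:-1] = 0.25 * (local[:-2] + local[2:]
                              + np.roll(local[1:-1], 1, axis=1)
                              + np.roll(local[1:-1], -1, axis=1))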

    Prospects for improving the representation of coastal and shelf seas in global ocean models

    Accurately representing coastal and shelf seas in global ocean models is one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, the hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by current state-of-the-art model resolutions, e.g. nominal 1∕12°, and still reasonably well resolved at 1∕4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1∕12° global model resolves the first baroclinic Rossby radius for only ∼ 8 % of regions < 500 m deep, but this increases to ∼ 70 % for a 1∕72° model, so resolving these scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1∕12° global- and basin-scale northern North Atlantic Nucleus for European Modelling of the Ocean (NEMO) simulations; the latter includes tides and a k-ε vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1∕4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate that an unstructured-mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1∕72° global model by 2026. However, we also note that a 1∕12° global model would not have a computational cost comparable to that of a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ∼ 1.5 km would be routinely practical in about a decade, given substantial effort on numerical and computational development. For complex Earth system models, this extends to about two decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
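    The closing cost estimates rest on a simple scaling: refining horizontal resolution by a factor r multiplies cost by roughly r^3 (r^2 more grid points times r more time steps from the CFL constraint). A toy version in Python (the annual machine-growth factor is an assumption for illustration; the paper's calibrated inputs, based on the UK Research Councils' facility, give the 2021-2027 dates quoted above):

        import math

        def year_affordable(r, growth_per_year=1.5, ref_year=2011):
            """Year when an r-fold refinement of the 1/4 degree reference model
            costs the same machine share as the reference did in ref_year."""
            cost_factor = r ** 3  # r**2 grid points x r time steps (CFL)
            return ref_year + math.log(cost_factor) / math.log(growth_per_year)

        # 1/4 degree -> 1/72 degree is an 18-fold refinement:
        print(round(year_affordable(18)))  # ~2032 under these toy assumptions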

    Estimation and Impact of Nonuniform Horizontal Correlation Length Scales for Global Ocean Physical Analyses

    Optimally modeling background-error horizontal correlations is crucial in ocean data assimilation. This paper investigates the impact of relaxing the assumption of uniform background-error correlations in a global ocean variational analysis system. Spatially varying horizontal correlations are introduced in the recursive filter operator, which is used for modeling horizontal covariances in the Centro Euro-Mediterraneo sui Cambiamenti Climatici (CMCC) analysis system. The horizontal correlation length scales (HCLSs) were defined on the full three-dimensional model space and computed both from a dataset of monthly anomalies with respect to the monthly climatology and through the so-called National Meteorological Center (NMC) method. Different formulas for estimating the correlation length scale are also discussed and applied to the two forecast-error datasets. The new formulation is tested over a 12-yr period (2000–11) in the ½° resolution system. The comparison with the data assimilation system using uniform background-error horizontal correlations indicates the superiority of the nonuniform formulation, especially in eddy-dominated areas. Verification skill scores report a significant reduction of RMSE, and the use of nonuniform length scales improves the representation of the eddy kinetic energy at midlatitudes, suggesting that uniform, latitude-dependent, or Rossby-radius-dependent formulations are insufficient to represent the geographical variations of the background-error correlations. Furthermore, a small tuning of the globally uniform value of the length scale was found to have only a small impact on the analysis system. Whether the length scales are estimated from anomalies or through the NMC method also has a marginal effect compared with the adoption of nonuniform HCLSs itself. On the other hand, the application of overestimated length scales proved detrimental to the analysis system in all areas and for all parameters.
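    A minimal one-dimensional sketch of the underlying idea, with a spatially varying filter coefficient standing in for a spatially varying length scale (the CMCC system's actual recursive filter, its normalization, and the mapping from length scale to coefficient are more involved):

        import numpy as np

        def recursive_filter(field, alpha, passes=4):
            """Forward/backward first-order recursive filter; a larger
            alpha[i] implies a longer local correlation length scale."""
            out = field.astype(float)
            for _ in range(passes):
                for i in range(1, out.size):              # forward sweep
                    out[i] = (1 - alpha[i]) * out[i] + alpha[i] * out[i - 1]
                for i in range(out.size - 2, -1, -1):     # backward sweep
                    out[i] = (1 - alpha[i]) * out[i] + alpha[i] * out[i + 1]
            return out

        # Illustrative: longer length scales in the tropics, shorter at
        # midlatitudes; filtering a unit impulse shows the local kernel.
        lat = np.linspace(-60.0, 60.0, 241)
        alpha = np.where(np.abs(lat) < 20.0, 0.8, 0.4)
        kernel = recursive_filter(np.eye(241)[120], alpha)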

    Stevens Open Boundary Conditions


    Veros v0.1 – a fast and versatile ocean simulator in pure Python

    A general circulation ocean model is translated from Fortran to Python. Its code structure is optimized to exploit available Python utilities, remove simulation bottlenecks, and comply with modern best practices. Furthermore, support is added for Bohrium, a framework that provides a just-in-time compiler for array operations and supports parallel execution on both CPU and GPU targets. For applications containing more than a million grid elements, such as a typical 1° × 1° horizontal-resolution global ocean model, Veros is approximately half as fast as the MPI-parallelized Fortran base code on 24 CPUs and as fast as the Fortran reference when running on a high-end GPU. By replacing the original conjugate gradient stream function solver with a solver from the pyAMG Python package, this particular subroutine outperforms the corresponding Fortran version by up to an order of magnitude. The study is concluded with a simple application in which the North Atlantic wave response to a Southern Ocean wind perturbation is investigated. It is found that, even in a realistic setting, the phase speeds of boundary waves match the expectations based on theory and idealized models.
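    The pyAMG swap mentioned above follows the library's standard pattern; the sketch below solves a generic two-dimensional Poisson problem rather than Veros's actual stream-function operator (which carries metric terms and island boundary conditions):

        import numpy as np
        import pyamg

        A = pyamg.gallery.poisson((200, 200), format='csr')  # 2-D Laplacian
        b = np.random.rand(A.shape[0])                       # illustrative forcing

        ml = pyamg.ruge_stuben_solver(A)        # build the classical AMG hierarchy
        psi = ml.solve(b, tol=1e-10)            # multigrid solve of A psi = b
        print(np.linalg.norm(b - A @ psi))      # residual check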

    Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications

    NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the relation of theory to practice, and measured performance. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, along with recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for the space station, EOS, and the Great Observatories era.