4 research outputs found

    Short‐term time step convergence in a climate model

    This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 s and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4, considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities.
    Key Points: Convergence is slow in CAM5; stratiform cloud parameterizations have large errors.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/111268/1/jame20146.pd
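    The convergence rate quoted above is, in effect, the slope of the error versus time step size on a log-log scale, measured against a reference run with a very small step. A minimal sketch of that estimate, assuming err holds RMS differences from a 1 s reference solution at a set of coarser coupling time steps (the variable names and numbers below are illustrative, not taken from the paper):

        import numpy as np

        # Hypothetical RMS differences from a 1 s reference run after a 1 h
        # simulation, measured at several process-coupling time steps (seconds).
        dt = np.array([1800.0, 900.0, 450.0, 225.0])
        err = np.array([2.0e-1, 1.5e-1, 1.1e-1, 8.3e-2])  # placeholder error norms

        # Empirical convergence order = slope of log(err) vs. log(dt);
        # a value near 1.0 indicates first-order convergence, ~0.4 is slower.
        order, _ = np.polyfit(np.log(dt), np.log(err), 1)
        print(f"empirical convergence rate ~ {order:.2f}")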

    Integration of Rosenbrock-type solvers into CAM4-Chem and evaluation of its performance in the perspectives of science and computation

    In this study, the perennial problem of ozone overestimation in the global chemistry-climate model CAM4-Chem [Community Earth System Model with chemistry activated] is investigated from the perspectives of numerics and computation. High-order Rosenbrock-type solvers are implemented in CAM4-Chem, motivated by their higher-order accuracy and better computational efficiency. The results are evaluated against observational data, and the ROS-2 [second-order Rosenbrock] solver reduces the positive bias of ozone concentration, both horizontally and vertically, in most regions. The largest reduction occurs in the northern-hemisphere mid-latitudes, where the bias is generally high, and in summertime, when photochemical reactions are most active. In addition, the ROS-2 solver achieves a speed-up of roughly 2× over the original IMP [first-order implicit] solver. This improvement is mainly due to the reuse of the Jacobian matrix and LU [lower-upper] factorization across its two-stage calculation. To gain further speed-up, we port the ROS-2 solver to the GPU [graphics processing unit] and compare its performance with the CPU version. With the optimized configuration, the GPU version reaches a speed-up of ~11.7× for the computation alone and ~3.82× when data movement between CPU and GPU is included. The computational time of the GPU version grows more slowly than that of the CPU version as a function of the number of loop iterations, which makes the GPU version more attractive for massive computations. Moreover, under stochastic perturbation of the initial input, we find that the ROS-3 [third-order Rosenbrock] solver has better convergence properties than the ROS-2 and IMP solvers. However, when implemented in CAM4-Chem, the ROS-3 solver generally overestimates ozone concentration even further. This is because the ROS-3 solver requires more frequent time step refinements, which also make it less computationally efficient than the IMP and ROS-2 solvers. We also investigate the effect of grid resolution and find that, for the same chemical solver, the finer resolution provides a better pattern correlation than the coarser resolution.
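    The two-stage structure behind that speed-up is visible in the standard second-order Rosenbrock scheme used in atmospheric chemistry: both stages solve linear systems with the same matrix (I - gamma*h*J), so a single Jacobian evaluation and LU factorization serve the whole step. A minimal sketch of one such step, assuming a user-supplied right-hand side f(y) and Jacobian jac(y) (the coefficients follow the common ROS-2 formulation; the names are illustrative, not CAM4-Chem code):

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def ros2_step(f, jac, y, h, gamma=1.0 + 1.0 / np.sqrt(2.0)):
            """One two-stage, second-order Rosenbrock (ROS-2) step."""
            n = y.size
            # Factor (I - gamma*h*J) once; both stages reuse this LU factorization.
            lu_piv = lu_factor(np.eye(n) - gamma * h * jac(y))
            k1 = lu_solve(lu_piv, f(y))
            k2 = lu_solve(lu_piv, f(y + h * k1) - 2.0 * k1)
            return y + 1.5 * h * k1 + 0.5 * h * k2

        # Example: a stiff linear test problem dy/dt = A @ y
        A = np.array([[-1000.0, 0.0], [1.0, -1.0]])
        y = np.array([1.0, 1.0])
        for _ in range(100):
            y = ros2_step(lambda v: A @ v, lambda v: A, y, h=0.01)

    By contrast, a first-order implicit (backward Euler) step typically needs one or more Newton iterations, each with its own linear solve, which is consistent with the abstract's attribution of the ~2× gain to Jacobian and LU reuse.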

    Brave New Worlds: How computer simulation changes model-based science

    A large part of science involves building and investigating models. One key feature of model-based science is that one thing is studied as a means of learning about some rather different thing. How scientists make inferences from a model to the world, then, is a topic of great interest to philosophers of science. An increasing number of models are specified with very complex computer programs. In this thesis, I examine the epistemological issues that arise when scientists use these computer simulation models to learn about the world or to think through their ideas. I argue that the explosion of computational power over the last several decades has revolutionised model-based science, but that restraint and caution must be exercised in the face of this power. To make my arguments, I focus on two kinds of computer simulation modelling: climate modelling, in particular high-fidelity climate models; and agent-based models, which are used to represent populations of interacting agents, often in an ecological or social context. Both kinds involve complex model structures and are representative of the beneficial capacities of computer simulation. However, both face epistemic costs that follow from using highly complex model structures. As models increase in size and complexity, it becomes far harder for modellers to understand their models and why they behave the way they do. The value of models is further obscured by their proliferation, and by the proliferation of programming languages in which they can be described. If modellers struggle to grasp their models, they can struggle to make good inferences with them. While the climate modelling community has developed much of the infrastructure required to mitigate these epistemic costs, the less mature field of agent-based modelling is still struggling to implement such community standards and infrastructure. I conclude that modellers cannot take full advantage of the representational capacities of computer simulations unless resources are invested into their study that scale proportionately with the models' complexity.