
    Probabilistic concepts in intermediate-complexity climate models: A snapshot attractor picture

    Abstract: A time series resulting from a single initial condition is shown to be insufficient for quantifying the internal variability of a climate model, and thus meaningful climate projections cannot be based on it. The authors argue that the natural distribution of the snapshot attractor corresponding to a particular forcing scenario, obtained from an ensemble of trajectories differing solely in their initial conditions, should be determined in order to quantify internal variability and to characterize any instantaneous state of the system in the future. Furthermore, as a simple measure of the internal variability of any particular model variable, the authors suggest using its instantaneous ensemble standard deviation. These points are illustrated with the intermediate-complexity climate model Planet Simulator, forced by a CO2 scenario, with a 40-member ensemble. In particular, the leveling off of the time dependence of any ensemble average is shown to provide a much clearer indication of reaching a steady state than any property of a single time series. Shifts in ensemble averages are indicative of climate changes. The dynamical character of such changes is illustrated by hysteresis-like curves obtained by plotting the ensemble-average surface temperature versus the CO2 concentration. The internal variability is found to be most pronounced on small geographical scales. The traditionally used 30-yr temporal averages are shown to differ considerably from the corresponding ensemble averages. Finally, the North Atlantic Oscillation (NAO) index, related to the teleconnection paradigm, is also investigated. The NAO time series of the individual realizations are found to differ strongly from each other and from the ensemble average, and climatic trends can be extracted only from the latter.
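    A minimal sketch of the ensemble statistics described above, assuming the ensemble of a single model variable is stored as a NumPy array of shape (members, time); the array contents, sizes and variable are illustrative, not taken from the paper:

    ```python
    import numpy as np

    # Hypothetical ensemble of one model variable (e.g. surface temperature):
    # shape (n_members, n_times), members differing only in initial conditions.
    rng = np.random.default_rng(0)
    n_members, n_times = 40, 500
    ensemble = rng.normal(loc=288.0, scale=1.0, size=(n_members, n_times))

    # Ensemble average at each instant: its leveling-off indicates a steady
    # state, and its shifts indicate climate change.
    ensemble_mean = ensemble.mean(axis=0)

    # Instantaneous ensemble standard deviation: the suggested simple measure
    # of internal variability at each time.
    internal_variability = ensemble.std(axis=0, ddof=1)

    # A traditional running temporal average along a single realization
    # (30 samples here), for contrast with the instantaneous ensemble average.
    window = 30
    temporal_mean_single_run = np.convolve(
        ensemble[0], np.ones(window) / window, mode="valid"
    )

    print(ensemble_mean[-1], internal_variability[-1], temporal_mean_single_run[-1])
    ```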

    The Theory of Parallel Climate Realizations


    Predicting climate change using response theory: global averages and spatial patterns

    The provision of accurate methods for predicting the climate response to anthropogenic and natural forcings is a key contemporary scientific challenge. Using a simplified and efficient open-source general circulation model of the atmosphere featuring O(10^5) degrees of freedom, we show how it is possible to approach such a problem using nonequilibrium statistical mechanics. Response theory allows one to practically compute the time-dependent measure supported on the pullback attractor of the climate system, whose dynamics is non-autonomous as a result of time-dependent forcings. We propose a simple yet efficient method for predicting, at any lead time and in an ensemble sense, the change in climate properties resulting from an increase in the concentration of CO2, using test perturbation model runs. We assess the strengths and limitations of response theory in predicting the changes in the globally averaged values of surface temperature and of the yearly total precipitation, as well as in their spatial patterns. The quality of the predictions obtained for the surface temperature fields is rather good, while in the case of precipitation a good skill is observed only for the global average. We also show how it is possible to define accurately concepts like the inertia of the climate system, or to predict when climate change is detectable given a forcing scenario. Our analysis can be extended to deal with more complex portfolios of forcings and can be adapted to treat, in principle, any climate observable. Our conclusion is that climate change is indeed a problem that can be effectively seen through a statistical mechanical lens, and that there is great potential for optimizing the current coordinated modelling exercises run for the preparation of the subsequent reports of the Intergovernmental Panel on Climate Change.
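    A hedged sketch of the linear-response idea behind such predictions (not the paper's code): the ensemble-mean response of an observable to a step forcing from a test perturbation run yields a Green's function, and the response to a different forcing scenario is then predicted by convolution, delta_A(t) = integral over tau of G(tau) f(t - tau). All numbers and the exponential step response below are illustrative stand-ins:

    ```python
    import numpy as np

    dt = 1.0                          # time step (e.g. years), illustrative
    t = np.arange(0, 200, dt)

    # Synthetic stand-in for the measured step response of an observable
    # (e.g. global mean surface temperature under a unit step forcing).
    step_response = 1.5 * (1.0 - np.exp(-t / 20.0))

    # Green's function = time derivative of the step response.
    green = np.gradient(step_response, dt)

    # Arbitrary forcing scenario to predict, e.g. a linear ramp (illustrative).
    forcing = np.minimum(t / 100.0, 1.0)

    # Predicted ensemble-mean change of the observable under the new scenario.
    predicted = dt * np.convolve(green, forcing)[: t.size]

    print(predicted[-1])              # predicted change at the final lead time
    ```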

    Workflow Support for Complex Grid Applications

    Abstract. In this paper we present a workflow solution to graphically support the design, execution, monitoring, and performance visualisation of complex grid applications. The described workflow concept can provide interoperability among different types of legacy applications on heterogeneous computational platforms, such as Condor- or Globus-based grids. The major design and implementation issues concerning the integration of the Condor/Condor-G/DAGMan tools, the Mercury/GRM grid monitoring infrastructure, the PROVE performance visualisation tool, and the new high-level workflow editor and manager of the P-GRADE development environment are discussed for both the integrated and the portal versions. The integrated version of P-GRADE represents the thick-client concept, while the portal version needs only a thin client and can be accessed with a standard web browser. To illustrate the application of our approach in the grid, an ultra-short-range weather prediction system is presented that can be executed on a Condor-G/Globus-based testbed and whose execution can be visualised not only at the workflow level but also at the level of individual jobs.
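    As a rough illustration of the workflow concept itself (a DAG of jobs executed in dependency order), the following sketch is neither P-GRADE nor DAGMan code; the job names and the stand-in run function are hypothetical:

    ```python
    from graphlib import TopologicalSorter

    # Hypothetical workflow DAG: each job lists the jobs it depends on.
    # In P-GRADE/DAGMan these would be grid jobs; here they are plain functions.
    jobs = {
        "preprocess": [],
        "model_run": ["preprocess"],
        "visualise": ["model_run"],
    }

    def run_job(name: str) -> None:
        # Stand-in for submitting a job to a Condor-G/Globus testbed and waiting.
        print(f"running {name}")

    # Execute jobs in an order consistent with the dependencies; reporting each
    # job is a stand-in for monitoring at both the workflow and the job level.
    for job in TopologicalSorter(jobs).static_order():
        run_job(job)
        print(f"finished {job}")
    ```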

    Korai mentális teszt: az enyhe kognitív zavar szűrőtesztjének fejlesztése = Early Mental Test: Developing a Screening Test for Mild Cognitive Impairment

    BACKGROUND AND PURPOSE: Mild cognitive impairment (MCI) is a heterogeneous syndrome considered a prodromal state of dementia, with clinical importance in the early detection of Alzheimer's disease. We are currently developing an MCI screening instrument, the Early Mental Test (EMT), suited to the needs of primary care physicians. The present study describes the validation of version 6.2 of the test. METHODS: Only subjects over the age of 55 who scored at least 20 points on the Mini-Mental State Examination (MMSE) were included in the study (n = 132; 95 female, 37 male; mean age 69.2 years, SD = 6.59; mean education 11.17 years, SD = 3.86). The psychometric evaluation consisted of the Alzheimer's Disease Assessment Scale Cognitive subscale (ADAS-Cog) and version 6.2 of the EMT. The statistical analyses were carried out with version 17.0 of the SPSS statistical package. RESULTS: The optimised cut-off point was found to be 3.45 points, with corresponding sensitivity, specificity and accuracy of 69% each. Cronbach's alpha, which describes the internal consistency of the test, was 0.667, higher than that of the ADAS-Cog (0.446). A weak negative rank correlation was found between the total score of EMT 6.2 and the age of the probands (rs = -0.25, p = 0.003). Similarly, only a weak correlation was found between education level and the total score of EMT 6.2 (rs = 0.31, p < 0.001). Two of the subtests, the repeated delayed short-term memory task and the letter fluency test with a motor distraction task, had significantly better power to separate the MCI and control groups than the other subtests of the EMT. CONCLUSION: Version 6.2 of the EMT is a fast and simple detector of MCI with a sensitivity-specificity profile similar to that of the MMSE, but this version of the test definitely needs further development.
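    A hedged sketch of the psychometric quantities reported above (not the study's code): sensitivity, specificity and accuracy at a score cut-off, and Cronbach's alpha from item and total-score variances. The scores, labels, item count and the direction of the 3.45-point cut-off are hypothetical and serve only to show the arithmetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical item scores: rows = subjects, columns = subtest items.
    items = rng.integers(0, 3, size=(132, 6)).astype(float)
    total = items.sum(axis=1)
    has_mci = rng.integers(0, 2, size=132).astype(bool)   # hypothetical labels

    # Sensitivity / specificity / accuracy at a cut-off (assuming lower = impaired).
    cutoff = 3.45
    flagged = total <= cutoff
    sensitivity = (flagged & has_mci).sum() / has_mci.sum()
    specificity = (~flagged & ~has_mci).sum() / (~has_mci).sum()
    accuracy = (flagged == has_mci).mean()

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
    k = items.shape[1]
    alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / total.var(ddof=1))

    print(sensitivity, specificity, accuracy, alpha)
    ```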