38 research outputs found

    Climate projections over the Antarctic Peninsula region to the end of the 21st century. Part 1: cold temperature indices

    Objective. This paper estimates climate change in the Antarctic Peninsula region. Over recent decades, the most pronounced warming in the climate system has been observed in the polar regions, particularly in the Antarctic Peninsula region, where the Ukrainian Antarctic Akademik Vernadsky station is located. Providing a comprehensive estimate of climate change trends, both those already observed and those projected for the future, is therefore an important task for the region. The main objective of the study is to estimate changes in climate characteristics in the Antarctic Peninsula region over the 21st century, based on the calculation of relevant climate indices. The objects of the research are projections of temperature and precipitation characteristics in the Antarctic Peninsula region and the Akademik Vernadsky station area under the RCP4.5 and RCP8.5 (Representative Concentration Pathway) scenarios. The research methods are numerical modelling and statistical analysis of regional climate model data.

    Implementation of FAIR principles in the IPCC: the WGI AR6 Atlas repository

    The Sixth Assessment Report (AR6) of the Intergovernmental Panel on Climate Change (IPCC) has adopted the FAIR Guiding Principles. We present the Atlas chapter of Working Group I (WGI) as a test case. We describe the application of the FAIR principles in the Atlas, the challenges faced during its implementation, and those that remain for the future. We introduce the open-source repository resulting from this process, including code (e.g., annotated Jupyter notebooks), data provenance, and some aggregated datasets used in figures in the Atlas chapter and its interactive companion (the Interactive Atlas), open to scrutiny by the scientific community and the general public. We describe the informal pilot review conducted on this repository to gather recommendations that led to significant improvements. Finally, a working example illustrates the re-use of the repository resources to produce customized regional information, extending the Interactive Atlas products and running the code interactively in a web browser using Jupyter notebooks.

    The ECOMS User Data Gateway: Towards seasonal forecast data provision and research reproducibility in the era of Climate Services

    Sectorial applications of seasonal forecasting require data for a reduced number of variables from different datasets, mainly (gridded) observations, reanalysis, and predictions from state-of-the-art seasonal forecast systems (such as NCEP/CFSv2, ECMWF/System4 or UKMO/GloSea5). Whilst this information can be obtained directly from the data providers, the resulting formats, temporal aggregations, and vocabularies may not be homogeneous across datasets. Moreover, different data policies hold for the different databases, only some of which are publicly available. Obtaining and harmonizing multi-model seasonal forecast data for sector-specific applications is therefore an error-prone, time-consuming task. To facilitate this, the ECOMS User Data Gateway (ECOMS-UDG) was developed in the framework of the ECOMS initiative as a one-stop service for climate data. To this aim, the variables required by end users were identified, downloaded from the data providers, and locally stored as virtual datasets in a THREDDS Data Server (TDS), implementing fine-grained user management and authorization via the THREDDS Access Portal (TAP). As a result, users can retrieve the subsets best suited to their particular research needs in a user-friendly manner using the standard TDS data services. Moreover, an open-source, R-based interface for data access and postprocessing was developed as a bundle of packages implementing harmonized data access (one single vocabulary), data collocation, bias adjustment and downscaling, and forecast visualization and validation. This provides a unique comprehensive framework for end-to-end applications of seasonal predictions, favoring the reproducibility of the ECOMS scientific outcomes and extensible to the whole scientific community. This work was supported by the European Union's Seventh Framework Programme [FP7/2007–2013] under Grant Agreements 308291 (EUPORIAS) and 308378 (SPECS). The project took advantage of the THREDDS Data Server (TDS) software developed by UCAR/Unidata (http://doi.org/10.5065/D6N014KG). We thank the two anonymous reviewers for their suggestions and comments.
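The single-vocabulary harmonized access described above can be pictured as a lookup table translating each provider's native variable names into one standard set. The sketch below is a minimal Python illustration; the dataset keys and native variable names are hypothetical placeholders, not the actual ECOMS-UDG dictionary (which is implemented as an R package bundle).

```python
# Minimal sketch of one-vocabulary harmonized access (Python
# illustration; the real interface is an R package bundle).
# Dataset keys and native variable names below are hypothetical.
STANDARD_VOCAB = {
    "CFSv2": {"TMP_2maboveground": "tas", "APCP_surface": "pr"},
    "System4": {"2t": "tas", "tp": "pr"},
}

def harmonize(dataset: str, variable: str) -> str:
    """Translate a dataset-specific variable name into the standard
    vocabulary; unknown names pass through unchanged."""
    return STANDARD_VOCAB[dataset].get(variable, variable)
```

With such a table, downstream code can request `tas` or `pr` uniformly, regardless of which forecast system the data came from.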

    On the suitability of deep convolutional neural networks for continental-wide downscaling of climate change projections

    In a recent paper, Baño-Medina et al. (Configuration and intercomparison of deep learning neural models for statistical downscaling, preprint, 2019) assessed the suitability of deep convolutional neural networks (CNNs) for downscaling of temperature and precipitation over Europe using large-scale 'perfect' reanalysis predictors. They compared the results provided by CNNs with those obtained from a set of standard methods which have traditionally been used for downscaling purposes (linear and generalized linear models), concluding that CNNs are well suited for continental-wide applications. That analysis is extended here by assessing the suitability of CNNs for downscaling future climate change projections using Global Climate Model (GCM) outputs as predictors. This is particularly relevant for this type of 'black-box' model, whose results cannot be easily explained on physical grounds and could potentially lead to implausible downscaled projections due to uncontrolled extrapolation artifacts. Based on this premise, we analyze in this work the two key assumptions made in perfect-prognosis downscaling: (1) the predictors chosen to build the statistical model should be well reproduced by GCMs, and (2) the statistical model should be able to reliably extrapolate out-of-sample (climate change) conditions. As a first step to test the suitability of these models, the latter assumption is assessed here by analyzing how the CNNs affect the raw GCM climate change signal (defined as the difference, or delta, between future and historical climate). Our results show that, compared to well-established generalized linear models (GLMs), CNNs yield smaller departures from the raw GCM outputs for the end of the century, resulting in more plausible downscaling results for climate change applications. Moreover, as a consequence of the automatic treatment of spatial features, CNNs are also found to provide more spatially homogeneous downscaled patterns than GLMs. The authors acknowledge partial support from the ATLAS project, funded by the Spanish Research Program (PID2019-111481RB-I00). Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
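The delta signal analyzed in this study, the difference between future and historical climatologies, can be computed directly; the sketch below uses synthetic data (illustrative values, not results from the paper) to show how a downscaled projection's departure from the raw GCM signal would be measured.

```python
import numpy as np

def climate_change_signal(future, historical):
    """Delta signal: difference between the future and historical
    climatological means, per grid point."""
    return future.mean(axis=0) - historical.mean(axis=0)

# Synthetic daily series at 3 grid points (illustrative only).
rng = np.random.default_rng(0)
hist_gcm = rng.normal(10.0, 2.0, size=(3650, 3))
fut_gcm = hist_gcm + 2.5        # raw GCM projects +2.5 warming
fut_down = hist_gcm + 2.1       # a hypothetical downscaled projection

raw_delta = climate_change_signal(fut_gcm, hist_gcm)
down_delta = climate_change_signal(fut_down, hist_gcm)
departure = down_delta - raw_delta  # smaller magnitude = more signal-preserving
```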

    Testing bias adjustment methods for regional climate change applications under observational uncertainty and resolution mismatch

    Systematic biases in climate models hamper their direct use in impact studies and, as a consequence, many statistical bias adjustment methods have been developed to calibrate model outputs against observations. The application of these methods in a climate change context is problematic since there is no clear understanding of how they may affect key magnitudes, for example the climate change signal or trend, under different sources of uncertainty. Two relevant sources of uncertainty, often overlooked, are the sensitivity to the observational reference used to calibrate the method and the effect of the resolution mismatch between model and observations (downscaling effect). In the present work, we assess the impact of these factors on the climate change signal of temperature and precipitation, considering marginal, temporal and extreme aspects. We use eight standard and state-of-the-art bias adjustment methods (spanning a variety of approaches regarding their nature, empirical or parametric, fitted parameters and trend-preservation) for a case study in the Iberian Peninsula. The quantile trend-preserving methods (namely quantile delta mapping (QDM), scaled distribution mapping (SDM) and the method from the third phase of ISIMIP, ISIMIP3) best preserve the raw signals for the different indices and variables considered (not all preserved by construction). However, they rely largely on the reference dataset used for calibration, thus presenting a larger sensitivity to the observations, especially for precipitation intensity, spells and extreme indices. Thus, high-quality observational datasets are essential for comprehensive analyses in larger (continental) domains. Similar conclusions hold for experiments carried out at high (approximately 20 km) and low (approximately 120 km) spatial resolutions. © 2020 The Authors. Atmospheric Science Letters published by John Wiley & Sons Ltd on behalf of the Royal Meteorological Society.
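Of the methods mentioned above, quantile delta mapping illustrates the trend-preserving idea well: each future value keeps its quantile-specific change relative to the historical simulation, applied on top of the observed distribution. Below is a minimal additive (temperature-like) sketch with synthetic data; it illustrates the general QDM idea, not the exact implementation evaluated in the paper.

```python
import numpy as np

def qdm_additive(obs, hist, fut):
    """Additive quantile delta mapping (illustrative sketch).

    obs:  observations for the calibration period
    hist: model output for the same (historical) period
    fut:  model output for the future period to be adjusted
    """
    # Non-exceedance probability of each future value within the
    # future model distribution (plotting-position estimate).
    tau = (np.argsort(np.argsort(fut)) + 0.5) / len(fut)
    # Quantiles of the observed and historical model distributions
    # at those probabilities.
    obs_q = np.quantile(obs, tau)
    hist_q = np.quantile(hist, tau)
    # Preserve the modeled change (fut minus hist quantile) on top
    # of the observed quantile.
    return obs_q + (fut - hist_q)

# Synthetic example: a model with a constant -2 bias and a +3 signal.
hist = np.arange(100, dtype=float)
obs = hist + 2.0
fut = hist + 3.0
adj = qdm_additive(obs, hist, fut)  # bias removed, +3 signal kept
```

The multiplicative variant used for precipitation replaces the sum with a ratio, but the quantile-wise preservation of the change signal is the same design choice.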

    Climate projections over the Antarctic Peninsula region to the end of the 21st century. Part II: wet/dry indices

    The objective of the study is to assess possible climate change in the Antarctic Peninsula region from 1986 to the end of the 21st century, as projected by an ensemble of regional climate models (RCMs). During the last decades Antarctica has undergone predominantly warming, with the highest rate of surface air temperature increase found over the Antarctic Peninsula, where the Ukrainian Antarctic Akademik Vernadsky station is located. The region hosts a unique ecosystem that is vulnerable to the growing impact of a changing weather regime driven by rapid climate change and its consequences, including changes in sea ice and in the extent of snow- and ice-covered land. An important task for the region is therefore to estimate climate change trends and to identify possible subregions with similar tendencies.

    Configuration and intercomparison of deep learning neural models for statistical downscaling

    Deep learning techniques (in particular convolutional neural networks, CNNs) have recently emerged as a promising approach for statistical downscaling due to their ability to learn spatial features from huge spatiotemporal datasets. However, existing studies are based on complex models, applied to particular case studies and using simple validation frameworks, which makes a proper assessment of the (possible) added value offered by these techniques difficult. As a result, these models are usually seen as black boxes, generating distrust among the climate community, particularly in climate change applications. In this paper we undertake a comprehensive assessment of deep learning techniques for continental-scale statistical downscaling, building on the VALUE validation framework. In particular, different CNN models of increasing complexity are applied to downscale temperature and precipitation over Europe, comparing them with a few standard benchmark methods from VALUE (linear and generalized linear models) which have been traditionally used for this purpose. Besides analyzing the adequacy of different components and topologies, we also focus on their extrapolation capability, a critical point for their potential application in climate change studies. To do this, we use a warm test period as a surrogate for possible future climate conditions. Our results show that, while the added value of CNNs is mostly limited to the reproduction of extremes for temperature, these techniques do outperform the classic ones in the case of precipitation for most aspects considered. 
This overall good performance, together with the fact that they can be suitably applied to large regions (e.g., continents) without worrying about the spatial features being considered as predictors, can foster the use of statistical approaches in international initiatives such as the Coordinated Regional Climate Downscaling Experiment (CORDEX). The authors acknowledge the funding provided by the project MULTI-SDM (CGL2015-66583-R, MINECO/FEDER). They also acknowledge the E-OBS dataset from the EU-FP6 project UERRA (http://www.uerra.eu, last access: 23 April 2020) and the Copernicus Climate Change Service, and the data providers in the ECA&D project (https://www.ecad.eu, last access: 23 April 2020).
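The warm-period strategy mentioned above, using the warmest part of the record as a surrogate for out-of-sample future conditions, amounts to a particular train/test split. A minimal sketch with synthetic annual temperatures (illustrative values, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic annual-mean temperatures for a 30-year record.
annual_mean = rng.normal(10.0, 1.0, size=30)

# Train on the coolest 80% of years and hold out the warmest 20%
# as a surrogate for future (extrapolation) conditions.
order = np.argsort(annual_mean)
train_years = order[:24]
test_years = order[24:]
```

Evaluating a downscaling model on `test_years` then probes its behavior under conditions warmer than anything seen in training, which is the extrapolation property at stake in climate change applications.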

    Forecasting water temperature in lakes and reservoirs using seasonal climate prediction

    Seasonal climate forecasts produce probabilistic predictions of meteorological variables for the following months. This provides a potential resource for predicting the influence of seasonal climate anomalies on the surface water balance in catchments and the hydro-thermodynamics of related water bodies (e.g., lakes or reservoirs). Obtaining seasonal forecasts for impact variables (e.g., discharge and water temperature) requires linking seasonal climate forecasts to impact models that simulate hydrology and lake hydrodynamics and thermal regimes. However, this link remains challenging for stakeholders and the water scientific community, mainly due to the probabilistic nature of these predictions. In this paper, we introduce a feasible, robust, and open-source workflow integrating seasonal climate forecasts with hydrologic and lake models to generate seasonal forecasts of discharge and water temperature profiles. The workflow is designed to be applicable to any catchment and associated lake or reservoir, and is optimized in this study for four catchment-lake systems to help in their proactive management. We assessed the performance of the resulting seasonal forecasts of discharge and water temperature by comparing them with hydrologic and lake (pseudo)observations (reanalysis). Specifically, we analysed the historical performance using a data sample of past forecasts and reanalysis to obtain information about the skill (performance or quality) of the seasonal forecast system in predicting particular events. We used the current seasonal climate forecast system (SEAS5) and reanalysis (ERA5) of the European Centre for Medium-Range Weather Forecasts (ECMWF). We found that, due to the limited predictability at seasonal time-scales over the locations of the four case studies (Europe and southern Australia), seasonal forecasts exhibited no to low skill for the atmospheric variables considered. Nevertheless, seasonal forecasts of discharge show some skill in all but one case study. Moreover, seasonal forecasts of water temperature performed better in natural lakes than in reservoirs, indicating that human water control is a relevant factor affecting predictability, and performance increases with water depth in all four case studies. Further investigation into the skillful water temperature predictions should aim to identify the extent to which performance is a consequence of thermal inertia (i.e., lead-in conditions). This is a contribution of the WATExR project (watexr.eu/), which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by MINECO-AEI (ES), FORMAS (SE), BMBF (DE), EPA (IE), RCN (NO), and IFD (DK), with co-funding by the European Union (Grant 690462). MINECO-AEI funded this research through projects PCIN-2017-062 and PCIN-2017-092. We thank all water quality and quantity data providers: Ens d’Abastament d’Aigua Ter-Llobregat (ATL, https://www.atl.cat/es), SA Water (https://www.sawater.com.au/), Ruhrverband (www.ruhrverband.de), NIVA (www.niva.no) and NVE (https://www.nve.no/english/). We acknowledge the contribution of the Copernicus Climate Change Service (C3S) in the production of SEAS5. C3S provided the computer time for the generation of the re-forecasts for SEAS5 and for the production of the ocean reanalysis (ORAS5), used as initial conditions for the SEAS5 re-forecasts.
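The historical skill assessment described above can be reduced, in its simplest deterministic form, to correlating the ensemble mean of past forecasts with the reference (reanalysis) series. The sketch below uses synthetic data (illustrative only; probabilistic forecasts also call for dedicated verification measures beyond correlation):

```python
import numpy as np

def ensemble_skill(forecasts, reference):
    """Pearson correlation between the ensemble mean and the
    reference series (a simple deterministic skill measure).

    forecasts: (n_members, n_seasons) array of past forecasts
    reference: (n_seasons,) reference (e.g., reanalysis) series
    """
    return np.corrcoef(forecasts.mean(axis=0), reference)[0, 1]

rng = np.random.default_rng(2)
reference = rng.normal(size=40)  # 40 past seasons (synthetic)
# A skillful system: members track the reference plus noise.
skillful = reference + rng.normal(scale=0.5, size=(25, 40))
# A no-skill system: members unrelated to the reference.
no_skill = rng.normal(size=(25, 40))

r_skillful = ensemble_skill(skillful, reference)
r_noskill = ensemble_skill(no_skill, reference)
```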

    A container-based workflow for distributed training of deep learning algorithms in HPC clusters

    Deep learning has been postulated as a solution for numerous problems in different branches of science. Given the resource-intensive nature of these models, they often need to be executed on specialized hardware such as graphics processing units (GPUs) in a distributed manner. In the academic field, researchers access this kind of resource through High Performance Computing (HPC) clusters. Such infrastructures make the training of these models difficult due to their multi-user nature and limited user permissions. In addition, different HPC clusters may have different peculiarities (e.g., library dependencies) that can complicate the research cycle. In this paper we develop a workflow and methodology for the distributed training of deep learning models in HPC clusters which provides researchers with a series of novel advantages. It relies on udocker as the containerization tool and on Horovod as the library for distributing the models across multiple GPUs. udocker does not need any special permissions, allowing researchers to run the entire workflow without relying on any administrator. Horovod ensures the efficient distribution of the training independently of the deep learning framework used. Additionally, due to containerization and specific features of the workflow, it provides researchers with a cluster-agnostic way of running their models. The experiments carried out show that the workflow offers good scalability in the distributed training of the models and that it easily adapts to different clusters.