123 research outputs found

    Statistical downscaling with the downscaleR package (v3.1.0): contribution to the VALUE intercomparison experiment

    The increasing demand for high-resolution climate information has attracted growing attention to statistical downscaling (SDS) methods, due in part to their relative advantages and merits as compared to dynamical approaches (based on regional climate model simulations), such as their much lower computational cost and their fitness for purpose for many local-scale applications. As a result, a plethora of SDS methods is now available to climate scientists, which has motivated recent efforts for their comprehensive evaluation, such as the VALUE initiative (http://www.value-cost.eu, last access: 29 March 2020). The systematic intercomparison of a large number of SDS techniques undertaken in VALUE, many of them independently developed by different authors and modeling centers in a variety of languages and environments, has revealed a compelling need for new tools allowing for their application within an integrated framework. In this regard, downscaleR is an R package for statistical downscaling of climate information which covers the most popular approaches (model output statistics, including the so-called "bias correction" methods, as well as perfect prognosis) and state-of-the-art techniques. It has been conceived to work primarily with daily data and can be used in the framework of both seasonal forecasting and climate change studies. Its full integration within the climate4R framework (Iturbide et al., 2019) makes it possible to develop end-to-end downscaling applications, from data retrieval to model building, validation, and prediction, bringing climate scientists and practitioners a unique comprehensive framework for SDS model development.
In this article the main features of downscaleR are showcased through the replication of some of the results obtained in VALUE, placing an emphasis on the most technically complex stages of perfect-prognosis model calibration (predictor screening, cross-validation, and model selection), which are accomplished through simple commands allowing for extremely flexible model tuning, tailored to the needs of users requiring an easy interface at different levels of experimental complexity. As part of the open-source climate4R framework, downscaleR is freely available, and the data and R scripts needed to fully replicate the experiments included in this paper are provided as a companion notebook.

We thank the European Union Cooperation in Science and Technology (EU COST) Action ES1102 VALUE (http://www.value-cost.eu) for making publicly available the data used in this article and the tools implementing the comprehensive set of validation measures and indices. We also thank UCAR/Unidata for the THREDDS Data Server (TDS) software (https://doi.org/10.5065/D6N014KG, Unidata, 2006), and all R developers and their supporting community for providing free software that facilitates open science. We acknowledge the World Climate Research Programme's Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the EC-EARTH Consortium for producing and making available the model output used in this paper. For CMIP, the U.S. Department of Energy's Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led the development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. We are very grateful to the two anonymous referees participating in the interactive discussion for their insightful comments, which helped us considerably improve the original paper.

Financial support: The authors acknowledge partial funding from the MULTI-SDM project (MINECO/FEDER, CGL2015-66583-R) and from the INDECIS project, part of the European Research Area for Climate Services Consortium (ERA4CS), co-funded by the European Union (grant no. 690462).

    Programming Massively Parallel Architectures using MARTE: a Case Study

    Nowadays, several industrial applications are being ported to parallel architectures. These applications take advantage of the potential parallelism provided by multi-core processors. Many-core processors, especially GPUs (Graphics Processing Units), have led the race in floating-point performance since 2003. While the performance improvement of general-purpose microprocessors has slowed significantly, GPUs have continued to improve relentlessly. As of 2009, the ratio of peak floating-point throughput between many-core GPUs and multicore CPUs is about 10 to 1. However, as parallel programming requires a non-trivial distribution of tasks and data, developers find it hard to implement their applications effectively. Aiming to improve the use of many-core processors, this work presents a case study using UML and the MARTE profile to specify and generate OpenCL code for intensive signal processing applications. Benchmark results show the viability of using MDE approaches to generate GPU applications.
