We document the workflow, transfer performance, and other aspects of staging
approximately 56 terabytes of climate model output data, required for tracking
and characterizing extratropical storms (phenomena of importance in the
mid-latitudes), from the distributed Coupled Model Intercomparison Project
Phase 5 (CMIP5) archive to the National Energy Research Scientific Computing
Center (NERSC) at Lawrence Berkeley National Laboratory. We present this
analysis to illustrate the current challenges in assembling multi-model data
sets at major computing facilities for large-scale studies of CMIP5 data.
Because of the larger archive size of the upcoming CMIP6 phase of model
intercomparison, we expect such data transfers to become increasingly
important, and perhaps routinely necessary. We find that data transfer rates
using the Earth System Grid Federation (ESGF) are often slower than those
typically available to US residences, and that there is significant room for
improvement in the data transfer capabilities of the ESGF portal and data
centers, both in workflow mechanics and in data transfer performance.
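To put such rates in context, a back-of-envelope calculation (a sketch using
assumed rates, not measurements from this study) shows how long staging the
full ~56-terabyte data set would take at various sustained throughputs:

```python
# Back-of-envelope: days to move ~56 TB at various sustained rates.
# All rates here are illustrative assumptions, not measured values.

DATA_SET_BYTES = 56e12  # approximately 56 terabytes


def transfer_days(rate_megabits_per_s: float) -> float:
    """Days needed to move the full data set at a given sustained rate."""
    bytes_per_s = rate_megabits_per_s * 1e6 / 8
    return DATA_SET_BYTES / bytes_per_s / 86400


for label, mbps in [("10 Mbps", 10),
                    ("100 Mbps (residential-class)", 100),
                    ("1 Gbps", 1_000),
                    ("10 Gbps", 10_000)]:
    print(f"{label:>30}  {transfer_days(mbps):7.1f} days")
```

At a sustained 100 Mbps the staging task takes roughly 52 days; at 1 Gbps it
drops to about 5 days, which is the practical significance of the
order-of-magnitude improvement discussed next.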
We believe performance improvements of at least an order of magnitude are
within technical reach using current best practices, as illustrated by the
throughput we achieved in transferring the complete raw data set between two
high-performance computing facilities. To achieve these performance
improvements, we recommend: that
current best practices (such as the Science DMZ model) be applied to the data
servers and networks at ESGF data centers; that sufficient financial and human
resources be devoted at the ESGF data centers to systems and network
engineering tasks that support high-performance data movement; and that
performance metrics for data transfer between ESGF data centers and major
computing facilities used for climate data analysis be established, regularly
tested, and published.
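As one sketch of what such regular testing might look like (the endpoint URL
below is a hypothetical stand-in for a test file published by a data center),
a small scheduled probe run from an analysis facility could record the
achieved download rate over time:

```python
# Minimal sketch of a recurring throughput probe against an ESGF data node.
# The URL is hypothetical; a real probe would target a known, fixed-size
# test file published by the data center being measured.
import time
import urllib.request

TEST_URL = "https://esgf-data.example.org/testfiles/1GB.bin"  # hypothetical


def measure_throughput_mbps(url: str) -> float:
    """Download the file and return the achieved rate in megabits per second."""
    start = time.monotonic()
    nbytes = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(1 << 20)  # read in 1 MiB chunks
            if not chunk:
                break
            nbytes += len(chunk)
    elapsed = time.monotonic() - start
    return nbytes * 8 / elapsed / 1e6


if __name__ == "__main__":
    rate = measure_throughput_mbps(TEST_URL)
    print(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} {rate:.1f} Mbps")
```

Run on a regular schedule (e.g., from cron) and archived, such measurements
could supply the published, regularly tested metrics this recommendation
calls for.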