Caching and Reproducibility: Making Data Science experiments faster and FAIRer
Small to medium-scale data science experiments often rely on research
software developed ad-hoc by individual scientists or small teams. Often there
is no time to make the research software fast, reusable, and open access. The
consequence is twofold. First, subsequent researchers must spend significant
work hours building upon the proposed hypotheses or experimental framework. In
the worst case, others cannot reproduce the experiment and reuse the findings
for subsequent research. Second, if the ad-hoc research software fails during
long-running, computationally expensive experiments, the effort required to
iteratively improve the software and rerun the experiments puts significant
time pressure on the researchers. We suggest
making caching an integral part of the research software development process,
even before the first line of code is written. This article outlines caching
recommendations for developing research software in data science projects. Our
recommendations help circumvent common problems such as proprietary
dependencies and slow execution. At the same time, caching contributes to the
reproducibility of experiments in the open science workflow. With respect to
the four guiding principles, i.e., Findability, Accessibility,
Interoperability, and Reusability (FAIR), we foresee that adopting the proposed
recommendations during research software development will make the data
related to that software FAIRer for both machines and humans. We demonstrate
the usefulness of some of the
proposed recommendations on our recently completed research software project in
mathematical information retrieval.

Comment: 8 pages, 1 table
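The core recommendation, making caching part of the software from the start so that a crash mid-experiment does not force a full recomputation, can be sketched as a small disk-based memoization decorator. This is an illustrative assumption, not the authors' implementation; the `disk_cache` decorator, the `.cache` directory, and `expensive_step` are hypothetical names:

```python
import hashlib
import json
import pickle
from functools import wraps
from pathlib import Path

CACHE_DIR = Path(".cache")  # hypothetical on-disk cache location


def disk_cache(func):
    """Cache a function's results on disk, keyed by its name and arguments."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        CACHE_DIR.mkdir(exist_ok=True)
        # Derive a stable key from the function name and its arguments.
        key = hashlib.sha256(
            json.dumps([func.__name__, args, kwargs],
                       sort_keys=True, default=str).encode()
        ).hexdigest()
        path = CACHE_DIR / f"{key}.pkl"
        if path.exists():
            # A rerun after a crash reuses earlier results instead of
            # recomputing them.
            return pickle.loads(path.read_bytes())
        result = func(*args, **kwargs)
        path.write_bytes(pickle.dumps(result))
        return result
    return wrapper


@disk_cache
def expensive_step(n):
    # Stand-in for one long-running stage of an experiment.
    return sum(i * i for i in range(n))
```

If the experiment script is interrupted and restarted, already-completed stages return immediately from the `.pkl` files, so only the remaining work is redone.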