Cloud computing has become a major approach for reproducing computational
experiments because it supports on-demand provisioning of hardware and
software resources. Yet two main difficulties remain in reproducing big data
applications in the cloud. The first is how to automate end-to-end execution of
analytics, including environment provisioning, analytics pipeline description,
pipeline execution, and resource termination. The second is that an application
developed for one cloud is difficult to reproduce in another cloud, known as
the vendor lock-in problem. To tackle these problems, we leverage serverless
computing and containerization techniques for automated, scalable execution and
reproducibility, and utilize the adapter design pattern to enable application
portability and reproducibility across different clouds. We propose and develop
an open-source toolkit that supports 1) fully automated end-to-end execution
and reproduction via a single command, 2) automated data and configuration
storage for each execution, 3) flexible client modes based on user preferences,
4) execution history query, and 5) simple reproduction of existing executions
in the same environment or a different environment. We conducted extensive
experiments on both AWS and Azure using four big data analytics applications
that run on virtual CPU/GPU clusters. The experiments show that our toolkit
achieves good execution performance, scalability, and efficient reproducibility
for cloud-based big data analytics.
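
The abstract cites the adapter design pattern as the mechanism for cross-cloud portability. The sketch below is a minimal illustration of that idea under assumed names, not the toolkit's actual interface: the CloudAdapter class, its methods (provision_cluster, run_pipeline, terminate), and the AWS/Azure subclasses are hypothetical, and real adapters would delegate to the respective provider SDKs.

```python
from abc import ABC, abstractmethod


class CloudAdapter(ABC):
    """Provider-neutral interface the toolkit programs against (hypothetical names)."""

    @abstractmethod
    def provision_cluster(self, spec: dict) -> str:
        """Create a virtual CPU/GPU cluster and return its identifier."""

    @abstractmethod
    def run_pipeline(self, cluster_id: str, pipeline: dict) -> dict:
        """Execute a containerized analytics pipeline on the cluster."""

    @abstractmethod
    def terminate(self, cluster_id: str) -> None:
        """Release all resources created for this execution."""


class AWSAdapter(CloudAdapter):
    # A real implementation would call the AWS SDK here; stubbed for illustration.
    def provision_cluster(self, spec: dict) -> str:
        return "aws-cluster-id"

    def run_pipeline(self, cluster_id: str, pipeline: dict) -> dict:
        return {"status": "succeeded", "provider": "aws"}

    def terminate(self, cluster_id: str) -> None:
        pass


class AzureAdapter(CloudAdapter):
    # A real implementation would call the Azure SDK here; stubbed for illustration.
    def provision_cluster(self, spec: dict) -> str:
        return "azure-cluster-id"

    def run_pipeline(self, cluster_id: str, pipeline: dict) -> dict:
        return {"status": "succeeded", "provider": "azure"}

    def terminate(self, cluster_id: str) -> None:
        pass


def reproduce(adapter: CloudAdapter, spec: dict, pipeline: dict) -> dict:
    """End-to-end run: provision, execute, and tear down on any provider."""
    cluster_id = adapter.provision_cluster(spec)
    try:
        return adapter.run_pipeline(cluster_id, pipeline)
    finally:
        adapter.terminate(cluster_id)
```

Under this sketch, reproducing an execution on a different cloud amounts to passing a different adapter to the same reproduce call, which is the portability property the abstract claims.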