Testbeds Support for Reproducible Research
In the context of experimental research, testbeds play an important role in enabling reproducibility of experiments by providing a set of services that help experimenters set up the experimental environment and collect data about it. This paper explores the status of three different testbeds (Chameleon, CloudLab and Grid'5000) regarding features required for, or related to, reproducible research, and discusses some open questions on that topic.
Rumba : a Python framework for automating large-scale recursive internet experiments on GENI and FIRE+
It is not easy to design and run Convolutional Neural Networks (CNNs) because: 1) given an architecture, finding the optimal number of filters (i.e., the width) at each layer is tricky; and 2) the computational intensity of CNNs impedes deployment on computationally limited devices. Oracle Pruning is designed to remove unimportant filters from a well-trained CNN: it estimates the filters' importance by ablating them in turn and evaluating the model, which delivers high accuracy but suffers from intolerable time complexity, and it requires a given resulting width rather than finding one automatically. To address these problems, we propose Approximated Oracle Filter Pruning (AOFP), which keeps searching for the least important filters in a binary search manner, makes pruning attempts by masking out filters randomly, accumulates the resulting errors, and finetunes the model via a multi-path framework. As AOFP enables simultaneous pruning on multiple layers, we can prune an existing very deep CNN with acceptable time cost, negligible accuracy drop, and no heuristic knowledge, or re-design a model that achieves higher accuracy and faster inference.
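The random-masking step described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: the `evaluate` callback, the magnitude-based toy score, and all parameter names are assumptions made for illustration only.

```python
import numpy as np

def filter_importance_by_masking(weights, evaluate, n_trials=200, mask_frac=0.5, seed=0):
    """Attribute evaluation-score loss to filters by randomly masking subsets
    of them and accumulating the observed error, a simplified sketch of the
    masking idea described for AOFP. `weights` holds one filter per row;
    `evaluate` is a hypothetical scoring callback (higher = better)."""
    rng = np.random.default_rng(seed)
    n_filters = weights.shape[0]
    base_score = evaluate(weights)
    accumulated_error = np.zeros(n_filters)
    times_masked = np.zeros(n_filters)
    for _ in range(n_trials):
        mask = rng.random(n_filters) < mask_frac  # filters to ablate this trial
        masked = weights.copy()
        masked[mask] = 0.0                        # zero out the chosen filters
        error = base_score - evaluate(masked)     # score drop caused by the mask
        accumulated_error[mask] += error          # share the blame among masked filters
        times_masked[mask] += 1
    # Mean error observed whenever each filter was masked: a noisy proxy for
    # importance; the filters with the lowest values are pruning candidates.
    return accumulated_error / np.maximum(times_masked, 1)
```

With a toy magnitude-based score, the filter with the largest norm accumulates the most error on average and would be kept, while low-scoring filters become pruning candidates; the real method additionally narrows the candidate set by binary search and finetunes between pruning attempts.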
A DevOps approach to integration of software components in an EU research project
We present a description of the development and deployment infrastructure being created to support the integration effort of HARNESS, an EU FP7 project. HARNESS is a multi-partner research project intended to bring the power of heterogeneous resources to the cloud. It consists of a number of different services and technologies that interact with the OpenStack cloud computing platform at various levels. Many of these components are being developed independently by different teams at different locations across Europe, and keeping the work fully integrated is a challenge. We use a combination of Vagrant-based virtual machines, Docker containers, and Ansible playbooks to provide a consistent and up-to-date environment to each developer. The same playbooks used to configure local virtual machines are also used to manage a static testbed with heterogeneous compute and storage devices, and to automate ephemeral larger-scale deployments to Grid'5000. Access to internal projects is managed by GitLab, and automated testing of services within Docker-based environments and integrated deployments within virtual machines is provided by Buildbot.
KheOps: Cost-effective Repeatability, Reproducibility, and Replicability of Edge-to-Cloud Experiments
Distributed infrastructures for computation and analytics are now evolving towards an interconnected ecosystem allowing complex scientific workflows to be executed across hybrid systems spanning from IoT Edge devices to Clouds, and sometimes to supercomputers (the Computing Continuum). Understanding the performance trade-offs of large-scale workflows deployed on such a complex Edge-to-Cloud Continuum is challenging. To achieve this, one needs to systematically perform experiments so as to enable their reproducibility and allow other researchers to replicate the study and the obtained conclusions on different infrastructures. This breaks down to the tedious process of reconciling the numerous experimental requirements and constraints with low-level infrastructure design choices. To address the limitations of the main state-of-the-art approaches for distributed, collaborative experimentation, such as Google Colab, Kaggle, and Code Ocean, we propose KheOps, a collaborative environment specifically designed to enable cost-effective reproducibility and replicability of Edge-to-Cloud experiments. KheOps is composed of three core elements: (1) an experiment repository; (2) a notebook environment; and (3) a multi-platform experiment methodology. We illustrate KheOps with a real-life Edge-to-Cloud application. The evaluations explore the point of view of the authors of an experiment described in an article (who aim to make their experiments reproducible) and the perspective of their readers (who aim to replicate the experiment). The results show how KheOps helps authors systematically perform repeatable and reproducible experiments on the Grid'5000 + FIT IoT LAB testbeds. Furthermore, KheOps helps readers cost-effectively replicate the authors' experiments on different infrastructures such as the Chameleon Cloud + CHI@Edge testbeds, and obtain the same conclusions with high accuracy (> 88% for all performance metrics).
- …