6 research outputs found

    Testbeds in Computer Science

    Computer scientists working on the design of hardware and software architectures, and in particular on the design of distributed architectures (networks, high-performance computing, clouds, sensor networks, IoT, etc.), regularly need to evaluate the relevance of their proposals at scale. Their practice therefore relies on frequent experimental evaluations, which leads to specific needs in terms of experimental control. In this context, reproducing the work of other colleagues is often very difficult, as it requires not only precise information about the experimental conditions (software stack, external load, hardware type and configuration, etc.) but also a testbed on which similar experimental conditions can be recreated. A few experimental testbeds allowing fine-grained experimental control have been built in the last decade (Grid5000, R2Lab, …). Such testbeds are shared and generally open-access, which makes it possible to compare alternative approaches at large scale in a fair and truthful way. In this webinar, Lucas Nussbaum will provide an overview of these projects and of their internals.

    Scaling your experiments

    There is a wide range of options for experimenting on distributed systems and networking. Simulators running on a laptop or self-made testbeds are sometimes enough, but our field usually targets large to very large systems with potentially millions or billions of elements. In such cases, relying on a laptop or a self-made testbed is impossible. To scale up our experimental research, we can rely on larger-scale infrastructures and testbeds. In the first part, this talk will provide an overview of the landscape of infrastructures and testbeds supporting experimental research in distributed systems and networking. In the second part, we will focus on SDN/NFV experimentation and provide some feedback on the current state of the experimentation tools available for targeting large-scale systems.

    Supporting Experimental Computer Science

    The ability to conduct consistent, controlled, and repeatable large-scale experiments in all areas of computer science related to parallel, large-scale, or distributed computing and networking is critical to the future and development of computer science. Yet conducting such experiments is still too often a challenge for researchers, students, and practitioners because of the unavailability of dedicated resources, the inability to create controlled experimental conditions, and variability in software. Availability, repeatability, and open sharing of computing platforms are all still difficult to achieve. To discuss those challenges and share experiences in their solution, the Workshop on Experimental Support for Computer Science brought together scientists involved in building and operating infrastructures dedicated to supporting computer science experiments. The workshop was held in November 2011 and was co-located with the SC11 conference in Seattle, Washington. Our objectives were to share experiences and knowledge related to supporting large-scale experiments conducted on experimental infrastructures, to understand user requirements, and to discuss methodologies and opportunities created by emerging technologies. This report ties together the workshop presentations and discussions and the consensus that emerged on the state of the field and directions for moving forward.

    Towards reproducibility of experiments

    Excerpts from two recent PhD theses done using Grid’5000:
    ▶ Tomasz Buchert. Managing large-scale, distributed systems research experiments with control-flows. Directed by Lucas Nussbaum and Jens Gustedt. Defended on 2016-01-06.
    ▶ Cristian Ruiz. Methods and Tools for Challenging Experiments on Grid’5000: a use case on electromagnetic hybrid simulation. Directed by Olivier Richard and Thierry Monteil. Defended on 2014-12-15.

    Emulation of Storage Performance in Testbed Experiments with Distem

    Together with the CPU and the network, storage plays an essential role in the overall performance of applications, especially in the context of big data applications, which can process enormous datasets. However, testbeds offer little support for experiments involving storage performance. Experimenters can directly use the storage devices provided on the testbed, but (1) those devices might not provide the performance characteristics suitable for their experiments, and (2) they might introduce an uncontrolled bias into the experiments' results. In this paper, we explore the feasibility of using Linux's control groups to emulate I/O performance for testbed experiments. We then use this mechanism in the Distem emulator to create a customizable I/O experimental environment. Using Distem, we perform experiments on Hadoop to highlight the advantage of emulating I/O performance. Results obtained from a cluster of 25 nodes show how the performance of Hadoop changes according to the emulated storage performance.
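    The abstract does not detail how Distem drives the control-group interface, but the generic mechanism it builds on is the Linux cgroup-v2 `io` controller, which throttles a device via its `io.max` file. The sketch below, a hypothetical illustration rather than Distem's actual code, formats such a control line; the device numbers and limits are example values.

    ```python
    # Sketch of the cgroup-v2 I/O throttling interface that Distem-style
    # storage emulation can build on. Device and limit values below are
    # illustrative, not taken from the paper.
    def iomax_line(major, minor, rbps=None, wbps=None, riops=None, wiops=None):
        """Format a cgroup v2 io.max control line, e.g. '8:0 rbps=10485760'."""
        parts = [f"{major}:{minor}"]
        for key, val in (("rbps", rbps), ("wbps", wbps),
                         ("riops", riops), ("wiops", wiops)):
            if val is not None:
                parts.append(f"{key}={val}")
        return " ".join(parts)

    # Cap device 8:0 (typically /dev/sda) at 10 MB/s reads and 100 read IOPS.
    line = iomax_line(8, 0, rbps=10 * 1024 * 1024, riops=100)
    print(line)  # 8:0 rbps=10485760 riops=100
    # Applying it requires root and a cgroup-v2 hierarchy, e.g.:
    #   echo "8:0 rbps=10485760 riops=100" > /sys/fs/cgroup/<group>/io.max
    ```

    Any process whose PID is added to that group's `cgroup.procs` then sees the throttled storage performance, which is what makes the emulated I/O environment controllable per experiment.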