
    Personal Volunteer Computing

    We propose personal volunteer computing, a novel paradigm encouraging technical solutions that leverage personal devices, such as smartphones and laptops, for personal applications that require significant computation, such as animation rendering and image processing. The paradigm requires no investment in additional hardware, relying instead on devices already owned by users and their community, and it favours simple tools that can be implemented part-time by a single developer. We show that a sample of today's personal devices is competitive with a top-of-the-line laptop from two years ago. We also propose new directions for extending the paradigm.

    Adaptive energy management for cluster and cloud infrastructures (Gestion adaptative de l'énergie pour les infrastructures de type grappe ou nuage)

    In a context of heterogeneous resource usage, performance remains the traditional criterion for capacity planning. Nowadays, however, taking energy into account has become a necessity. This article tackles the problem of energy efficiency for load balancing in distributed systems. We propose energy-efficient resource management by adding facilities that handle energy-related events according to user-defined rules. We implement these facilities within the DIET middleware, which manages load balancing, in order to expose the cost of the trade-offs between performance and energy consumption. Our solution and its benefits are validated through experiments that evaluate performance and power consumption under three competing scheduling policies. We highlight the energy gains obtained while trying to minimize the loss of performance. We also make the middleware in charge of scheduling reactive to energy variations.
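
    The rule mechanism itself is not detailed in this abstract, so the following Python sketch is only an illustration of how user-defined rules might map energy-related events to scheduling actions; the event kinds, thresholds and actions are hypothetical, not part of DIET.

        # Hypothetical sketch: user-defined rules reacting to energy-related events.
        # Event kinds, thresholds and actions are illustrative, not DIET's actual API.
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class EnergyEvent:
            kind: str          # e.g. "power_cap_exceeded", "cheap_energy_window"
            node: str
            watts: float

        @dataclass
        class Rule:
            matches: Callable[[EnergyEvent], bool]
            action: Callable[[EnergyEvent], None]

        def throttle(event: EnergyEvent) -> None:
            print(f"reduce job admission on {event.node} ({event.watts} W)")

        def favour(event: EnergyEvent) -> None:
            print(f"route more jobs to {event.node} while energy is cheap")

        RULES: List[Rule] = [
            Rule(lambda e: e.kind == "power_cap_exceeded" and e.watts > 300, throttle),
            Rule(lambda e: e.kind == "cheap_energy_window", favour),
        ]

        def dispatch(event: EnergyEvent) -> None:
            # Apply every user-defined rule that matches the incoming event.
            for rule in RULES:
                if rule.matches(event):
                    rule.action(event)

        dispatch(EnergyEvent("power_cap_exceeded", "node-12", 342.0))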

    Flauncher and DVMS -- Deploying and Scheduling Thousands of Virtual Machines on Hundreds of Nodes Distributed Geographically

    Although live migration of virtual machines has been an active area of research over the past decade, it has mainly been evaluated by means of simulations and small-scale deployments. Proving the relevance of live migration at larger scales is a technical challenge that requires the ability to deploy and schedule virtual machines at such scales. Over the last year, we tackled this challenge by conducting experiments with Flauncher and DVMS, two frameworks that can respectively deploy and schedule thousands of virtual machines over hundreds of nodes distributed geographically across the Grid'5000 testbed.
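
    Neither framework's interface is described in this abstract; as a rough illustration of the deployment side only, the Python sketch below fans virtual machine boot requests out to many nodes in parallel. The node names and the boot_vm helper are hypothetical placeholders, not Flauncher's API.

        # Illustrative only: launch VM boot requests on many nodes in parallel.
        from concurrent.futures import ThreadPoolExecutor, as_completed

        NODES = [f"site{s}-node{n}" for s in range(4) for n in range(100)]
        VMS_PER_NODE = 25

        def boot_vm(node: str, vm_id: int) -> str:
            # A real deployment would contact the node's hypervisor here (e.g. KVM).
            return f"{node}/vm-{vm_id}"

        def deploy_all() -> list:
            booted = []
            with ThreadPoolExecutor(max_workers=64) as pool:
                futures = [pool.submit(boot_vm, node, vm)
                           for node in NODES
                           for vm in range(VMS_PER_NODE)]
                for f in as_completed(futures):
                    booted.append(f.result())
            return booted

        print(f"booted {len(deploy_all())} virtual machines")  # 4 sites x 100 nodes x 25 VMs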

    Nu@ge: Towards a solidary and responsible cloud computing service

    Best Paper Award. The adoption of cloud computing is still limited by several legal concerns among companies. One of them is data sovereignty: data can be physically hosted in sensitive locations, resulting in a lack of control for companies. In this paper, we present the Nu@ge project, aimed at building a federation of container-sized datacenters on French territory. Nu@ge provides a software stack that enables companies to put independent datacenters into cooperation in a national mesh. Additionally, a prototype of a container-sized datacenter has been validated and patented.

    Parallel Differential Evolution approach for Cloud workflow placements under simultaneous optimization of multiple objectives

    The recent rapid expansion of Cloud computing facilities presents providers and users with a challenge: finding methods for optimal placement of workflows on distributed resources under the often-contradictory objectives of minimizing makespan, energy consumption, and other metrics. Evolutionary optimization techniques, which are in principle guaranteed to converge towards globally optimal solutions, are among the most powerful tools for achieving such placements. Multi-objective evolutionary algorithms work on contradictory objectives by design, gradually evolving across generations towards a converged Pareto front of optimal decision variables, in this case the mapping of tasks to cluster resources. However, the computation time such algorithms need to converge makes them prohibitive for real-time placement because of the adverse impact on makespan. This work describes the parallelization, on the same cluster, of a multi-objective Differential Evolution method (NSDE-2) for optimizing workflow placement, and the attendant speedups that bring the accuracy of the method into the realm of practical utility. Experimental validation is performed on a real-life testbed using diverse Cloud traces. The solutions obtained under different scheduling policies demonstrate a significant reduction in energy consumption with some improvement in makespan.
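
    NSDE-2 itself is not detailed in this abstract; the Python sketch below only illustrates the generic differential-evolution step (mutation, crossover, Pareto-dominance selection) on a toy two-objective placement problem. The cost model and every parameter are invented for illustration.

        # Toy two-objective (makespan, energy) differential evolution for task placement.
        # Not the paper's NSDE-2; parameters and cost model are made up.
        import random

        N_TASKS, N_HOSTS, POP, F, CR = 20, 5, 30, 0.5, 0.9
        SPEED = [1.0, 1.5, 2.0, 2.5, 3.0]      # relative host speeds
        POWER = [100, 160, 230, 310, 400]      # host power draw in watts

        def objectives(placement):
            load = [0.0] * N_HOSTS
            for task, host in enumerate(placement):
                load[host] += (task + 1) / SPEED[host]          # toy task cost
            makespan = max(load)
            energy = sum(l * POWER[h] for h, l in enumerate(load))
            return makespan, energy

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def evolve(pop):
            new_pop = []
            for i, target in enumerate(pop):
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                trial = [int(round(a[k] + F * (b[k] - c[k]))) % N_HOSTS
                         if random.random() < CR else target[k]
                         for k in range(N_TASKS)]
                # Keep the trial vector only if it Pareto-dominates its target.
                new_pop.append(trial if dominates(objectives(trial), objectives(target)) else target)
            return new_pop

        population = [[random.randrange(N_HOSTS) for _ in range(N_TASKS)] for _ in range(POP)]
        for _ in range(50):
            population = evolve(population)
        print(sorted(objectives(p) for p in population)[:3])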

    Impact of Shutdown Techniques for Energy-Efficient Cloud Data Centers

    Electricity consumption is a worrying concern in current large-scale systems like datacenters and supercomputers. These infrastructures are often dimensioned according to the workload peak. However, their consumption is not power-proportional: when the workload is low, the consumption remains high. Shutdown techniques have been developed to adapt the number of switched-on servers to the actual workload. However, datacenter operators are reluctant to adopt such approaches because of their potential impact on reactivity and hardware failures, and because their energy gains are often largely misjudged. In this article, we evaluate the potential gains of shutdown techniques by taking into account the shutdown and boot-up costs in time and energy. This evaluation is made on recent server architectures and on hypothetical future energy-aware architectures. We also determine whether knowledge of the future is required to save energy with such techniques. We present simulation results exploiting real traces collected on different infrastructures, under various machine configurations and several shutdown policies, with and without workload prediction.
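
    The abstract refers to accounting for shutdown and boot-up costs in time and energy; the Python sketch below shows the usual break-even test such an evaluation relies on, i.e. when powering a server off during an idle period actually saves energy. All figures are hypothetical placeholders, not measurements from the paper.

        # Break-even check for shutting down a server during an idle period.
        P_IDLE = 100.0       # W drawn by an idle, powered-on server (placeholder)
        P_OFF = 5.0          # W drawn while "off" (BMC, standby)
        E_OFF_ON = 25000.0   # J consumed by one shutdown + boot cycle
        T_OFF_ON = 150.0     # s of unavailability for shutdown + boot

        def energy_saved(idle_seconds: float) -> float:
            """Energy saved (J) by powering off instead of idling; negative means a loss."""
            if idle_seconds <= T_OFF_ON:
                return float("-inf")   # the cycle does not even fit in the idle period
            staying_on = P_IDLE * idle_seconds
            switching_off = E_OFF_ON + P_OFF * (idle_seconds - T_OFF_ON)
            return staying_on - switching_off

        # Minimum idle time for a shutdown to pay off, from the same linear model:
        t_break_even = (E_OFF_ON - P_OFF * T_OFF_ON) / (P_IDLE - P_OFF)
        print(f"break-even idle time: {t_break_even:.0f} s")
        for idle in (60, 300, 3600):
            saved = energy_saved(idle)
            label = "shutdown not worthwhile" if saved < 0 else f"{saved:.0f} J saved"
            print(f"{idle} s idle -> {label}")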

    Energy-Aware Server Provisioning by Introducing Middleware-Level Dynamic Green Scheduling

    Several approaches to reducing the power consumption of datacenters have been described in the literature, most of which aim to improve energy efficiency by trading performance for reduced power consumption. However, these approaches do not always provide means for administrators and users to specify how they want to explore such trade-offs. This work provides techniques for assigning jobs to distributed resources and exploring energy-efficient resource provisioning. We use middleware-level mechanisms to adapt resource allocation according to energy-related events and user-defined rules. The proposed framework enables developers, users and system administrators to specify and explore energy-efficiency and performance trade-offs without detailed knowledge of the underlying hardware platform. Evaluation of the proposed solution under three scheduling policies shows gains of 25% in energy efficiency with minimal impact on overall application performance. We also evaluate the reactivity of the adaptive resource provisioning.
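
    The middleware's actual interface is not reproduced in this abstract; as a sketch of the underlying idea of letting users state a performance/energy trade-off without hardware knowledge, the Python example below steers job placement with a single user-chosen weight. Host names and figures are invented placeholders.

        # Sketch: choose a host for a job by minimizing a user-weighted mix of
        # estimated runtime and energy. Host characteristics are invented.
        HOSTS = {
            "fast-node":  {"speed": 3.0, "watts": 350},
            "mid-node":   {"speed": 2.0, "watts": 180},
            "green-node": {"speed": 1.0, "watts": 80},
        }

        def place(job_work: float, energy_weight: float) -> str:
            """energy_weight = 0 favours performance only, 1 favours energy only."""
            def score(host):
                runtime = job_work / HOSTS[host]["speed"]       # seconds
                energy = runtime * HOSTS[host]["watts"]         # joules
                # Roughly normalize the energy term so the two terms are comparable.
                return (1 - energy_weight) * runtime + energy_weight * energy / 100.0
            return min(HOSTS, key=score)

        for w in (0.0, 0.5, 1.0):
            print(f"weight {w}: job of 600 units -> {place(600, w)}")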

    Pando: Personal Volunteer Computing in Browsers

    The wide penetration and continued growth in ownership of personal electronic devices represent a freely available and largely untapped source of computing power. To leverage it, we present Pando, a new volunteer computing tool based on a declarative concurrent programming model and implemented using JavaScript, WebRTC, and WebSockets. This tool enables a dynamically varying number of failure-prone personal devices contributed by volunteers to parallelize the application of a function on a stream of values, using the devices' browsers. We show that Pando can provide throughput improvements compared to a single personal device on a variety of compute-bound applications, including animation rendering and image processing. We also show the flexibility of our approach by deploying Pando on personal devices connected over a local network, on Grid'5000, a France-wide computing grid in a virtual private network, and on seven PlanetLab nodes distributed in a wide area network over Europe.
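
    Pando's actual JavaScript/WebRTC implementation is not reproduced here; the Python sketch below only mimics the stated model, applying a function to a stream of values across unreliable workers and re-queuing a value whenever its worker fails. The failure simulation and worker count are arbitrary.

        # Sketch of "parallelize a function over a stream of values with
        # failure-prone workers": a failed value simply goes back into the stream.
        import queue, random, threading

        def fault_tolerant_map(f, values, n_workers=4, failure_rate=0.2):
            todo, results = queue.Queue(), {}
            for v in values:
                todo.put(v)

            def worker():
                while True:
                    try:
                        v = todo.get_nowait()
                    except queue.Empty:
                        return
                    if random.random() < failure_rate:   # simulated device failure
                        todo.put(v)                       # re-queue the lost value
                    else:
                        results[v] = f(v)

            threads = [threading.Thread(target=worker) for _ in range(n_workers)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return [results[v] for v in values]

        print(fault_tolerant_map(lambda x: x * x, range(10)))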

    Adding Virtualization Capabilities to Grid'5000

    This revised report was subsequently published; see hal-00946971. Almost ten years after its beginnings, the Grid'5000 testbed has become one of the most complete testbeds for designing and evaluating large-scale distributed systems. Initially dedicated to the study of High Performance Computing, the infrastructure has evolved to address wider concerns related to Desktop Computing, the Internet of Services and, more recently, the Cloud Computing paradigm. This report presents recent improvements of the Grid'5000 software and services stack to support large-scale experiments using virtualization technologies as building blocks. These contributions include the deployment of customized software environments, the reservation of dedicated network domains and the possibility to isolate them from one another, and the automation of experiments with a REST API. We illustrate the interest of these contributions by describing three different use cases of large-scale experiments on the Grid'5000 testbed. The first one leverages virtual machines to conduct larger experiments spread over 4000 peers. The second one describes the deployment of 10000 KVM instances over 4 Grid'5000 sites. Finally, the last use case introduces a one-click deployment tool to easily deploy major IaaS solutions. The conclusion highlights some important challenges for Grid'5000 related to the use of OpenFlow and to the management of applications dealing with tremendous amounts of data.
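
    The report mentions automating experiments through a REST API, but the endpoints are not listed in this abstract; the Python sketch below therefore uses placeholder URLs and fields purely to illustrate the reserve-deploy-run pattern, and is not the documented Grid'5000 API.

        # Illustrative reserve -> wait -> deploy pattern over a REST API.
        # Base URL, endpoints, credentials and JSON fields are placeholders.
        import time
        import requests

        API = "https://testbed.example.org/api"    # placeholder base URL
        AUTH = ("user", "password")                # placeholder credentials

        def reserve_nodes(site: str, count: int, walltime: str) -> int:
            """Submit a resource reservation and return its job id."""
            r = requests.post(f"{API}/sites/{site}/jobs",
                              json={"resources": f"nodes={count},walltime={walltime}"},
                              auth=AUTH)
            r.raise_for_status()
            return r.json()["uid"]

        def wait_until_running(site: str, job_id: int) -> list:
            """Poll the reservation until its nodes are allocated."""
            while True:
                job = requests.get(f"{API}/sites/{site}/jobs/{job_id}", auth=AUTH).json()
                if job["state"] == "running":
                    return job["assigned_nodes"]
                time.sleep(10)

        def deploy_environment(site: str, nodes: list, image: str) -> None:
            """Ask the testbed to install a customized software environment on the nodes."""
            r = requests.post(f"{API}/sites/{site}/deployments",
                              json={"nodes": nodes, "environment": image},
                              auth=AUTH)
            r.raise_for_status()

        if __name__ == "__main__":
            job = reserve_nodes("rennes", count=50, walltime="2:00:00")
            nodes = wait_until_running("rennes", job)
            deploy_environment("rennes", nodes, image="debian11-custom-kvm")
            print(f"{len(nodes)} nodes ready for the experiment")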