
    Impact of Shutdown Techniques for Energy-Efficient Cloud Data Centers

    Electricity consumption is a pressing concern in current large-scale systems such as datacenters and supercomputers. These infrastructures are often dimensioned according to the workload peak. However, their consumption is not power-proportional: when the workload is low, consumption remains high. Shutdown techniques have been developed to adapt the number of switched-on servers to the actual workload. However, datacenter operators are reluctant to adopt such approaches because of their potential impact on reactivity and hardware failures, and because their energy gains are often largely misjudged. In this article, we evaluate the potential gain of shutdown techniques by taking into account shutdown and boot-up costs in time and energy. This evaluation is made on recent server architectures and on hypothetical future energy-aware architectures. We also determine whether knowledge of the future is required to save energy with such techniques. We present simulation results exploiting real traces collected on different infrastructures under various machine configurations, with several shutdown policies, with and without workload prediction.
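The core accounting the abstract describes, weighing shutdown and boot-up costs against idle consumption, can be sketched as a simple break-even test. This is an illustrative model, not the paper's: all parameter names and the example numbers are assumptions.

```python
# Sketch: does powering a server off during an idle period save energy?
def shutdown_saves_energy(t_idle, p_idle, p_off, t_down, e_down, t_up, e_up):
    """Return True if power-cycling beats staying idle for t_idle seconds.

    p_idle : idle power draw (W) if the server stays on
    p_off  : residual power draw (W) while off (e.g. management controller)
    t_down, e_down : duration (s) and energy (J) of the shutdown sequence
    t_up,   e_up   : duration (s) and energy (J) of the boot sequence
    """
    if t_idle < t_down + t_up:            # idle period too short to cycle
        return False
    e_on = p_idle * t_idle                # energy if the server stays on
    t_off = t_idle - t_down - t_up        # time actually spent powered off
    e_cycle = e_down + e_up + p_off * t_off
    return e_cycle < e_on

# A 30-minute idle period on a hypothetical 100 W-idle server:
print(shutdown_saves_energy(1800, 100, 5, 10, 2000, 150, 30000))  # → True
```

The same test run over an idle-period trace gives the kind of upper bound on energy gains that the article evaluates; with perfect knowledge of the future, every idle period passing the test can be exploited.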

    TailX: Scheduling Heterogeneous Multiget Queries to Improve Tail Latencies in Key-Value Stores

    Users of interactive services such as e-commerce platforms have high expectations for the performance and responsiveness of these services. Tail latency, denoting the worst service times, contributes greatly to user dissatisfaction and should be minimized. Maintaining low tail latency for interactive services is challenging because a request is not complete until all its operations are completed. The challenge is to identify bottleneck operations and schedule them on uncoordinated backend servers with minimal overhead, when the durations of these operations are heterogeneous and unpredictable. In this paper, we focus on improving the latency of multiget operations in cloud data stores. We present TailX, a task-aware multiget scheduling algorithm that improves tail latencies under heterogeneous workloads. TailX schedules operations according to an estimate of the size of the corresponding data, and deliberately defers some operations to give way to higher-priority ones. We implement TailX in Cassandra, a widely used key-value store. The result is improved overall performance of the cloud data store for a wide variety of heterogeneous workloads. Specifically, our experiments under heterogeneous YCSB workloads show that TailX outperforms state-of-the-art solutions, reducing tail latencies by up to 70% and median latencies by up to 75%.
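The size-aware ordering at the heart of the approach can be sketched in a few lines: sub-operations of a multiget are executed smallest-estimate first, so small reads are never stuck behind large ones. This is an illustrative sketch, not the actual Cassandra-integrated implementation; the function and field names are assumptions.

```python
# Sketch: order multiget sub-operations by estimated value size so that
# small operations complete first and large ones are deferred.
import heapq

def schedule(ops):
    """ops: list of (key, estimated_size) pairs.
    Returns the execution order, smallest estimated size first."""
    heap = [(size, key) for key, size in ops]
    heapq.heapify(heap)
    order = []
    while heap:
        _size, key = heapq.heappop(heap)
        order.append(key)
    return order

print(schedule([("a", 4096), ("b", 128), ("c", 1024)]))  # → ['b', 'c', 'a']
```

In the real system the size estimate comes from metadata rather than being given, and deferred large operations must still be bounded so they do not starve.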

    Occidentalis Banca. Viaggio tra banche scomparse e banche ritrovate

    [The need for European banks to comply with the obligations imposed by banking supervision does not rule out strategies with a social impact, oriented toward creating value for shareholders and savers: this is why we need enlightened supervisors and bankers with imagination.] Time lost: is there still room for a madeleine of Proustian memory among the new "fast-food aromas" of modern banking? Or is the soul of this Europe, so often invoked by Jacques Delors, condemned to breathe the ephemeral fumes of a banking system incapable of visions oriented toward the social cohesion and sustainable growth that underpin the European project? The unconfessed nostalgia for the "golden age", when margins guaranteed comfortable returns to shareholders, generous credit to customers and stability to markets, strikes everyone as a vain hope, mortified further by the haste to run for cover. The sweet taste of the past mixes, inexorably, with the new perfume of Europe, which seems to impose prêt-à-porter solutions even for the management of banking crises, to date essentially dissociated from policies of economic and social cohesion and, ultimately, even at odds with banking sustainability itself. Is that all? Is this the sunset of the occidentalis banca, or the overture of a new dawn with an Eastern scent? A first key to reading current events lies in the combination of supervisory action and market forces. While the Single Supervisory Mechanism aims to restore reassuring sustainability parameters for European banks, market pressure pushes bankers toward painful and apparently unavoidable choices that do not always coincide with those that create value, nor with the ultimate goals of supervision itself.
As recalled by Carmelo Barbagallo, Head of the Banking and Financial Supervision Department of the Banca d'Italia, in his address at the meeting Banks and the market: new challenges for operators and institutions, "The impulses coming from the market and from supervision require increasing efficiency, containing administrative and personnel costs, and rationalizing the territorial presence"

    Keynote Talk: Leveraging the Edge-Cloud Continuum to Manage the Impact of Wildfires on Air Quality

    The emergence of large-scale cyberinfrastructure composed of heterogeneous computing capabilities, diverse sensors, and other data sources is enabling new classes of dynamic, data-driven "urgent" applications. However, as the variety of data sources and the volume and velocity of data grow, processing this data while accounting for infrastructure uncertainty and the timeliness constraints of urgent application workflows is nontrivial and presents a new set of challenges. In this paper, we use an application workflow that monitors and manages the air-quality impacts of remote wildfires to illustrate how the R-Pulsar programming system, leveraging the SAGE and WIFIRE platforms, can enable urgent analytics across the computing continuum. R-Pulsar supports urgent data-processing pipelines that trade off the content of data, the cost of computation, and the urgency of the results to support such workflows. We also discuss research challenges associated with programming urgent application workflows and managing resources in an autonomic manner.
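The content/cost/urgency trade-off described above can be sketched as a placement decision: given a deadline, pick the cheapest placement that still meets it. This is a hypothetical sketch, not the R-Pulsar API; the placement table and its latency/cost numbers are assumptions.

```python
# Sketch: deadline-driven placement across the edge-cloud continuum.
PLACEMENTS = {          # placement name -> (latency in s, cost in units)
    "edge":  (2.0, 5.0),   # fast but expensive, close to the sensors
    "cloud": (8.0, 1.0),   # cheap but slower, behind the WAN
}

def place(deadline_s):
    """Return the cheapest placement meeting the deadline, or None."""
    feasible = [(cost, name) for name, (lat, cost) in PLACEMENTS.items()
                if lat <= deadline_s]
    return min(feasible)[1] if feasible else None

print(place(10))  # cloud is cheaper and still meets a relaxed deadline
print(place(3))   # only the edge meets an urgent 3 s deadline
```

A real system would additionally degrade data content (e.g. resolution) when no placement meets the deadline, rather than simply failing.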

    Using Simulation to Evaluate and Tune the Performance of Dynamic Load Balancing of an Over-decomposed Geophysics Application

    Finite difference methods are, in general, well suited to execution on parallel machines and are thus commonplace in High Performance Computing. Yet, despite their apparent regularity, they often exhibit load imbalance that damages their efficiency. In this article, we first characterize the spatial and temporal load imbalance of Ondes3D, a seismic wave propagation simulator used to conduct regional-scale risk assessment. Our analysis reveals that this imbalance originates from the structure of the input data and from low-level CPU optimizations. Such dynamic imbalance should, therefore, be quite common and cannot be solved by any static approach or classical code reorganization. An effective solution for such scenarios, incurring minimal code modification, is to use AMPI/CHARM++. By over-decomposing the application, the CHARM++ runtime can dynamically rebalance the load by migrating data and computation at the granularity of an MPI rank. We show that this approach is effective in balancing the spatial/temporal dynamic load of the application, thus drastically reducing its execution time. However, this approach requires a careful selection of the load balancing algorithm, its activation frequency, and the over-decomposition level. These choices are, unfortunately, quite dependent on application structure and platform characteristics. Therefore, we propose a methodology that leverages the capabilities of the SimGrid simulation framework and makes it possible to conduct such studies at low experimental cost. Our approach relies on a combination of emulation, simulation, and application modeling that requires minimal code modification and yet manages to capture both spatial and temporal load imbalance and to faithfully predict the performance of dynamic load balancing.
We evaluate the quality of our simulation by systematically comparing simulation results with the outcome of real executions, and demonstrate how this approach can be used to quickly find the optimal load balancing configuration for a given application/hardware configuration.
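Why over-decomposition helps can be shown with a minimal sketch (assumed for illustration, not CHARM++ code): when there are many more tasks than processors, a greedy balancer that always assigns the next-heaviest task to the least-loaded processor evens out heterogeneous loads.

```python
# Sketch: greedy rebalancing of over-decomposed tasks across processors.
import heapq

def greedy_balance(task_loads, n_procs):
    """Return the sorted per-processor total loads after greedy placement."""
    procs = [0.0] * n_procs
    heapq.heapify(procs)                      # min-heap of processor loads
    for load in sorted(task_loads, reverse=True):
        lightest = heapq.heappop(procs)       # least-loaded processor
        heapq.heappush(procs, lightest + load)
    return sorted(procs)

# 8 uneven tasks over-decomposed onto 2 processors: near-equal totals.
print(greedy_balance([9, 7, 6, 5, 4, 3, 2, 1], 2))  # → [18.0, 19.0]
```

With only one task per processor (no over-decomposition) this freedom disappears, which is why the runtime needs a granularity finer than the processor count; the activation frequency then controls how often migration costs are paid.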

    RES: Real-time Video Stream Analytics using Edge Enhanced Clouds

    With the increasing availability and use of Internet of Things (IoT) devices, large amounts of streaming data are now being produced at high velocity. Applications that require low-latency responses, such as video surveillance, demand swift and efficient analysis of this data. Existing approaches employ cloud infrastructure to store and perform machine-learning-based analytics on this data. This centralized approach has limited ability to support analysis of real-time, large-scale streaming data due to network bandwidth and latency constraints between data source and cloud. We propose RealEdgeStream (RES), an edge-enhanced stream analytics system for large-scale, high-performance data analytics. The proposed approach addresses the problem of video stream analytics through (i) a filtration phase and (ii) an identification phase. The filtration phase reduces the amount of data by filtering out low-value stream objects using configurable rules. The identification phase uses deep learning inference to perform analytics on the streams of interest. The stages are mapped onto available in-transit and cloud resources using a placement algorithm that satisfies the Quality of Service (QoS) constraints identified by a user. Job completion in the proposed system takes 49% less time and saves 99% bandwidth compared to a centralized, cloud-only approach.
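The filtration phase can be sketched as a chain of cheap, configurable predicates applied before the expensive deep-learning identification stage. The rule names, frame fields, and thresholds below are assumptions for illustration, not RES's actual configuration format.

```python
# Sketch: rule-based filtration of low-value video frames at the edge.
RULES = [
    lambda f: f["motion"] >= 0.2,        # skip near-static frames
    lambda f: f["brightness"] >= 0.1,    # skip unusably dark frames
]

def filtrate(frames):
    """Keep only frames that pass every configured rule."""
    return [f for f in frames if all(rule(f) for rule in RULES)]

frames = [
    {"id": 1, "motion": 0.5, "brightness": 0.8},
    {"id": 2, "motion": 0.0, "brightness": 0.9},   # static: dropped
    {"id": 3, "motion": 0.7, "brightness": 0.05},  # dark: dropped
]
print([f["id"] for f in filtrate(frames)])  # → [1]
```

Because each rule is orders of magnitude cheaper than inference, dropping frames here is what yields the bandwidth savings reported above.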

    Accurately Simulating Energy Consumption of I/O-intensive Scientific Workflows

    While distributed computing infrastructures can provide infrastructure-level techniques for managing energy consumption, application-level energy consumption models have also been developed to support energy-efficient scheduling and resource provisioning algorithms. In this work, we analyze the accuracy of a widely used application-level model that has been developed and used in the context of scientific workflow executions. To this end, we profile two production scientific workflows on a distributed platform instrumented with power meters. We then conduct an analysis of power and energy consumption measurements. This analysis shows that power consumption is not linearly related to CPU utilization and that I/O operations significantly impact power, and thus energy, consumption. We then propose a power consumption model that accounts for I/O operations, including the impact of waiting for these operations to complete, and for concurrent task executions on multi-socket, multi-core compute nodes. We implement our proposed model as part of a simulator that allows us to draw direct comparisons between real-world and modeled power and energy consumption. We find that our model has high accuracy when compared to real-world executions. Furthermore, our model improves accuracy by about two orders of magnitude when compared to the traditional models used in the energy-efficient workflow scheduling literature.
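The shape of such a model can be sketched as follows: power is sub-linear in CPU utilization, and I/O activity, including time spent waiting on I/O, contributes its own draw. The functional form and all coefficients below are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: node power model with sub-linear CPU term and an I/O term.
def node_power(cpu_util, io_active, p_idle=90.0, p_cpu_max=60.0,
               p_io=25.0, alpha=0.6):
    """Power draw (W) of one compute node.

    cpu_util  : CPU utilization in [0, 1]
    io_active : fraction of time with I/O in flight (incl. waiting)
    alpha < 1 makes the CPU/power relationship sub-linear, so mid-range
    utilization draws more than a linear model would predict.
    """
    return p_idle + p_cpu_max * cpu_util ** alpha + p_io * io_active

print(node_power(0.0, 0.0))  # idle draw only: 90.0
print(node_power(1.0, 0.0))  # CPU-bound: 150.0
print(node_power(0.1, 1.0))  # I/O-bound: waiting still costs power
```

A linear model (`p_idle + p_cpu_max * cpu_util`) would miss both effects, which is consistent with the accuracy gap the abstract reports for traditional models.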