
    Taming Energy Consumption Variations in Systems Benchmarking

    The past decade witnessed the inclusion of power measurements to evaluate the energy efficiency of software systems, thus making energy a prime indicator along with performance. Nevertheless, measuring the energy consumption of a software system remains a tedious task for practitioners. In particular, the energy measurement process may be subject to many variations that hinder the relevance of potential comparisons. While the state of the art has mostly acknowledged the impact of hardware factors (chip printing process, CPU temperature), this paper investigates the impact of controllable factors on these variations. More specifically, we conduct an empirical study of multiple controllable parameters that one can easily tune to tame the energy consumption variations when benchmarking software systems. To better understand the causes of such variations, we ran more than 1,000 experiments on more than 100 machines with different workloads and configurations. The main factors we studied encompass: experimental protocol, CPU features (C-states, Turbo Boost, core pinning) and generations, as well as the operating system. Our experiments showed that, for some workloads, it is possible to tighten the energy variation by up to 30×. Finally, we summarize our results as guidelines to tame energy consumption variations. We argue that the guidelines we deliver are the minimal requirements to be considered prior to any energy efficiency evaluation.
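    The kind of measurement loop behind such experiments can be approximated with a short sketch. The snippet below (an illustration, not the paper's tooling) reads the Intel RAPL package-energy counter exposed by the Linux powercap interface around repeated runs of a workload and reports the run-to-run spread; the toy workload and the 30-run count are assumptions.

```python
# Sketch: estimate run-to-run energy variation of a workload by reading the
# Intel RAPL package counter through the Linux powercap interface. Assumes
# Linux, a readable /sys/class/powercap/intel-rapl:0 (may require root), and
# a CPU-bound toy workload standing in for a real benchmark.
import statistics

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"          # package energy, microjoules
RAPL_MAX = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"   # counter wrap-around value


def read_uj(path):
    with open(path) as f:
        return int(f.read().strip())


def workload():
    # Placeholder CPU-bound task; replace with the benchmark under study.
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total


def measure_once():
    wrap = read_uj(RAPL_MAX)
    before = read_uj(RAPL_ENERGY)
    workload()
    delta = read_uj(RAPL_ENERGY) - before
    if delta < 0:          # the counter wrapped around during the run
        delta += wrap
    return delta / 1e6     # joules


if __name__ == "__main__":
    samples = [measure_once() for _ in range(30)]
    mean = statistics.mean(samples)
    spread = (max(samples) - min(samples)) / mean * 100
    print(f"mean {mean:.2f} J, min-max spread {spread:.1f}% over {len(samples)} runs")
```

    Repeating such runs with Turbo Boost or C-states toggled, or with the process pinned to fixed cores (e.g., via taskset), gives a concrete feel for how much each controllable factor contributes to the spread.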

    The Mole & The Snake

    This article starts from the Foucauldian notions of biopower and discipline, dealing with the strategies of modern and contemporary capitalism. Introducing the term biopower into his research, Foucault alludes to a series of transformations related to the capitalist system: life enters into the scope of power in terms of a "controlled insertion of bodies" in the social apparatus of production, as well as in terms of an "adaptation of population phenomena to economic processes". It involves the exchange of services on which the Fordist social pact was founded in the twentieth century. The life that is claimed in and against the relationship of capital concerns "needs" that refer to a "concrete essence of man". In the undeniable awareness of a "triangulation" between sovereignty, discipline and biopower, the author, as a criterion for reading the dynamics of contemporary power, analyzes the theme of control referring to Deleuze. This is delineated in the double form of "biopolitical algorithms" and of the normalization that, by means of the selection and targeted processing of big data and information packages incessantly produced by social activity in and on the network, captures forms of life at the service of capitalism.

    Benchmarking Resource Management For Serverless Computing

    Serverless computing is a way in which users or companies can build and run applications and services without having to worry about acquiring or maintaining servers and their software stacks. This new technology is a significant innovation because server management incurs a large amount of overhead and can be very complex and difficult to work with. The serverless model also allows for fine-grained billing and on-demand resource allocation, allowing for better scalability and cost reduction. Academic researchers and industry practitioners agree that serverless computing is a remarkable innovation, but it introduces new challenges. The algorithms and protocols currently deployed for virtual server optimization in traditional cloud computing environments are not able to simultaneously achieve low latency, high throughput, and fine-grained scalability while maintaining low cost for the cloud service providers. Furthermore, in the serverless computing paradigm, computation units (i.e., functions) are stateless. Applications, specified through function workflows, do not have control over specific states or their scheduling and placement, which can lead to significant latency increases and missed opportunities to optimize the usage of physical servers. Overcoming these challenges highlights some of the tension between giving programmers control and allowing providers to optimize automatically. This research identifies some of the challenges in exploring new resource management approaches for serverless computing (more specifically, FaaS) and attempts to address one of them. Our experimental approach includes the deployment of an open-source serverless function framework, OpenFaaS. We focus on faasd, a more lightweight variant of OpenFaaS; faasd was chosen over standard OpenFaaS because it avoids the complexity and cost of Kubernetes. As researchers in academia and industry develop new approaches for optimizing the usage of CPU, memory, and I/O for serverless platforms, the community needs to establish benchmark workloads for evaluating proposed methods. Several research groups have proposed benchmark suites in the last two years, and many others are still in development. A commonality among these benchmark tools is their complexity; for junior researchers without experience in the deployment of distributed systems, a lot of time and effort goes into deploying the benchmarks, hindering their progress in evaluating newly proposed ideas. In our work, we demonstrate that even well-regarded proposals still introduce deficiencies and deployment challenges, and we propose that a simplified, constrained benchmark can be useful in preparing execution environments for the experimental evaluation of serverless services.
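    As an illustration of how lightweight such a constrained benchmark can be, the sketch below measures invocation latency against an OpenFaaS/faasd gateway over plain HTTP. The gateway address is the faasd default, while the function name "echo" and the run count are assumptions; any function deployed with `faas-cli deploy` would do.

```python
# Sketch of a minimal invocation-latency micro-benchmark for an OpenFaaS/faasd
# gateway. Assumes a function named "echo" has already been deployed and that
# the gateway listens on the default local port.
import time
import urllib.request

GATEWAY = "http://127.0.0.1:8080"   # default OpenFaaS/faasd gateway address
FUNCTION = "echo"                    # hypothetical function name
PAYLOAD = b"hello"
RUNS = 100


def invoke_once():
    req = urllib.request.Request(f"{GATEWAY}/function/{FUNCTION}", data=PAYLOAD)
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000  # milliseconds


if __name__ == "__main__":
    latencies = sorted(invoke_once() for _ in range(RUNS))
    p50 = latencies[RUNS // 2]
    p99 = latencies[int(RUNS * 0.99) - 1]
    print(f"p50 {p50:.1f} ms, p99 {p99:.1f} ms over {RUNS} invocations")
```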

    Polymers in Fractal Disorder

    This work presents a numerical investigation of self-avoiding walks (SAWs) on percolation clusters, a canonical model for polymers in disordered media. A new algorithm has been developed that allows exact enumeration of walks of over ten thousand steps. This is an increase of several orders of magnitude compared to previously existing enumeration methods, which allow for barely more than forty steps. Such an increase is achieved by exploiting the fractal structure of critical percolation clusters: they are hierarchically organized into a tree of loosely connected nested regions in which the walk segments are enumerated separately. After the enumeration process, a region is "decimated" and subsequently behaves effectively as a single point. Since this method only works efficiently near the percolation threshold, a chain-growth Monte Carlo algorithm (PERM) has also been used. The main focus of the investigations was the asymptotic scaling behavior of the average end-to-end distance as a function of the number of steps on critical clusters in different dimensions. Thanks to the highly efficient new method, existing estimates of the scaling exponents could be improved substantially. Also investigated were the number of possible chain conformations and the average entropy, which were found to follow an unusual scaling behavior. For concentrations above the percolation threshold, the exponent describing the growth of the end-to-end distance turned out to differ from that on regular lattices, defying the prediction of the accepted theory. Finally, SAWs with short-range attractions on percolation clusters are discussed. Here, it emerged that there seems to be no temperature-driven collapse transition, as the asymptotic scaling behavior of the end-to-end distance even at zero temperature is the same as for athermal SAWs.
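    For context on why exact enumeration is hard, a brute-force baseline is easy to state: recursively extend a walk site by site, backtracking on self-intersections or vacant sites. The sketch below (an illustrative baseline, not the hierarchical decimation algorithm of this work) counts SAWs of a fixed length on a single site-diluted square-lattice sample and accumulates the squared end-to-end distance; the lattice size, dilution probability, and walk length are assumptions.

```python
# Brute-force exact enumeration of self-avoiding walks on a site-diluted square
# lattice: count all walks of N_STEPS steps starting from the centre and sum
# their squared end-to-end distances. The running time grows roughly like 3^N,
# which is why this only works for short walks.
import random

L = 21            # lattice size
P = 0.5927        # site occupation probability (near the 2D percolation threshold)
N_STEPS = 8       # walk length

random.seed(1)
occupied = {(x, y): random.random() < P for x in range(L) for y in range(L)}
START = (L // 2, L // 2)
occupied[START] = True   # ensure the walks have a starting site


def enumerate_saws(pos, steps_left, visited, stats):
    if steps_left == 0:
        dx, dy = pos[0] - START[0], pos[1] - START[1]
        stats[0] += 1                   # number of conformations
        stats[1] += dx * dx + dy * dy   # accumulated squared end-to-end distance
        return
    x, y = pos
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if occupied.get(nxt, False) and nxt not in visited:
            visited.add(nxt)
            enumerate_saws(nxt, steps_left - 1, visited, stats)
            visited.remove(nxt)


stats = [0, 0]
enumerate_saws(START, N_STEPS, {START}, stats)
if stats[0]:
    print(f"{stats[0]} walks of {N_STEPS} steps, mean R^2 = {stats[1] / stats[0]:.3f}")
else:
    print("no walks of this length start from the centre of this sample")
```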

    Tales from the Code #1: The Effective Impact of Code Refactorings on Software Energy Consumption

    Software maintenance and evolution enclose a broad set of actions that aim to improve both functional and non-functional concerns of a software system. Among the non-functional concerns, energy consumption is getting more and more traction in the industry, whether the software is mobile or deployed in the cloud. In this context, though, the impact of code refactorings on energy consumption remains unclear. In particular, while the state of the art has investigated the impact of some specific code refactorings on dedicated benchmarks, we lack an assessment of whether those findings apply to more comprehensive and complex software. To address this gap, this paper studies the evolution of the energy consumption of 7 open-source software systems, each developed for more than 5 years. Then, by focusing on the impact on energy consumption of changes involving code refactorings, we intend to assess the effects induced by such code refactorings in practice. For all the software systems we studied, our empirical results report that the code refactorings we mined do not substantially impact energy consumption. Interestingly, these results highlight that i) structural code refactorings bring energy-preserving changes to the code, and ii) major energy variations seem to be related to functional and computational code evolutions.

    Evaluating the Impact of Java Virtual Machines on Energy Consumption

    Background. Java Virtual Machine (JVM) platforms have undergone multiple evolutions over the last decades to enhance both the performance they exhibit and the features they offer. With regard to energy consumption, a few studies have investigated the energy consumption of code and data structures. Yet, we still lack an evaluation of the energy efficiency of existing JVM platforms and an identification of the configurations that minimize the energy consumption of software hosted on the JVM. Aims. The purpose of this paper is to investigate the variations in energy consumption between different JVM distributions and parameters to help developers configure the least consuming environment for their Java application. Method. We thus assess the energy consumption of some of the most popular and supported JVM platforms using 12 Java benchmarks that explore different performance objectives. Moreover, we investigate the impact of the different JVM parameters and configurations on the energy consumption of software. Results. Our results show that some JVM platforms can exhibit up to 100% more energy consumption. JVM configurations can also play a substantial role in reducing the energy consumption during software execution. Interestingly, the default configuration of the garbage collector was energy efficient in only 50% of our experiments. Conclusion. Finally, we provide an open-source tool, named J-Referral, that recommends an energy-efficient JVM distribution and configuration for any Java application.
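    The core of such an evaluation loop can be sketched briefly. The snippet below (an illustration, not the J-Referral tool) runs the same benchmark JAR under several JVM installations and garbage-collector flags while reading the Linux RAPL/powercap energy counter; the JVM paths and the benchmark JAR name are assumptions.

```python
# Sketch: compare the energy of one benchmark across JVM distributions and GC
# configurations using the Intel RAPL package counter. Assumes Linux with a
# readable powercap interface and HotSpot-based JVMs installed at the given
# (hypothetical) paths.
import subprocess

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

JVMS = {                                     # hypothetical installation paths
    "temurin-17": "/opt/temurin-17/bin/java",
    "graalvm-17": "/opt/graalvm-17/bin/java",
}
GC_FLAGS = ["-XX:+UseG1GC", "-XX:+UseParallelGC"]   # standard HotSpot GC selectors
BENCHMARK = ["-jar", "benchmark.jar"]               # hypothetical workload


def read_energy_uj():
    with open(RAPL_ENERGY) as f:
        return int(f.read().strip())


for name, java in JVMS.items():
    for gc in GC_FLAGS:
        before = read_energy_uj()
        subprocess.run([java, gc, *BENCHMARK], check=True)
        joules = max(read_energy_uj() - before, 0) / 1e6   # ignores counter wrap-around for brevity
        print(f"{name} {gc}: {joules:.1f} J")
```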

    An Empirical Investigation of Performance Overhead in Cross-Platform Mobile Development Frameworks

    The heterogeneity of the leading mobile platforms in terms of user interfaces, user experience, programming language, and ecosystem has made cross-platform development frameworks popular. These aid the creation of mobile applications – apps – that can be executed across the target platforms (typically Android and iOS) with minimal to no platform-specific code. Due to the cost- and time-saving possibilities introduced through adopting such a framework, researchers and practitioners alike have taken an interest in the underlying technologies. Examining the body of knowledge, we nonetheless frequently encounter discussions on the drawbacks of these frameworks, especially with regard to the performance of the apps they generate. Motivated by the ongoing discourse and a lack of empirical evidence, we scrutinised the essential piece of the cross-platform frameworks: the bridge enabling cross-platform code to communicate with the underlying operating system and device hardware APIs. The study we present in this article benchmarks and measures the performance of this bridge to reveal its associated overhead in Android apps. The development of the artifacts for this experiment was conducted using five cross-platform development frameworks to generate Android apps, in addition to a baseline native Android app implementation. Our results indicate that – for Android apps – the use of cross-platform frameworks for the development of mobile apps may lead to decreased performance compared to the native development approach. Nevertheless, certain cross-platform frameworks can perform equally well or even better than native on certain metrics, which highlights the importance of well-defined technical requirements and specifications for the deliberate selection of a cross-platform framework or overall development approach.

    Effective memory management for mobile environments

    Smartphones, tablets, and other mobile devices exhibit vastly different constraints compared to regular or classic computing environments like desktops, laptops, or servers. Mobile devices run dozens of so-called “apps” hosted by independent virtual machines (VMs). All these VMs run concurrently, and each VM deploys purely local heuristics to organize resources like memory, performance, and power. Such a design causes conflicts across all layers of the software stack, calling for the evaluation of VMs and of optimization techniques specific to mobile frameworks. In this dissertation, we study the design of managed runtime systems for mobile platforms. More specifically, we deepen the understanding of interactions between garbage collection (GC) and system layers. We develop tools to monitor the memory behavior of Android-based apps and to characterize GC performance, leading to the development of new techniques for memory management that address energy constraints, time performance, and responsiveness. We implement a GC-aware frequency scaling governor for Android devices. We also explore the tradeoffs of power and performance in vivo for a range of realistic GC variants, with established benchmarks and real applications running on Android virtual machines. We control for variation due to dynamic voltage and frequency scaling (DVFS), just-in-time (JIT) compilation, and across established dimensions of heap memory size and concurrency. Finally, we provision GC as a global service that collects statistics from all running VMs and then makes an informed decision that optimizes across all of them (and not just locally), and across all layers of the stack. Our evaluation illustrates the power of such a central coordination service and garbage collection mechanism in improving memory utilization, throughput, and adaptability to user activities. In fact, our techniques aim at a sweet spot, where total on-chip energy is reduced (20–30%) with minimal impact on throughput and responsiveness (5–10%). The simplicity and efficacy of our approach reach well beyond the usual optimization techniques.
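    The GC-aware frequency-scaling idea can be sketched as a small userspace policy loop: keep the CPU frequency ceiling low during ordinary execution and raise it while a collection is in progress, so pauses finish quickly without paying for high frequencies the rest of the time. The snippet below is a rough illustration, not the dissertation's governor; the GC-activity signal is a placeholder and the frequency steps are assumptions.

```python
# Rough sketch of a GC-aware userspace frequency policy: raise the per-core
# frequency ceiling while garbage collection is active and lower it otherwise.
# Writing cpufreq sysfs files requires root; gc_in_progress() is a placeholder
# for a real hook into the managed runtime's GC events.
import pathlib
import time

CPUFREQ = pathlib.Path("/sys/devices/system/cpu/cpu0/cpufreq")

# Assumed frequency steps; check scaling_available_frequencies on the target device.
LOW_KHZ = "1200000"
HIGH_KHZ = "2400000"


def set_max_freq(khz):
    (CPUFREQ / "scaling_max_freq").write_text(khz)


def gc_in_progress():
    # Placeholder: in a real implementation, poll a flag exported by the runtime
    # (e.g., parsed from GC logs or shared memory). Always False in this sketch.
    return False


if __name__ == "__main__":
    current = None
    while True:
        target = HIGH_KHZ if gc_in_progress() else LOW_KHZ
        if target != current:
            set_max_freq(target)
            current = target
        time.sleep(0.05)   # 50 ms polling period
```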

    Benchmarking the performance of controllers for power grid transient stability

    As the energy transition transforms power grids across the globe, it poses several challenges regarding grid design and control. In particular, high levels of intermittent renewable generation complicate the task of continuously balancing power supply and demand, requiring sufficient control actions. Although there exist several proposals to control the grid, most of them have not been demonstrated to be cost-efficient in terms of optimal control theory. Here, we mathematically formulate an optimal centralized (therefore non-local) control problem for the stable operation of power grids and determine the minimal amount of active power necessary to guarantee a stable service within the operational constraints, while minimizing a suitable cost function. This optimal control can be used to benchmark control proposals, and we demonstrate this benchmarking process by investigating the performance of three distributed controllers, two of which are fully decentralized, that have been recently studied in the physics and power systems engineering literature. Our results show that cost-efficient controllers distribute the controlled response amongst all nodes in the power grid. Additionally, superior performance can be achieved by incorporating sufficient information about the disturbance causing the instability. Overall, our results can help design and benchmark secure and cost-efficient controllers.
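    A generic way to cast such a problem (an illustrative formulation, not necessarily the exact cost function or constraints used in the paper) is an optimal control problem over swing-equation dynamics with a quadratic penalty on control effort and frequency deviations:

```latex
% Illustrative swing-equation optimal control formulation (assumed generic form).
\begin{align}
  \min_{u_i(t)} \quad & \int_0^T \sum_i \Big( \alpha\, u_i(t)^2 + \beta\, \omega_i(t)^2 \Big)\, \mathrm{d}t \\
  \text{s.t.} \quad & \dot{\theta}_i = \omega_i, \\
  & M_i \dot{\omega}_i = P_i + u_i(t) - D_i \omega_i - \sum_{j} K_{ij} \sin(\theta_i - \theta_j), \\
  & |u_i(t)| \le u_i^{\max},
\end{align}
```

    where \theta_i and \omega_i are the phase angle and frequency deviation at node i, M_i and D_i the inertia and damping, P_i the net power injection, K_{ij} the coupling strength of the line between nodes i and j, and u_i(t) the controlled active power. The minimal control effort obtained from such a centralized problem then serves as the yardstick against which distributed and decentralized controllers are compared.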