
    Towards Energy-Proportional Computing for Enterprise-Class Server Workloads

    Massive data centers housing thousands of computing nodes have become commonplace in enterprise computing, and the power consumption of such data centers is growing at an unprecedented rate. Adding to the problem is the inability of the servers to exhibit energy proportionality, i.e., provide energy-efficient execution under all levels of utilization, which diminishes the overall energy efficiency of the data center. It is imperative that we realize effective strategies to control the power consumption of the server and improve the energy efficiency of data centers. With the advent of Intel Sandy Bridge processors, we have the ability to specify a limit on power consumption during runtime, which creates opportunities to design new power-management techniques for enterprise workloads and make the systems that they run on more energy-proportional. In this paper, we investigate whether it is possible to achieve energy proportionality for an enterprise-class server workload, namely the SPECpower_ssj2008 benchmark, by using Intel's Running Average Power Limit (RAPL) interfaces. First, we analyze the power consumption and characterize the instantaneous power profile of the SPECpower benchmark at a subsystem level using the on-chip energy meters exposed via the RAPL interfaces. We then analyze the impact of RAPL power limiting on the performance, per-transaction response time, power consumption, and energy efficiency of the benchmark under different load levels. Our observations and results shed light on the efficacy of the RAPL interfaces and provide guidance for designing power-management techniques for enterprise-class workloads.
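
    On Linux, RAPL energy meters and power limits of the kind described above are typically exposed through the powercap sysfs interface; the sketch below shows how package-level energy might be sampled and a power cap written. It is a minimal sketch only: the intel-rapl:0 domain path, the sampling interval, and the omission of counter-wraparound handling are assumptions for illustration, not the instrumentation used in the paper.

```python
# Minimal sketch: sample package energy via Linux powercap (RAPL) and set a power cap.
# Assumes the intel_rapl driver is loaded; the domain path below may differ per system.
import time

RAPL_DOMAIN = "/sys/class/powercap/intel-rapl:0"  # package-level domain (assumed)

def read_energy_uj() -> int:
    """Read the cumulative package energy counter in microjoules."""
    with open(f"{RAPL_DOMAIN}/energy_uj") as f:
        return int(f.read())

def average_power_watts(interval_s: float = 1.0) -> float:
    """Estimate average package power over a short interval (counter wraparound ignored)."""
    start = read_energy_uj()
    time.sleep(interval_s)
    end = read_energy_uj()
    return (end - start) / (interval_s * 1e6)  # microjoules -> joules, then per second

def set_power_limit_watts(watts: float) -> None:
    """Write the long-term package power cap (requires root)."""
    with open(f"{RAPL_DOMAIN}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(int(watts * 1e6)))  # value is in microwatts

if __name__ == "__main__":
    print(f"package power ~ {average_power_watts():.1f} W")
```

    Dividing the change in the energy counter by the elapsed time yields average power, which is the basis for the kind of subsystem-level power profiling the abstract describes.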

    Load-Varying LINPACK: A Benchmark for Evaluating Energy Efficiency in High-End Computing

    For decades, performance has driven the high-end computing (HEC) community. However, as highlighted in recent exascale studies that chart a path from petascale to exascale computing, power consumption is fast becoming the major design constraint in HEC. Consequently, the HEC community needs to address this issue in future petascale and exascale computing systems. Current scientific benchmarks, such as LINPACK and SPEChpc, only evaluate HEC systems when running at full throttle, i.e., 100% workload, resulting in a focus on performance and ignoring the issues of power and energy consumption. In contrast, efforts like SPECpower evaluate the energy efficiency of a compute server at varying workloads. This is analogous to evaluating the energy efficiency (i.e., fuel efficiency) of an automobile at varying speeds (e.g., miles per gallon highway versus city). SPECpower, however, only evaluates the energy efficiency of a single compute server rather than a HEC system; furthermore, it is based on SPEC's Java Business Benchmarks (SPECjbb) rather than a scientific benchmark. Given the absence of a load-varying scientific benchmark to evaluate the energy efficiency of HEC systems at different workloads, we propose the load-varying LINPACK (LV-LINPACK) benchmark. In this paper, we identify application parameters that affect performance and provide a methodology to vary the workload of LINPACK, thus enabling a more rigorous study of energy efficiency in supercomputers, or more generally, HEC.
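
    To make the idea of a load-varying run concrete, here is a minimal sketch that sweeps a dense-solve problem size across several load levels and reports performance per watt. The standard LINPACK operation count (2/3·n³ + 2·n²) is used, but the load levels, the numpy-based solver, and the placeholder power reading are illustrative assumptions rather than the LV-LINPACK methodology defined in the paper.

```python
# Minimal sketch: emulate varying load levels by sweeping LINPACK-style problem sizes
# and reporting performance per watt. Power measurement is a placeholder (assumption).
import time

import numpy as np

def linpack_flops(n: int) -> float:
    """Standard LINPACK operation count for solving an n x n dense system."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

def solve_once(n: int) -> float:
    """Time one dense solve (LU via numpy) and return elapsed seconds."""
    a = np.random.rand(n, n)
    b = np.random.rand(n)
    t0 = time.perf_counter()
    np.linalg.solve(a, b)
    return time.perf_counter() - t0

def read_power_watts() -> float:
    """Placeholder: substitute a real meter (e.g., RAPL or a PDU) here."""
    return 100.0

if __name__ == "__main__":
    full_n = 4000
    for load in (1.0, 0.8, 0.6, 0.4, 0.2):  # fraction of the full problem size
        n = int(full_n * load)
        secs = solve_once(n)
        gflops = linpack_flops(n) / secs / 1e9
        print(f"load {load:.0%}: {gflops:7.1f} GFLOPS, "
              f"{gflops / read_power_watts() * 1000:7.1f} MFLOPS/W (placeholder power)")
```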

    The Green500 List: Escapades to Exascale

    Energy efficiency is now a top priority. The first four years of the Green500 have seen the importance of energy efficiency in supercomputing grow from an afterthought to the forefront of innovation as we near a point where systems will be forced to stop drawing more power. Even so, the landscape of efficiency in supercomputing continues to shift, with new trends emerging, and unexpected shifts in previous predictions. This paper offers an in-depth analysis of the new and shifting trends in the Green500. In addition, the analysis offers early indications of the track we are taking toward exascale, and what an exascale machine in 2018 is likely to look like. Lastly, we discuss the new efforts and collaborations toward designing and establishing better metrics, methodologies and workloads for the measurement and analysis of energy-efficient supercomputing.
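
    The Green500 ranks systems by performance per watt, reported as MFLOPS/W; a minimal sketch of that ranking computation is shown below, with invented example systems standing in for real list entries.

```python
# Minimal sketch: rank systems by the Green500 metric, performance per watt.
# The systems listed here are invented examples, not actual Green500 entries.
from dataclasses import dataclass

@dataclass
class System:
    name: str
    linpack_gflops: float   # sustained LINPACK performance
    power_kw: float         # measured power during the run

def mflops_per_watt(s: System) -> float:
    # GFLOPS -> MFLOPS multiplies by 1000; kW -> W multiplies by 1000, so the factors cancel.
    return s.linpack_gflops / s.power_kw

systems = [
    System("example-cluster-a", 1_200_000.0, 600.0),
    System("example-cluster-b", 850_000.0, 350.0),
]

for rank, s in enumerate(sorted(systems, key=mflops_per_watt, reverse=True), start=1):
    print(f"{rank}. {s.name}: {mflops_per_watt(s):.1f} MFLOPS/W")
```

    Real list submissions also depend on how and where power is measured during the run, which is exactly the metrics-and-methodology work the paper discusses.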

    Computational Evaluation of Potential Correction Methods for Unicoronal Craniosynostosis

    Unicoronal craniosynostosis is the second most common type of nonsyndromic craniosynostosis: it is characterized by ipsilateral forehead and fronto-parietal region flattening with contralateral compensatory bossing. It is a complex condition and is therefore difficult to treat because of the asymmetry in the orbits, cranium, and face. The aim of this study is to understand optimal osteotomy locations, dimensions, and force requirements for surgical operations on unicoronal craniosynostosis using a patient-specific finite element model and, at the same time, to evaluate the potential application of a new device made from Nitinol which was developed to expand the affected side of a unicoronal craniosynostosis skull without performing osteotomies. The model geometry was reconstructed using Simpleware ScanIP. The bone and sutures were modeled using elastic properties to perform the finite element analyses in MSC Marc software. The simulation results showed that expanding the cranium without osteotomy requires a significant amount of force. Therefore, the expansion of the cranium achieved by Nitinol devices may not be sufficient to correct the deformity. Moreover, the size and locations of the osteotomies are crucial for an optimal outcome from surgical operations in unicoronal craniosynostosis.

    Scaling Smart Appliances for Spatial Data Synthesis

    With the rapidly growing number of dynamic data streams produced by sensing and experimental devices as well as social networks, scientists are given an unprecedented opportunity to explore a variety of environmental and social phenomena ranging from understanding of weather and climate to population dynamics. One of the main challenges is that dynamic data streams and their computation requirements are volatile: sensors or social networks may generate data at highly variable rates, processing time in an application may significantly change from one stage to the next, or different phenomena may simply generate different levels of interest. Cloud computing is a promising platform allowing us to cope with such volatility because it enables us to allocate computational resources on demand, for short periods of time, and at an acceptable cost. At the same time, using clouds for this purpose is challenging because an application may yield very different performance depending on the hosting infrastructure, requiring us to pay special attention to how and where we schedule resources. In this poster, we describe our experiences using an application relying on input from social networks, notably geo-located tweets, to discover correlations between users’ work and home locations, with a focus on the Illinois area. Our overall intent is to assess the impact of running the same application in offerings from different providers; to this end, we execute data filtering and per-user classification applications in two flavors of Chameleon cloud instances, namely bare-metal and KVM. Also, we analyze specific configuration parameters, such as data block size, replication factor and parallel processing, towards statistically modeling the application performance in a given infrastructure. We then identify and discuss the key parameters that influence the execution time. Finally, we look into the gains brought by accounting for data proximity when scheduling a resource in a multi-site environment.
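
    The per-user classification of work and home locations is only summarized above; the sketch below illustrates one plausible heuristic, assuming each tweet record carries a user id, timestamp, and coordinates, and labeling a user's most frequent working-hours grid cell as "work" and most frequent nighttime cell as "home". The field names, grid resolution, and hour thresholds are assumptions for illustration, not the poster's actual pipeline.

```python
# Minimal sketch of a per-user home/work classification over geo-located tweets.
# Heuristic (assumption): the most frequent nighttime cell is "home", the most
# frequent working-hours cell is "work". Field names are illustrative only.
from collections import Counter, defaultdict
from datetime import datetime

def grid_cell(lat: float, lon: float, precision: float = 0.01) -> tuple:
    """Snap coordinates to a coarse grid (~1 km) so nearby points aggregate."""
    return (round(lat / precision), round(lon / precision))

def classify_users(tweets):
    """tweets: iterable of dicts with 'user', 'time' (datetime), 'lat', 'lon'."""
    night = defaultdict(Counter)   # per-user counts of nighttime cells
    day = defaultdict(Counter)     # per-user counts of working-hours cells
    for t in tweets:
        cell = grid_cell(t["lat"], t["lon"])
        hour = t["time"].hour
        if 9 <= hour < 17:
            day[t["user"]][cell] += 1
        elif hour >= 22 or hour < 6:
            night[t["user"]][cell] += 1
    return {
        user: {
            "home": night[user].most_common(1)[0][0] if night[user] else None,
            "work": day[user].most_common(1)[0][0] if day[user] else None,
        }
        for user in set(night) | set(day)
    }

if __name__ == "__main__":
    sample = [
        {"user": "u1", "time": datetime(2016, 5, 2, 23), "lat": 41.88, "lon": -87.63},
        {"user": "u1", "time": datetime(2016, 5, 3, 10), "lat": 41.79, "lon": -87.60},
    ]
    print(classify_users(sample))
```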

    Global patient outcomes after elective surgery: prospective cohort study in 27 low-, middle- and high-income countries.

    BACKGROUND: As global initiatives increase patient access to surgical treatments, there remains a need to understand the adverse effects of surgery and define appropriate levels of perioperative care. METHODS: We designed a prospective international 7-day cohort study of outcomes following elective adult inpatient surgery in 27 countries. The primary outcome was in-hospital complications. Secondary outcomes were death following a complication (failure to rescue) and death in hospital. Process measures were admission to critical care immediately after surgery or to treat a complication and duration of hospital stay. A single definition of critical care was used for all countries. RESULTS: A total of 474 hospitals in 19 high-, 7 middle- and 1 low-income country were included in the primary analysis. Data included 44 814 patients with a median hospital stay of 4 (range 2-7) days. A total of 7508 patients (16.8%) developed one or more postoperative complication and 207 died (0.5%). The overall mortality among patients who developed complications was 2.8%. Mortality following complications ranged from 2.4% for pulmonary embolism to 43.9% for cardiac arrest. A total of 4360 (9.7%) patients were admitted to a critical care unit as routine immediately after surgery, of whom 2198 (50.4%) developed a complication, with 105 (2.4%) deaths. A total of 1233 patients (16.4%) were admitted to a critical care unit to treat complications, with 119 (9.7%) deaths. Despite lower baseline risk, outcomes were similar in low- and middle-income compared with high-income countries. CONCLUSIONS: Poor patient outcomes are common after inpatient surgery. Global initiatives to increase access to surgical treatments should also address the need for safe perioperative care. STUDY REGISTRATION: ISRCTN5181700

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    Zero-Rating, Net Neutrality and the Progressive Realisation of Human Rights

    The net neutrality debate today, specifically with respect to zero-rating, is invariably characterised as a clash between the noble aspiration to universalise access on one hand, and a handful of “core values of the internet” on the other. Such framing makes for a lively dialogue – neutrality proponents can extol the virtues of an “open internet”, and can argue that access universalisation is impossible, and therefore any failed attempt toward that goal is not worth the risk of permanently altering the nature of the network. Proponents of net neutrality argue that zero-rating would stifle innovation and distort consumer choice to create internet oligopolies – in a nutshell, that the practice is “anti-competitive, patronizing, and counter-productive”. Advocates of zero-rating need only point to the virtues of universal internet access – bridging the digital divide, as it were. It is possible, however, to re-articulate these values in terms that could transcend such a clash of interests. In this article, I argue that it is possible to carry out such a reformulation, and that this reformulation results in the subordination of some values to greater human rights claims. In Part I, I attempt to establish that internet access is fundamentally linked to the effective delivery of several human rights. In Part II, a model that optimises the realisation of the human rights claims previously enumerated is outlined. In Part III, the circumstances under which the human rights approach would be incompatible with the zero-rating debate are examined, and this analysis is then used to further refine the model.