
    Architecting Efficient Data Centers.

    Data center power consumption has become a key constraint in continuing to scale Internet services. As our society’s reliance on “the Cloud” continues to grow, companies require an ever-increasing amount of computational capacity to support their customers. Massive warehouse-scale data centers have emerged, requiring 30 MW or more of total power capacity. Over the lifetime of a typical high-scale data center, power-related costs make up 50% of the total cost of ownership (TCO). Furthermore, the aggregate effect of data center power consumption across the country cannot be ignored. In total, data center energy usage has reached approximately 2% of aggregate consumption in the United States and continues to grow. This thesis addresses the need to increase computational efficiency to address this growing problem. It proposes a new class of power management techniques: coordinated full-system idle low-power modes that increase the energy proportionality of modern servers. First, we introduce the PowerNap server architecture, a coordinated full-system idle low-power mode which transitions in and out of an ultra-low-power nap state to save power during brief idle periods. While effective for uniprocessor systems, PowerNap relies on full-system idleness, and we show that such idleness disappears as the number of cores per processor continues to increase. We expose this problem in a case study of Google Web search, in which we demonstrate that coordinated full-system active power modes are necessary to reach energy proportionality and that PowerNap is ineffective because of a lack of idleness. To recover full-system idleness, we introduce DreamWeaver, architectural support for deep sleep. DreamWeaver allows a server to exchange latency for full-system idleness, allowing PowerNap-enabled servers to be effective, and provides a better latency-power savings tradeoff than existing approaches. Finally, this thesis investigates workloads which achieve efficiency through methodical cluster provisioning techniques. Using the popular memcached workload, this thesis provides examples of provisioning clusters for cost-efficiency given latency, throughput, and data set size targets.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/91499/1/meisner_1.pd
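    As a rough illustration of the tradeoff described above (not the thesis's actual model or parameters), the following Python sketch estimates the average power of a hypothetical PowerNap-style server from a list of idle-period lengths, assuming the server can nap only during gaps longer than its transition time; all power values and timings are invented for the example.

```python
# Illustrative estimate of average power for a PowerNap-style server.
# All power values and timing parameters are assumptions for this example,
# not measurements or parameters from the thesis.

def average_power(idle_periods_ms, busy_ms,
                  p_active=300.0,      # W while serving requests (assumed)
                  p_idle=200.0,        # W while idle but awake (assumed)
                  p_nap=10.0,          # W in the nap state (assumed)
                  transition_ms=1.0):  # time to enter and exit nap (assumed)
    """Average power when the server naps only during idle gaps longer than
    the transition time; shorter gaps are spent at full idle power."""
    energy = busy_ms * p_active        # W * ms; the units cancel in the average
    total_ms = busy_ms
    for gap in idle_periods_ms:
        total_ms += gap
        if gap > transition_ms:
            # pay idle power during the transition, nap power for the rest
            energy += transition_ms * p_idle + (gap - transition_ms) * p_nap
        else:
            energy += gap * p_idle
    return energy / total_ms

# Same total idleness (600 ms) split into many short gaps vs. a few long gaps.
short_gaps = [2.0] * 300
long_gaps = [60.0] * 10
print(average_power(short_gaps, busy_ms=400.0))   # little time spent napping
print(average_power(long_gaps, busy_ms=400.0))    # most idle time spent napping
```

    With the same total idleness, many short gaps save far less power than a few long gaps, which is why the thesis argues that recovering longer full-system idle periods matters.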

    Multitype Maximal Covering Location Problems: Hybridizing discrete and continuous problems

    Acknowledgements: This research has been partially supported by Spanish Ministerio de Ciencia e Innovación, AEI/FEDER grant number PID2020-114594GBC21, Junta de Andalucía projects P18-FR-1422/2369, and projects FEDERUS-1256951, B-FQM-322-UGR20, CEI-3-FQM331 and NetmeetData (Fundación BBVA 2019). The first author was also partially supported by the IMAG-Maria de Maeztu grant CEX2020-001105-M/AEI/10.13039/501100011033. The second author was partially supported by Spanish Ministry of Education and Science grant number PEJ2018-002962-A, the PhD Program in Mathematics at the Universidad de Granada, and Becas de Movilidad entre Universidades Andaluzas e Iberoamericanas (AUIP). The third author was partially funded by grant UIDB/04561/2020 from National Funding from FCT | Fundação para a Ciência e Tecnologia, Portugal.
    This paper introduces a general modeling framework for a multi-type maximal covering location problem in which the positions of facilities in different metric spaces are simultaneously decided to maximize the covered demand generated by a set of points. Motivated by the need to intertwine location decisions in discrete and in continuous sets, a general hybridized problem is considered in which some types of facilities are to be located in finite sets and the others in continuous metric spaces. A natural non-linear model is proposed, for which an integer linear programming reformulation is derived. A branch-and-cut algorithm is developed to tackle the problem more effectively. The study then considers the particular case in which the continuous facilities are to be located in the Euclidean plane. In this case, taking advantage of some geometrical properties, it is possible to propose an alternative integer linear programming model. The results of an extensive battery of computational experiments performed to assess the methodological contribution of this work are reported. The instances consist of up to 920 demand nodes using real geographical and demographic data.
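    As a lightweight illustration of the covering objective (not the paper's hybrid discrete-continuous model nor its branch-and-cut algorithm), the sketch below implements a simple greedy heuristic for the classical discrete maximal covering location problem; the demand weights, candidate sites, and coverage sets are made-up toy data.

```python
# Greedy heuristic for the classical (discrete) maximal covering location
# problem; a simplified stand-in for illustration, not the paper's method.

def greedy_mclp(demand, coverage, p):
    """demand: {node: weight}; coverage: {site: set of nodes it covers};
    p: number of facilities to open.  Returns (chosen sites, covered demand)."""
    chosen, covered = [], set()
    for _ in range(p):
        best_site, best_gain = None, 0.0
        for site, nodes in coverage.items():
            if site in chosen:
                continue
            gain = sum(demand[n] for n in nodes - covered)  # newly covered demand
            if gain > best_gain:
                best_site, best_gain = site, gain
        if best_site is None:          # no remaining site adds coverage
            break
        chosen.append(best_site)
        covered |= coverage[best_site]
    return chosen, sum(demand[n] for n in covered)

# Toy instance: 5 demand nodes, 3 candidate sites, open 2 facilities.
demand = {1: 10, 2: 4, 3: 7, 4: 1, 5: 6}
coverage = {"A": {1, 2}, "B": {2, 3, 4}, "C": {4, 5}}
print(greedy_mclp(demand, coverage, p=2))
```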

    Qos-aware fine-grained power management in networked computing systems

    Power is a major design concern of today's networked computing systems, from low-power battery-powered mobile and embedded systems to high-power enterprise servers. Embedded systems are required to be power efficient because most of them are powered by batteries with limited capacity. A similar concern about power expenditure arises in enterprise server environments due to cooling requirements, power delivery limits, electricity costs, and environmental impact. The power consumption of networked computing systems includes that on the circuit board and that for communication. In the context of networked real-time systems, the power dissipated on wireless communication is more significant than that on the circuit board. We focus on packet scheduling for wireless real-time systems with renewable energy resources. In such a scenario, data with a higher level of importance must be transmitted periodically. We formulate this packet scheduling problem as an NP-hard reward maximization problem with time and energy constraints. An optimal solution with pseudo-polynomial time complexity is presented. In addition, we propose a sub-optimal solution with polynomial time complexity. Circuit-board power consumption, especially that of the processor, is still the major source of system power consumption. We provide a general-purpose, practical, and comprehensive power management middleware for networked computing systems that manages circuit-board power consumption and thereby affects system-level power consumption. It provides power and performance monitoring, power management (PM) policy selection and PM control, as well as energy efficiency analysis. This middleware includes an extensible PM policy library. We implemented a prototype of this middleware on Base Band Units (BBUs) with three PM policies enclosed. These policies have been validated on different platforms, such as enterprise servers, virtual environments, and BBUs. In enterprise environments, the power dissipated on the circuit board dominates, and regulating the computing resources on board has a significant impact on power consumption. Dynamic Voltage and Frequency Scaling (DVFS) is an effective technique for conserving energy. We investigate system-level power management in order to avoid system failures due to power capacity overload or overheating. This management needs to control power consumption in an accurate and responsive manner, which cannot be achieved by existing black-box feedback control. We therefore present a model-predictive feedback controller that regulates processor frequency so that the power budget is satisfied without significant loss of performance. In addition to guaranteeing power consumption alone, performance with respect to service-level agreements (SLAs) must be guaranteed as well. The proliferation of virtualization technology imposes new challenges on power management due to resource sharing. It is hard to optimize both power and performance on shared infrastructures due to system dynamics. We propose vPnP, a feedback-control-based coordination approach that provides guarantees on application-level performance and on the power consumption of the underlying physical host in virtualized environments. The system adapts gracefully to workload changes. Preliminary results show its flexibility in achieving different levels of tradeoff between power and performance as well as its robustness over a variety of workloads.
It is desirable to improve the energy efficiency of systems, such as BBUs, hosting soft real-time applications. We propose a power management strategy for controlling delay and minimizing power consumption using DVFS. We use the Robbins-Monro (RM) stochastic approximation method to estimate the delay quantile, and we couple a fuzzy controller with the RM algorithm to scale the CPU frequency so that performance is maintained within the specified QoS.
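    The following sketch shows Robbins-Monro style quantile tracking of the kind the abstract mentions for estimating a delay quantile; the step-size schedule, the 0.95 target, and the synthetic exponential delays are illustrative assumptions, not the authors' controller settings.

```python
# Robbins-Monro style quantile tracking (illustrative only).
import random

def rm_quantile(delays, q=0.95, step=5.0):
    """Track the q-quantile of a delay stream with the stochastic
    approximation update: est += a_n * (q - 1{delay <= est})."""
    est = 0.0
    for n, d in enumerate(delays, start=1):
        a_n = step / n ** 0.6   # decreasing gain: sum a_n diverges, sum a_n^2 converges
        est += a_n * (q - (1.0 if d <= est else 0.0))
    return est

# Synthetic delays: exponential with mean 10 ms, whose true 95th percentile
# is about 30 ms; the estimate should end up close to that value.
random.seed(0)
delays = [random.expovariate(1 / 10.0) for _ in range(20000)]
print(round(rm_quantile(delays), 1))
```

    In a controller setting, the tracked quantile would be compared against the QoS target to decide whether the CPU frequency can be lowered or must be raised.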

    Probabilistic Modeling Approach to Crack Nucleation from Forging Flaws

    The process of crack nucleation from forging flaws is scarcely described in the literature. In this work it was thoroughly investigated through extensive experiments and several modeling approaches. The work is a continuation of the Probabilistic Fracture Mechanics (ProbFM) project, which enables a conservative assessment of the failure risk based on the growth of fatigue cracks from inherent forging flaws in large high-performance gas turbines. The focus of the test series was on characterizing typical forging flaws and quantifying the nucleation life by cyclically loading the material. The nucleation life was defined as the number of load cycles required to produce a sharp crack from an initial flaw. This part of the life cycle is normally ignored in design and conservatively assumed to be zero cycles. From the experiments we know that this is not true and that roughly 50% of the lifetime is often spent in the nucleation process. The main goal of the investigations is the development of an effective modeling approach for quantifying the nucleation life of flaws under varying loading conditions. This approach must then be embedded in an engineering tool used for the design of real components. Three different modeling approaches were investigated, one of which was implemented in ProbFM. In the first approach, the flaw is modeled as an embedded ellipsoid within the matrix material using a finite element method. The strain field around the ellipsoid, resulting from the applied stresses and temperatures, serves as input for the local probabilistic LCF assessment. In this assessment, the probability of crack formation is computed as a surface integral of the local hazard density. The underlying model assumption is that the local LCF life is Weibull distributed. The second and third approaches are based on the assumption of a flaw modeled as a flat surface region. Here, too, the local probabilistic LCF life is computed and corrected, in one case with a life multiplication factor w and, in the other case, with a stress concentration factor Kt. The area-based model with the calibrated stress concentration factor fits the experimental results best and is proposed for implementation. Conservatism was taken into account throughout the analysis to ensure a reliable model suitable for engineering application. The resulting model was demonstrated in a reliability assessment of a real rotor disk design. The obtained failure probabilities are lower when the nucleation life is taken into account, and the service life of the components can potentially be extended.
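    A minimal sketch of the Weibull-based probabilistic LCF idea summarized above, in which the probability of crack formation is obtained from a surface integral of a local hazard density; the mesh, Weibull shape parameter, and reference area below are invented for illustration and are not calibrated values from this work.

```python
# Illustrative Weibull-based probability of crack formation, computed as a
# surface integral (here a sum over mesh patches) of a local hazard density.
# Mesh, shape parameter m, and reference area are assumptions, not calibrated.
import math

def crack_probability(n_cycles, patches, m=2.5, a_ref=1.0):
    """patches: list of (area, n_det) pairs, n_det being the locally computed
    deterministic LCF life of that surface patch.  Assuming the local life is
    Weibull distributed with shape m and scale n_det, survival probabilities
    multiply, so the hazard contributions add up over the surface."""
    integral = sum((area / a_ref) * (n_cycles / n_det) ** m
                   for area, n_det in patches)
    return 1.0 - math.exp(-integral)

# Toy surface mesh: (patch area in mm^2, deterministic LCF life in cycles).
mesh = [(0.5, 20000.0), (0.8, 35000.0), (0.3, 15000.0)]
for n in (5000, 10000, 20000):
    print(n, round(crack_probability(n, mesh), 3))
```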

    Technical Design Report for the PANDA Micro Vertex Detector

    This document describes the technical layout and the expected performance of the Micro Vertex Detector (MVD) of the PANDA experiment. The MVD will detect charged particles as close as possible to the interaction zone. Design criteria, the optimisation process, and the chosen technical solutions are discussed, and the results of this process are subjected to extensive Monte Carlo physics studies. The route towards realisation of the detector is outlined.

    A theoretical and computational basis for CATNETS

    The main content of this report is the identification and definition of market mechanisms for Application Layer Networks (ALNs). On the basis of the structured Market Engineering process, the work comprises the identification of requirements that adequate market mechanisms for ALNs have to fulfill. Subsequently, two mechanisms each for the centralized and the decentralized case are described in this document. These build the theoretical foundation for the work within the following two years of the CATNETS project. Keywords: Grid Computing.

    Coverage Optimization with a Dynamic Network of Drone Relays

    The integration of aerial base stations carried by drones into cellular networks offers promising opportunities to enhance the connectivity enjoyed by ground users. In this paper, we propose an optimization framework for the 3-D placement and repositioning of a fleet of drones with a realistic inter-drone interference model and drone connectivity constraints. We show how to maximize network coverage by means of an extremal-optimization algorithm. The design of our algorithm is based on a mixed-integer non-convex program formulation of a coverage problem that is NP-complete, as we prove in the paper. We not only optimize drone positions in 3-D space in polynomial time, but also assign flight routes by solving an assignment problem and using a powerful geometrical tool, namely Bézier curves, which are extremely useful for non-uniform and realistic topologies. Specifically, we propose to fly drones along Bézier curves to seek opportunities to approach clusters of ground users, which enhances coverage over time as users and drones move. We assess the performance of our proposal on synthetic scenarios as well as realistic maps extracted from the topology of a capital city. We demonstrate that our framework is near-optimal and that using Bézier curves increases coverage by up to 47 percent while drones move.
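    As a small illustration of the Bézier-curve routing idea (not the paper's placement or assignment algorithm), the sketch below evaluates a cubic Bézier flight route between two drone positions, bent toward two hypothetical user-cluster locations used as control points.

```python
# A drone relay flies a smooth curve between two anchor positions, pulled
# toward intermediate control points (e.g., user clusters).  The coordinates
# and sampling step are illustrative assumptions.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1] for 2-D points."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Flight route from the current drone position to its next assigned position,
# bent toward two hypothetical user clusters acting as control points.
start, end = (0.0, 0.0), (1000.0, 400.0)
cluster_a, cluster_b = (300.0, 600.0), (700.0, 100.0)
waypoints = [cubic_bezier(start, cluster_a, cluster_b, end, i / 10) for i in range(11)]
for wp in waypoints:
    print(round(wp[0], 1), round(wp[1], 1))
```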

    Evaluation of Tradeoffs in Resource Management Techniques for Multimedia Storage Servers

    Many modern applications can benefit from sharing of resources such as network bandwidth, disk bandwidth, and so on. In addition, many information systems store (or would like to store) data that can be of use to many different classes of applications, e.g., digital-library-type systems. Part of the difficulty in efficiently managing the resources of such systems arises when these applications have vastly different performance and quality-of-service (QoS) requirements as well as resource demand characteristics. In this work we present a performance study of a multimedia storage system which serves multiple types of workloads, specifically a mixture of real-time and non-real-time workloads, by allowing sharing of resources among these different workloads while satisfying their performance requirements and QoS constraints. The broad aim of this work is to examine the issues and tradeoffs associated with mixing multiple workloads on the same server, to explore the possibility of maintaining reasonable performance and QoS without having to partition the resources. The main contribution of this work is the exposition of the tradeoffs involved in resource management in such systems. Although many different resources can be considered, here we concentrate mostly on the I/O bandwidth resource. The performance metrics of interest are the mean and variance of the response time for the non-real-time applications and the probability of missing a deadline for the real-time applications. The increased use of buffer space is also considered as a tradeoff for improvements in the above performance metrics, i.e., response time and probability of missing deadlines. (Also cross-referenced as UMIACS-TR-98-30.)
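    To make the studied tradeoff concrete, here is a toy simulation (not the paper's server model) in which real-time and best-effort requests share a single disk and real-time requests are served first; the arrival probabilities, service time, and deadline are invented for illustration, and the printed metrics correspond to the deadline-miss probability and the mean best-effort response time discussed above.

```python
# Toy slot-based simulation of one disk shared by real-time (deadline) and
# best-effort requests, with real-time requests given priority.  All numbers
# below are assumptions for illustration only.
import random
from collections import deque

random.seed(1)
SERVICE_MS = 10.0                  # fixed per-request service time (assumed)
DEADLINE_MS = 60.0                 # relative deadline for real-time requests
rt_q, be_q = deque(), deque()      # queues of arrival times
missed = served_rt = 0
be_delays = []

t = 0.0
while t < 100000.0:
    # Bernoulli arrivals per service slot (assumed rates, total load 0.8).
    if random.random() < 0.4:
        rt_q.append(t)
    if random.random() < 0.4:
        be_q.append(t)
    # Serve one request per slot, real-time first.
    if rt_q:
        arr = rt_q.popleft()
        served_rt += 1
        if t + SERVICE_MS - arr > DEADLINE_MS:
            missed += 1
    elif be_q:
        be_delays.append(t + SERVICE_MS - be_q.popleft())
    t += SERVICE_MS

print("deadline miss ratio:", round(missed / served_rt, 3))
print("mean best-effort response time (ms):", round(sum(be_delays) / len(be_delays), 1))
```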