19 research outputs found

    Using Machine Learning to Optimize Web Interactions on Heterogeneous Mobile Systems

    The web has become a ubiquitous application development platform for mobile systems. Yet, web access on mobile devices remains an energy-hungry activity. Prior work in the field mainly focuses on the initial page loading stage, but fails to exploit the opportunities for energy-efficiency optimization while the user is interacting with a loaded page. This paper presents a novel approach for performing energy optimization for interactive mobile web browsing. At the heart of our approach is a set of machine learning models, which estimate at runtime the frames per second for a given user interaction input when running the computation-intensive web render engine on a specific processor core under a given clock speed. We use the learned predictive models as a utility function to quickly search for the optimal processor setting, carefully trading response time for reduced energy consumption. We integrate our techniques into the open-source Chromium browser and apply them to two representative mobile user events: scrolling and pinching (i.e., zooming in and out). We evaluate the developed system on the landing pages of the 100 most popular websites and two big.LITTLE heterogeneous mobile platforms. Our extensive experiments show that the proposed approach reduces the system-wide energy consumption by over 36% on average and by up to 70%. This translates to an over 17% improvement in energy efficiency over a state-of-the-art event-based web browser scheduler, but with significantly fewer quality-of-service violations.
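    The core decision loop described above can be pictured with a short, hypothetical sketch: given a learned FPS predictor and a power estimator (both passed in as functions here, since the paper's actual models are not shown), pick the cheapest core/frequency setting that still meets a responsiveness target. All names and the fallback policy are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): given a trained FPS
# predictor, pick the core/frequency pair with the lowest estimated power
# that still keeps the predicted frame rate above a QoS threshold.
from dataclasses import dataclass

@dataclass
class Setting:
    core: str        # e.g. "big" or "LITTLE"
    freq_mhz: int    # candidate clock frequency

def choose_setting(interaction_features, settings, predict_fps, estimate_power,
                   fps_target=60.0):
    """Return the most energy-efficient setting that meets the FPS target.

    predict_fps(features, setting)  -> predicted frames per second (learned model)
    estimate_power(setting)         -> estimated power draw in watts (hypothetical)
    """
    best, best_power = None, float("inf")
    for s in settings:
        fps = predict_fps(interaction_features, s)
        if fps < fps_target:
            continue                      # would violate responsiveness
        power = estimate_power(s)
        if power < best_power:
            best, best_power = s, power
    # Fall back to the fastest setting if nothing meets the target.
    return best or max(settings, key=lambda s: s.freq_mhz)
```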

    Efficient mobile computing

    Smart handheld devices such as phones, tablets and watches are rapidly becoming commonplace. From a computer architect's point of view, processor design for such computer systems is a complex problem. Since Li-ion battery manufacturing is strictly constrained by physical and technological limitations, the new generation of mobile processors must be more energy efficient to support an acceptable battery life. On the other hand, the thread-level parallelism (TLP) of current mobile applications is measured to be mostly less than 2, which implies that mobile processors should deliver high performance on the user's demand to provide an acceptable quality of experience (QoE). As chip manufacturing process sizes have shrunk, system-on-chip (SoC) design has become the preferred integration approach in such applications. For example, Apple's A10, the iPhone 7 SoC, has more than 3 billion transistors, including four big.LITTLE cores, six GPU cores and caches. Due to the power-density and heat-dissipation constraints of such an integration level, providing high performance on demand in an efficient way is a complex control problem. In the A10 architecture, assigning the right cores at the right time to running threads is a challenging control problem. State-of-the-art systems have control loops for controlling architectural parameters in different ways; in mobile devices, these controllers are mainly heuristics-based for simplicity. Considering the power-density and heat-dissipation issues in such systems, we propose an OS architecture and interface that provide an environment for improving the functionality of controllers in mobile computer systems.
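    As a rough illustration of the kind of heuristic control loop the abstract refers to, the following sketch periodically reads per-thread CPU utilization and maps heavy threads to the big cluster and light threads to the LITTLE cluster. The threshold, the control period and the scheduler hooks are hypothetical placeholders, not part of the proposed OS architecture.

```python
# Illustrative sketch only: a heuristic control loop of the kind the abstract
# refers to, mapping runnable threads onto big/LITTLE cores by recent load.
# The threshold, interval and the scheduler hooks are hypothetical.
import time

BIG_THRESHOLD = 0.6   # fraction of a core a thread must use to earn a big core

def control_loop(read_thread_loads, migrate, interval_s=0.1):
    """read_thread_loads() -> {thread_id: cpu_utilization in [0, 1]}
    migrate(thread_id, cluster) moves a thread to the 'big' or 'little' cluster."""
    while True:
        loads = read_thread_loads()
        for tid, util in loads.items():
            cluster = "big" if util >= BIG_THRESHOLD else "little"
            migrate(tid, cluster)
        time.sleep(interval_s)   # re-evaluate at a fixed control period
```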

    User experience driven CPU frequency scaling on mobile devices towards better energy efficiency

    With the development of modern smartphones, mobile devices have become ubiquitous in our daily lives. With high processing capabilities and a vast number of applications, users now need them for both business and personal tasks. Unfortunately, battery technology has not scaled at the same speed as computational power. Hence, modern smartphone batteries often last for less than a day before they need to be recharged. One of the most power-hungry components is the central processing unit (CPU). Multiple techniques are applied to reduce CPU energy consumption. Among them is dynamic voltage and frequency scaling (DVFS). This technique reduces energy consumption by dynamically changing the CPU supply voltage depending on the currently running workload. Reducing the voltage, however, also makes it necessary to reduce the clock frequency, which can have a significant impact on task performance. Current DVFS algorithms deliver a good user experience; however, as experiments conducted later in this thesis show, they do not deliver optimal energy efficiency for an interactive mobile workload. This thesis presents methods and tools to determine where energy can be saved during mobile workload execution when using DVFS. Furthermore, an improved DVFS technique is developed that achieves a higher energy efficiency than the current standard. One important question when developing a DVFS technique is: how much can you slow down a task to save energy before the negative effect on performance becomes intolerable? The ultimate goal when optimising a mobile system is to provide a high quality of experience (QoE) to the end user. In that context, task slowdowns become intolerable when they have a perceptible effect on QoE. Experiments conducted in this thesis answer this question by identifying workload periods in which performance changes are directly perceptible by the end user and periods where they are imperceptible, namely interaction lags and interaction idle periods. Interaction lags are the time it takes the system to process a user interaction and display a corresponding response. Idle periods are the periods between interactions where the user perceives the system as idle and ready for the next input. By knowing where those periods are and how they are affected by frequency changes, a more energy-efficient DVFS governor can be developed. This thesis begins by introducing a methodology that measures the duration of interaction lags as perceived by the user and uses them as an indicator to benchmark the quality of experience for a workload execution. A representative benchmark workload comprising 190 minutes of interactions collected from real users is generated. In conjunction with this QoE benchmark, a DVFS Oracle study is conducted, which finds a frequency profile for an interactive mobile workload with the maximum energy savings achievable without a perceptible performance impact on the user. The developed Oracle performance profile achieves a QoE that is indistinguishable from always running at the fastest frequency while needing 45% less energy. Furthermore, this Oracle is used as a baseline to evaluate how well current mobile frequency governors are performing. It shows that none of these governors performs particularly well and that up to 32% energy savings are possible. Equipped with a benchmark and an optimisation baseline, a user-perception-aware DVFS technique is developed in the second part of this thesis. Initially, a runtime heuristic is introduced which is able to detect interaction lags as the user would perceive them. Using this heuristic, a reinforcement-learning-driven governor is developed which learns good frequency settings for interaction lag and idle periods based on sample observations. It consumes up to 22% less energy than current standard governors on mobile devices and maintains a low impact on QoE.
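    A minimal sketch of a reinforcement-learning DVFS governor in the spirit of the one described above is given below: epsilon-greedy Q-learning chooses a frequency per period type (interaction lag vs. idle) and is rewarded for saving energy without perceptible lag. The state encoding, reward shaping and measurement hooks are assumptions for illustration, not the thesis's design.

```python
# Hedged sketch of a reinforcement-learning DVFS governor: epsilon-greedy
# Q-learning over discrete frequency levels, with a reward that penalises
# energy use and perceptible interaction lag. All parameters are illustrative.
import random
from collections import defaultdict

class RLGovernor:
    def __init__(self, freqs, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.freqs = freqs                       # available frequency levels (kHz)
        self.q = defaultdict(float)              # Q[(state, freq)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, state):
        if random.random() < self.epsilon:       # explore
            return random.choice(self.freqs)
        return max(self.freqs, key=lambda f: self.q[(state, f)])  # exploit

    def update(self, state, freq, reward, next_state):
        best_next = max(self.q[(next_state, f)] for f in self.freqs)
        td_target = reward + self.gamma * best_next
        self.q[(state, freq)] += self.alpha * (td_target - self.q[(state, freq)])

def reward(energy_j, lag_perceptible):
    # Penalise energy; heavily penalise lags the user would notice.
    return -energy_j - (10.0 if lag_perceptible else 0.0)
```

    In use, the governor would observe whether the current period is an interaction lag or an idle period (the "state"), pick a frequency with select(), measure the energy spent and whether the response became perceptibly slow, and feed that back through update().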

    Impacto das comunicações M2M em redes celulares de telecomunicações (Impact of M2M communications on cellular telecommunications networks)

    Master's thesis in Electronics and Telecommunications Engineering. Machine-to-Machine (M2M) communications are growing significantly, and some projections indicate that this trend will intensify dramatically over the coming years. The traffic generated by this type of communication has very different characteristics from the data or voice traffic currently carried by cellular telecommunications networks. It is therefore essential to study the characteristics of the traffic associated with M2M communications in order to understand the effects those characteristics can have on cellular telecommunications networks. This dissertation seeks to identify and study some of the characteristics of M2M traffic, with particular focus on the signaling generated by M2M services. The main result of this work is the development of models that enable the construction of an analytic tool for service orchestration and network analysis. This tool makes it possible to orchestrate services and model traffic patterns on a UMTS network, while simultaneously analysing the effects produced in the core segment of that network. Throughout this work, the problems under study are approached in ways that keep the results valid, or adaptable, in a broader scope than M2M communications alone.
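    As a back-of-the-envelope illustration of why M2M signaling matters, the sketch below estimates the aggregate signaling rate produced by a population of periodically reporting devices. The per-report message count is a placeholder, not a measured UMTS figure, and this is not the dissertation's analytic tool.

```python
# Rough illustrative sketch (not the dissertation's analytic tool): estimate the
# aggregate signaling load that a population of periodic-reporting M2M devices
# places on the network core. The per-report signaling message count is a
# placeholder parameter, not a measured UMTS value.
def signaling_rate(num_devices, report_interval_s, msgs_per_report=10):
    """Signaling messages per second generated by the whole device population."""
    reports_per_second = num_devices / report_interval_s
    return reports_per_second * msgs_per_report

# Example: 100,000 smart meters reporting every 15 minutes.
if __name__ == "__main__":
    rate = signaling_rate(100_000, 15 * 60)
    print(f"~{rate:.0f} signaling messages/s")   # ~1111 msgs/s with the defaults
```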

    Energy Awareness for Multiple Network Interface-Activated Smartphone

    Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2017 (advisor: 최성현). State-of-the-art smartphones can utilize multiple network interfaces simultaneously, e.g., LTE and Wi-Fi, to enhance throughput and network connectivity in various use cases. In this situation, the energy consumption of a smartphone can increase when both LTE and Wi-Fi interfaces are used simultaneously. Since energy consumption is an important issue for smartphones powered by batteries with limited capacity, the trade-off between increased energy and enhanced performance must be considered. By judiciously utilizing both the LTE and Wi-Fi interfaces of a smartphone together with energy-awareness techniques, it becomes possible to optimize the performance of smartphone applications while saving battery energy. In this dissertation, we consider the following three strategies to enable energy awareness for smartphones that utilize both LTE and Wi-Fi links: (i) power modeling for a smartphone that utilizes both LTE and Wi-Fi links simultaneously, (ii) real-time battery drain rate estimation for smartphones, and (iii) optimizing the performance of dynamic adaptive streaming over HTTP (DASH)-based video streaming for smartphones. First, an accurate power model is presented for smartphones, especially those capable of activating and utilizing multiple networks simultaneously. By decomposing the packet processing power and the power consumed by the network interfaces, we construct an accurate power model for cases where multiple network interfaces are activated. The accuracy of our model is evaluated by comparing the estimated power with the measured power in various scenarios. We find that our model reduces the estimation error by 7%–35% even for single-network transmissions, and by 25% for multiple-network transmissions, compared with existing power models. Second, in order to enable real-time energy awareness for smartphones, we develop a battery drain rate monitoring technique that considers the characteristics of the Li-ion batteries used by smartphones. With a Li-ion battery, the drain rate varies with temperature and battery aging, since they affect battery characteristics such as capacity and internal resistance. Because it is difficult to model these battery characteristics, we develop BattTracker, an algorithm that estimates the battery drain rate without knowing the exact capacity and internal resistance by incorporating the concept of effective resistance. BattTracker tracks the instantaneous battery drain rate with up to 0.5-second time granularity. Extensive evaluation with smartphones demonstrates that BattTracker estimates the battery drain rate with less than 5% estimation error, thus enabling energy-aware operation of smartphones with fine-grained time granularity. Finally, we apply energy awareness and utilize both LTE and Wi-Fi links for a DASH-based video streaming application. Exploiting both LTE and Wi-Fi links simultaneously enhances the performance of DASH-based video streaming in various aspects. However, it is challenging to achieve seamless, high-quality video while saving battery energy and LTE data usage to prolong the usage time of a smartphone. Thus, we propose REQUEST, a video chunk request policy for DASH on a smartphone, which can utilize both LTE and Wi-Fi to enhance users' Quality of Experience (QoE). REQUEST enables seamless DASH video streaming with near-optimal video quality under given budgets of battery energy and LTE data usage. Through extensive simulation and measurement in a real environment, we demonstrate that REQUEST significantly outperforms other existing schemes in terms of average video bitrate, rebuffering, and resource waste. In summary, we propose a power modeling methodology, a real-time battery drain rate estimation method, and performance optimization of DASH-based video streaming for a smartphone that utilizes both LTE and Wi-Fi simultaneously. Through this research, we propose several energy-aware techniques for smartphones that utilize both LTE and Wi-Fi, based on prototype implementations and real measurements with experimental equipment. The performance of the proposed methods is validated by implementing them on off-the-shelf smartphones and evaluating them in real environments.
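    The effective-resistance idea behind BattTracker can be illustrated with a simplified sketch (not BattTracker's actual estimator): infer an effective internal resistance from the voltage sag under load and derive the instantaneous power drawn from the cell, from which a drain rate can be tracked. The example voltage and current readings are hypothetical.

```python
# Simplified illustration of the effective-resistance idea behind BattTracker,
# not its actual estimator: infer an effective internal resistance from the
# voltage sag under load, then derive the instantaneous power drawn from the cell.
def effective_resistance(v_open_circuit, v_terminal, current_a):
    """re ≈ (Voc - Vbatt) / I  (valid while the battery is discharging, I > 0)."""
    return (v_open_circuit - v_terminal) / current_a

def battery_power_draw(v_open_circuit, v_terminal, current_a):
    """Total power taken from the cell: the load's share plus the I^2 * re loss."""
    re = effective_resistance(v_open_circuit, v_terminal, current_a)
    return v_terminal * current_a + (current_a ** 2) * re   # equals Voc * I

# Example readings from the battery fuel-gauge interface (hypothetical values).
if __name__ == "__main__":
    print(f"{battery_power_draw(4.05, 3.90, 1.2):.2f} W")   # ~4.86 W
```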

    Device characteristics-based differentiated energy-efficient adaptive solution for multimedia delivery over heterogeneous wireless networks

    Energy efficiency is a key issue of the highest importance to mobile wireless device users, as those devices are powered by batteries with limited power capacity. It is therefore of great interest to provide device-differentiated, user-centric, energy-efficient multimedia content delivery based on the current application type, energy-oriented device features and user preferences. This thesis presents the following research contributions in the area of energy-efficient multimedia delivery over heterogeneous wireless networks: 1. ASP: energy-oriented Application-based System Profiling for mobile devices. This profiling provides services to the other contributions in this thesis. By monitoring the running applications and the corresponding power demand on the smart mobile device, a device energy model is obtained. The model is used in conjunction with applications' power signatures to derive the device energy constraints posed by the running applications. 2. AWERA 3. DEAS: a Device characteristics-based differentiated Energy-efficient Adaptive Solution for video delivery over heterogeneous wireless networks. Based on the energy constraint, DEAS performs energy-efficient content delivery adaptation for the current application. Unlike existing solutions, DEAS takes all the applications running on the system into account and better balances QoS and energy efficiency. 4. EDCAM 5. A comprehensive survey of state-of-the-art energy-efficient network protocols and energy-saving network technologies.
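    A hedged sketch of the kind of decision DEAS makes is shown below: choose the highest video quality whose estimated delivery power fits within the energy budget left by an ASP-style profile. The quality levels and power figures are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of the kind of decision DEAS makes (names and numbers are
# illustrative, not from the thesis): pick the highest video quality whose
# estimated delivery power stays within the energy budget that the ASP-style
# profiler leaves for the streaming application.
QUALITY_POWER_W = {        # hypothetical per-quality power cost (network + decode)
    "1080p": 2.4,
    "720p": 1.8,
    "480p": 1.3,
    "360p": 1.0,
}

def select_quality(power_budget_w, quality_power=QUALITY_POWER_W):
    """Return the best quality that fits the budget, or the cheapest one otherwise."""
    affordable = [q for q, p in quality_power.items() if p <= power_budget_w]
    if affordable:
        return max(affordable, key=lambda q: quality_power[q])
    return min(quality_power, key=quality_power.get)

print(select_quality(2.0))   # -> "720p" with the illustrative numbers above
```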

    Energy efficient heterogeneous virtualized data centers

    My thesis is about increasing the energy efficiency of data centers through management software. It has been estimated that data centers world-wide already consume 1-2% of the globally provided electrical energy, with a strongly rising trend. Furthermore, a typical server causes higher electricity costs over a 3-year lifespan than its purchase cost. Hence, increasing the energy efficiency of all components found in a data center is of high ecological as well as economic importance. The focus of my thesis is on increasing the efficiency of the servers in a data center. The vast majority of servers in data centers are underutilized for a significant amount of time; operating regions of 10-20% utilization are common. Still, these servers consume huge amounts of energy. A lot of effort has been put into the area of Green Data Centers during the last years, e.g., regarding cooling efficiency. Nevertheless, there are still many open issues, e.g., operating a virtualized, heterogeneous business infrastructure with the minimum possible power consumption, under the constraint that Quality of Service, and in consequence revenue, is not severely decreased. The majority of existing work deals with homogeneous cluster infrastructures, whose operating conditions are hardly comparable to those of business infrastructures, where reduced electricity costs generally must not be cancelled out by lost revenue. In particular, an automatic trade-off between competing cost categories, with energy costs being just one of them, is insufficiently studied. In my thesis, I investigate and evaluate mathematical models and algorithms for increasing the energy efficiency of the servers in a data center. The amount of online, power-consuming resources should at all times be close to the amount of actually required resources. If the workload intensity is decreasing, the infrastructure is consolidated by shutting down servers. If the intensity is rising, the infrastructure is scaled up by waking up servers. Ideally, this happens pro-actively by making forecasts about the workload development. In both cases, the workload, encapsulated in VMs, is live-migrated to other servers. The problem of mapping VMs to physical servers in a way that minimizes power consumption but does not lead to severe Quality of Service violations is a multi-objective combinatorial optimization problem. It has to be solved frequently, as the VMs' resource demands are usually dynamic. Further, servers are not homogeneous regarding their performance and power consumption. Due to the computational complexity, exact solutions are practically intractable. A greedy heuristic stemming from the related problem of vector packing and a meta-heuristic genetic algorithm are investigated and evaluated. A configurable cost model is created in order to trade off energy cost savings against QoS violations. The baseline for comparison is load balancing. Additionally, the forecasting methods SARIMA and Holt-Winters are evaluated. Further, models able to predict the negative impact of live migration on QoS are developed, and approaches to decrease this impact are investigated. Finally, an examination is carried out regarding the possible consequences of collecting and storing the energy consumption data of servers for security and privacy.
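    The vector-packing heuristic mentioned above can be sketched as a first-fit-decreasing placement over heterogeneous servers; the actual algorithm, cost model and SLA constraints in the thesis are considerably richer, and the field names below are illustrative.

```python
# Minimal sketch of a first-fit-decreasing vector-packing heuristic of the kind
# evaluated in the thesis (the real algorithm, cost model and constraints are
# more involved). Servers are heterogeneous; we try the most energy-efficient
# ones first so that workload consolidates onto as few cheap servers as possible.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    cpu: float            # capacity
    mem: float
    idle_power_w: float   # used to rank servers by efficiency
    used_cpu: float = 0.0
    used_mem: float = 0.0
    vms: list = field(default_factory=list)

    def fits(self, cpu, mem):
        return self.used_cpu + cpu <= self.cpu and self.used_mem + mem <= self.mem

def place_vms(vms, servers):
    """vms: list of (name, cpu, mem). Returns the servers that ended up hosting VMs."""
    # Largest VMs first (by combined demand), most efficient servers first.
    vms = sorted(vms, key=lambda v: v[1] + v[2], reverse=True)
    servers = sorted(servers, key=lambda s: s.idle_power_w)
    for name, cpu, mem in vms:
        target = next((s for s in servers if s.fits(cpu, mem)), None)
        if target is None:
            raise RuntimeError(f"no server can host {name}")
        target.used_cpu += cpu
        target.used_mem += mem
        target.vms.append(name)
    return [s for s in servers if s.vms]
```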