
    A survey on energy efficiency in information systems

    Concerns about energy and sustainability are growing every day and involve a wide range of fields. Information Systems (ISs) are also affected by the need to reduce pollution and energy consumption, and new fields dealing with this topic are emerging. One of them is Green Information Technology (IT), which addresses energy efficiency with a focus on IT. Researchers have approached this problem from several points of view. The purpose of this paper is to understand the trends and future development of Green IT by analyzing the state of the art and classifying existing approaches, in order to identify which components have an impact on energy efficiency in ISs and how this impact can be reduced. First, we explore guidelines that help to assess the efficiency level of an organization and of an IS. Then, we discuss the measurement and estimation of energy efficiency, identify the components that contribute most to energy waste, and show how energy efficiency can be improved at both the hardware and the software level.

    Energy efficient heterogeneous virtualized data centers

    My thesis is about increasing the energy efficiency of data centers by means of management software. It has been estimated that data centers already consume 1-2% of the electrical energy provided world-wide, with a strongly rising tendency. Furthermore, a typical server causes higher electricity costs over a 3-year lifespan than its purchase cost. Hence, increasing the energy efficiency of all components found in a data center is of high ecological as well as economic importance. The focus of my thesis is to increase the efficiency of the servers in a data center. The vast majority of servers in data centers are underutilized for a significant amount of time; operating regions of 10-20% utilization are common. Still, these servers consume huge amounts of energy. A lot of effort has been invested in Green Data Centers during the last years, e.g., regarding cooling efficiency. Nevertheless, many open issues remain, e.g., operating a virtualized, heterogeneous business infrastructure with the minimum possible power consumption, under the constraint that Quality of Service (QoS), and in consequence revenue, is not severely decreased. The majority of existing work deals with homogeneous cluster infrastructures, whose operating conditions are hardly comparable to those of business infrastructures, where reduced energy costs generally must not be bought with revenue losses. In particular, an automatic trade-off between competing cost categories, with energy cost being just one of them, is insufficiently studied.
In my thesis, I investigate and evaluate mathematical models and algorithms for increasing the energy efficiency of servers in a data center. The amount of online, power-consuming hardware should at all times be close to the amount of resources actually required by the current workload. If the workload intensity decreases, the infrastructure is consolidated by shutting down servers; if it rises, the infrastructure is scaled by waking up additional servers. Ideally, this happens pro-actively, based on forecasts of the workload development. The workload, encapsulated in VMs, is moved between servers via live migration. The problem of mapping VMs to physical servers so that total power consumption is minimized while certain side constraints (e.g., SLAs) are not violated is a multi-objective combinatorial optimization problem. It has to be solved frequently, as the VMs' resource demands are usually dynamic. Further, servers are not homogeneous regarding their performance and power consumption. Due to the computational complexity, exact solutions are practically intractable.
A greedy heuristic adapted from a related problem class (vector packing) and a meta-heuristic genetic algorithm are investigated and evaluated. A configurable cost model is created in order to trade off energy cost savings against QoS violations. The baseline for comparison is load balancing. Additionally, the forecasting methods SARIMA and Holt-Winters are evaluated. Further, models that predict the negative impact of live migration on QoS are developed, and approaches to decrease this impact are investigated. Finally, the possible consequences of collecting and storing energy consumption data of servers for security and privacy are examined.
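The consolidation step described above can be illustrated with a minimal sketch, assuming a first-fit-decreasing variant of the vector-packing heuristic (the function names, the sorting key, and the two-dimensional CPU/RAM demand model are illustrative assumptions, not the thesis's exact algorithm): VMs are packed onto as few servers as possible so that the remaining servers can be shut down.

```python
def place_vms(vms, servers):
    """First-fit-decreasing packing of VMs onto heterogeneous servers.

    vms     -- list of (cpu, ram) demand tuples
    servers -- list of (cpu, ram) capacity tuples (heterogeneous)
    Returns a dict mapping VM index -> server index,
    or None if some VM fits nowhere.
    """
    # Remaining capacity per server.
    free = [list(cap) for cap in servers]
    # Pack the "largest" VMs first: sort by total demand, descending.
    order = sorted(range(len(vms)), key=lambda i: sum(vms[i]), reverse=True)
    mapping = {}
    for i in order:
        cpu, ram = vms[i]
        for s, (free_cpu, free_ram) in enumerate(free):
            if cpu <= free_cpu and ram <= free_ram:
                free[s][0] -= cpu
                free[s][1] -= ram
                mapping[i] = s
                break
        else:
            return None  # infeasible under this greedy heuristic
    return mapping
```

In a toy instance such as `place_vms([(2, 4), (1, 2), (4, 8), (1, 1)], [(8, 16), (4, 8)])`, all four VMs fit on the first server, so the second server could be powered down.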

    Enabling and Understanding Failure of Engineering Structures Using the Technique of Cohesive Elements

    In this paper, we describe a cohesive zone model for the prediction of failure of engineering solids and/or structures. A damage evolution law is incorporated into a three-dimensional, exponential cohesive law to account for material degradation under the influence of cyclic loading. This cohesive zone model is implemented in the finite element software ABAQUS through a user-defined subroutine. The irreversibility of the cohesive zone model is first verified and subsequently applied to studying cyclic crack growth in specimens experiencing different modes of fracture and/or failure. The crack growth behavior, including both crack initiation and crack propagation, emerges as a natural outcome of the numerical simulation. Numerical examples suggest that the irreversible cohesive zone model can serve as an efficient tool to predict fatigue crack growth. Key issues such as crack path deviation, convergence and mesh dependency are also discussed.
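The irreversibility idea can be sketched in a few lines, assuming a standard exponential traction-separation law with linear secant unloading (the parameter values and function names are illustrative assumptions, not the paper's ABAQUS subroutine): loading follows the exponential envelope, while unloading and reloading below the largest separation reached so far follow a straight line back to the origin, so dissipated work is never recovered.

```python
import math

SIGMA_MAX = 100.0   # peak cohesive traction (MPa), assumed value
DELTA_0 = 0.01      # separation at peak traction (mm), assumed value

def exponential_traction(delta):
    """Monotonic exponential cohesive law; peaks at SIGMA_MAX when delta == DELTA_0."""
    return SIGMA_MAX * math.e * (delta / DELTA_0) * math.exp(-delta / DELTA_0)

def traction(delta, delta_max):
    """Traction with irreversible unloading.

    delta_max is the largest separation seen so far -- the history (damage)
    variable that would be stored at each integration point.
    """
    if delta >= delta_max:
        return exponential_traction(delta)  # loading along the envelope
    # Unloading/reloading: linear secant path back to the origin.
    return exponential_traction(delta_max) * delta / delta_max
```

Because the secant stiffness decreases as `delta_max` grows, repeated load cycles return progressively less traction at the same separation, which is the mechanism that lets the model accumulate fatigue damage.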

    Banking theory based distributed resource management and scheduling for hybrid cloud computing

    Cloud computing is a computing model in which the network offers dynamically scalable services based on virtualized resources. The resources in a cloud environment are heterogeneous and geographically distributed, and the user does not need to know how to manage the infrastructure that supports the cloud. From the viewpoint of cloud computing, all hardware, software and networks are resources, and all of them are dynamically scalable on demand. The cloud can offer a complete service to the user even when the underlying resources are geographically distributed, and the user pays only for what they use (pay-per-use). Meanwhile, the transaction environment decides how resource usage and cost are managed, because all transactions have to follow the rules of the market. How to manage and schedule resources effectively therefore becomes a very important part of cloud computing, and how to set up a new framework that offers a reliable, safe and executable service is a central issue. The approach herein is a new contribution to cloud computing: it proposes a hybrid cloud computing model based on banking theory to manage transactions among all participants in the hybrid cloud environment, together with a "Cloud Bank" framework to support all the related issues. The contributions are as follows: 1. The thesis presents an Optimal Deposit-loan Ratio Theory to adjust the pricing between resource provider and resource consumer, realizing both benefit maximization and cloud service optimization for all participants. 2. It offers a new pricing schema, using a Centralized Synchronous Algorithm and a Distributed Price Adjustment Algorithm to control all lifecycles and price all resources dynamically. 3. Commercial banks normally use four factors to mitigate and predict risk: Probability of Default, Loss Given Default, Exposure at Default and Maturity. This thesis applies the Probability of Default model of credit risk to forecast the safe supply of a resource; a logistic regression model is used to control certain factors in resource allocation, and multivariate statistical analysis is used to predict risk. 4. The Cloud Bank model applies an improved Pareto Optimality Algorithm to build its own scheduling system. 5. To achieve the above, the thesis proposes a new QoS-based SLA-CBSAL to describe all physical resources and the processing of threads. To support the related algorithms and theories, the thesis uses the CloudSim simulation toolkit to produce test results for some of the Cloud Bank management strategies and algorithms. The experiments show that the Cloud Bank model is a possible new solution for hybrid cloud computing. As future work, the author will focus on building a real hybrid cloud, simulating actual user behaviour in a real environment, and continuing to improve the feasibility and effectiveness of the project. For risk mitigation and prediction, risks can be divided into four categories: credit risk, liquidity risk, operational risk, and other risks. Although this thesis covers credit risk and liquidity risk, operational and other risks also exist in a real trading environment; only by extending the analysis and strategy to all risk types can the Cloud Bank be considered relatively complete.
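The Probability-of-Default step can be sketched as follows, assuming a plain logistic regression over a few observable provider features (the feature set, coefficients, and threshold below are illustrative assumptions, not values from the thesis): the predicted default probability gates whether the Cloud Bank treats a resource as safe to "lend".

```python
import math

# Illustrative, hand-picked coefficients for: failure_rate, load, uptime_years.
WEIGHTS = [3.0, 1.5, -2.0]
BIAS = -1.0

def probability_of_default(features):
    """Logistic regression: PD = sigmoid(w . x + b)."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

def safe_to_allocate(features, threshold=0.2):
    """Allocate the resource only if its predicted PD is below the threshold."""
    return probability_of_default(features) < threshold

reliable = [0.01, 0.3, 3.0]   # low failure rate, moderate load, long uptime
flaky = [0.40, 0.9, 0.2]      # high failure rate, heavy load, short uptime
```

With these assumed coefficients, the reliable provider scores a low PD and passes the gate, while the flaky one is rejected; in a real deployment the weights would of course be fitted to historical default data.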

    An Energy-Efficient and Reliable Data Transmission Scheme for Transmitter-based Energy Harvesting Networks

    Energy harvesting technology has been studied to overcome the limited power resources of sensor networks. This paper proposes a new data transmission period control and reliable data transmission algorithm for energy-harvesting-based sensor networks. Although previous studies have proposed communication protocols for such networks, further discussion is still needed. The proposed algorithm dynamically controls the data transmission period and the number of data transmissions based on environmental information. Through this, energy consumption is reduced and transmission reliability is improved. The simulation results show that the proposed algorithm is more efficient than the previous energy-harvesting-based communication standard, EnOcean, in terms of transmission success rate and residual energy. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012R1A1A3012227).
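The period-control idea can be sketched as a simple feedback rule, under assumed thresholds and scaling factors (the paper does not publish its exact control law here): the node shortens its transmission period when the energy balance is positive and lengthens it when residual energy runs low, trading reporting latency for sustainability.

```python
MIN_PERIOD = 1.0    # seconds, assumed bounds
MAX_PERIOD = 60.0

def next_period(current_period, residual_energy, harvest_rate, consume_rate):
    """Adapt the transmission period to the node's energy balance.

    residual_energy is normalized to [0, 1]; harvest/consume rates share a unit.
    """
    if harvest_rate >= consume_rate and residual_energy > 0.5:
        period = current_period / 2.0     # energy surplus: report more often
    elif residual_energy < 0.2:
        period = current_period * 2.0     # near depletion: back off sharply
    else:
        period = current_period * 1.25    # mild deficit: back off gently
    return min(max(period, MIN_PERIOD), MAX_PERIOD)
```

The multiplicative increase/decrease keeps the controller responsive without oscillating wildly, and the clamp keeps the period inside protocol-acceptable bounds.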

    Analysis Of Aircraft Arrival Delay And Airport On-time Performance

    While existing grid environments cater to the specific needs of particular user communities, we need to go beyond them and consider general-purpose large-scale distributed systems: large collections of heterogeneous computers and communication systems shared by a large user population with very diverse requirements. Coordination, matchmaking, and resource allocation are among the essential functions of such systems. Although deterministic approaches to coordination, matchmaking, and resource allocation have been well studied, they are not suitable for large-scale distributed systems because of the scale, autonomy, and dynamics of these systems, so we have to seek nondeterministic solutions. In this dissertation we describe our work on a coordination service, a matchmaking service, and a macro-economic resource allocation model for large-scale distributed systems. The coordination service coordinates the execution of complex tasks in a dynamic environment, the matchmaking service supports finding the appropriate resources for users, and the macro-economic resource allocation model allows a broker to mediate between resource providers, who want to maximize their revenues, and resource consumers, who want to get the best resources at the lowest possible price, subject to global objectives, e.g., maximizing the resource utilization of the system.
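The broker's price mediation can be illustrated with a minimal sketch, assuming a textbook tatonnement loop over linear supply and demand curves (the curves, step size, and function names are illustrative assumptions, not the dissertation's model): the broker raises the price while demand exceeds supply and lowers it otherwise, converging toward a market-clearing price.

```python
def demand(price):
    """Consumers request less as the price rises (assumed linear curve)."""
    return max(0.0, 100.0 - 2.0 * price)

def supply(price):
    """Providers offer more as the price rises (assumed linear curve)."""
    return 3.0 * price

def clearing_price(price=1.0, step=0.05, iterations=1000):
    """Adjust the price proportionally to excess demand (tatonnement)."""
    for _ in range(iterations):
        excess = demand(price) - supply(price)
        price = max(0.0, price + step * excess)
    return price
```

For these curves the process converges to the equilibrium price 20, where demand and supply both equal 60; a real broker would observe excess demand from actual bids rather than closed-form curves.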

    Future of Wireless Data Communication

    This thesis develops four scenarios illustrating the future of wireless data communication.