Using Ant Colony Optimization on the Quadratic Assignment Problem to Achieve Low Energy Cost in Geo-distributed Data Centers
There are many problems associated with operating a data center, including data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and rising energy costs. Energy cost differs by location and, at most locations, fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy costs, data centers can offer longer-lasting servers and equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Methods that data centers have used to reduce energy costs include dynamically switching servers on and off based on the number of users and predefined conditions, the use of environmental monitoring sensors, and dynamic voltage and frequency scaling (DVFS), which lets processors run at different combinations of frequency and voltage. This thesis presents another method by which energy cost at data centers can be reduced: applying Ant Colony Optimization (ACO) to a Quadratic Assignment Problem (QAP) formulation of assigning user requests to servers in geo-distributed data centers. In this approach, the front portals that handle users' requests act as ants that search for cost-effective ways to assign those requests to servers in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and its performance improves as the input size grows.
In a simulation with 3 geo-distributed data centers and user resource requests ranging from 25,000 to 25,000,000, the ACO algorithm reduced energy cost by an average of $0.70 per second. The ACO for Optimal Server Activation and Task Placement algorithm has proven to work as an alternative or improvement for reducing energy cost in geo-distributed data centers.
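The assignment idea in this abstract can be sketched as a toy ACO loop. All numbers (prices, capacities, request count) and names below are illustrative assumptions, not values from the thesis: ants build request-to-data-center assignments guided by pheromone trails and an energy-price heuristic, and the cheapest assignment found so far reinforces the trails.

```python
import random

# Hypothetical per-request energy prices at three geo-distributed data
# centers; all figures here are illustrative, not from the thesis.
PRICES = [0.12, 0.08, 0.15]
CAPACITY = [40, 40, 40]      # maximum requests each site can absorb
N_REQUESTS = 60              # unit-sized user requests to place

def assignment_cost(assign):
    """Total energy cost of an assignment (request index -> data center)."""
    return sum(PRICES[dc] for dc in assign)

def aco_assign(n_ants=20, n_iters=50, rho=0.1, seed=1):
    """Ant Colony Optimization over the request-to-server assignment."""
    rng = random.Random(seed)
    n_dc = len(PRICES)
    # pheromone[r][d]: learned desirability of placing request r on center d
    tau = [[1.0] * n_dc for _ in range(N_REQUESTS)]
    eta = [1.0 / p for p in PRICES]   # static heuristic: prefer cheap energy
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            load = [0] * n_dc
            assign = []
            for r in range(N_REQUESTS):
                # zero out full sites so capacity is always respected
                w = [tau[r][d] * eta[d] if load[d] < CAPACITY[d] else 0.0
                     for d in range(n_dc)]
                d = rng.choices(range(n_dc), weights=w)[0]
                load[d] += 1
                assign.append(d)
            cost = assignment_cost(assign)
            if cost < best_cost:
                best, best_cost = assign, cost
        # evaporate all trails, then reinforce the best assignment so far
        for r in range(N_REQUESTS):
            for d in range(n_dc):
                tau[r][d] *= (1.0 - rho)
            tau[r][best[r]] += 1.0 / best_cost
    return best, best_cost
```

With these toy numbers the optimum fills the cheapest site to capacity (40 requests at 0.08) and places the rest at the next-cheapest site, and the loop converges toward that.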
Hybrid Computing for Interactive Datacenter Applications
Field-Programmable Gate Arrays (FPGAs) are more energy efficient and cost
effective than CPUs for a wide variety of datacenter applications. Yet, for
latency-sensitive and bursty workloads, this advantage can be difficult to
harness due to high FPGA spin-up costs. We propose that a hybrid FPGA and CPU
computing framework can harness the energy efficiency benefits of FPGAs for
such workloads at reasonable cost. Our key insight is to use FPGAs for
stable-state workload and CPUs for short-term workload bursts. Using this
insight, we design Spork, a lightweight hybrid scheduler that can realize these
energy efficiency and cost benefits in practice. Depending on the desired
objective, Spork can trade off energy efficiency for cost reduction and vice
versa. It is parameterized with key differences between FPGAs and CPUs in terms
of power draw, performance, cost, and spin-up latency. We vary this parameter
space and analyze various application and worker configurations on production
and synthetic traces. Our evaluation of cloud workloads shows that
energy-optimized Spork is not only more energy efficient but it is also cheaper
than homogeneous platforms--for short application requests with tight
deadlines, it is 1.53x more energy efficient and 2.14x cheaper than using only
FPGAs. Relative to an idealized version of an existing cost-optimized hybrid
scheduler, energy-optimized Spork provides 1.2-2.4x higher energy efficiency at
comparable cost, while cost-optimized Spork provides 1.1-2x higher energy
efficiency at 1.06-1.2x lower cost.
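The key insight (FPGAs for the stable baseline, CPUs for bursts) can be illustrated with a minimal dispatcher. The capacity and per-request energy figures below are invented for illustration and are not Spork's actual parameters:

```python
# Toy version of the hybrid split: a provisioned FPGA pool serves the stable
# baseline load; on-demand CPUs absorb whatever bursts above it.
FPGA_CAPACITY_RPS = 100   # assumed requests/sec the FPGA pool can sustain
FPGA_J_PER_REQ = 0.5      # assumed energy per request on FPGA
CPU_J_PER_REQ = 2.0       # assumed energy per request on CPU

def dispatch(load_rps):
    """Split instantaneous load between the FPGA pool and burst CPUs."""
    fpga_share = min(load_rps, FPGA_CAPACITY_RPS)
    return fpga_share, load_rps - fpga_share

def power_watts(load_rps):
    """Estimated power draw (J/s) of the hybrid split at a given load."""
    fpga, cpu = dispatch(load_rps)
    return fpga * FPGA_J_PER_REQ + cpu * CPU_J_PER_REQ
```

A real scheduler would additionally account for spin-up latency when deciding how large a stable FPGA pool to keep warm, which is the trade-off Spork parameterizes.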
Cutting the Electric Bill for Internet-Scale Systems
Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices for twenty-nine locations in the US and network traffic data collected on Akamai's CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs by being cognizant of locational differences in computation cost.
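The economic argument can be made concrete with a hedged sketch: send load to the cheapest-electricity sites first, subject to capacity. The site names, prices, and capacities below are made up for illustration:

```python
def route(load, prices, capacity):
    """Greedy price-aware placement: fill the cheapest sites first,
    respecting per-site capacity. prices are per unit of load."""
    plan = {}
    for site in sorted(prices, key=prices.get):
        take = min(load, capacity[site])
        if take > 0:
            plan[site] = take
            load -= take
    if load > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

def electricity_cost(plan, prices):
    """Total electricity cost of a routing plan."""
    return sum(units * prices[site] for site, units in plan.items())
```

A production system would also weigh bandwidth costs and client latency before shifting load, which is why the paper frames this as a trade-off rather than pure cost minimization.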
Energy-Aware Algorithms for Greening Internet-Scale Distributed Systems Using Renewables
Internet-scale Distributed Systems (IDSs) are large distributed systems that are comprised of hundreds of thousands of servers located in hundreds of data centers around the world. A canonical example of an IDS is a content delivery network (CDN) that delivers content to users from a large global deployment of servers around the world. IDSs consume large amounts of energy and their energy requirements are projected to increase significantly in the future. With carbon emissions from data centers increasing every year, use of renewables to power data centers is critical for the sustainability of data centers and for the environment.
In this thesis we design energy-aware algorithms that leverage renewable sources of energy and study their potential to reduce brown energy consumption in IDSs. Firstly, we study the use of renewable solar energy to power IDS data centers. A net-zero IDS produces as much energy from renewables (green energy) as it needs to entirely off-set its energy consumption. We develop effective algorithms to help minimize the number of solar panels provisioned for net-zero IDSs. We empirically evaluate our algorithms using load traces from Akamai's global CDN and solar data from PVWatts. Our results show that for net-zero year, net-zero month, and net-zero week, our optimal algorithm can reduce the number of panels by 36%, 68%, and 82% respectively, thereby making sustainability of IDSs significantly more achievable.
IDSs consume a significant amount of energy for cooling their infrastructure. Therefore, next, we study the potential benefits of using open air cooling (OAC) to reduce the energy usage as well as the capital costs incurred by an IDS for cooling. We develop an algorithm to incorporate OAC into the IDS architecture and empirically evaluate its efficacy using extensive workload traces from Akamai's global CDN and global weather data from NOAA. Our results show that by using OAC, a global IDS can achieve a 51% cooling energy reduction during summers and a 92% reduction in the winter.
Finally, we study the greening potential of combining two contrasting sources of renewable energy, namely solar energy and open air cooling (OAC). OAC involves the use of outside air to cool data centers if the weather outside is sufficiently cold and dry. Therefore OAC is likely to be abundant in colder weather and at night-time. In contrast, solar energy generation is correlated with sunny weather and day-time. Given their contrasting natures, we study whether synthesizing these two renewable sources of energy can yield complementary benefits. Given the intermittent nature of renewable energy, we use energy storage and load shifting to facilitate the use of green energy and study trade-offs in brown energy reduction based on key parameters like battery size, number of solar panels, and radius of load movement. We do a detailed cost analysis, including amortized cost savings as well as a break-even analysis for different energy prices. Our results show that we can significantly reduce brown energy consumption, by about 55% to 59%, just by combining the two technologies. We can increase our savings further to between 60% and 65% by adding load movement within a radius of 5,000 km, and to between 73% and 89% by adding energy storage.
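The solar-plus-storage accounting described above can be sketched hour by hour. The demand and solar profiles below are hypothetical, and the model deliberately ignores OAC, load shifting, and battery losses for brevity:

```python
def brown_energy(demand, solar, battery_cap):
    """Hour-by-hour greedy dispatch: serve demand from solar first, then the
    battery, then the grid; surplus solar charges the battery.
    Returns total brown (grid) energy consumed over the horizon."""
    soc, brown = 0.0, 0.0           # battery state of charge, grid draw
    for d, s in zip(demand, solar):
        if s >= d:
            soc = min(battery_cap, soc + (s - d))   # charge with surplus
        else:
            need = d - s
            used = min(soc, need)                    # drain the battery
            soc -= used
            brown += need - used                     # remainder from the grid
    return brown
```

For example, with demand `[5, 5, 5, 5]`, solar `[8, 8, 0, 0]`, and a battery of capacity 4, the daytime surplus fills the battery and the night draws 6 units of brown energy instead of 10.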
Competitive Online Peak-Demand Minimization Using Energy Storage
We study the problem of online peak-demand minimization under energy storage
constraints. It is motivated by an increasingly popular scenario where
large-load customers utilize energy storage to reduce the peak procurement from
the grid, which can account for a substantial share of their electric bills. The problem
is uniquely challenging due to (i) the coupling of online decisions across time
imposed by the inventory constraints and (ii) the noncumulative nature of the
peak procurement. In this paper, we develop an optimal online algorithm for the
problem, attaining the best possible competitive ratio (CR) among all
deterministic and randomized algorithms. We show that the optimal CR can be
computed in polynomial time, by solving a linear number of linear-fractional
problems. More importantly, we generalize our approach to develop an
\emph{anytime-optimal} online algorithm that achieves the best possible CR at
any epoch, given the inputs and online decisions so far. The algorithm retains
the optimal worst-case performance and achieves adaptive average-case
performance. Simulation results based on real-world traces show that, under
typical settings, our algorithms substantially improve peak reduction compared
to baseline alternatives.
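A common baseline for this setting (not the paper's optimal online algorithm) is a simple threshold policy: discharge the battery whenever demand exceeds a target peak and recharge when it is below. A minimal sketch, with invented numbers:

```python
def cap_peak(demand, threshold, battery_cap, max_rate):
    """Threshold policy for peak shaving: discharge toward the target peak
    when demand exceeds it, recharge opportunistically when it is below.
    Returns the realized grid peak and the per-step grid draw."""
    soc, peak, grid = battery_cap, 0.0, []   # start with a full battery
    for d in demand:
        if d > threshold:
            discharge = min(d - threshold, soc, max_rate)
            soc -= discharge
            g = d - discharge                 # shaved grid draw
        else:
            charge = min(threshold - d, battery_cap - soc, max_rate)
            soc += charge
            g = d + charge                    # recharge below the threshold
        grid.append(g)
        peak = max(peak, g)
    return peak, grid
```

The hard part the paper addresses is choosing the threshold online, without knowing future demand, which is where the competitive-ratio analysis comes in.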
A new MDA-SOA based framework for intercloud interoperability
Cloud computing has been one of the most important topics in Information Technology, aiming to assure scalable and reliable on-demand services over the Internet. Expanding the application scope of cloud services would require cooperation between clouds from different providers that have heterogeneous functionalities. This collaboration between different cloud vendors can provide better Quality of Service (QoS) at a lower price. However, current cloud systems have been developed without concern for seamless cloud interconnection, and they do not support intercloud interoperability to enable collaboration between cloud service providers. Hence, this PhD work addresses the challenging research objective of interoperability between cloud providers.
This thesis proposes a new framework which supports inter-cloud interoperability in a heterogeneous computing resource cloud environment with the goal of dispatching the workload to the most effective clouds available at runtime.
Analysing the different methodologies that have been applied to various interoperability problem scenarios led us to adopt Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) as appropriate approaches for our inter-cloud framework. Moreover, since distributing operations in a cloud-based environment is an NP-complete problem, a Genetic Algorithm (GA) based job scheduler is proposed as part of the interoperability framework, offering workload migration with the best performance at the least cost. A new Agent Based Simulation (ABS) approach is proposed to model the inter-cloud environment with three types of agents: Cloud Subscriber agent, Cloud Provider agent, and Job agent. The ABS model is used to evaluate the proposed framework.
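The GA-based scheduling idea can be sketched on a tiny instance. The cost matrix, population size, and genetic operators below are generic GA choices for illustration, not the thesis's actual configuration:

```python
import random

# Illustrative inter-cloud instance: COST[c][j] is the price of running job j
# on cloud c; a chromosome maps each job to a cloud.
COST = [[4, 2, 7, 3],
        [5, 6, 2, 4],
        [3, 5, 5, 6]]
N_CLOUDS, N_JOBS = len(COST), len(COST[0])

def fitness(chrom):
    """Total cost of a job-to-cloud assignment (lower is better)."""
    return sum(COST[c][j] for j, c in enumerate(chrom))

def ga_schedule(pop_size=30, gens=60, mut=0.1, seed=7):
    """Elitist GA with tournament selection, one-point crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randrange(N_CLOUDS) for _ in range(N_JOBS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=fitness)]                 # elitism: keep the best
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, N_JOBS)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut:                    # point mutation
                child[rng.randrange(N_JOBS)] = rng.randrange(N_CLOUDS)
            nxt.append(child)
        pop = nxt
    best = min(pop, key=fitness)
    return best, fitness(best)
```

On this 4-job instance the optimum assigns each job to its cheapest cloud for a total cost of 10, and the GA converges to or near that value.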
Revenue maximization problems in commercial data centers
PhD Thesis. As IT systems are becoming more important every day, one of the main concerns is that users may
face major problems and eventually incur major costs if computing systems do not meet the expected
performance requirements: customers expect reliability and performance guarantees, while
underperforming systems lose revenue. Even with the adoption of data centers as the hub of
IT organizations and providers of business efficiencies, the problems are not over because it is extremely
difficult for service providers to meet the promised performance guarantees in the face of
unpredictable demand. One possible approach is the adoption of Service Level Agreements (SLAs),
contracts that specify a level of performance that must be met and compensations in case of failure.
In this thesis I will address some of the performance problems arising when IT companies sell
the service of running ‘jobs’ subject to Quality of Service (QoS) constraints. In particular, the aim
is to improve the efficiency of service provisioning systems by allowing them to adapt to changing
demand conditions.
First, I will define the problem in terms of a utility function to maximize. Two different models
are analyzed, one for single jobs and the other useful to deal with session-based traffic. Then,
I will introduce an autonomic model for service provision. The architecture consists of a set of
hosted applications that share a certain number of servers. The system collects demand and performance
statistics and estimates traffic parameters. These estimates are used by management policies
which implement dynamic resource allocation and admission algorithms. Results from a number of
experiments show that the performance of these heuristics is close to optimal.
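An admission-control heuristic of the kind described can be sketched with an M/M/1 response-time approximation; the queueing model and SLA figures here are assumptions for illustration, not the thesis's actual policies:

```python
def admit(arrival_rate, service_rate, sla_response_time):
    """Accept one more unit of arrival rate only if the estimated M/M/1 mean
    response time 1 / (mu - lambda) still meets the SLA target."""
    new_rate = arrival_rate + 1.0
    if new_rate >= service_rate:
        return False                       # system would become unstable
    mean_response = 1.0 / (service_rate - new_rate)
    return mean_response <= sla_response_time
```

The thesis's actual policies additionally estimate traffic parameters online and reallocate servers between hosted applications; this sketch only shows the admission side of that loop.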
Smart grid
Master's thesis in Energy and Environmental Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2016. The SG concept arises from the fact that global energy consumption is increasing. One of the factors delaying a worldwide change of energy paradigm is the state of existing electric grids.
Even though there is no specific definition of the SG concept, there are several characteristics that describe it. Those features represent several advantages relating to reliability and efficiency, the most important being the two-way flow of energy and information between utilities and consumers. The infrastructures in standard grids and the SG can be classified the same way, but the latter has several components contributing to improved monitoring and management. The SG's management system allows peak reduction using several techniques, with many advantages such as controlling costs and emissions. Furthermore, it introduces a new concept called demand response, which allows consumers to play an important role in electric systems. This brings benefits for utilities, consumers, and the whole grid, but it also increases security problems, which is why the SG relies on a good protection system; there are many schemes and components to create one.
The MG can be considered a small-scale electric grid that can connect to the main grid. Implementing an MG requires economic and technical studies, for which software such as HOMER can be used. However, the economic study can be complex because there are factors beyond energy sales that are difficult to evaluate. On top of that, there are legislation and incentive programs that should be considered. Two case studies show that an MG can be profitable. In the first study, using HOMER and a scenario with energy selling only, a 106% reduction in production cost and a 32% reduction in emissions were obtained. The installer would have a profit of 41,386 and the MG owner a profit of 196,125. We can conclude that an MG with SG concepts can be profitable in many cases.