Extending Demand Response to Tenants in Cloud Data Centers via Non-intrusive Workload Flexibility Pricing
Participating in demand response programs is a promising tool for reducing
energy costs in data centers by modulating energy consumption. Towards this
end, data centers can employ a rich set of resource management knobs, such as
workload shifting and dynamic server provisioning. Nonetheless, these knobs may
not be readily available in a cloud data center (CDC) that serves cloud
tenants/users, because workloads in CDCs are managed by the tenants themselves,
who are typically charged under usage-based or flat-rate pricing and often
have no incentive to cooperate with the CDC operator on demand response and
cost saving. To break this "split incentive" hurdle, a few recent
studies have tried market-based mechanisms, such as dynamic pricing, inside
CDCs. However, such mechanisms often rely on complex designs that are hard to
implement and difficult for tenants to cope with. To address this limitation, we
propose a novel incentive mechanism that is not dynamic, i.e., it keeps pricing
for cloud resources unchanged over a long period. While it charges tenants
under Usage-based Pricing (UP), as used by today's major cloud operators, it
rewards tenants in proportion to the length of the deadlines they set for
completing their workloads. This new mechanism is called
Usage-based Pricing with Monetary Reward (UPMR). We demonstrate the
effectiveness of UPMR both analytically and empirically. We show that UPMR can
reduce the CDC operator's energy cost by 12.9% while increasing its profit by
4.9%, compared to the state-of-the-art approaches used by today's CDC operators
to charge their tenants.
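The deadline-proportional reward idea described in this abstract can be sketched as follows. The linear reward form, the prices, and all function and parameter names here are illustrative assumptions for exposition, not the paper's exact UPMR formulation.

```python
# Hypothetical sketch of the UPMR idea: tenants pay a fixed usage-based
# price and receive a monetary reward proportional to the deadline slack
# they grant the operator. All rates and the linear reward form are
# assumptions, not the paper's exact mechanism.

def upmr_bill(usage_hours: float,
              deadline_hours: float,
              min_completion_hours: float,
              price_per_hour: float = 0.10,
              reward_rate_per_hour: float = 0.01) -> float:
    """Net charge = usage-based charge - deadline-proportional reward."""
    charge = usage_hours * price_per_hour
    # The reward grows with the scheduling flexibility the tenant offers,
    # i.e., how far the deadline extends beyond the minimum runtime.
    slack = max(0.0, deadline_hours - min_completion_hours)
    reward = reward_rate_per_hour * slack
    return charge - reward

# A tenant running 100 usage-hours whose deadline leaves 24 hours of slack:
print(upmr_bill(100, 30, 6))  # 100*0.10 - 0.01*24 = 9.76
```

A longer deadline lowers the net bill, which is the incentive the mechanism relies on: flexibility is rewarded without ever changing the posted resource price.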
Sustainable Edge Computing: Challenges and Future Directions
An increasing amount of data is being injected into the network by IoT
(Internet of Things) applications. Many of these applications, developed to
improve society's quality of life, are latency-critical, and their
requirements have triggered the emergence of the Edge computing paradigm.
Currently, data centers account for between 2% and 3% of global energy use.
However, this trend is
difficult to maintain, as bringing computing infrastructures closer to the edge
of the network comes with its own set of challenges for energy efficiency. In
this paper, we propose our approach for the sustainability of future computing
infrastructures to provide (i) an energy-efficient and economically viable
deployment, (ii) a fault-tolerant automated operation, and (iii) a
collaborative resource management to improve resource efficiency. We identify
the main limitations of applying Cloud-based approaches close to the data
sources and present the research challenges to Edge sustainability arising from
these constraints. We propose two-phase immersion cooling, formal modeling,
machine learning, and energy-centric federated management as Edge-enabling
technologies. We present our early results towards the sustainability of an
Edge infrastructure to demonstrate the benefits of our approach for future
computing environments and deployments.
Greening Multi-Tenant Data Center Demand Response
Data centers have emerged as promising resources for demand response,
particularly for emergency demand response (EDR), which helps the power grid
avoid blackouts during emergencies. Currently, however, data
centers typically participate in EDR by turning on backup (diesel) generators,
which is both expensive and environmentally unfriendly. In this paper, we focus
on "greening" demand response in multi-tenant data centers, i.e., colocation
data centers, by designing a pricing mechanism through which the data center
operator can efficiently extract load reductions from tenants during emergency
periods to fulfill the energy-reduction requirement of EDR. In particular, we
propose a pricing mechanism for both mandatory and voluntary EDR programs,
ColoEDR, that is based on parameterized supply function bidding and provides
provably near-optimal efficiency guarantees, both when tenants are price-taking
and when they are price-anticipating. In addition to analytic results, we
extend the literature on supply function mechanism design, and evaluate ColoEDR
using trace-based simulation studies. These validate the efficiency analysis
and conclude that the pricing mechanism is both beneficial to the environment
and to the data center operator (by decreasing the need for backup diesel
generation), while also aiding tenants (by providing payments for load
reductions).
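The parameterized supply-function bidding that ColoEDR builds on can be sketched with a deliberately simple linear supply function; the single-parameter form, the clearing rule, and the payment rule below are illustrative assumptions, not the paper's exact mechanism.

```python
# Minimal sketch in the spirit of parameterized supply-function bidding:
# each tenant i submits one parameter b_i and commits to reduce
# s_i(p) = b_i * p at price p. The operator clears at the price where
# total offered reduction meets the EDR target. Illustrative only.

def clear_edr(bids, target_reduction):
    """bids: list of b_i >= 0. Returns (price, reductions, payments)."""
    total_b = sum(bids)
    if total_b <= 0:
        raise ValueError("no supply offered")
    price = target_reduction / total_b           # sum(b_i * p) == target
    reductions = [b * price for b in bids]
    payments = [r * price for r in reductions]   # pay price per unit reduced
    return price, reductions, payments

price, cuts, pay = clear_edr([2.0, 3.0, 5.0], target_reduction=100.0)
print(price)  # 10.0
print(cuts)   # [20.0, 30.0, 50.0]
```

A single clearing price shared by all tenants is what makes efficiency analysis tractable under both price-taking and price-anticipating behavior, which is the regime the abstract's guarantees address.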
Energy efficiency embedded service lifecycle: Towards an energy efficient cloud computing architecture
This paper argues for the need to provide novel methods and tools to support software developers aiming to optimise energy efficiency and minimise the carbon footprint resulting from designing, developing, deploying, and running software in Clouds, while maintaining other quality aspects of software at adequate and agreed levels. A cloud architecture to support energy efficiency at service construction, deployment, and operation is discussed, along with its implementation and evaluation plans.
Technological trends in last-mile contexts: A European perspective
This paper presents an empirical analysis of 65 European cities where different last-mile technologies have been implemented by both private and public operators, aiming to assess the importance of each technology involved and to identify adoption trends. Results show that parcel lockers and low-emission vehicles are the most widely adopted technologies, while the diffusion of Intelligent Transportation Systems and dynamic routing appears limited. Finally, crowd-shipping services are still scattered but show a medium level of adoption. Future research will extend the analysis to international projects and include external aspects to refine the understanding of technology-adoption patterns.
A truthful incentive mechanism for emergency demand response in colocation data centers
Data centers are key participants in demand response programs, including emergency demand response (EDR), where the grid coordinates large electricity consumers for demand reduction in emergency situations to prevent major economic losses. While the existing literature concentrates on owner-operated data centers, this work studies EDR in multi-tenant colocation data centers, where servers are owned and managed by individual tenants. EDR in colocation data centers is significantly more challenging due to a lack of incentives for energy reduction among tenants, who control their own servers and are typically on fixed power contracts with the colocation operator. Consequently, to achieve the demand-reduction goals set by the EDR program, the operator has to rely on highly expensive and/or environmentally unfriendly on-site energy backup/generation. To reduce cost and environmental impact, an efficient incentive mechanism is therefore needed to motivate tenants' voluntary energy reduction in the event of EDR. This work proposes a novel incentive mechanism, Truth-DR, which leverages a reverse auction to provide monetary remuneration to tenants according to their agreed energy reduction. Truth-DR is computationally efficient, truthful, and achieves a 2-approximation of the colocation-wide social cost. Trace-driven simulations verify the efficacy of the proposed auction mechanism.
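The reverse-auction setting can be illustrated with a greedy winner-selection rule: tenants bid an energy reduction and an asking price, and the operator buys the cheapest reductions per kWh until the EDR target is covered. This is a simplified stand-in for exposition, not the paper's truthful 2-approximation mechanism, and the bid values are invented.

```python
# Illustrative reverse auction for colocation EDR: select the cheapest
# per-kWh reductions until the target is met. A simplified sketch, not
# the Truth-DR mechanism itself.

def select_winners(bids, target_kwh):
    """bids: list of (tenant_id, reduction_kwh, ask_price)."""
    ranked = sorted(bids, key=lambda b: b[2] / b[1])  # cheapest $/kWh first
    winners, covered = [], 0.0
    for tenant, kwh, ask in ranked:
        if covered >= target_kwh:
            break
        winners.append(tenant)
        covered += kwh
    return winners, covered

bids = [("A", 50, 100), ("B", 30, 90), ("C", 40, 60)]
winners, covered = select_winners(bids, target_kwh=80)
print(winners, covered)  # ['C', 'A'] 90.0
```

A mechanism like Truth-DR must additionally choose payments so that bidding one's true cost is a dominant strategy; the greedy selection above shows only the allocation side of such an auction.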
Model-Based Design, Analysis, and Implementations for Power and Energy-Efficient Computing Systems
Modern computing systems are becoming increasingly complex. On one end of
the spectrum, personal computers now commonly support multiple processing
cores, and, on the other end, Internet services routinely employ thousands of
servers in distributed locations to provide the desired service to their users. In
such complex systems, concerns about energy usage and power consumption
are increasingly important. Moreover, growing awareness of environmental
issues has added to the overall complexity by introducing new variables to the
problem. In this regard, the ability to abstractly focus on the relevant details
allows model-based design to help significantly in the analysis and solution of
such problems.
In this dissertation, we explore and analyze model-based design for energy
and power considerations in computing systems. Although the presented techniques
are more generally applicable, we focus their application on large-scale
Internet services operating in U.S. electricity markets. Internet services are becoming
increasingly popular in the ICT ecosystem of today. The physical infrastructure
to support such services is commonly based on a group of cooperative
data centers (DCs) operating in tandem. These DCs are geographically
distributed to provide security and timing guarantees for their customers. To
provide services to millions of customers, DCs employ hundreds of thousands
of servers. These servers consume a large amount of energy that is traditionally
produced by burning coal or through other environmentally hazardous
methods, such as nuclear and gas power plants. This large energy
consumption results in significant and fast-growing financial and environmental
costs. Consequently, for protection of local and global environments, governing
bodies around the globe have begun to introduce legislation to encourage
energy consumers, especially corporate entities, to increase the share of
renewable energy (green energy) in their total energy consumption. However,
in U.S. electricity markets, green energy is usually more expensive than energy
generated from traditional sources like coal or petroleum.
We model the overall problem in three sub-areas and explore different approaches
aimed at reducing the environmental footprint and operating costs
of multi-site Internet services, while honoring the Quality of Service (QoS) constraints
as contracted in service level agreements (SLAs).
Firstly, we model the load distribution among member DCs of a multi-site Internet
service. The use of green energy is optimized considering different factors
such as (a) geographically and temporally variable electricity prices, (b)
the multitude of available energy sources to choose from at each DC, (c) the necessity
to support more than one SLA, and, (d) the requirements to offer more
than one service at each DC. Various approaches are presented for solving this
problem and extensive simulations using Google’s setup in North America are
used to evaluate the presented approaches.
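The first sub-area, green-aware load distribution across DCs, can be sketched with a toy greedy dispatcher. Everything here (capacities, blended prices, greedy-by-price rule, names) is an invented stand-in for the optimization formulations the dissertation actually evaluates.

```python
# Toy green-aware load distribution: fill the cheapest (e.g., greenest-
# subsidized) DCs first, respecting capacity. A crude illustrative
# stand-in, not the dissertation's formulation.

def distribute(load, dcs):
    """dcs: list of (name, capacity, price). Returns {name: assigned load}."""
    plan, remaining = {}, load
    for name, cap, price in sorted(dcs, key=lambda d: d[2]):
        take = min(cap, remaining)
        plan[name] = take
        remaining -= take
    if remaining > 0:
        raise ValueError("insufficient capacity")
    return plan

plan = distribute(120, [("dc_east", 60, 0.08),
                        ("dc_west", 80, 0.05),
                        ("dc_mid", 50, 0.07)])
print(plan)  # {'dc_west': 80, 'dc_mid': 40, 'dc_east': 0}
```

In the actual problem, the "price" per DC would itself vary with time, energy-source mix, and SLA constraints, which is what makes the real formulation an optimization rather than a one-shot greedy pass.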
Secondly, we explore shaving the peaks in the energy demand of large
electricity consumers, such as DCs, by using a battery-based energy storage
system. The electrical demand of DCs is typically peaky, following the usage
cycles of their customers. The resulting demand peaks require the development
and maintenance of a costlier energy delivery mechanism, and are
often met using expensive gas or diesel generators with a higher
environmental impact. To shave the peak power demand, a battery can be used
that is charged during low load and discharged during peak loads.
Since batteries are costly, we present a scheme to estimate the size of the battery
required for any variable electrical load. The electrical load is modeled using
the concept of arrival curves from Network Calculus. Our analysis mechanism
can help determine the appropriate battery size for a given load arrival curve
to reduce the peak.
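The sizing idea can be illustrated on a discrete demand trace: if grid draw is capped at some peak value, the battery must cover every shortfall, so its required energy capacity is the largest cumulative deficit between recharge opportunities. This is a simplified discrete stand-in for the arrival-curve analysis from Network Calculus mentioned above; the trace, cap, and function name are invented.

```python
# Sketch of battery sizing for peak shaving: the battery's required
# energy capacity is the worst cumulative shortfall while grid draw
# is held at or below peak_cap_kw. Illustrative, not the
# dissertation's arrival-curve method.

def required_battery_kwh(demand_kw, peak_cap_kw, dt_hours=1.0):
    """Max state-of-charge swing needed to keep grid draw <= peak_cap_kw."""
    deficit, worst = 0.0, 0.0
    for d in demand_kw:
        # Positive step when demand exceeds the cap (battery discharges);
        # negative step when spare grid capacity can recharge it.
        deficit = max(0.0, deficit + (d - peak_cap_kw) * dt_hours)
        worst = max(worst, deficit)
    return worst

trace = [40, 55, 90, 100, 70, 30]   # hourly demand in kW
print(required_battery_kwh(trace, peak_cap_kw=60))  # 80.0
```

With an arrival curve instead of a single trace, the same computation is taken over the worst-case demand envelope, giving a battery size that is valid for any load the curve bounds.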
Thirdly, we present techniques to employ intra-DC scheduling to regulate the
peak power usage of each DC. The model we develop is equally applicable to
an individual server with multi-/many-core chips as well as a complete DC
with a mix of homogeneous and heterogeneous servers. We evaluate
these approaches on single-core and multi-core chip processors and present the
results.
Overall, our work demonstrates the value of model-based design for intelligent
load distribution across DCs, storage integration, and per DC optimizations
for efficient energy management to reduce operating costs and environmental
footprint for multi-site Internet services