REDUCING COST OF POWER CONSUMPTION USING GOAL PROGRAMMING OPTIMIZATION AND RENEWABLE ENERGY SOURCE
The demand for cloud computing is growing rapidly alongside the expansion of IT infrastructure. Cloud computing is now widely used by industry, organizations, and society to deliver IT services. This rapid growth has led to the creation of large data centers, which require enormous amounts of electrical power to operate, resulting in high operational costs and carbon-dioxide emissions. The key idea of this work is to reduce the power consumption requirement, and its cost, through efficient task allocation. The data centers are connected both to the conventional power grid and to a renewable energy source (RES). We performed this work in two phases: first, we used goal programming optimization for energy-efficient task allocation to reduce power consumption; then, we analyzed the reduction in cost when an RES supplies the power. Using solar panels as the renewable energy source, we observed a significant reduction in cost.
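The energy-efficient allocation step described above can be sketched as a small optimization over task-to-server assignments. All figures below (server idle/peak wattage, task demands) are made up for illustration, the solver is a brute-force stand-in for the paper's goal-programming formulation, and the RES cost phase is omitted:

```python
from itertools import product

# Hypothetical figures, not from the paper: per-server (idle, peak) power in
# watts, and per-task CPU demand as a fraction of one server.
SERVERS = {"s1": (100, 250), "s2": (120, 220)}
TASKS = {"t1": 0.4, "t2": 0.3, "t3": 0.5}

def power(load, idle, peak):
    """Common linear server power model: idle + load * (peak - idle)."""
    return idle + load * (peak - idle) if load > 0 else 0.0

def best_allocation():
    """Brute-force the task->server mapping with minimum total power,
    a toy stand-in for the paper's goal-programming step."""
    best, best_p = None, float("inf")
    for assign in product(SERVERS, repeat=len(TASKS)):
        loads = {s: 0.0 for s in SERVERS}
        for task, server in zip(TASKS, assign):
            loads[server] += TASKS[task]
        if any(l > 1.0 for l in loads.values()):
            continue  # over-committed server: infeasible assignment
        p = sum(power(loads[s], *SERVERS[s]) for s in SERVERS)
        if p < best_p:
            best, best_p = assign, p
    return best, best_p

alloc, watts = best_allocation()
print(alloc, round(watts, 1))  # packs load onto the cheaper-per-unit server
```

A real goal-programming model would instead minimize weighted deviations from power and QoS goals with an LP solver, but the feasibility check and the linear power model above are the same ingredients.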
Holistic Resource Management for Sustainable and Reliable Cloud Computing: An Innovative Solution to Global Challenge
Minimizing the energy consumption of servers within cloud computing systems is of utmost importance to cloud providers for reducing operational costs and enhancing service sustainability by consolidating services onto fewer active servers. Moreover, providers must also provision high levels of availability and reliability; hence cloud services are frequently replicated across servers, which subsequently increases server energy consumption and resource overhead. These two objectives can conflict within cloud resource management decision making, which must balance service consolidation against replication to minimize energy consumption whilst maximizing server availability and reliability, respectively. In this paper, we propose a cuckoo optimization-based energy-reliability aware resource scheduling technique (CRUZE) for holistic management of cloud computing resources including servers, networks, storage, and cooling systems. CRUZE clusters and executes heterogeneous workloads on provisioned cloud resources, enhances energy efficiency, and reduces the carbon footprint in datacenters without adversely affecting cloud service reliability. We evaluate the effectiveness of CRUZE against existing state-of-the-art solutions using the CloudSim toolkit. Results indicate that our proposed technique is capable of reducing energy consumption by 20.1% whilst improving reliability and CPU utilization by 17.1% and 15.7% respectively, without affecting other Quality of Service parameters.
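For readers unfamiliar with the metaheuristic CRUZE builds on, here is a deliberately tiny cuckoo-search-style loop minimising a toy one-dimensional "energy" objective. The objective, step sizes, and population size are illustrative stand-ins, not the paper's scheduler:

```python
import random

random.seed(42)

def energy(x):
    """Toy objective standing in for a placement cost; minimum at x = 0.3."""
    return (x - 0.3) ** 2

def cuckoo_search(n_nests=8, iters=200, pa=0.25):
    """Minimal cuckoo-search skeleton: propose a new solution near a random
    nest, replace a random nest if the proposal is better, and abandon a
    fraction pa of the worst nests each iteration."""
    nests = [random.random() for _ in range(n_nests)]
    for _ in range(iters):
        i = random.randrange(n_nests)
        new = min(1.0, max(0.0, nests[i] + random.gauss(0, 0.1)))
        j = random.randrange(n_nests)          # compare against a random nest
        if energy(new) < energy(nests[j]):
            nests[j] = new
        nests.sort(key=energy)                 # keep the best nests...
        for k in range(int(n_nests * (1 - pa)), n_nests):
            nests[k] = random.random()         # ...and abandon the worst
    return min(nests, key=energy)

best = cuckoo_search()
print(round(best, 2))  # should land near the optimum at 0.3
```

The real algorithm uses Lévy-flight steps and, in CRUZE, a multi-objective fitness spanning energy, reliability, and utilization; the propose/replace/abandon structure is the part this sketch preserves.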
In-Datacenter Performance Analysis of a Tensor Processing Unit
Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X - 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X - 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.

Comment: 17 pages, 11 figures, 8 tables. To appear at the 44th International Symposium on Computer Architecture (ISCA), Toronto, Canada, June 24-28, 2017
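The 92 TOPS peak quoted in the abstract follows directly from the MAC count and the clock rate. The 700 MHz figure below comes from the full paper rather than this abstract, so treat it as an assumption here:

```python
# Sanity-check the TPU's quoted peak throughput: 65,536 8-bit MACs, each
# counted as 2 ops per cycle (multiply + accumulate), at an assumed 700 MHz.
macs = 256 * 256          # 65,536 MACs in the systolic matrix multiply unit
ops_per_mac = 2           # one multiply and one add per MAC per cycle
clock_hz = 700e6          # clock rate from the full paper, not this abstract
peak_tops = macs * ops_per_mac * clock_hz / 1e12
print(round(peak_tops, 1))  # ~91.8, matching the quoted 92 TOPS
```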
ACUTA Journal of Telecommunications in Higher Education
In This Issue
Making Dollars and Sense Out of Cloud Computing
Surfing the Wave of Cloud Computing
VoIP Meets the Cloud
A Quick Look at Cloud Computing in Higher Education, 2012
Cloud Computing: Is the Forecast Bright or Overcast?
Cloud E-Mail Momentum Swells
Institutional Excellence Award
Individual Awards
President's Message
From the Executive Director
Q&A with the CI
Cloud computing models
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 79-80). Information Technology has always been considered a major pain point of enterprise organizations, from the perspectives of both cost and management. However, the information technology industry has experienced a dramatic shift in the past decade: factors such as hardware commoditization, open-source software, virtualization, workforce globalization, and agile IT processes have supported the development of new technology and business models. Cloud computing now offers organizations more choices regarding how to run infrastructures, save costs, and delegate liabilities to third-party providers. It has become an integral part of technology and business models, and has forced businesses to adapt to new technology strategies. Accordingly, the demand for cloud computing has driven the development of new market offerings, representing various cloud service and delivery models. These models significantly expand the range of available options, and confront organizations with dilemmas over which cloud computing model to employ. This thesis presents an analysis of available cloud computing models and potential future cloud computing trends. The comparative analysis covers cloud service delivery models (SaaS, PaaS, IaaS) and deployment models (private, public, and hybrid). Cloud computing paradigms are discussed in the context of technical, business, and human factors, analyzing how business and technology strategy could be impacted by the following aspects of cloud computing: architecture, security, costs, hardware/software trends (commodity vs. brands, open vs. closed-source), and organizational/human factors. To provide a systematic approach to the research presented in this paper, a cloud taxonomy is introduced to classify and compare the available cloud service offerings.
In particular, this thesis focuses on the services of a few major cloud providers. Amazon Web Services (AWS) is used as a base in many examples because this cloud provider represents approximately 70% of the current public cloud services market. Amazon's AWS has become a cloud services trend-setter and a reference point for other cloud service providers. The analysis of cloud computing models shows that the public cloud deployment model is likely to stay dominant and keep expanding. Private and hybrid deployment models will persist for years ahead, but their market share will continuously drop; in the long term, private and hybrid cloud models will most probably be used only for specific business cases. The IaaS service delivery model is likely to keep losing market share to the PaaS and SaaS models because companies realize more value and resource savings from software and platform services than from infrastructure. In the near future we can expect a significant number of market consolidations, with a few large players retaining market control in the end. by Eugene Gorelik. S.M. in Engineering and Management
Quo Vadis IT Infrastructure: Decision Support for Cloud Computing Adoption From a Business Perspective
Many IT organizations are confronted with the question of whether to modernize their IT infrastructure. While most data centers run on a virtualized environment, Cloud Computing technology is emerging with new characteristics: fast provisioning of standardized resources in a scalable IT infrastructure. Public cloud vendors offer IT services on demand, so that IT organizations do not have to operate their own hardware. Moreover, private cloud architectures are gaining influence, claiming to provide flexible and elastic IT infrastructure. This paper guides the strategic decision on adopting Cloud Computing for IT infrastructure. We first introduce a taxonomy for IT infrastructure encompassing a technological and a sourcing perspective. Second, we evaluate selected areas of the taxonomy using the SWOT framework to understand both the opportunities and the challenges of Cloud Computing for IT infrastructure from a business perspective.
Green Cloud - Load Balancing, Load Consolidation using VM Migration
Cloud computing is a recent trend in computer technology with massive demand from clients. To meet all requirements, many cloud data centers have been constructed since 2008, when Amazon published its cloud service. These rapidly growing data centers consume a tremendous amount of energy; even though cloud computing has improved in performance and energy consumption, cloud data centers still absorb an immense amount of energy. To raise their annual income, cloud providers have started considering green cloud concepts, which address how to optimize CPU usage while guaranteeing quality of service. Many cloud providers are paying more attention to load balancing and load consolidation, two significant components of a cloud data center.
Load balancing is a vital part of managing incoming demand and improving the cloud system's performance. Live virtual machine migration is a technique for performing dynamic load balancing. To optimize the cloud data center, three issues are considered. First, how does the cloud cluster distribute virtual machine (VM) requests from clients across physical machines (PMs) when each machine has a different capacity? Second, how can the CPU usage of all PMs be kept nearly equal? Third, how should two extreme scenarios be handled: rapidly rising CPU usage on a PM due to a sudden massive workload, requiring immediate VM migration, and resource expansion to respond to substantial growth in VM requests across the cloud cluster? In this chapter, we provide an approach to these issues, along with an implementation and results. The results indicate that the performance of the cloud cluster improved significantly.
Load consolidation is the reverse of load balancing: it aims to provide just enough cloud servers to handle the client requests. Based on the advances in live VM migration, a cloud data center can consolidate itself without interrupting the cloud service, and superfluous PMs are switched to power-save mode to reduce energy consumption. This chapter provides a solution to load consolidation, including an implementation and a simulation of cloud servers.
Empirical characterization and modeling of power consumption and energy aware scheduling in data centers
Energy-efficient management is key in modern data centers to reduce operational costs and environmental contamination. Energy management and renewable energy utilization are strategies to optimize energy consumption in high-performance computing. In any case, understanding the power consumption behavior of physical servers in a datacenter is fundamental to implementing energy-aware policies effectively. These policies should deal with possible performance degradation of applications to ensure quality of service. This thesis presents an empirical evaluation of power consumption for scientific computing applications in multicore systems. Three types of applications are studied, in single and combined executions on Intel and AMD servers, to evaluate the overall power consumption of each application. The main results indicate that power consumption behavior depends strongly on the type of application. Additional performance analysis shows that the server load that is best for energy efficiency depends on the type of applications, with efficiency decreasing in heavily loaded situations. These results allow formulating models that characterize applications according to power consumption, efficiency, and resource sharing, which provide useful information for resource management and scheduling policies. Several scheduling strategies are evaluated using the proposed energy model over realistic scientific computing workloads. Results confirm that strategies that maximize host utilization provide the best energy efficiency.

Agencia Nacional de Investigación e Innovación FSE_1_2017_1_14478
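The conclusion that utilization-maximizing strategies win can be illustrated with the standard linear server power model. The wattages below are assumptions, and this simple model ignores the heavy-load efficiency degradation the thesis actually measures:

```python
# Illustrative linear power model P(u) = P_idle + (P_max - P_idle) * u.
# Because idle power is paid per active host, consolidating the same total
# load onto fewer hosts draws less power than spreading it out.
P_IDLE, P_MAX = 100.0, 250.0   # assumed watts for one server

def host_power(u):
    """Power draw of one host at utilisation u in [0, 1]; off when idle."""
    return P_IDLE + (P_MAX - P_IDLE) * u if u > 0 else 0.0

def total_power(utilisations):
    return sum(host_power(u) for u in utilisations)

spread = total_power([0.25, 0.25, 0.25, 0.25])   # 4 hosts at 25% each
packed = total_power([1.0, 0.0, 0.0, 0.0])       # same work on 1 host
print(spread, packed)  # 550.0 vs 250.0: consolidation wins under this model
```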
Computing server power modeling in a data center: survey, taxonomy and performance evaluation
Data centers are large-scale, energy-hungry infrastructures serving increasing computational demands as the world becomes more connected through smart cities. The emergence of advanced technologies such as cloud-based services, the internet of things (IoT), and big data analytics has augmented the growth of global data centers, leading to high energy consumption. This upsurge in the energy consumption of data centers not only incurs surging costs (operational and maintenance) but also has an adverse effect on the environment. Dynamic power management in a data center environment requires cognizance of the correlation between system- and hardware-level performance counters and power consumption. Power consumption modeling exhibits this correlation and is crucial in designing energy-efficient optimization strategies based on resource utilization. Several power models have been proposed and used in the literature. However, these power models have been evaluated using different benchmarking applications, power measurement techniques, and error calculation formulas on different machines. In this work, we present a taxonomy and evaluation of 24 software-based power models using a unified environment, benchmarking applications, power measurement technique, and error formula, with the aim of achieving an objective comparison. We use different server architectures to assess the impact of heterogeneity on the models' comparison. The performance analysis of these models is elaborated in the paper.
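The kind of unified evaluation the survey performs can be sketched for the simplest class of software power model: a model linear in CPU utilization, fitted by least squares and scored with a single error formula (MAPE is used here as a plausible choice; the measurements below are made up):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def mape(ys, preds):
    """Mean absolute percentage error: one shared metric makes the
    compared power models' accuracies directly comparable."""
    return 100 * sum(abs(y - p) / y for y, p in zip(ys, preds)) / len(ys)

util = [0.0, 0.25, 0.5, 0.75, 1.0]   # measured CPU utilisation (made up)
watts = [100, 140, 175, 215, 250]    # measured wall power in watts (made up)

a, b = fit_linear(util, watts)
preds = [a + b * u for u in util]
print(round(a, 1), round(b, 1), round(mape(watts, preds), 2))
```

More sophisticated models in such surveys add further performance counters (memory, disk, network) as regressors, but the fit-then-score pipeline under one error formula is the part that makes the comparison objective.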