Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges
Cloud computing is offering utility-oriented IT services to users worldwide.
Based on a pay-as-you-go model, it enables hosting of pervasive applications
from consumer, scientific, and business domains. However, data centers hosting
Cloud applications consume huge amounts of energy, contributing to high
operational costs and a large environmental carbon footprint. Green Cloud
computing solutions that not only save energy but also reduce operational
costs are therefore needed. This paper presents the vision,
challenges, and architectural elements for energy-efficient management of Cloud
computing environments. We focus on the development of dynamic resource
provisioning and allocation algorithms that consider the synergy between
various data center infrastructures (i.e., the hardware, power units, cooling
and software), and holistically work to boost data center energy efficiency and
performance. In particular, this paper proposes (a) architectural principles
for energy-efficient management of Clouds; (b) energy-efficient resource
allocation policies and scheduling algorithms that consider quality-of-service
expectations and the power usage characteristics of devices; and (c) a novel software
technology for energy-efficient management of Clouds. We have validated our
approach through a set of rigorous performance evaluation studies using the
CloudSim toolkit. The results demonstrate that the Cloud computing model has
immense potential, offering significant gains in response time and cost savings
under dynamic workload scenarios.
Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference
on Parallel and Distributed Processing Techniques and Applications (PDPTA
2010), Las Vegas, USA, July 12-15, 2010
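The energy-efficient allocation policies described above can be illustrated with a small sketch: a greedy placement that assigns each VM to the host whose power draw rises least under a linear utilization-to-power model. This is not the paper's algorithm; the host names, power figures, and capacity rule are invented for illustration.

```python
def power(util, p_idle, p_max):
    """Linear power model: idle draw plus utilization-proportional dynamic draw."""
    return p_idle + (p_max - p_idle) * util

def place_vm(hosts, vm_load):
    """Pick the host whose power draw rises least when the VM is added.

    hosts: dict name -> (current utilization in [0, 1], idle watts, max watts)
    vm_load: extra CPU utilization the VM would add
    Returns the chosen host name, or None if no host has spare capacity.
    """
    best, best_delta = None, float("inf")
    for name, (util, p_idle, p_max) in hosts.items():
        if util + vm_load > 1.0:
            continue  # host would be overcommitted
        delta = power(util + vm_load, p_idle, p_max) - power(util, p_idle, p_max)
        if delta < best_delta:
            best, best_delta = name, delta
    return best

hosts = {
    "h1": (0.20, 100.0, 250.0),  # shallow power curve
    "h2": (0.50, 90.0, 300.0),   # steeper curve: costlier per unit of load
    "h3": (0.95, 80.0, 200.0),   # nearly full: cannot take the VM
}
print(place_vm(hosts, 0.20))
```

On heterogeneous hosts the linear model gives each host a different cost per unit of load, which is what makes the greedy choice non-trivial.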
JUMMP: Job Uninterrupted Maneuverable MapReduce Platform
In this paper, we present JUMMP, the Job Uninterrupted
Maneuverable MapReduce Platform, an automated
scheduling platform that provides a customized Hadoop environment
within a batch-scheduled cluster environment. JUMMP
enables an interactive pseudo-persistent MapReduce platform
within the existing administrative structure of an academic high
performance computing center by “jumping” between nodes with
minimal administrative effort. Jumping is implemented by the
synchronization of stopping and starting daemon processes on
different nodes in the cluster. Our experimental evaluation shows
that JUMMP can be as efficient as a persistent Hadoop cluster
on dedicated computing resources, depending on the jump time.
Additionally, we show that the cluster remains stable, with good
performance, in the presence of jumps that occur as frequently
as the average length of reduce tasks of the currently executing
MapReduce job. JUMMP provides an attractive solution to
academic institutions that desire to integrate Hadoop into their
current computing environment within their financial, technical,
and administrative constraints.
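The jump mechanism described above, synchronized stopping and starting of daemon processes on different nodes, can be sketched roughly as follows. The daemon names, their ordering, and the ssh-based command shape are assumptions for illustration, not JUMMP's actual scripts.

```python
# Assumed Hadoop 1-era worker daemons; stopping workers before storage is an
# ordering chosen for the sketch, not taken from the paper.
DAEMONS = ["tasktracker", "datanode"]

def jump_commands(old_node, new_node):
    """Return, in order, the remote commands that relocate the daemons:
    stop everything on the node being vacated, then start on the new node."""
    cmds = []
    for d in DAEMONS:
        cmds.append(f"ssh {old_node} hadoop-daemon.sh stop {d}")
    for d in reversed(DAEMONS):
        cmds.append(f"ssh {new_node} hadoop-daemon.sh start {d}")
    return cmds

for c in jump_commands("node07", "node12"):
    print(c)
```

The key property the paper measures is how often such a stop/start cycle can run before MapReduce throughput degrades.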
Software-Defined Cloud Computing: Architectural Elements and Open Challenges
The variety of existing cloud services creates a challenge for service
providers to enforce reasonable Service Level Agreements (SLAs) stating the
Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid
such penalties while keeping energy and resource wastage in the infrastructure
to a minimum, constant monitoring and adaptation of the infrastructure are
needed. We refer to Software-Defined Cloud Computing, or simply
Software-Defined Clouds (SDC), as an approach to automating the process of
optimal cloud configuration by extending the virtualization concept to all
resources in a data center. An SDC enables easy reconfiguration and adaptation
of physical resources in a cloud infrastructure to better accommodate the
demand on QoS through software that can describe and manage various aspects
comprising the cloud environment. In this paper, we present an architecture for
SDCs on data centers with emphasis on mobile cloud applications. We present an
evaluation, showcasing the potential of SDC in two use cases, QoS-aware
bandwidth allocation and bandwidth-aware, energy-efficient VM placement, and
discuss the research challenges and opportunities in this emerging area.
Comment: Keynote Paper, 3rd International Conference on Advances in Computing,
Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi,
India
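The first use case, QoS-aware bandwidth allocation, can be illustrated with a minimal sketch: each flow receives its guaranteed minimum, and spare link capacity is divided in proportion to priority weights. The flow names, guarantees, and weights are invented; the paper's actual policy may differ.

```python
def allocate(link_mbps, flows):
    """flows: dict name -> (guaranteed_mbps, priority_weight).
    Returns dict name -> allocated bandwidth in Mbps."""
    guaranteed = sum(g for g, _ in flows.values())
    spare = max(0.0, link_mbps - guaranteed)     # capacity left after guarantees
    total_w = sum(w for _, w in flows.values())
    return {name: g + spare * w / total_w
            for name, (g, w) in flows.items()}

flows = {"video": (30.0, 3), "web": (10.0, 1)}
print(allocate(100.0, flows))   # 60 Mbps of spare capacity split 3:1
```

Weighted proportional sharing of the residue keeps every SLA minimum intact while still rewarding higher-priority traffic.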
Dynamic server selection in a multithreaded network computing environment
Research has been conducted at the Iowa State University Center for Nondestructive Evaluation (CNDE) to create a structure in which existing numerical modeling programs can be converted to execute in a network computing environment. This research task includes the development of an extensible architecture that accommodates the timely integration of new processing capabilities and requirements. The research was motivated by many needs within the CNDE to reduce the predicted run times associated with current and future modeling programs.
EXA2PRO programming environment:Architecture and applications
The EXA2PRO programming environment will integrate a set of tools and methodologies that make it possible to systematically address many exascale computing challenges, including performance, performance portability, programmability, abstraction and reusability, fault tolerance and technical debt. The EXA2PRO tool-chain will enable the efficient deployment of applications on exascale computing systems by integrating high-level software abstractions that offer performance portability and efficient exploitation of exascale systems' heterogeneity, tools for efficient memory management, optimizations based on trade-offs between various metrics, and fault-tolerance support. Hence, by addressing various aspects of the productivity challenge, EXA2PRO is expected to have significant impact on the transition to exascale computing, as well as impact from the perspective of applications. The evaluation will be based on 4 applications from 4 different domains, which will be deployed in the JUELICH supercomputing center. EXA2PRO will generate exploitable results in the form of a tool-chain that supports diverse exascale heterogeneous supercomputing centers, along with concrete improvements on various exascale computing challenges.
Modeling Natural Hazards Engineering Data to Cyberinfrastructure
DesignSafe-CI is an end-to-end data lifecycle management, analysis, and publication cloud platform for natural hazards engineering. To facilitate ongoing data curation and sharing in a cloud environment that is intuitive to the end users, developers and curators teamed with experts in the different hazards to design data models and vocabularies that map their research workflows and domain terminology. The six experimental data models emphasize provenance through relationships between research processes, data and their documentation, and highlight commonalities between experiment types. They mediate between the user interface and the repository layers of the cyberinfrastructure to automate tasks such as organizing data and facilitating its description. Using data from triaxial experiments, we conducted a user evaluation of the geotechnical data model, both for its fitness to real data and for purposes of data understandability during reuse. The results of the evaluation guided the testing and selection of the Fedora 4 repository backend to enhance data discovery and reuse.
National Science Foundation; Texas Advanced Computing Center (TACC)
Modelling the Shifts in Activity Centres along the Subway Stations. The Case Study of Metropolitan Tehran
Activity centers are areas of strong development of a particular activity, such as residence, employment, or services. Understanding the
subway system's impacts on the type, combination, distribution, and development of basic activities in such centers plays an important role
in managing the development opportunities created along the Tehran subway lines. The multi-criteria and fuzzy nature of evaluating the
development of activity centers makes the issue so complex that it cannot be addressed with conventional logical systems. One of the most
important methods of multi-criteria evaluation is the Fuzzy Inference System (FIS), a popular computing framework based on
Fuzzy Set Theory that can accommodate the inherent uncertainty of the multi-criteria evaluation process. This
paper analyses shifts in activity centers along two lines of the Tehran subway system against three major criteria by designing a
comprehensive fuzzy inference system. The data for the present study were collected through documentary analysis, questionnaires, and
semi-structured interviews. The results revealed that the level of the subway system's influence on the pattern and process of the development
of activities varied with the location, physical environment, and identity of each station. Furthermore, the empirical findings indicated that a
subway line might weaken residential activities while attracting employment and service activities to the city center. Specifically, residential
estates have moved away from the city center to the suburbs, whereas employment and service activities have expanded from the existing
central business district (CBD). The results can be applied to suggest planning policies aimed at improving the effects of public transit on
property development and land use change in a developing country.
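A fuzzy inference step of the kind the paper builds at much larger scale might look like the following toy sketch: ramp membership functions grade a single criterion, two rules map the grades to growth levels, and a weighted average defuzzifies the result. The criterion, breakpoints, and rule outputs are invented for illustration.

```python
def membership_low(x):
    """Grade in [0, 1] of how 'low' the value is (breakpoint 0.6 assumed)."""
    return max(0.0, min(1.0, (0.6 - x) / 0.6))

def membership_high(x):
    """Grade of how 'high' the value is (ramp starting at 0.4 assumed)."""
    return max(0.0, min(1.0, (x - 0.4) / 0.6))

def development_score(accessibility):
    """Two toy rules: low accessibility -> weak growth (0.2),
    high accessibility -> strong growth (0.9).
    Defuzzification by weighted average of the rule outputs."""
    low = membership_low(accessibility)
    high = membership_high(accessibility)
    if low + high == 0.0:
        return 0.0  # no rule fires (cannot happen with these ramps)
    return (low * 0.2 + high * 0.9) / (low + high)

print(round(development_score(0.5), 2))   # both rules fire equally
```

Real systems compose many such rules over several criteria, but the grade-then-aggregate pattern is the same.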
Computing server power modeling in a data center: survey, taxonomy and performance evaluation
Data centers are large-scale, energy-hungry infrastructure serving the
increasing computational demands of a world becoming more connected through
smart cities. The emergence of advanced technologies such as cloud-based
services, internet of things (IoT) and big data analytics has augmented the
growth of global data centers, leading to high energy consumption. This upsurge
in the energy consumption of data centers not only incurs surging operational
and maintenance costs but also has an adverse effect on the
environment. Dynamic power management in a data center environment requires the
cognizance of the correlation between the system and hardware level performance
counters and the power consumption. Power consumption modeling exhibits this
correlation and is crucial in designing energy-efficient optimization
strategies based on resource utilization. Several works in power modeling are
proposed and used in the literature. However, these power models have been
evaluated using different benchmarking applications, power measurement
techniques and error calculation formula on different machines. In this work,
we present a taxonomy and evaluation of 24 software-based power models using a
unified environment, benchmarking applications, power measurement technique and
error formula, with the aim of achieving an objective comparison. We use
different server architectures to assess the impact of heterogeneity on the
models' comparison. The performance analysis of these models is elaborated in
the paper.
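As a minimal illustration of the kind of evaluation described above, the sketch below fits the common linear utilization-to-power model to sample readings and scores it with mean absolute percentage error (MAPE), one widely used error formula. The readings are invented; real evaluations use benchmark runs and physical power measurements.

```python
def fit_linear(utils, watts):
    """Closed-form least-squares fit of watts = a + b * util."""
    n = len(utils)
    mu, mw = sum(utils) / n, sum(watts) / n
    b = (sum((u - mu) * (w - mw) for u, w in zip(utils, watts))
         / sum((u - mu) ** 2 for u in utils))
    a = mw - b * mu
    return a, b

def mape(model, utils, watts):
    """Mean absolute percentage error of the model on the measurements."""
    a, b = model
    return 100.0 * sum(abs(w - (a + b * u)) / w
                       for u, w in zip(utils, watts)) / len(utils)

utils = [0.0, 0.25, 0.5, 0.75, 1.0]          # CPU utilization samples
watts = [95.0, 140.0, 180.0, 215.0, 245.0]   # hypothetical wattmeter readings
model = fit_linear(utils, watts)
print(round(mape(model, utils, watts), 2))
```

Holding the benchmark data, measurement technique, and error formula fixed across all 24 models is what makes the survey's comparison objective.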
Towards green computing in wireless sensor networks: controlled mobility-aided balanced tree approach
Virtualization technology has revolutionized the mobile network and is widely used in 5G innovation. It is a way of computing that allows dynamic leasing of server capabilities in the form of services like SaaS, PaaS, and IaaS. The proliferation of these services among users has led to the establishment of large-scale cloud data centers that consume an enormous amount of electrical energy, resulting in high metered bill costs and a large carbon footprint. In this paper, we propose three heuristic models, namely Median Migration Time (MeMT), Smallest Void Detection (SVD) and Maximum Fill (MF), that can reduce energy consumption with minimal variation in the SLAs negotiated. Specifically, we derive the cost of running a cloud data center, the cost optimization problem and the resource utilization optimization problem. A power consumption model is developed for the cloud computing environment, focusing on the linear relationship between power consumption and resource utilization. A virtual machine migration technique is considered, focusing on a synchronization-oriented shorter stop-and-copy phase. Complete operational steps are developed as algorithms for the energy-aware heuristic models, including MeMT, SVD and MF. To evaluate the proposed heuristic models, we conduct experiments using PlanetLab server data over ten days and synthetic workload data collected randomly from a similar number of VMs to those employed in the PlanetLab servers. Through the evaluation process, we deduce that the proposed approaches can significantly reduce energy consumption, total VM migrations, and host shutdowns while maintaining high system performance.
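The Median Migration Time (MeMT) heuristic named above might be sketched as follows: among the VMs on an overloaded host, select for migration the one with the median estimated migration time. Estimating migration time as memory size divided by available bandwidth is an assumption of this sketch, not a detail taken from the paper.

```python
def migration_time(vm_mem_mb, bandwidth_mbps):
    """Rough estimate: seconds to copy the VM's memory over the network."""
    return vm_mem_mb * 8.0 / bandwidth_mbps

def select_vm_memt(vms, bandwidth_mbps=1000.0):
    """vms: dict name -> memory footprint in MB.
    Returns the VM whose estimated migration time is the median."""
    ranked = sorted(vms, key=lambda v: migration_time(vms[v], bandwidth_mbps))
    return ranked[len(ranked) // 2]

vms = {"vm-a": 512, "vm-b": 2048, "vm-c": 1024}
print(select_vm_memt(vms))   # vm-c has the median memory footprint
```

Choosing the median rather than the smallest or largest VM balances migration cost against the capacity reclaimed on the overloaded host.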