Adaptive Dispatching of Tasks in the Cloud
The increasingly wide application of Cloud Computing enables the
consolidation of tens of thousands of applications in shared infrastructures.
Thus, meeting the quality of service requirements of so many diverse
applications in such shared resource environments has become a real challenge,
especially since the characteristics and workload of applications differ widely
and may change over time. This paper presents an experimental system that can
exploit a variety of online quality of service aware adaptive task allocation
schemes, and three such schemes are designed and compared: (i) a
measurement-driven algorithm that uses reinforcement learning; (ii) a
"sensible" allocation algorithm that assigns jobs to the sub-systems
observed to provide lower response times; and (iii) an algorithm that splits
the job arrival stream into sub-streams at rates computed from the hosts'
processing capabilities. All of these schemes are compared, via measurements,
among themselves and with a simple round-robin scheduler on two experimental
test-beds with homogeneous and heterogeneous hosts of different processing
capacities.

Comment: 10 pages, 9 figures
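The third scheme, splitting the arrival stream at rates derived from host capacities, can be sketched as a probabilistic dispatcher. This is a minimal illustration, not the paper's implementation; the function and parameter names are ours.

```python
import random

def make_rate_splitter(capacities, seed=None):
    """Dispatch each arriving job to a host chosen with probability
    proportional to that host's processing capacity, so faster hosts
    receive a proportionally larger sub-stream of the arrivals."""
    total = sum(capacities)
    weights = [c / total for c in capacities]
    rng = random.Random(seed)

    def dispatch(_job):
        # random.choices draws one host index using the capacity weights.
        return rng.choices(range(len(capacities)), weights=weights, k=1)[0]

    return dispatch

# Hypothetical heterogeneous test-bed: host 0 is twice as fast as host 1.
dispatch = make_rate_splitter([2.0, 1.0], seed=42)
counts = [0, 0]
for job in range(3000):
    counts[dispatch(job)] += 1
# counts[0] ends up close to twice counts[1].
```

A measurement-driven variant would instead update the weights online from observed response times, which is essentially what the "sensible" scheme does.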
Holistic Resource Management for Sustainable and Reliable Cloud Computing: An Innovative Solution to Global Challenge
Minimizing the energy consumption of servers within cloud computing systems is of utmost importance to cloud providers towards reducing operational costs and enhancing service sustainability by consolidating services onto fewer active servers. Moreover, providers must also provision high levels of availability and reliability, hence cloud services are frequently replicated across servers, which subsequently increases server energy consumption and resource overhead. These two objectives can present a potential conflict within cloud resource management decision making, which must balance between service consolidation and replication to minimize energy consumption whilst maximizing server availability and reliability, respectively. In this paper, we propose a cuckoo optimization-based energy-reliability aware resource scheduling technique (CRUZE) for holistic management of cloud computing resources including servers, networks, storage, and cooling systems. CRUZE clusters and executes heterogeneous workloads on provisioned cloud resources and enhances the energy efficiency and reduces the carbon footprint in datacenters without adversely affecting cloud service reliability. We evaluate the effectiveness of CRUZE against existing state-of-the-art solutions using the CloudSim toolkit. Results indicate that our proposed technique is capable of reducing energy consumption by 20.1% whilst improving reliability and CPU utilization by 17.1% and 15.7%, respectively, without affecting other Quality of Service parameters.
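The consolidation-versus-replication conflict the abstract describes can be made concrete with a toy scoring function. The formula, weighting, and figures below are our assumptions for illustration, not CRUZE's actual objective: adding a replica raises availability (one minus the probability that all replicas fail) but also adds that replica's energy draw.

```python
def placement_score(p_fail, energy_per_replica_kwh, replicas, alpha=0.7):
    """Toy energy-reliability trade-off: reward availability, penalize
    energy. alpha weights reliability against energy consumption."""
    availability = 1.0 - p_fail ** replicas   # all replicas must fail
    energy = energy_per_replica_kwh * replicas
    return alpha * availability - (1.0 - alpha) * energy

# With a 5% per-server failure probability and 0.1 kWh per replica,
# score candidate replica counts and pick the best trade-off.
best = max(range(1, 6), key=lambda r: placement_score(0.05, 0.1, r))
# best == 2: a second replica pays for its energy, a third does not.
```

A metaheuristic such as cuckoo search would explore full placements of many services over such a score rather than one replica count in isolation.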
Practical service placement approach for microservices architecture
Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. To reduce the complexity of service deployment, community micro-clouds have recently emerged as a promising enabler for the delivery of cloud services to community users. By putting services closer to consumers, micro-clouds pursue not only better service performance but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, the provisioning of the services is not so simple. Due to the large and irregular topology and the high software and hardware diversity of CNs, it requires …
Mage: Online Interference-Aware Scheduling in Multi-Scale Heterogeneous Systems
Heterogeneity has grown in popularity both at the core and server level as a
way to improve both performance and energy efficiency. However, despite these
benefits, scheduling applications in heterogeneous machines remains
challenging. Additionally, when these heterogeneous resources accommodate
multiple applications to increase utilization, resources are prone to
contention, destructive interference, and unpredictable performance. Existing
solutions examine heterogeneity either across or within a server, leading to
missed performance and efficiency opportunities. We present Mage, a practical
interference-aware runtime that optimizes performance and efficiency in systems
with intra- and inter-server heterogeneity. Mage leverages fast and online data
mining to quickly explore the space of application placements, and determine
the one that minimizes destructive interference between co-resident
applications. Mage continuously monitors the performance of active
applications, and, upon detecting QoS violations, it determines whether
alternative placements would prove more beneficial, taking into account any
overheads from migration. Across 350 application mixes on a heterogeneous CMP,
Mage improves performance by 38% and up to 2x compared to a greedy scheduler.
Across 160 mixes on a heterogeneous cluster, Mage improves performance by 30%
on average and up to 52% over the greedy scheduler, and by 11% over the
combination of Paragon [15] for inter- and intra-server heterogeneity.
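Mage's migration step, deciding whether an alternative placement is worth its overhead after a QoS violation, can be sketched as a simple amortization check. This is our reading of the abstract, not Mage's code; all names and parameters are hypothetical.

```python
def should_migrate(current_latency_ms, qos_target_ms,
                   predicted_latency_ms, migration_cost_ms, horizon_jobs):
    """Migrate only when (a) the current placement violates QoS and
    (b) the predicted per-job latency gain, amortized over the expected
    number of jobs at the new placement, outweighs the one-off
    migration overhead."""
    if current_latency_ms <= qos_target_ms:
        return False  # no QoS violation: keep the current placement
    gain_ms = (current_latency_ms - predicted_latency_ms) * horizon_jobs
    return gain_ms > migration_cost_ms

# Violation, and a 12 ms/job gain over 10 jobs (120 ms) beats a 100 ms
# migration cost, so the runtime would move the application.
decision = should_migrate(20.0, 12.0, 8.0, 100.0, 10)
```

The interesting part of Mage is estimating `predicted_latency_ms` online via fast data mining over co-location interference; the check above only covers the final cost-benefit gate.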