Workload shaping for QoS and power efficiency of storage systems
The growing popularity of hosted storage services and shared storage infrastructure in data centers is driving recent interest in resource management and QoS in storage systems. The bursty nature of storage workloads raises significant performance and provisioning challenges, leading to increased resource requirements, management costs, and energy consumption. We present a novel dynamic workload shaping framework to handle bursty server workloads, in which the arrival stream is dynamically decomposed to isolate its bursts and then rescheduled to exploit available slack. An optimal decomposition algorithm, RTT, and a recombination algorithm, Miser, make up the scheduling framework. We evaluate this framework using several real-world storage workload traces. The results show that workload shaping: (i) dramatically reduces server capacity requirements and power consumption while minimally affecting QoS guarantees, (ii) provides better response-time distributions than traditional non-decomposed scheduling methods, and (iii) can provide more accurate capacity estimates for multiplexing several clients on a shared server.
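The decompose-then-reschedule idea above can be illustrated with a toy sketch: split a bursty arrival stream into a rate-limited primary stream and a deferred residue, then replay the residue in later idle (slack) slots. The function name and the fixed per-slot capacity are illustrative assumptions, not the paper's actual RTT/Miser algorithms.

```python
def shape_workload(arrivals, capacity):
    """arrivals[t] = requests arriving in slot t; capacity = max served per slot."""
    primary, deferred = [], 0
    for a in arrivals:
        total = a + deferred            # new arrivals plus carried-over burst
        served = min(total, capacity)   # serve at most `capacity` this slot
        deferred = total - served       # excess is rescheduled into later slack
        primary.append(served)
    return primary, deferred

served, leftover = shape_workload([5, 0, 9, 1, 0, 0], capacity=4)
# The peak of 9 is smoothed to at most 4 per slot; the excess spills
# into the idle slots that follow, so leftover work ends at 0.
```

Capping the served rate is what lets a server be provisioned for the shaped peak rather than the raw burst peak, which is the capacity/power saving the abstract reports.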
Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
With the advent of cloud computing, organizations are nowadays able to react
rapidly to changing demands for computational resources. Not only individual
applications can be hosted on virtual cloud infrastructures, but also complete
business processes. This allows the realization of so-called elastic processes,
i.e., processes which are carried out using elastic cloud resources. Despite
the manifold benefits of elastic processes, there is still a lack of solutions
supporting them.
In this paper, we identify the state of the art of elastic Business Process
Management with a focus on infrastructural challenges. We conceptualize an
architecture for an elastic Business Process Management System and discuss
existing work on scheduling, resource allocation, monitoring, decentralized
coordination, and state management for elastic processes. Furthermore, we
present two representative elastic Business Process Management Systems which
are intended to counter these challenges. Based on our findings, we identify
open issues and outline possible research directions for the realization of
elastic processes and elastic Business Process Management.
Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and
P. Hoenisch (2015). Elastic Business Process Management: State of the Art and
Open Challenges for BPM in the Cloud. Future Generation Computer Systems,
Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
Online QoS Modeling in the Cloud: A Hybrid and Adaptive Multi-Learners Approach
Given the on-demand nature of cloud computing, managing cloud-based services
requires accurate modeling for the correlation between their Quality of Service
(QoS) and cloud configurations/resources. The resulting models need to cope
with the dynamic fluctuation of QoS sensitivity and interference. However,
existing QoS modeling approaches in the cloud are limited in terms of both
accuracy and applicability due to their static and semi-dynamic nature. In
this paper, we present a fully dynamic multi-learners approach for automated
and online QoS modeling in the cloud. We contribute a hybrid learners
solution, which improves accuracy while keeping model complexity adequate.
To determine the inputs of the QoS model at runtime, we partition the input
space into two sub-spaces, each of which applies a different symmetric
uncertainty based selection technique, and we then combine the sub-spaces'
results. The learners are also adaptive; they simultaneously allow several
machine learning algorithms to model the QoS function and dynamically select
the best model for prediction on the fly. We experimentally evaluate our
models using the RUBiS benchmark and a realistic FIFA 98 workload. The
results show that our multi-learners approach is more accurate and effective
than other state-of-the-art approaches.
Comment: In the proceedings of the 7th IEEE/ACM International Conference on
Utility and Cloud Computing (UCC), London, UK, 201
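The "dynamically select the best model on the fly" step can be sketched minimally: fit several learners to the same QoS history, score each on a held-out tail, and use the winner for the next prediction. The three learners here (mean, last-value, linear trend) are illustrative stand-ins for the paper's machine-learning algorithms.

```python
def fit_mean(hist):  m = sum(hist) / len(hist); return lambda t: m
def fit_last(hist):  v = hist[-1];              return lambda t: v
def fit_linear(hist):
    # Ordinary least-squares line over time indices 0..n-1.
    n = len(hist)
    xm, ym = (n - 1) / 2, sum(hist) / n
    num = sum((x - xm) * (y - ym) for x, y in enumerate(hist))
    den = sum((x - xm) ** 2 for x in range(n)) or 1
    b = num / den
    a = ym - b * xm
    return lambda t: a + b * t

def best_learner(history, holdout=3):
    """Fit every learner on the head, pick the one with the lowest
    squared error on the held-out tail."""
    train, valid = history[:-holdout], history[-holdout:]
    models = [f(train) for f in (fit_mean, fit_last, fit_linear)]
    def err(m):
        return sum((m(len(train) + i) - v) ** 2 for i, v in enumerate(valid))
    return min(models, key=err)

qos = [100, 102, 104, 106, 108, 110]   # steadily rising response time
model = best_learner(qos)
print(round(model(len(qos)), 1))       # linear learner wins here; prints 112.0
```

In the paper's setting the candidate set would contain real ML algorithms and the selection would rerun online as new QoS observations arrive; the selection-by-validation-error loop is the part this sketch shows.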
Predictive Analysis for Cloud Infrastructure Metrics
In a cloud computing environment, enterprises have the flexibility to request resources according to their application demands. This elastic feature of cloud computing makes it an attractive option for enterprises to host their applications on the cloud. Cloud providers usually exploit this elasticity by auto-scaling the application resources for quality assurance. However, there is a setup-time delay that may take minutes between the demand for a new resource and it being prepared for utilization. This causes the static resource provisioning techniques, which request allocation of a new resource only when the application breaches a specific threshold, to be slow and inefficient for the resource allocation task. To overcome this limitation, it is important to foresee the upcoming resource demand for an application before it becomes overloaded and trigger resource allocation in advance to allow setup time for the newly allocated resource. Machine learning techniques like time-series forecasting can be leveraged to provide promising results for dynamic resource allocation.
In this research project, I developed a predictive analysis model for dynamic resource provisioning for cloud infrastructure. The solution demonstrates that it can predict the upcoming workload for various cloud infrastructure metrics up to 4 hours in the future, allowing allocation of virtual machines in advance.
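The trigger logic this abstract describes can be illustrated with a hedged sketch: forecast the metric a few steps ahead and request a VM early enough to absorb the setup delay. The linear extrapolation over a sliding window, and the specific threshold and delay values, are assumptions for illustration; the project's actual forecasting model is not specified here.

```python
def forecast(series, window, horizon):
    """Linear extrapolation of the last `window` points, `horizon` steps ahead."""
    w = series[-window:]
    n = len(w)
    xm, ym = (n - 1) / 2, sum(w) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(w))
    den = sum((i - xm) ** 2 for i in range(n)) or 1
    slope = num / den
    return w[-1] + slope * horizon

def should_scale(cpu_history, threshold=80.0, setup_delay=4):
    """Request a VM now if CPU is predicted to breach the threshold
    before a newly requested VM could become ready (setup_delay steps)."""
    return forecast(cpu_history, window=6, horizon=setup_delay) >= threshold

cpu = [50, 55, 61, 66, 70, 75]   # % utilisation, rising ~5 points per step
print(should_scale(cpu))          # True: ~95% predicted within the setup delay
```

The point of predicting `setup_delay` steps ahead, rather than reacting at the threshold, is exactly the setup-time limitation of static provisioning described above.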
Provisioning of Edge Computing Resources for Heterogeneous IoT Workload
With the evolution of cellular networks, the number of smart connected devices has witnessed a tremendous increase, forecast by Cisco to reach billions by 2020, constituting what is known today as the Internet of Things (IoT). With this explosion of smart devices, novel services have evolved and invaded almost every aspect of our lives, from e-health to smart homes and smart factories. Such services come with stringent QoS requirements. While the current network infrastructure (4G) provides an acceptable QoE to end users, it will be rendered obsolete when considering the critical QoS requirements of these new services. Hence, to deliver the seamless experience these services promise, MEC has emerged as a promising technology to offer cloud capabilities at the edge of the network and thereby meet their low-latency requirements. Another QoS parameter that needs to be addressed is the ultra-high reliability demanded by IoT services. Therefore, 5G has evolved as a promising technology supporting ultra-Reliable Low-Latency Communication (uRLLC) among other service categories. While integrating uRLLC with MEC would help realize such services, it raises challenges for the network operator. Thus, in this thesis, we address some of these challenges. Specifically, in the second chapter, we address the problem of MEC Resource Provisioning and Workload Assignment (RPWA) in an IoT environment with heterogeneous workloads demanding services with stringent latency requirements. We formulate the problem as an MIP with the objective of minimizing the resource deployment cost. Due to the complexity of the problem, we develop a decomposition approach (RPWA-D) to solve it and study, through different simulations, the performance of our approach.
In chapter 3, we consider both the ultra-high reliability and the low-latency requirements of different IoT services and solve the Workload Assignment (WA) problem in an IoT environment. We formulate the problem as an MIP with the objective of maximizing the workload admitted to the network. After showing the complexity of the problem and the non-scalability of WA-MIP, we propose two different approaches, WA-D and WA-Tabu. The results show that WA-Tabu is the most efficient and scalable.
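The workload-assignment problem described above can be sketched with a toy greedy heuristic: admit as much workload as possible subject to per-node capacity and each workload's latency bound. This heuristic is only a stand-in for the thesis's MIP and Tabu-search approaches (WA-MIP, WA-D, WA-Tabu), and the node and workload data are invented for illustration.

```python
# edge nodes: (name, capacity, access latency in ms) -- hypothetical values
nodes = [("edge-1", 10, 2), ("edge-2", 8, 5), ("cloud", 50, 40)]
# workloads: (name, demand, max tolerable latency in ms) -- hypothetical values
workloads = [("telemetry", 6, 50), ("ar-stream", 5, 10), ("control", 4, 5)]

def assign(nodes, workloads):
    free = {name: cap for name, cap, _ in nodes}
    placement, admitted = {}, 0
    # place the tightest-latency workloads first
    for wname, demand, max_lat in sorted(workloads, key=lambda w: w[2]):
        feasible = [n for n, cap, lat in nodes
                    if lat <= max_lat and free[n] >= demand]
        if feasible:
            # prefer the feasible node with the most spare capacity
            n = max(feasible, key=lambda n: free[n])
            free[n] -= demand
            placement[wname] = n
            admitted += demand
    return placement, admitted

placement, admitted = assign(nodes, workloads)
# The latency-critical workloads land on edge nodes; the tolerant
# telemetry workload falls back to the cloud.
```

A real MIP solution would search all assignments jointly instead of greedily, which is why the thesis needs decomposition and Tabu search to stay scalable.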
A smart resource management mechanism with trust access control for cloud computing environment
The core of the computer business now offers subscription-based, on-demand
services with the help of cloud computing. We can now share resources among
multiple users by using virtualization, which creates a virtual instance of a
computer system running in an abstracted hardware layer. Cloud computing
provides seemingly infinite computing capability through its massive
datacenters, in contrast to early distributed computing models, and has been
incredibly popular in recent years because of its continually growing
infrastructure, user base, and hosted data volume. This article suggests a
conceptual framework for a workload management paradigm in cloud settings
that is both safe and performance-efficient. In this paradigm, a resource
management unit performs energy- and performance-efficient virtual machine
allocation, assures the safe execution of users' applications, and protects
against data breaches brought on by unauthorised virtual machine access in
real time. A secure virtual machine management unit controls the resource
management unit and is designed to produce data on unlawful access or
intercommunication. Additionally, a workload analyzer unit works
simultaneously to estimate resource consumption data, helping the resource
management unit be more effective during virtual machine allocation. The
suggested model includes data encryption and decryption prior to transfer
and the use of a trust access mechanism to prevent unauthorised access to
virtual machines, which creates extra computational cost overhead.
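The trust-gated access the abstract describes can be sketched as follows: a request reaches a virtual machine only if the caller's trust score clears a threshold, and the payload is encrypted before transfer. The XOR "cipher" and the trust table are toy assumptions standing in for the paper's mechanism and for a real cipher such as AES.

```python
TRUST_THRESHOLD = 0.7
trust_scores = {"alice": 0.9, "mallory": 0.2}   # maintained by the framework

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # NOT secure; a placeholder for real encryption, whose cost is the
    # computational overhead the abstract mentions.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def access_vm(user: str, payload: bytes, key: bytes = b"k3y") -> bytes:
    # Trust access control: refuse callers below the threshold.
    if trust_scores.get(user, 0.0) < TRUST_THRESHOLD:
        raise PermissionError(f"{user}: insufficient trust for VM access")
    return xor_encrypt(payload, key)   # ciphertext sent on to the VM

ct = access_vm("alice", b"job-spec")
assert xor_encrypt(ct, b"k3y") == b"job-spec"   # XOR decryption round-trips
```

The same gate, with a real trust metric and a real cipher, is where the framework trades extra computation for protection against unauthorised VM access.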