
    Comparing time series with machine learning-based prediction approaches for violation management in cloud SLAs

    © 2018. In cloud computing, service level agreements (SLAs) are legal agreements between a service provider and a consumer that list the obligations and commitments both parties must satisfy during the transaction. From a service provider's perspective, violating such a commitment leads to penalties in money and reputation, and therefore has to be managed effectively. In the literature, this problem is studied under the domain of cloud service management. One task required to manage cloud services after an SLA is formed is to predict the future Quality of Service (QoS) of cloud parameters to ascertain whether they will lead to violations. Various approaches in the literature perform this task with different prediction methods; however, none of them compares the accuracy of each. Such a comparison is important because the results of each prediction approach vary with the pattern of the input data, and choosing the wrong prediction algorithm could lead to service violations and penalties. In this paper, we test and report the accuracy of time series and machine learning-based prediction approaches. In each category, we test several techniques and rank them by their accuracy in predicting future QoS. Our analysis helps a cloud service provider choose an appropriate prediction approach (time series or machine learning-based) and, depending on the input data pattern, apply the best method to obtain accurate predictions and better manage SLAs to avoid violation penalties.
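
    As a rough illustration of the kind of comparison the paper runs (not its actual benchmark), the sketch below fits one time-series model (ARIMA) and one machine-learning regressor (SVR) on a synthetic QoS series and compares their forecast error; the series, model order, and lag window are all invented for the example.

```python
# Illustrative comparison of a time-series model (ARIMA) and an ML
# regressor (SVR) on a synthetic QoS series -- not the paper's benchmark.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
qos = 80 + 5 * np.sin(np.arange(200) / 10) + rng.normal(0, 1, 200)  # e.g. response time
train, test = qos[:150], qos[150:]

# Time-series approach: fit ARIMA on the raw series, forecast the test span.
arima_pred = ARIMA(train, order=(2, 0, 1)).fit().forecast(len(test))

# ML approach: train SVR on lagged windows, then roll forward one step at a time.
lag = 10
X = np.array([qos[i - lag:i] for i in range(lag, 150)])
svr = SVR().fit(X, train[lag:])
svr_pred, window = [], list(train[-lag:])
for _ in range(len(test)):
    p = svr.predict([window])[0]
    svr_pred.append(p)
    window = window[1:] + [p]

print("ARIMA RMSE:", mean_squared_error(test, arima_pred) ** 0.5)
print("SVR   RMSE:", mean_squared_error(test, svr_pred) ** 0.5)
```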

    Formulating and managing viable SLAs in cloud computing from a small to medium service provider's viewpoint: A state-of-the-art review

    © 2017 Elsevier Ltd. In today's competitive world, service providers need to be customer-focused and proactive in their marketing strategies to create consumer awareness of their services. Cloud computing provides an open and ubiquitous computing feature in which a large, random number of consumers can interact with providers and request services. In such an environment, there is a need for intelligent and efficient methods that increase confidence in the successful achievement of business requirements. One such method is the Service Level Agreement (SLA), which comprises service objectives, business terms, service relations, obligations and the possible actions to be taken in the case of SLA violation. Most of the emphasis in the literature has, until now, been on the formation of meaningful SLAs by service consumers, through which their requirements will be met. However, in an increasingly competitive market based on the cloud environment, service providers too need a framework that will form a viable SLA, predict possible SLA violations before they occur, and generate early warning alarms that flag a potential lack of resources. This is because when a provider and a consumer commit to an SLA, the service provider is bound to reserve the agreed amount of resources for the entire period of that agreement, whether the consumer uses them or not. It is therefore very important for cloud providers to accurately predict the likely resource usage for a particular consumer and to formulate an appropriate SLA before finalizing an agreement. This problem is especially important for a small to medium cloud service provider, whose limited resources must be utilized in the best possible way to generate maximum revenue. A viable SLA in cloud computing is one that intelligently helps the service provider determine the amount of resources to offer to a requesting consumer, and there are a number of studies on SLA management in the literature. The aim of this paper is two-fold. First, it presents a comprehensive overview of existing state-of-the-art SLA management approaches in cloud computing, and their features and shortcomings in creating viable SLAs from the service provider's viewpoint. From a thorough analysis, we observe that the lack of a viable SLA management framework renders a service provider unable to make wise decisions in forming an SLA, which could lead to service violations and violation penalties. To fill this gap, our second contribution is the proposal of the Optimized Personalized Viable SLA (OPV-SLA) framework, which assists a service provider to form a viable SLA and to begin managing SLA violations before an SLA is formed and executed. The framework also assists a service provider to make an optimal decision in service formation and to allocate the appropriate amount of marginal resources. We demonstrate the applicability of our framework in forming viable SLAs through experiments. From the evaluative results, we observe that our framework helps a service provider to form viable SLAs and later to manage them to effectively minimize possible service violations and penalties.
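
    A minimal sketch of the viable-SLA sizing intuition described above: estimate a consumer's likely peak usage from their history and reserve that plus a margin, rather than the full request. The 90th-percentile rule, the 10% margin, and the figures are illustrative assumptions, not the OPV-SLA algorithm.

```python
# Hypothetical viable-SLA sizing: reserve the consumer's predicted peak
# usage plus a margin instead of the full request. The percentile rule,
# margin, and numbers are illustrative, not the OPV-SLA algorithm.
import numpy as np

requested_vms = 100
usage_history = np.array([55, 60, 58, 70, 65, 62, 75, 68, 64, 71])  # VMs actually used

predicted_peak = np.percentile(usage_history, 90)  # likely peak demand
margin = 0.10                                      # marginal resources kept for bursts
offer = min(requested_vms, int(np.ceil(predicted_peak * (1 + margin))))

print(f"Consumer requested {requested_vms} VMs; the viable SLA offers {offer}.")
```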

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key properties of the quality of work at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider prior to taking any selection decisions. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) presence of the most up-to-date cloud resource verified capabilities, (2) reputational evidence measured by neighboring users and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, reacts efficiently to changes and adapts accordingly while enforcing the QoS of workflows.
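
    The three trust levels named above suggest a simple aggregation; the following sketch combines them with a weighted sum. The weights and per-provider scores are hypothetical, not the thesis's trust model.

```python
# Hypothetical aggregation of the three trust levels into one score per
# provider; weights and scores are invented, not the thesis's model.
def trust_score(capabilities: float, reputation: float, history: float,
                weights=(0.4, 0.3, 0.3)) -> float:
    """All inputs are normalized scores in [0, 1]."""
    w_cap, w_rep, w_his = weights
    return w_cap * capabilities + w_rep * reputation + w_his * history

providers = {
    "cloud_a": trust_score(0.9, 0.7, 0.8),  # verified capabilities, reputation, history
    "cloud_b": trust_score(0.6, 0.9, 0.5),
}
best = max(providers, key=providers.get)
print(f"Select {best} (trust score {providers[best]:.2f})")
```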

    Towards a normalized trustworthiness approach to enhance security in on-line assessment

    © 2014 IEEE. This paper proposes an approach to enhance information security in on-line assessment based on a normalized trustworthiness model. Among the drawbacks of collaborative e-Learning that are not yet completely solved, we investigate the information security requirements of on-line assessment (e-assessment). To the best of our knowledge, these security requirements cannot be met with technology alone; therefore, new models such as trustworthiness approaches can complement technological solutions and support e-assessment requirements for e-Learning. Although trustworthiness models can be defined and included as a service in e-assessment security frameworks, there are multiple factors related to trustworthiness that cannot be managed without normalization. Among these factors we discuss multiple trustworthiness sources, different data source formats, measurement techniques, and other trustworthiness factors such as rules, evolution and context. Hence, in this paper, we justify why trustworthiness normalization is needed and propose a normalized trustworthiness model by reviewing existing normalization procedures for trustworthiness values applied to e-assessment. Finally, we examine the potential of our normalized trustworthiness model in a real online collaborative learning course.
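
    To make the normalization argument concrete, a small sketch (with invented indicators and an equal-weight combination, not the paper's procedure) rescales heterogeneous trustworthiness sources to [0, 1] before combining them:

```python
# Invented indicators on different scales, min-max rescaled to [0, 1] and
# combined with equal weights -- an illustration, not the paper's procedure.
def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

forum_posts   = [3, 40, 12, 25]       # raw count per student
peer_rating   = [4.2, 3.1, 4.8, 2.5]  # 1-5 scale
quiz_accuracy = [88, 61, 95, 72]      # percentage

normalized = [min_max(s) for s in (forum_posts, peer_rating, quiz_accuracy)]
trust = [sum(col) / len(col) for col in zip(*normalized)]  # one value per student
print([round(t, 2) for t in trust])
```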

    Risk-based framework for SLA violation abatement from the cloud service provider's perspective

    © The British Computer Society 2018. The constant growth of the cloud market creates new challenges for cloud service providers. One such challenge is the need to avoid possible service level agreement (SLA) violations and their consequences through good SLA management. Researchers have proposed various frameworks and have made significant advances in managing SLAs from the perspective of both cloud users and providers. However, none of these approaches guides the service provider through the necessary steps for SLA violation abatement: predicting possible SLA violations, the process to follow when the system identifies a violation threat, and the recommended action to take to avoid the violation. In this paper, we approach the process of SLA violation detection and abatement from a risk management perspective. We propose a Risk Management-based Framework for SLA violation abatement (RMF-SLA) that operates after the formation of an SLA and comprises SLA monitoring, violation prediction and decision recommendation. Through experiments, we validate and demonstrate the suitability of the proposed framework for assisting cloud providers to minimize possible service violations and penalties.
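
    A toy version of the monitor-predict-recommend loop might look like the following; the linear-trend forecast and the alarm thresholds are stand-ins for RMF-SLA's actual prediction and decision components.

```python
# Toy monitor -> predict -> recommend loop. The linear-trend forecast and
# the thresholds below are illustrative stand-ins, not RMF-SLA's components.
import numpy as np

sla_threshold = 99.5                                  # promised availability (%)
observed = np.array([99.9, 99.8, 99.8, 99.7, 99.6])   # monitored QoS samples

# Violation prediction: extrapolate a linear trend one step ahead.
t = np.arange(len(observed))
slope, intercept = np.polyfit(t, observed, 1)
forecast = slope * len(observed) + intercept

# Decision recommendation based on the predicted margin to the SLA threshold.
margin = forecast - sla_threshold
if margin < 0:
    action = "scale up resources now (predicted violation)"
elif margin < 0.1:
    action = "raise an early-warning alarm"
else:
    action = "no action needed"
print(f"forecast = {forecast:.2f}%, recommendation: {action}")
```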

    Green demand aware fog computing: a prediction-based dynamic resource provisioning approach

    Fog computing could drive the next paradigm shift by extending cloud services to the edge of the network, bringing resources closer to the end-user. With its close proximity to end-users and its distributed nature, fog computing can significantly reduce latency. With the appearance of ever more latency-stringent applications, in the near future we will witness an unprecedented demand for fog computing. Undoubtedly, this will increase the energy footprint of the network edge and access segments. To reduce energy consumption in fog computing without compromising performance, in this paper we propose the Green-Demand-Aware Fog Computing (GDAFC) solution. Our solution uses a prediction technique to identify the working fog nodes (nodes that serve requests as they arrive), standby fog nodes (nodes that take over when the computational capacity of the working fog nodes is no longer sufficient) and idle fog nodes in a fog computing infrastructure. Additionally, it assigns an appropriate sleep interval to the fog nodes, taking into account the delay requirements of the applications. Results obtained from the mathematical formulation show that our solution can save up to 65% of energy without violating the applications' delay requirements. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
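
    The node classification and sleep-interval assignment can be pictured with a back-of-the-envelope sketch; the sizing rule, the 20% standby buffer and the sleep formula are illustrative assumptions rather than GDAFC's formulation.

```python
# Back-of-the-envelope GDAFC-style node classification; the sizing rule,
# standby buffer, and sleep formula are illustrative assumptions.
import math

node_capacity = 100      # requests/s one fog node can serve
predicted_demand = 640   # requests/s from the prediction step
total_nodes = 12
delay_budget_ms = 50     # latency requirement of the application
wakeup_cost_ms = 10      # time a sleeping node needs to resume service

working = math.ceil(predicted_demand / node_capacity)  # serve arriving requests
standby = math.ceil(0.2 * working)                     # buffer for forecast error
idle = total_nodes - working - standby                 # candidates for sleeping

# A sleeping node must wake before the delay budget would be violated.
sleep_interval_ms = delay_budget_ms - wakeup_cost_ms

print(f"working={working}, standby={standby}, idle={idle}, "
      f"sleep interval={sleep_interval_ms} ms")
```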

    Modelling Indoor Air Quality Using Sensor Data and Machine Learning Methods

    Ubiquitous sensing is transforming our societies and how we interact with our surrounding environment; sensors provide large streams of data, while machine learning techniques and artificial intelligence provide the tools needed to generate insights from that data. These developments have taken place in almost every industry sector, with topics such as smart cities and smart buildings becoming key issues as societies seek more sustainable ways of living. Smart buildings are the main context of this thesis. These are buildings equipped with various sensors that collect data from the surrounding environment, allowing the building to adapt itself and increase its operational efficiency. Previously, most efforts in realizing smart buildings have focused on energy management and automation, where the goal is to reduce the costs associated with heating, ventilation, and air conditioning. A less studied area involves smart buildings and their indoor environments, especially the sub-spaces within a building. Developments in low-cost sensor technologies have created new opportunities to sense indoor environments in more granular ways, making it possible to model finer attributes of the spaces within a building. This thesis focuses on modeling indoor environment data obtained from a multipurpose building that serves primarily as a school. The aim is to assess the quality of the indoor environment against regulatory guidelines and to explore suitable predictive models for thermal comfort and indoor air quality. Additionally, design science methodology is applied in the creation of a proof-of-concept software system. This system demonstrates the use of Web APIs to provide sensor data to clients, which may use the data to render analytics and other insights for a building's stakeholders. Overall, the main technical contributions of this thesis are twofold: (i) a potential web-application design for indoor air quality IoT data and (ii) an exposition of modeling indoor air quality data based on a variety of sensors and multiple spaces within the same building. Results indicate that a software-based tool supporting the monitoring of a building's indoor environment would be beneficial in maintaining the correct levels of various indoor parameters. Further, modeling data from different spaces within the building shows a need for heterogeneous models to predict variables in these spaces. This implies that the parameters used to predict thermal comfort and air quality differ across spaces, especially where the spaces differ in size, indoor climate control settings and other attributes such as occupancy control.
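
    The heterogeneous-models finding could be realized, for instance, by fitting one regressor per space; the sketch below does this with random-forest models on synthetic sensor data (the feature set, rooms and data are placeholders for the thesis's real streams).

```python
# One regressor per space, trained on synthetic sensor data; features,
# rooms, and data are placeholders for the thesis's real streams.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
models = {}
for room in ("classroom", "gym", "office"):
    # Features: temperature (C), humidity (%), occupancy; target: CO2 (ppm).
    X = rng.normal([21, 40, 15], [2, 8, 10], size=(500, 3))
    y = 400 + 25 * X[:, 2] + 3 * X[:, 1] + rng.normal(0, 30, 500)
    models[room] = RandomForestRegressor(n_estimators=50).fit(X, y)

# Predicted CO2 in the gym at 22.5 C, 45% humidity, 30 occupants.
print(models["gym"].predict([[22.5, 45.0, 30.0]]))
```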

    Evaluation of Neuro-Evolution Algorithms for Tactic Volatility Aware Processes

    Our society increasingly relies on computer systems to perform a variety of tasks. From a self-driving car to a satellite in space relaying data from Mars rovers, we need these systems to perform optimally and without failure. One such point of failure is the tactic volatility of an adaptation tactic. Adaptation tactics are defined workflows that allow systems to navigate their environment. Tactic volatility is the variance in the behavior of a tactic's attributes, such as cost, latency, or the combination of the two. Current systems consider these tactic attributes to be static. Studies have shown that not accounting for tactic volatility can adversely affect a system's ability to operate effectively and resiliently. To support self-adaptive systems and address their limitations, this paper proposes a Tactic Volatility Aware solution utilizing an eRNN (TVA-E) that addresses the limitations of current self-adaptive systems. For this research, we used real-world data that has been made available for use by researchers and academics. This data contains real-world volatility and helps us demonstrate the positive impact of TVA-E when used in self-adaptive systems. We also employ uncertainty reduction tactics and show how they can assist in accounting for tactic volatility. This work serves as an evaluation and comparison of different machine learning methods for predicting and accounting for tactic volatility. We study four predictive mechanisms in this paper: Autoregressive Integrated Moving Average (ARIMA), Evolving Recurrent Neural Network (eRNN), Multi-Layer Perceptron (MLP), and Support Vector Regression (SVR). These methods are studied with our TVA-E process, and we analyze how they can enhance a self-adaptive system's performance when it accounts for tactic volatility.
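
    As a flavor of such a comparison (on synthetic data, with only two of the four methods), the sketch below trains an MLP and an SVR on lagged observations of a volatile latency series and reports their one-step prediction error.

```python
# Two of the paper's four predictors (MLP, SVR) forecasting a volatile
# latency series from lagged observations; the data here is synthetic,
# unlike the paper's real-world traces.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
latency = 100 + np.cumsum(rng.normal(0, 3, 300))  # volatile tactic attribute

lag = 5
X = np.array([latency[i - lag:i] for i in range(lag, len(latency))])
y = latency[lag:]
X_tr, X_te, y_tr, y_te = X[:250], X[250:], y[:250], y[250:]

for name, model in [("MLP", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)),
                    ("SVR", SVR())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name} MAE: {mean_absolute_error(y_te, pred):.2f}")
```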

    Successful Demand Forecasting Modeling Strategies for Increasing Small Retail Medical Supply Profitability

    The lack of effective demand forecasting strategies can result in imprecise inventory replenishment, inventory overstock, and unused inventory. The purpose of this single case study was to explore successful demand forecasting strategies that leaders of a small, retail, medical supply business used to increase profitability. The conceptual framework for this study was Winters's forecasting demand approach. Data were collected from semistructured, face-to-face interviews with 8 business leaders of a private, small, retail, medical supply business in the southeastern United States and from a review of company artifacts. Yin's 5-step qualitative data analysis process of compiling, disassembling, reassembling, interpreting, and concluding was applied. Key themes that emerged from the data analysis included understanding sales trends, inventory management with pricing, and seasonality. The findings of this study might contribute to positive social change by encouraging leaders of medical supply businesses to apply demand forecasting strategies that may lead to benefits for medically underserved citizens in need of accessible and abundant medical supplies.
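
    Winters's approach is triple exponential smoothing (level, trend and seasonality). A minimal sketch with invented monthly demand figures, assuming an additive trend and season, might look like this:

```python
# Winters-style (Holt-Winters) forecast on invented monthly demand with an
# additive trend and a 12-month additive season.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(3)
season = np.tile([90, 85, 100, 110, 120, 130, 125, 120, 110, 105, 140, 160], 3)
demand = season + np.arange(36) * 1.5 + rng.normal(0, 5, 36)  # 3 years of units sold

model = ExponentialSmoothing(demand, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(6))  # expected demand for the next six months
```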

    A framework for QoS driven user-side cloud service management

    This thesis presents a comprehensive framework that assists the cloud service user in making cloud service management decisions, such as service selection and migration. The proposed framework utilizes the QoS history of the available services for QoS forecasting and multi-criteria decision making. It integrates all the necessary processes, such as QoS monitoring, forecasting, service comparison and ranking, to recommend the optimal decision to the user.
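
    The decision step could, for example, rank candidate services by a weighted sum of forecast QoS criteria; the simple additive weighting below is a stand-in for whichever multi-criteria method the thesis adopts, and the services, weights and figures are invented.

```python
# Simple additive weighting over forecast QoS criteria -- a stand-in for
# whichever MCDM method the thesis adopts; services and weights are invented.
weights = [0.5, 0.3, 0.2]        # availability, response time, cost
benefit = [True, False, False]   # higher availability is better; lower time/cost is better
forecasts = {
    "service_a": [0.999, 120, 0.10],   # availability, response time (ms), $/hour
    "service_b": [0.995, 80, 0.07],
}

def normalize(values, higher_is_better):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) if higher_is_better else (hi - v) / (hi - lo)
            for v in values]

cols = list(zip(*forecasts.values()))                   # one column per criterion
norm = [normalize(c, b) for c, b in zip(cols, benefit)]
scores = {name: sum(w * norm[j][i] for j, w in enumerate(weights))
          for i, name in enumerate(forecasts)}
print(max(scores, key=scores.get), scores)
```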