
    RHAS: robust hybrid auto-scaling for web applications in cloud computing

    Get PDF

    A Reliable and Cost-Efficient Auto-Scaling System for Web Applications Using Heterogeneous Spot Instances

    Full text link
    Cloud providers sell their idle capacity on markets through an auction-like mechanism to increase their return on investment. The instances sold this way are called spot instances. Although spot instances are usually 90% cheaper than on-demand instances, the provider can terminate them whenever their bidding prices fall below the market price; as a result, they are largely used to provision only fault-tolerant applications. In this paper, we explore how to utilize spot instances to provision web applications, which are usually considered availability-critical. The idea is to take advantage of price differences among various types of spot instances to achieve both high availability and significant cost savings. We first propose a fault-tolerant model for web applications provisioned by spot instances. Based on it, we devise novel auto-scaling policies for hourly billed cloud markets. We implemented the proposed model and policies both on a simulation testbed, for repeatable validation, and on Amazon EC2. Experiments on the simulation testbed and on the real platform against benchmarks show that the proposed approach can greatly reduce resource cost while still achieving satisfactory Quality of Service (QoS) in terms of response time and availability.
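    A minimal sketch of the core idea above, assuming hypothetical instance types, prices, and capacity figures (none of which come from the paper): spread the web tier's capacity across several heterogeneous spot pools so that a price spike terminating any single pool cannot remove more than a bounded fraction of total capacity.

```python
# Hypothetical sketch of diversifying capacity across heterogeneous spot pools.
# Not the paper's implementation; type names, prices, and the loss-fraction
# target are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SpotPool:
    instance_type: str
    spot_price: float      # $/hour, assumed current market price
    capacity_per_vm: int   # requests/s one VM of this type can serve


def plan_fleet(pools, required_capacity, max_loss_fraction=0.5):
    """Choose per-pool VM counts so that losing any single pool still
    leaves (1 - max_loss_fraction) of the required capacity."""
    plan = {}
    remaining = required_capacity
    cap_per_pool = required_capacity * max_loss_fraction
    # Fill the cheapest pools (per unit of capacity) first, but cap each pool.
    for pool in sorted(pools, key=lambda p: p.spot_price / p.capacity_per_vm):
        take = min(remaining, cap_per_pool)
        vms = -(-int(take) // pool.capacity_per_vm)  # ceiling division
        if vms > 0:
            plan[pool.instance_type] = vms
            remaining -= vms * pool.capacity_per_vm
        if remaining <= 0:
            break
    return plan


if __name__ == "__main__":
    pools = [
        SpotPool("m5.large", 0.035, 100),
        SpotPool("c5.large", 0.032, 120),
        SpotPool("r5.large", 0.040, 90),
    ]
    print(plan_fleet(pools, required_capacity=500))
```

    Sorting by price per unit of capacity keeps the fleet cheap, while the per-pool cap is what preserves availability when one spot market spikes.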

    Data-Driven Methods for Data Center Operations Support

    Get PDF
    During the last decade, cloud technologies have been evolving at an impressive pace, such that we now live in a cloud-native era where developers can leverage an unprecedented landscape of (possibly managed) services for orchestration, compute, storage, load-balancing, monitoring, etc. The possibility of on-demand access to a diverse set of configurable virtualized resources allows for building more elastic, flexible and highly resilient distributed applications. Behind the scenes, cloud providers sustain the heavy burden of maintaining the underlying infrastructures, consisting of large-scale distributed systems, partitioned and replicated among many geographically dispersed data centers to guarantee scalability, robustness to failures, high availability and low latency. The larger the scale, the more cloud providers have to deal with complex interactions among the various components, such that monitoring, diagnosing and troubleshooting issues become incredibly daunting tasks. To keep up with these challenges, development and operations practices have undergone significant transformations, especially in terms of improving the automation that makes releasing new software, and responding to unforeseen issues, faster and sustainable at scale. The resulting paradigm is nowadays referred to as DevOps. However, while such automation can be very sophisticated, traditional DevOps practices fundamentally rely on reactive mechanisms that typically require careful manual tuning and supervision from human experts. To minimize the risk of outages, and the related costs, it is crucial to provide DevOps teams with suitable tools that enable a proactive approach to data center operations. This work presents a comprehensive data-driven framework to address the most relevant problems that can be experienced in large-scale distributed cloud infrastructures. These environments are characterized by a very large availability of diverse data, collected at each level of the stack, such as: time series (e.g., physical host measurements, virtual machine or container metrics, networking component logs, application KPIs); graphs (e.g., network topologies, fault graphs reporting dependencies among hardware and software components, performance-issue propagation networks); and text (e.g., source code, system logs, version control system history, code review feedback). Such data are also typically updated with relatively high frequency and subject to distribution drifts caused by continuous configuration changes to the underlying infrastructure. In such a highly dynamic scenario, traditional model-driven approaches alone may be inadequate at capturing the complexity of the interactions among system components. DevOps teams would certainly benefit from having robust data-driven methods to support their decisions based on historical information. For instance, effective anomaly detection capabilities may also help in conducting more precise and efficient root-cause analysis. Likewise, leveraging accurate forecasting and intelligent control strategies would improve resource management. Given their ability to deal with high-dimensional, complex data, Deep Learning-based methods are the most straightforward option for the realization of the aforementioned support tools. On the other hand, because of their complexity, such models often require substantial processing power, and suitable hardware, to be operated effectively at scale. These aspects must be carefully addressed when applying such methods in the context of data center operations: automated operations approaches must be dependable and cost-efficient, so as not to degrade the very services they are built to improve.
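    One of many building blocks such a framework might include is residual-based anomaly detection over host metrics. The sketch below is illustrative only, assuming a rolling-window z-score detector with made-up window and threshold values rather than anything described in the work itself.

```python
# Illustrative sketch of a rolling-statistics anomaly detector over a host
# metric time series; window size and threshold are assumptions, not values
# from the work above.

import numpy as np


def rolling_zscore_anomalies(series, window=60, threshold=4.0):
    """Flag points whose deviation from a rolling mean exceeds
    `threshold` rolling standard deviations."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(series.shape, dtype=bool)
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[t] - mu) > threshold * sigma:
            flags[t] = True
    return flags


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cpu = 40 + 5 * rng.standard_normal(1000)   # synthetic CPU utilisation (%)
    cpu[700] = 95                              # injected spike
    print(np.flatnonzero(rolling_zscore_anomalies(cpu)))
```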

    Mobile Crowd Sensing in Edge Computing Environment

    Get PDF
    Mobile crowdsensing (MCS) applications leverage user data to derive useful information through data-driven evaluation of innovative user contexts and gathering of information at a high data rate. Such access to context-rich data can potentially enable computationally intensive crowd-sourcing applications such as tracking a missing person or capturing a highlight video of an event. Using snippets and pictures captured from multiple mobile phone cameras with specific contexts can improve the data acquired in such applications. These MCS applications require efficient processing and analysis to generate results in real time. A human user, a mobile device and their interactions cause changes in context on the mobile device, affecting the quality of the contextual data that is gathered. Using MCS data in real-time mobile applications is challenging due to the complex inter-relationship between: a) availability of context, since context is available on the mobile phones and not in the cloud; b) cost of data transfer to remote cloud servers, both in terms of communication time and energy; and c) availability of local computational resources on the mobile phone, since computation may lead to rapid battery drain or increased response time. The resource-constrained mobile devices therefore need to offload some of their computation. This thesis proposes ContextAiDe, an end-to-end architecture for data-driven distributed applications aware of human-mobile interactions using edge computing. Edge processing supports real-time applications by reducing communication costs. The goal is to optimize the quality and the cost of acquiring the data using a) modeling and prediction of mobile user contexts, b) efficient strategies for scheduling application tasks on heterogeneous devices, including multi-core devices such as GPUs, and c) power-aware scheduling of virtual machine (VM) applications in cloud infrastructure, e.g., elastic VMs. The ContextAiDe middleware is integrated into the mobile application via an Android API. The evaluation consists of overhead and cost analysis in the scenario of a "perpetrator tracking" application running on the cloud, fog servers, and mobile devices. LifeMap data sets containing actual sensor data traces from mobile devices are used to simulate the application run for large-scale evaluation. (Doctoral Dissertation, Electrical Engineering)
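    The central trade-off such an architecture navigates, whether to process on the phone or offload to an edge/cloud server, can be illustrated with a simple cost comparison. The function below is a hypothetical sketch: the constant names, energy figures, and weighting are assumptions for illustration, not parameters of the thesis.

```python
# Hypothetical sketch of an offload-vs-local decision: offload a task only
# when expected transfer plus remote execution time (and energy) beats
# running it on the phone. All constants are illustrative assumptions.


def should_offload(task_bytes, local_flops, uplink_bps, edge_speedup,
                   local_flops_per_s=2e9, energy_per_byte_j=5e-7,
                   energy_per_flop_j=1e-9, energy_weight=0.5):
    local_time = local_flops / local_flops_per_s
    local_energy = local_flops * energy_per_flop_j

    transfer_time = task_bytes * 8 / uplink_bps
    remote_time = local_time / edge_speedup
    offload_time = transfer_time + remote_time
    offload_energy = task_bytes * energy_per_byte_j  # radio cost only

    # Combine latency and battery cost into a single weighted score.
    local_cost = local_time + energy_weight * local_energy
    offload_cost = offload_time + energy_weight * offload_energy
    return offload_cost < local_cost


if __name__ == "__main__":
    # 5 MB video snippet, 10 GFLOP of processing, 20 Mbit/s uplink, 8x faster edge GPU
    print(should_offload(5e6, 1e10, 20e6, 8))
```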

    Cloud Instance Management and Resource Prediction For Computation-as-a-Service Platforms

    Get PDF
    Computation-as-a-Service (CaaS) offerings have gained traction in the last few years due to their effectiveness in balancing the scalability of Software-as-a-Service with the customisation possibilities of Infrastructure-as-a-Service platforms. To function effectively, a CaaS platform must have three key properties: (i) reactive assignment of individual processing tasks to available cloud instances (compute units) according to availability and predetermined time-to-completion (TTC) constraints; (ii) accurate resource prediction; and (iii) efficient control of the number of cloud instances servicing workloads, in order to balance completing workloads in a timely fashion against reducing resource utilization costs. In this paper, we propose three approaches that satisfy these properties (respectively): (i) a service rate allocation mechanism based on proportional fairness and TTC constraints; (ii) Kalman-filter estimates for resource prediction; and (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the resource management mechanism of the Transmission Control Protocol) to control the number of compute units servicing workloads. The integration of our three proposals into a single CaaS platform is shown to provide more than a 27% reduction in Amazon EC2 spot instance cost against methods based on reactive resource prediction, and a 38% to 60% reduction in billing cost against the current state of the art in CaaS platforms (Amazon Lambda and Autoscale).
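    The AIMD component can be illustrated with a short sketch. One plausible mapping of AIMD onto fleet sizing (not necessarily the paper's exact rule) is to grow the number of compute units additively while work is still pending and shrink it multiplicatively once the backlog clears, so idle, billed capacity is shed quickly.

```python
# Minimal AIMD-style controller for the number of active compute units.
# The mapping of AIMD signals to backlog state and the parameter values
# are illustrative assumptions, not the paper's tuned configuration.


def aimd_step(current_units, backlog_pending, add=1, decrease_factor=0.5,
              min_units=1, max_units=100):
    """Additive increase while work is pending; multiplicative decrease
    once the backlog clears, to shed idle (billed) capacity."""
    if backlog_pending:
        return min(max_units, current_units + add)
    return max(min_units, int(current_units * decrease_factor))


if __name__ == "__main__":
    units = 4
    for pending in [True, True, True, False, True, False]:
        units = aimd_step(units, pending)
        print(units)   # e.g. 5, 6, 7, 3, 4, 2
```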

    Forecasting Resource Usage in Cloud Environments Using Temporal Convolutional Networks

    Get PDF
    Background: Predicting resource usage in cloud environments is crucial for optimizing costs. While recurrent neural networks and classical time series techniques are commonly used for forecasting, their limitations, such as vanishing gradients and limited memory retention, motivate the use of convolutional networks for modeling sequential data. Objective: This research proposes a temporal convolutional network (TCN) to forecast CPU usage and memory consumption in cloud environments. TCNs use dilated convolutions to capture temporal dependencies and maintain a fixed-size receptive field, enabling them to handle sequences of varying lengths and capture long-term dependencies. The performance of the TCN is compared with Long Short-Term Memory (LSTM) networks, Gated Recurrent Unit (GRU) networks, and a Multilayer Perceptron (MLP). Dataset: The study employs the Google Cluster Workload Traces 2019 data, focusing on CPU and memory utilization ranging between 5% and 95% over a 24-hour period, extracted from the first ten days. Results: The TCN outperforms the other methods in predicting both CPU usage and memory consumption. For CPU usage prediction, the TCN achieves lower error metrics, including a Mean Squared Error (MSE) of 0.05, Root Mean Squared Error (RMSE) of 0.22, Mean Absolute Error (MAE) of 0.18, and Mean Absolute Percentage Error (MAPE) of 3.5%. The TCN also demonstrates higher forecast accuracy, with FA1 = 85%, FA5 = 95%, and FA10 = 98%. Similar improvements are observed for memory consumption prediction, where the TCN achieves lower error metrics and higher forecast accuracy than the LSTM, GRU, and MLP. The TCN also exhibits better computational efficiency in terms of training time, inference time, and memory usage. Conclusion: The proposed temporal convolutional network demonstrates good performance in forecasting CPU usage and memory consumption in cloud environments compared to LSTM, GRU, and MLP models. Because TCNs can capture temporal dependencies and handle sequences of varying lengths, they are a promising approach for resource usage prediction and cost optimization in cloud computing.
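    The mechanism behind a TCN, causal convolutions with exponentially growing dilations, can be sketched compactly in PyTorch. The layer sizes, window length, and channel counts below are illustrative assumptions and do not reproduce the authors' model.

```python
# Compact dilated causal convolution stack, illustrating the TCN mechanism
# (exponentially growing dilations, left-only padding for causality).

import torch
import torch.nn as nn


class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left-pad so output stays causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))    # pad only on the left
        return self.conv(x)


class TinyTCN(nn.Module):
    def __init__(self, channels=32, levels=4):
        super().__init__()
        layers, in_ch = [], 1
        for i in range(levels):
            layers += [CausalConv1d(in_ch, channels, kernel_size=3,
                                    dilation=2 ** i), nn.ReLU()]
            in_ch = channels
        self.net = nn.Sequential(*layers)
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):
        return self.head(self.net(x))              # one-step-ahead forecast per time step


if __name__ == "__main__":
    cpu_window = torch.randn(8, 1, 288)            # e.g. 8 series of 288 five-minute samples
    print(TinyTCN()(cpu_window).shape)             # torch.Size([8, 1, 288])
```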