QoS-Aware Resource Management for Multi-phase Serverless Workflows with Aquatope
Multi-stage serverless applications, i.e., workflows with many computation
and I/O stages, are becoming increasingly representative of FaaS platforms.
Despite their advantages in terms of fine-grained scalability and modular
development, these applications are subject to suboptimal performance, resource
inefficiency, and high costs to a larger degree than previous simple serverless
functions.
We present Aquatope, a QoS-and-uncertainty-aware resource scheduler for
end-to-end serverless workflows that takes into account the inherent
uncertainty present in FaaS platforms, and improves performance predictability
and resource efficiency. Aquatope uses a set of scalable and validated Bayesian
models to create pre-warmed containers ahead of function invocations, and to
allocate appropriate resources at function granularity to meet a complex
workflow's end-to-end QoS, while minimizing resource cost. Across a diverse set
of analytics and interactive multi-stage serverless workloads, Aquatope
significantly outperforms prior systems, reducing QoS violations by 5x, and
cost by 34% on average (and up to 52%) compared to other QoS-meeting methods.
Measuring and Managing Answer Quality for Online Data-Intensive Services
Online data-intensive services parallelize query execution across distributed
software components. Interactive response time is a priority, so online query
executions return answers without waiting for slow running components to
finish. However, data from these slow components could lead to better answers.
We propose Ubora, an approach to measure the effect of slow running components
on the quality of answers. Ubora randomly samples online queries and executes
them twice. The first execution elides data from slow components and provides
fast online answers; the second execution waits for all components to complete.
Ubora uses memoization to speed up mature executions by replaying network
messages exchanged between components. Our systems-level implementation works
for a wide range of platforms, including Hadoop/Yarn, Apache Lucene, the
EasyRec Recommendation Engine, and the OpenEphyra question answering system.
Ubora computes answer quality much faster than competing approaches that do not
use memoization. With Ubora, we show that answer quality can and should be used
to guide online admission control. Our adaptive controller processed 37% more
queries than a competing controller guided by the rate of timeouts.
Comment: Technical Report
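The two-pass scheme described above (a fast online answer that elides slow components, plus a mature answer assembled later from memoized messages) can be sketched as follows. This is a toy simulation with made-up component names, not Ubora's systems-level implementation:

```python
def online_execution(components, deadline, cache):
    """First pass: include only components fast enough for the online
    deadline, but memoize every component's message for later replay."""
    answer = []
    for name, latency, data in components:
        cache[name] = data              # record the exchanged message
        if latency <= deadline:
            answer.append(data)         # fast component: part of online answer
    return answer

def mature_execution(components, cache):
    """Second pass: replay memoized messages instead of re-running the
    components, producing the complete answer cheaply."""
    return [cache[name] for name, _, _ in components]

# Hypothetical components: (name, latency in ms, partial answer).
components = [("index", 10, "a"), ("rec", 500, "b"), ("qa", 30, "c")]
cache = {}
fast = online_execution(components, deadline=100, cache=cache)
full = mature_execution(components, cache)
quality = len(fast) / len(full)   # fraction of the full answer served online
```

Comparing the two answers gives an answer-quality signal that an admission controller could act on, which is the role Ubora's measurements play in the abstract.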
Model-based resource management for fine-grained services
The emergence of DevOps has changed the way modern distributed software systems are developed. Architectures decomposed into fine-grained services, such as microservices or function-as-a-service (FaaS), are now widespread across many organizations. From a resource management perspective, although systems built with such architectures have many benefits, several research challenges still need attention. In this study, we focus on three such challenges, each concerning a specific system resource: compute, memory, or storage.

First, we focus on scaling the capacity of microservices at runtime. Here, the challenge is to design an autoscaler that can decide between vertical and horizontal scaling options to distribute CPU capacity. Second, we focus on estimating the required capacity of an on-premises FaaS platform such that the service level agreements (SLAs) for function response times are satisfied. The challenge here is the cold start dilemma: a cold start delays a function response but reduces memory consumption. Thus, we must find a limit on cold starts such that memory consumption remains in check while the SLAs are satisfied. Finally, we focus on storage management for distributed tracing targeted at microservices. The volume of traces generated in a data center can reach tens of terabytes per day, but only a small fraction of these traces is useful for troubleshooting. The objective then is to sample only the useful traces. The key to addressing all these challenges is first modeling the dynamics of the resource in question and then leveraging that model in a resource controller.

To address the first challenge, we have developed ATOM, an autoscaler that leverages layered queueing network (LQN) models to make its scaling decisions. Our experiment with a real-life application shows that ATOM produces 30-37% better results than the baseline autoscalers.

For the second challenge, we have developed COCOA, a cold-start-aware capacity planner. COCOA uses M/M/k-setup and LQN models to assess the cold start scenario and estimate the required capacity. We show through simulation that COCOA can reduce over-provisioning by over 70% compared to availability-aware approaches. Finally, addressing the third challenge, we propose SampleHST, a trace sampler that works under a storage budget constraint. SampleHST relies on either bag-of-words or graph-based models to represent a trace and groups similar traces using online clustering to perform sampling. We have evaluated the performance of SampleHST using data from both the literature and production, showing it produces 1.2x to 19x better results than the state of the art.
Open Access
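Capacity planning of the kind COCOA performs can be illustrated with the classic M/M/k queue. The sketch below uses the plain Erlang-C formula, deliberately ignoring setup (cold start) times that COCOA's M/M/k-setup model accounts for, to find the smallest server count whose probability of queueing meets an SLA:

```python
import math

def erlang_c(k, lam, mu):
    """P(an arrival must wait) in an M/M/k queue with arrival rate lam
    and per-server service rate mu. A simplification: no setup times."""
    rho = lam / (k * mu)
    if rho >= 1.0:
        return 1.0                      # unstable: every arrival waits
    a = lam / mu                        # offered load in Erlangs
    s = sum(a**n / math.factorial(n) for n in range(k))
    top = a**k / (math.factorial(k) * (1.0 - rho))
    return top / (s + top)

def min_servers(lam, mu, max_wait_prob):
    """Smallest k whose probability of queueing meets the SLA."""
    k = max(1, math.ceil(lam / mu))     # need at least enough for stability
    while erlang_c(k, lam, mu) > max_wait_prob:
        k += 1
    return k

# Example: 8 req/s, 1 req/s per server, at most 20% of requests queue.
k = min_servers(lam=8.0, mu=1.0, max_wait_prob=0.2)
```

Adding exponential setup times (the cold start penalty) shifts this trade-off, which is exactly the dilemma the abstract describes: fewer warm servers save memory but push more requests through a setup delay.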
Atlas: Hybrid Cloud Migration Advisor for Interactive Microservices
Hybrid cloud provides an attractive solution to microservices for better
resource elasticity. A subset of application components can be offloaded from
the on-premises cluster to the cloud, where they can readily access additional
resources. However, the selection of this subset is challenging because of the
large number of possible combinations. A poor choice degrades the application
performance, disrupts the critical services, and increases the cost to the
extent of making the use of hybrid cloud unviable. This paper presents Atlas, a
hybrid cloud migration advisor. Atlas uses a data-driven approach to learn how
each user-facing API utilizes different components and their network footprints
to drive the migration decision. It learns to accelerate the discovery of
high-quality migration plans from millions of candidate combinations, and
offers recommendations with
customizable trade-offs among three quality indicators: end-to-end latency of
user-facing APIs representing application performance, service availability,
and cloud hosting costs. Atlas continuously monitors the application even after
the migration for proactive recommendations. Our evaluation shows that Atlas
can achieve 21% better API performance (latency) and 11% cheaper cost with less
service disruption than widely used solutions.
Comment: To appear at EuroSys 202
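The core search problem Atlas addresses — choosing which subset of components to offload under trade-offs among latency, availability, and cost — can be sketched with a brute-force baseline. The names and metrics below are toy assumptions; Atlas's contribution is precisely that it learns to avoid this exhaustive enumeration:

```python
from itertools import combinations

def score(plan, metrics, weights):
    """Weighted sum of the three quality indicators (lower is better).
    `metrics` maps a plan to (latency, unavailability, cost)."""
    lat, unavail, cost = metrics(plan)
    return weights[0] * lat + weights[1] * unavail + weights[2] * cost

def best_plan(components, metrics, weights):
    """Exhaustive search over all offload subsets -- feasible only for
    toy sizes, since the space grows as 2^n."""
    candidates = []
    for r in range(len(components) + 1):
        candidates += [frozenset(c) for c in combinations(components, r)]
    return min(candidates, key=lambda p: score(p, metrics, weights))

# Hypothetical metrics: offloading adds cross-site latency and a small
# availability penalty, but cuts on-premises hosting cost.
def metrics(plan):
    lat = 50 + 20 * ("db" in plan) + 5 * ("cache" in plan)
    unavail = 0.01 * len(plan)
    cost = 100 - 30 * len(plan)
    return lat, unavail, cost

plan = best_plan(["db", "cache"], metrics, weights=(1.0, 100.0, 1.0))
```

Changing the weights expresses the "customizable trade-offs" mentioned in the abstract: a latency-heavy weighting keeps chatty components on-premises, while a cost-heavy one offloads more aggressively.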
Towards Autonomous and Efficient Machine Learning Systems
Computation-intensive machine learning (ML) applications are becoming some of the most popular workloads running atop cloud infrastructure. While training ML applications, practitioners face the challenge of tuning various system-level parameters, such as the number of training nodes, the communication topology during training, the instance type, and the number of serving nodes, to meet the SLO requirements of bursty workloads during inference. Similarly, efficient resource utilization is another key challenge in cloud computing. This dissertation proposes high-performing and efficient ML systems that speed up training and inference tasks while enabling automated and robust system management.

To train an ML model in a distributed fashion, we focus on strategies to mitigate the resource provisioning overhead and improve training speed without impacting model accuracy. More specifically, a system for autonomic and adaptive scheduling is built atop serverless computing that dynamically optimizes deployment and resource scaling for ML training tasks for cost-effectiveness and fast training. Similarly, a dynamic client selection framework is developed to address the straggler problem caused by resource heterogeneity, data quality, and data quantity in a privacy-preserving Federated Learning (FL) environment, again without impacting model accuracy.

For serving bursty ML workloads, we focus on developing highly scalable and adaptive strategies that serve dynamically changing workloads cost-effectively and autonomically. We develop a framework that optimizes batching parameters on the fly using a lightweight profiler and an analytical model. We also devise strategies for serving ML workloads of varying sizes, and hence non-deterministic service times, in a cost-effective manner. More specifically, we develop an SLO-aware framework that first analyzes request size variation and workload variation to estimate the number of serving functions, and then intelligently routes requests to multiple serving functions. Finally, resource utilization of burstable instances is optimized to benefit both the cloud provider and the end user through careful orchestration of resources (i.e., CPU, network, and I/O) using an analytical model and lightweight profiling, while complying with a user-defined SLO.
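The on-the-fly batching optimization mentioned above can be illustrated with a minimal analytical model. The linear latency model below is a common simplification (fixed batch overhead plus per-item compute), not the dissertation's actual profiler-fitted model; under it, the SLO-compliant batch size is simply the largest one whose latency still fits, since larger batches amortize the fixed overhead:

```python
def batch_latency(b, fixed, per_item):
    """Assumed analytical model: one batch of size b takes a fixed
    overhead plus per-item compute time (both in ms)."""
    return fixed + per_item * b

def best_batch_size(fixed, per_item, slo, max_batch=256):
    """Largest batch whose latency fits the SLO. Throughput
    b / latency(b) is increasing in b under this model, so the
    SLO boundary is the sweet spot."""
    best = 1
    for b in range(1, max_batch + 1):
        if batch_latency(b, fixed, per_item) <= slo:
            best = b
    return best

# Example: 10 ms batch overhead, 0.5 ms per request, 50 ms SLO.
b = best_batch_size(fixed=10.0, per_item=0.5, slo=50.0)
throughput = b / batch_latency(b, 10.0, 0.5)   # requests per ms
```

A lightweight profiler would re-estimate `fixed` and `per_item` as the workload shifts, re-running this decision online — the spirit, if not the letter, of the framework the abstract describes.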