Aggregate matrix-analytic techniques and their applications
The complexity of computer systems affects the complexity of the modeling techniques that can be used for their performance analysis. In this dissertation, we develop a set of techniques that are based on tractable analytic models and enable efficient performance analysis of computer systems. Our approach is three-pronged: first, we propose new techniques to parameterize measurement data with Markovian-based stochastic processes that can be further used as input to queueing systems; second, we propose new methods to efficiently solve complex queueing models; and third, we use the proposed methods to evaluate the performance of clustered Web servers and propose new load balancing policies based on this analysis.

We devise two new techniques for fitting measurement data that exhibit high variability into Phase-type (PH) distributions. These techniques apply known fitting algorithms in a divide-and-conquer fashion. We evaluate the accuracy of our methods from both the statistics and the queueing-systems perspective. In addition, we propose a new methodology for fitting measurement data that exhibit long-range dependence into Markovian Arrival Processes (MAPs).

We propose a new methodology, ETAQA, for the exact solution of M/G/1-type processes, GI/M/1-type processes, and their intersection, i.e., quasi-birth-death (QBD) processes. ETAQA computes an aggregate steady-state probability distribution and a set of measures of interest. ETAQA is numerically stable and computationally superior to alternative solution methods. Apart from ETAQA, we propose a new methodology for the exact solution of a class of GI/G/1-type processes based on aggregation/decomposition.

Finally, we demonstrate the applicability of the proposed techniques by evaluating load balancing policies in clustered Web servers.
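ETAQA itself is beyond the scope of an abstract, but the classical matrix-geometric approach to QBD processes that it is compared against can be sketched. In that approach the steady-state probabilities satisfy pi_{n+1} = pi_n * R, where R is the minimal nonnegative solution of A0 + R*A1 + R^2*A2 = 0 for the QBD's block matrices. A minimal sketch under simplifying assumptions, using the M/M/1 queue viewed as a scalar QBD (all blocks are 1x1, and R reduces to the utilization lam/mu):

```python
def qbd_rate_matrix(A0, A1, A2, iters=200):
    """Fixed-point iteration for the minimal solution R of
    A0 + R*A1 + R^2*A2 = 0.  Scalar QBD blocks for simplicity;
    the general case uses matrices and a matrix inverse of A1."""
    R = 0.0
    for _ in range(iters):
        R = -(A0 + R * R * A2) / A1
    return R

# M/M/1 as a scalar QBD: A0 = arrival rate (level up),
# A1 = -(lam + mu) (local), A2 = service rate (level down).
lam, mu = 1.0, 2.0
R = qbd_rate_matrix(lam, -(lam + mu), mu)
# R converges to the utilization lam/mu = 0.5, and
# pi_{n+1} = pi_n * R yields the geometric queue-length distribution.
```

The fixed-point iteration converges linearly here; ETAQA's contribution, per the abstract, is a numerically stable and computationally cheaper route to an aggregate steady-state distribution.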
We address the high variability in the service process of Web servers by dedicating the servers of a cluster to requests of similar sizes, and propose new, content-aware load balancing policies. Detailed analysis shows that the proposed policies achieve high user-perceived performance and, by continuously adapting their scheduling parameters to the current workload characteristics, provide good performance under conditions of transient overload.
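The core idea of dedicating servers to requests of similar sizes can be sketched as a size-interval dispatcher. The boundaries and server layout below are hypothetical illustrations; the dissertation's policies additionally adapt the boundaries to the observed workload:

```python
import bisect

class SizeIntervalDispatcher:
    """Route each request to the server dedicated to its size range.
    `boundaries` splits [0, inf) into len(boundaries) + 1 intervals,
    one interval per server (boundaries are hypothetical examples)."""
    def __init__(self, boundaries):
        self.boundaries = sorted(boundaries)

    def server_for(self, size):
        # Index of the size interval containing `size` = server index.
        return bisect.bisect_right(self.boundaries, size)

# Hypothetical 3-server cluster: small (< 1 KB), medium (< 100 KB), large.
d = SizeIntervalDispatcher([1_000, 100_000])
```

Keeping similar sizes together reduces the chance that a short request queues behind a very long one, which is the variability problem the abstract describes.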
The effect of workload dependence in systems: Experimental evaluation, analytic models, and policy development
This dissertation presents an analysis of the performance effects of burstiness (formalized by the autocorrelation function) in multi-tiered systems via a three-pronged approach: experimental measurements, analytic models, and policy development. The analysis considers (a) systems with finite buffers (e.g., systems with admission control that effectively operate as closed systems) and (b) systems with infinite buffers (i.e., systems that operate as open systems).

For multi-tiered systems with a finite buffer size, experimental measurements show that if autocorrelation exists in any tier of a multi-tiered system, then autocorrelation propagates to all tiers of the system. The presence of autocorrelated flows in all tiers significantly degrades performance. Workload characterization in a real experimental environment driven by the TPC-W benchmark confirms the existence of autocorrelated flows, which originate from the autocorrelated service process of one of the tiers. A simple model is devised that captures the observed behavior. The model is in excellent agreement with experimental measurements and captures the propagation of autocorrelation in the multi-tiered system as well as the resulting performance trends.

For systems with an infinite buffer size, this study focuses on analytic models by proposing and comparing two families of approximations for the departure process of a BMAP/MAP/1 queue that admits batch correlated flows and whose service time process may be autocorrelated. One approximation is based on the ETAQA methodology for the solution of M/G/1-type processes, and the other arises from lumpability rules.
Formal proofs are provided that both approximations preserve the marginal distribution of the inter-departure times and their initial correlation structures.

This dissertation also demonstrates how knowledge of autocorrelation can be used to effectively improve system performance. D_EQAL, a new load balancing policy for clusters with dependent arrivals, is proposed. D_EQAL separates jobs to servers according to their sizes, as traditional load balancing policies do, but this separation is biased by the effort to reduce the performance loss due to autocorrelation in the streams of jobs that are directed to each server. As a result, not all servers are equally utilized (i.e., the load in the system becomes unbalanced), but the performance benefits of this load unbalancing are significant.
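Burstiness is formalized here by the autocorrelation function (ACF). A minimal sketch of estimating the lag-k sample ACF of, for example, a measured service-time trace (the estimator is the textbook one; nothing here is specific to the dissertation's models):

```python
def acf(xs, lag):
    """Sample autocorrelation of the sequence xs at the given lag."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[t] - mean) * (xs[t + lag] - mean) for t in range(n - lag))
    return cov / var

# An i.i.d. (uncorrelated) trace has ACF near 0 at all positive lags;
# the autocorrelated service processes studied here instead show a
# slowly decaying positive ACF, which is what propagates across tiers.
```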
Model-based resource management for fine-grained services
The emergence of DevOps has changed the way modern distributed software systems are developed. Architectures decomposed into fine-grained services, such as microservices or function-as-a-service (FaaS), are now widespread across many organizations. From a resource management perspective, although the systems built with such architectures have many benefits, there are still research challenges that need further attention. In this study, we have focused on three such challenges, each concerning a specific system resource: compute, memory, or storage. Firstly, we focus on scaling the capacity of microservices at runtime. Here, the challenge is to design an autoscaler that can decide between vertical and horizontal scaling options to distribute the CPU capacity. Secondly, we focus on estimating the required capacity of an on-premises FaaS platform such that the service level agreements (SLAs) for function response times are satisfied. The challenge here is to address the cold start dilemma, i.e., that a cold start delays a function response but reduces the memory consumption. Thus, we must find a limit on cold starts such that the memory consumption remains in check while satisfying the SLAs. Finally, we focus on storage management for distributed tracing targeted at microservices. The volume of such traces generated in a data center can be on the scale of tens of terabytes per day, but only a small fraction of these traces is useful for troubleshooting. The objective then is to sample only the useful traces. The key to addressing all these challenges is first modeling the dynamics concerning the resources and subsequently leveraging the model in a resource controller. To address the first challenge, we have developed an autoscaler, ATOM, that leverages layered queueing network (LQN) models to make its scaling decisions. Our experiments with a real-life application show that ATOM produces 30-37% better results than the baseline autoscalers.
For the second challenge, we have developed COCOA, a cold-start-aware capacity planner. COCOA utilizes M/M/k setup and LQN models to assess the cold start scenario and estimate the required capacity. We show with simulation that COCOA can reduce over-provisioning by over 70% compared to availability-aware approaches. Finally, addressing the third challenge, we propose SampleHST, a trace sampler that works under a storage budget constraint. SampleHST relies on either bag-of-words or graph-based models to represent a trace and groups similar traces using online clustering to perform sampling. We have evaluated the performance of SampleHST using data from both the literature and production, which shows it produces 1.2x to 19x better results than the state-of-the-art.
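COCOA's M/M/k-with-setup analysis is beyond a short sketch, but the basic capacity-planning step it refines, finding the smallest number of servers k whose M/M/k mean queueing delay meets a response-time SLA, can be illustrated with the standard Erlang-C formula. Cold-start (setup) delays are deliberately ignored here, and the `capacity_for_sla` helper is an illustrative name, not COCOA's API:

```python
from math import factorial

def erlang_c(k, a):
    """Probability of waiting in an M/M/k queue, offered load a = lam/mu."""
    block = a ** k / factorial(k)
    summ = sum(a ** n / factorial(n) for n in range(k))
    return block / ((1 - a / k) * summ + block)

def capacity_for_sla(lam, mu, max_wait):
    """Smallest k such that the M/M/k mean queueing delay meets the SLA.
    Mean wait W_q = ErlangC(k, a) / (k*mu - lam)."""
    a = lam / mu
    k = int(a) + 1  # minimum server count for stability
    while erlang_c(k, a) / (k * mu - lam) > max_wait:
        k += 1
    return k
```

A model that also captures setup times, as COCOA's does, needs more capacity than this lower bound, which is exactly the over-provisioning trade-off the abstract quantifies.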