
    ASIdE: Using Autocorrelation-Based Size Estimation for Scheduling Bursty Workloads.

    Temporal dependence in workloads creates peak congestion that can make service unavailable and reduce system performance. To improve system performability under conditions of temporal dependence, a server should quickly process bursts of requests that may carry large service demands. In this paper, we propose and evaluate ASIdE, an Autocorrelation-based SIze Estimation policy that selectively delays requests which contribute to the workload temporal dependence. ASIdE implicitly approximates the shortest job first (SJF) scheduling policy but without any prior knowledge of job service times. Extensive experiments show that (1) ASIdE achieves good service time estimates from the temporal dependence structure of the workload to implicitly approximate the behavior of SJF; and (2) ASIdE successfully counteracts peak congestion in the workload and improves system performability under a wide variety of settings. Specifically, we show that system capacity under ASIdE is greatly increased compared to the first-come first-served (FCFS) scheduling policy and is highly competitive with SJF. © 2012 IEEE
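    The core mechanism can be sketched in a few lines: watch the recent service-time history, detect positive temporal dependence, and defer a request when the last completed job suggests the current burst is made of large jobs. The sketch below illustrates that idea only and is not the authors' implementation; the window size and the 1.5x threshold are assumptions chosen for the example.

```python
from collections import deque

class AsideLikeScheduler:
    """Illustrative sketch: delay requests predicted to be large, based on
    positive temporal dependence (lag-1 autocorrelation) in service times.
    Window size and threshold are assumptions, not values from the paper."""

    def __init__(self, window=100, delay_threshold=1.5):
        self.history = deque(maxlen=window)   # recent observed service times
        self.delay_threshold = delay_threshold

    def record_completion(self, service_time):
        self.history.append(service_time)

    def _lag1_autocorrelation(self):
        x = list(self.history)
        n = len(x)
        if n < 3:
            return 0.0
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x)
        if var == 0:
            return 0.0
        cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
        return cov / var

    def should_delay_next(self):
        """Predict the next job to be large (and delay it) when dependence is
        positive and the last completed job was well above the running mean."""
        if len(self.history) < 3:
            return False
        mean = sum(self.history) / len(self.history)
        last = self.history[-1]
        return self._lag1_autocorrelation() > 0 and last > self.delay_threshold * mean

sched = AsideLikeScheduler()
for s in [0.2, 0.3, 2.5, 2.8, 3.1]:      # a burst of large service times at the end
    sched.record_completion(s)
print(sched.should_delay_next())         # True: positive dependence and a large last job
```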

    Markovian Workload Characterization for QoS Prediction in the Cloud.

    Resource allocation in the cloud is usually driven by performance predictions, such as estimates of the future incoming load to the servers or of the quality-of-service (QoS) offered by applications to end users. In this context, characterizing web workload fluctuations in an accurate way is fundamental to understand how to provision cloud resources under time-varying traffic intensities. In this paper, we investigate Markovian Arrival Processes (MAPs) and the related MAP/MAP/1 queueing model as a tool for performance prediction of servers deployed in the cloud. MAPs are a special class of Markov models used as a compact description of the time-varying characteristics of workloads. In addition, MAPs can fit heavy-tailed distributions, which are common in HTTP traffic, and can be easily integrated within analytical queueing models to efficiently predict system performance without simulation. By comparison with trace-driven simulation, we observe that existing techniques for MAP parameterization from HTTP log files often lead to inaccurate performance predictions. We then define a maximum likelihood method for fitting MAP parameters based on data commonly available in Apache log files, and a new technique to cope with batch arrivals, which are notoriously difficult to model accurately. Numerical experiments demonstrate the accuracy of our approach for performance prediction of web systems. © 2011 IEEE
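    To make the MAP notation concrete, the sketch below simulates arrivals from a two-state MAP specified by its (D0, D1) matrices, where D0 holds phase transitions without arrivals and D1 holds transitions that produce an arrival. The matrices are illustrative (a slow phase and a bursty phase), not parameters fitted from Apache logs as in the paper.

```python
import random

def simulate_map_arrivals(D0, D1, n_arrivals, seed=1):
    """Simulate arrival times from a Markovian Arrival Process (D0, D1).
    D0: phase transitions without arrivals (negative diagonal);
    D1: transitions that generate an arrival; rows of D0 + D1 sum to 0."""
    rng = random.Random(seed)
    m = len(D0)
    state, t, arrivals = 0, 0.0, []
    while len(arrivals) < n_arrivals:
        rate = -D0[state][state]                      # total exit rate of the current phase
        t += rng.expovariate(rate)
        moves = [(j, D0[state][j], False) for j in range(m) if j != state]
        moves += [(j, D1[state][j], True) for j in range(m)]
        moves = [mv for mv in moves if mv[1] > 0]     # keep only feasible transitions
        j, _, is_arrival = rng.choices(moves, weights=[mv[1] for mv in moves])[0]
        if is_arrival:
            arrivals.append(t)
        state = j
    return arrivals

# Illustrative two-state MMPP (a MAP with diagonal D1): a slow phase and a bursty phase.
D0 = [[-1.2, 0.2], [0.5, -10.5]]
D1 = [[1.0, 0.0], [0.0, 10.0]]
arrival_times = simulate_map_arrivals(D0, D1, 1000)
```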

    Dependence-driven techniques in system design

    Burstiness in workloads is often found in multi-tier architectures, storage systems, and communication networks. This feature is extremely important in system design because it can significantly degrade system performance and availability. This dissertation focuses on how to use knowledge of burstiness to develop new techniques and tools for performance prediction, scheduling, and resource allocation under bursty workload conditions.

    For multi-tier enterprise systems, burstiness in the service times is catastrophic for performance. Via detailed experimentation, we identify the cause of performance degradation as the persistent bottleneck switch among the various servers. This results in unstable behavior that cannot be captured by existing capacity planning models. Beyond identifying the cause and effects of bottleneck switch in multi-tier systems, we also propose modifications to the classic TPC-W benchmark to emulate bursty arrivals in multi-tier systems.

    This dissertation also demonstrates how burstiness can be used to improve system performance. Two dependence-driven scheduling policies, SWAP and ALoC, are developed. These general scheduling policies counteract burstiness in workloads and maintain high availability by delaying selected requests that contribute to burstiness. Extensive experiments show that both SWAP and ALoC achieve good estimates of service times based on the knowledge of burstiness in the service process. As a result, SWAP successfully approximates shortest job first (SJF) scheduling without requiring a priori information of job service times, and ALoC adaptively controls system load by infinitely delaying only a small fraction of the incoming requests.

    The knowledge of burstiness can also be used to forecast the length of idle intervals in storage systems. In practice, background activities are scheduled during system idle times, and the scheduling of background jobs is crucial in terms of the performance degradation of foreground jobs and the utilization of idle times. In this dissertation, new background scheduling schemes are designed to determine when and for how long idle times can be used for serving background jobs without violating predefined performance targets of foreground jobs. Extensive trace-driven simulation results illustrate that the proposed schemes are effective and robust in a wide range of system conditions. Furthermore, if there is burstiness within idle times, then maintenance features like disk scrubbing and intra-disk data redundancy can be successfully scheduled as background activities during idle times.
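    Burstiness of this kind is commonly quantified by the autocorrelation function (ACF) of the service or arrival process. As a generic illustration of the statistic such policies rely on, and not code from the dissertation, the following sketch computes the sample ACF of a trace:

```python
def sample_acf(trace, max_lag=10):
    """Sample autocorrelation of a service-time (or inter-arrival) trace.
    Values near zero at all positive lags indicate an uncorrelated ("smooth")
    process; slowly decaying positive values indicate temporal dependence,
    i.e. burstiness."""
    n = len(trace)
    mean = sum(trace) / n
    var = sum((x - mean) ** 2 for x in trace) / n
    acf = []
    for k in range(1, max_lag + 1):
        cov = sum((trace[i] - mean) * (trace[i + k] - mean) for i in range(n - k)) / n
        acf.append(cov / var)
    return acf
```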

    Performance modelling with adaptive hidden Markov models and discriminatory processor sharing queues

    In modern computer systems, workload varies at different times and locations. It is important to model the performance of such systems via workload models that are both representative and efficient. For example, model-generated workloads represent realistic system behaviour, especially during peak times, when it is crucial to predict and address performance bottlenecks. In this thesis, we model performance, namely throughput and delay, using adaptive models and discrete queues. Hidden Markov models (HMMs) parsimoniously capture the correlation and burstiness of workloads with spatiotemporal characteristics. By adapting the batch training of standard HMMs to incremental learning, online HMMs act as benchmarks on workloads obtained from live systems (e.g. storage systems and financial markets) and reduce the time complexity of the Baum-Welch algorithm. Similarly, by extending HMM capabilities to train on multiple traces simultaneously, workloads of different types are modelled in parallel by a multi-input HMM. Typically, the HMM-generated traces verify the throughput and burstiness of the real data. Applications of adaptive HMMs include predicting user behaviour in social networks and performance-energy measurements in smartphone applications.

    Equally important is measuring system delay through response times. For example, workloads such as Internet traffic arriving at routers are affected by queueing delays. To meet quality-of-service needs, queueing delays must be minimised and, hence, it is important to model and predict such queueing delays in an efficient and cost-effective manner. Therefore, we propose a class of discrete, processor-sharing queues for approximating queueing delay as response time distributions, which represent service level agreements at specific spatiotemporal levels. We adapt discrete queues to model job arrivals with distributions given by a Markov-modulated Poisson process (MMPP) and served under discriminatory processor-sharing scheduling. Further, we propose a dynamic strategy of service allocation to minimise delays in UDP traffic flows whilst maximising a utility function.
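    The discriminatory processor-sharing (DPS) discipline mentioned above has a compact textbook definition: with n_k class-k jobs present and class weights g_k, each class-k job receives a service share proportional to g_k. The sketch below encodes only that allocation rule; it is not code from the thesis, and the weights in the example are arbitrary.

```python
def dps_service_rates(jobs_per_class, weights, capacity=1.0):
    """Discriminatory processor sharing: each class-k job is served at rate
    capacity * g_k / sum_j(n_j * g_j), where n_j is the number of class-j jobs
    in the system and g_j its weight (generic textbook definition)."""
    total = sum(n * g for n, g in zip(jobs_per_class, weights))
    if total == 0:
        return [0.0] * len(weights)
    return [capacity * g / total for g in weights]

# Example: 3 class-1 jobs (weight 1) and 1 class-2 job (weight 4) share one server:
# each class-1 job receives 1/7 of the capacity, the class-2 job receives 4/7.
rates = dps_service_rates([3, 1], [1.0, 4.0])
```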

    Exact Analysis of TTL Cache Networks: The Case of Caching Policies driven by Stopping Times

    TTL caching models have recently regained significant research interest, largely due to their ability to fit popular caching policies such as LRU. This paper advances the state-of-the-art analysis of TTL-based cache networks by developing two exact methods with orthogonal generality and computational complexity. The first method generalizes existing results for line networks under renewal requests to the broad class of caching policies whereby evictions are driven by stopping times. The obtained results are further generalized, using the second method, to feedforward networks with Markovian arrival process (MAP) requests. MAPs are particularly suitable for non-line networks because they are closed not only under superposition and splitting, as is well known, but also under input-output caching operations, as proven herein for phase-type TTL distributions. The crucial benefit of the two closure properties is that they jointly enable the first exact analysis of feedforward networks of TTL caches in great generality.
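    For a single cache and a single item, the TTL abstraction is easy to state: a request is a hit if it arrives before the item's timer expires. The simulation sketch below illustrates that behaviour for renewal requests; it is only a toy special case, not the stopping-time framework or the network analysis developed in the paper.

```python
import random

def ttl_hit_ratio(inter_request_times, ttl, reset_on_hit=True):
    """Hit ratio of one item in a TTL cache. With reset_on_hit=True the timer
    restarts on every access; with False it restarts only on a miss."""
    hits, expiry, t = 0, 0.0, 0.0
    for x in inter_request_times:
        t += x
        if t < expiry:                 # timer still running: hit
            hits += 1
            if reset_on_hit:
                expiry = t + ttl
        else:                          # expired: miss, fetch and re-cache the item
            expiry = t + ttl
    return hits / len(inter_request_times)

# Poisson requests at rate 1 with TTL = 2: hit ratio approaches 1 - exp(-2), about 0.86.
rng = random.Random(0)
trace = [rng.expovariate(1.0) for _ in range(100_000)]
print(ttl_hit_ratio(trace, ttl=2.0))
```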

    The effect of workload dependence in systems: Experimental evaluation, analytic models, and policy development

    This dissertation presents an analysis of the performance effects of burstiness (formalized by the autocorrelation function) in multi-tiered systems via a three-pronged approach: experimental measurements, analytic models, and policy development. This analysis considers (a) systems with finite buffers (e.g., systems with admission control that effectively operate as closed systems) and (b) systems with infinite buffers (i.e., systems that operate as open systems).

    For multi-tiered systems with a finite buffer size, experimental measurements show that if autocorrelation exists in any of the tiers of a multi-tiered system, then autocorrelation propagates to all tiers of the system. The presence of autocorrelated flows in all tiers significantly degrades performance. Workload characterization in a real experimental environment driven by the TPC-W benchmark confirms the existence of autocorrelated flows, which originate from the autocorrelated service process of one of the tiers. A simple model is devised that captures the observed behavior. The model is in excellent agreement with experimental measurements and captures the propagation of autocorrelation in the multi-tiered system as well as the resulting performance trends.

    For systems with an infinite buffer size, this study focuses on analytic models by proposing and comparing two families of approximations for the departure process of a BMAP/MAP/1 queue that admits batch correlated flows and whose service time process may be autocorrelated. One approximation is based on the ETAQA methodology for the solution of M/G/1-type processes and the other arises from lumpability rules. Formal proofs are provided that both approximations preserve the marginal distribution of the inter-departure times and their initial correlation structures.

    This dissertation also demonstrates how the knowledge of autocorrelation can be used to effectively improve system performance: D_EQAL, a new load balancing policy for clusters with dependent arrivals, is proposed. D_EQAL separates jobs to servers according to their sizes as traditional load balancing policies do, but this separation is biased by the effort to reduce performance loss due to autocorrelation in the streams of jobs that are directed to each server. As a result, not all servers are equally utilized (i.e., the load in the system becomes unbalanced), but the performance benefits of this load unbalancing are significant.
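    As a rough illustration of the size-based routing that D_EQAL builds on (and deliberately skews), the sketch below routes jobs to servers by size interval. The boundary value and the idea of shifting it to unbalance load are assumptions made for the example; the dissertation derives the bias from the workload's correlation structure.

```python
import bisect

class SizeIntervalDispatcher:
    """Route each job to the server responsible for its size interval.
    boundaries[i] is the upper size limit handled by server i; sizes above the
    last boundary go to the last server."""

    def __init__(self, boundaries):
        self.boundaries = boundaries

    def route(self, job_size):
        return bisect.bisect_left(self.boundaries, job_size)

# Two servers with the split point pushed below the median job size, so the server
# that receives the bursty large-job stream is intentionally kept less utilized.
dispatcher = SizeIntervalDispatcher(boundaries=[0.8])
print(dispatcher.route(0.5))   # -> server 0
print(dispatcher.route(1.5))   # -> server 1
```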

    Queuing Modelling and Performance Analysis of Content Transfer in Information Centric Networks

    With the rapid development of multimedia services and wireless technology, new generations of network traffic such as short-form video and live streaming have put tremendous pressure on the current network infrastructure. To meet the high-bandwidth and low-latency needs of this new generation of traffic, the focus of Internet architecture has moved from host-centric end-to-end communication to requester-driven content retrieval. This shift has motivated the development of Information-Centric Networking (ICN), a promising new paradigm for the future Internet. ICN aims to improve information retrieval on the Internet by identifying and routing data using unified names. In-network caching and the use of a pending interest table (PIT) are two key features of ICN that are designed to efficiently handle bulk data dissemination and retrieval, as well as reduce bandwidth consumption. Performance analysis has been, and continues to be, a key research interest in ICN.

    This thesis starts with the evaluation of content delivery delay in ICN. The delay is composed of propagation delay, transmission delay, processing delay and queueing delay. To characterize these components, queueing network theory is exploited in combination with the cache miss rate to model the content delivery time in ICN. Moreover, different topologies and network conditions are taken into account to evaluate the performance of content transfer in ICN. ICN is intrinsically compatible with wireless networks; to evaluate the performance of content transfer in wireless networks, an analytical model of the mean service time based on consumer and provider mobility is proposed. The accuracy of the analytical model is validated through extensive simulation experiments. Finally, the analytical model is used to evaluate the impact of key metrics, such as cache size, content size and content popularity, on the performance of the PIT and of content transfer in ICN.

    The pending interest table (PIT) is one of the essential components of the ICN forwarding plane and is responsible for stateful routing in ICN. It also aggregates identical interests to alleviate request flooding and network congestion, an aggregation feature that improves the performance of content delivery in ICN. Thus, an analytical model that characterizes the impact of the PIT on content delivery time allows a more precise evaluation of content transfer performance. At the same time, if the size of the PIT is not properly determined, the interest drop rate may be too high, reducing quality of service for consumers because their requests have to be retransmitted. Furthermore, the PIT is a costly resource, as it must operate at wire speed in the forwarding plane. Therefore, to ensure that the interest drop rate stays below a given requirement, an analytical model of PIT occupancy is developed to determine the minimum PIT size.

    In this thesis, the proposed analytical models are used to efficiently and accurately evaluate the performance of ICN content transfer and to investigate the key components of the ICN forwarding plane. Leveraging the insights provided by these models, the minimum PIT size and a proper interest timeout can be determined to enhance the performance of ICN. To widen the outcomes achieved in the thesis, several interesting yet challenging research directions are also pointed out.
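    Two of the quantities discussed above admit simple first-order approximations that make the modelling goals concrete: the expected delivery delay along a path of caches (the content returns from the first node that holds it) and the mean PIT occupancy via Little's law. The sketch below is a generic illustration under those simplifying assumptions, not the queueing-network model developed in the thesis; all numbers in the example are made up.

```python
def expected_delivery_delay(hop_delays, hit_probs):
    """Expected content delivery delay along a path of caches.
    hop_delays[i]: round-trip delay contributed by hop i (propagation +
    transmission + processing + queueing); hit_probs[i]: cache hit probability
    at node i, with the producer (last entry) always able to serve the content."""
    delay, reach_prob, cum = 0.0, 1.0, 0.0
    for d, p in zip(hop_delays, hit_probs):
        cum += d                         # delay to fetch from this node
        delay += reach_prob * p * cum    # served here with probability reach_prob * p
        reach_prob *= (1 - p)            # otherwise the interest travels on
    return delay

def mean_pit_occupancy(interest_rate, mean_response_delay):
    """Little's law: pending PIT entries = forwarded interest rate x mean time
    an entry waits for the matching data packet."""
    return interest_rate * mean_response_delay

print(expected_delivery_delay(hop_delays=[2.0, 5.0, 10.0], hit_probs=[0.3, 0.2, 1.0]))
print(mean_pit_occupancy(interest_rate=500.0, mean_response_delay=0.05))   # ~25 entries
```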

    On the Queue Length Distribution in BMAP Systems

    The Batch Markovian Arrival Process (BMAP) is a teletraffic model that combines a high ability to imitate the complex statistical behaviour of network traces with relative simplicity in analysis and simulation. It is also a generalization of a wide class of Markovian processes, a class which in particular includes the Poisson process, the compound Poisson process, the Markov-modulated Poisson process, the phase-type renewal process and others. In this paper we study the main queueing performance characteristic of a finite-buffer queue fed by the BMAP, namely the queue length distribution. In particular, we derive a formula for the Laplace transform of the queue length distribution. The main benefit of this formula is that it can be used to obtain both transient and stationary characteristics. To demonstrate this, several numerical results are presented.
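    Analytical results of this kind are typically sanity-checked against simulation. The sketch below simulates a finite-buffer queue fed by a compound Poisson (batch) arrival process, one of the simplest special cases of the BMAP named above, and estimates the queue-length distribution seen just before batch arrivals; all parameters are illustrative and the code is not taken from the paper.

```python
import random
from collections import Counter

def batch_queue_length_distribution(lam, batch_probs, mu, buffer_size,
                                    n_events=200_000, seed=7):
    """Finite-buffer queue with compound Poisson batch arrivals (rate lam,
    batch-size distribution batch_probs) and exponential service (rate mu).
    Returns the empirical queue-length distribution sampled just before batch
    arrivals; arrivals that do not fit in the buffer are lost."""
    rng = random.Random(seed)
    queue, t = 0, 0.0
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    seen = Counter()
    for _ in range(n_events):
        if next_arrival <= next_departure:            # batch arrival event
            t = next_arrival
            seen[queue] += 1
            batch = rng.choices(range(1, len(batch_probs) + 1), weights=batch_probs)[0]
            if queue == 0:                            # server was idle: start service
                next_departure = t + rng.expovariate(mu)
            queue = min(buffer_size, queue + batch)
            next_arrival = t + rng.expovariate(lam)
        else:                                         # service completion event
            t = next_departure
            queue -= 1
            next_departure = t + rng.expovariate(mu) if queue > 0 else float("inf")
    total = sum(seen.values())
    return {k: v / total for k, v in sorted(seen.items())}

# Batches of size 1 or 2 (equally likely), arrival rate 0.3, service rate 1, buffer 10.
print(batch_queue_length_distribution(lam=0.3, batch_probs=[0.5, 0.5], mu=1.0, buffer_size=10))
```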

    Resource Provisioning for Web Applications under Time-varying Traffic

    Cloud computing has gained considerable popularity in recent years. In this paradigm, an organization, referred to as a subscriber, acquires resources from an infrastructure provider to deploy its applications and pays for these resources on a pay-as-you-go basis. Typically, an infrastructure provider charges a subscriber based on resource level and duration of usage. From the subscriber's perspective, it is desirable to acquire enough capacity to provide an acceptable quality of service while minimizing the cost. A key indicator of quality of service is response time. In this thesis, we use performance models based on queueing theory to determine the required capacity to meet a performance target given by Pr[response time ≤ x] ≥ β.

    We first consider the case where resources are obtained from an infrastructure provider for a time period of one hour. This is compatible with the pricing policy of major infrastructure providers, where instance usage is charged on an hourly basis. Over such a time period, web application traffic exhibits time-varying behavior. A conventional traffic model such as the Poisson process does not capture this characteristic; the Markov-modulated Poisson process (MMPP), on the other hand, is capable of modeling such behavior. In our investigation of MMPP as a traffic model, an available workload generator is extended to produce a synthetic trace of job arrivals with a controlled level of time-variation, and an MMPP is fitted to the synthetic trace. The effectiveness of MMPP is evaluated by comparing performance results obtained through simulation, using as input the synthetic trace and job arrivals generated by the fitted MMPP.

    Queueing models with an MMPP arrival process are then developed to determine the required capacity to meet a performance target over a one-hour time interval. Specifically, results on the response time distribution are used in an optimization to obtain estimates of the required capacity. Two models are of interest to our investigation: a single-server model and a two-stage tandem queue. For both models, it is assumed that service time is represented by a phase-type (PH) distribution and the queueing discipline is FCFS. The single-server model is therefore the MMPP/PH/1 (FCFS) model. Analytic results for the time-dependent response time distribution of this model are first obtained; computation of numerical results, however, is very costly. Through numerical examples, it is found that steady-state results are a good approximation for a time interval of one hour, and the computation requirement is significantly lower. Steady-state results are then used to determine the required capacity. The effectiveness of this model in predicting the required capacity to meet the performance target is evaluated using an experimental system based on the TPC-W benchmark, and results on the impact of MMPP parameters on the required capacity are also presented. The second model is a two-stage tandem queue; the accuracy of the required capacity obtained via steady-state analysis is likewise evaluated using the TPC-W benchmark.

    We next consider the case where the infrastructure provider uses a time unit (TU) of less than one hour for charging of resource usage. We focus on scenarios where the TU is comparable to the average sojourn time in an MMPP state. A one-hour operation interval is divided into a number of service intervals, each of length one TU. At the beginning of each service interval, an estimate of the arrival rate is used as input to the M/PH/1 (FCFS) model to determine the required capacity to meet the performance target over the upcoming service interval; three heuristic algorithms are developed to estimate the arrival rate. The merit of this strategy, in terms of meeting the performance target over the operation interval and savings in capacity compared to that determined by the single-server model, is investigated using the TPC-W benchmark.
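    As a concrete (and deliberately simplified) instance of the provisioning objective Pr[response time ≤ x] ≥ β, consider an M/M/1 (FCFS) queue: in steady state the response time is exponential with rate μ - λ, so the target reduces to a closed-form lower bound on the service rate. The sketch below uses this simple special case only to make the optimization concrete; the thesis itself relies on MMPP/PH/1 and M/PH/1 models.

```python
import math

def min_capacity_mm1(arrival_rate, x, beta):
    """Smallest service rate mu such that an M/M/1 (FCFS) queue satisfies
    Pr[response time <= x] >= beta.  The steady-state response time is
    exponential with rate mu - lambda, so:
        1 - exp(-(mu - lambda) * x) >= beta   <=>   mu >= lambda - ln(1 - beta) / x
    """
    return arrival_rate - math.log(1.0 - beta) / x

# Example: 50 req/s and a target Pr[R <= 0.5 s] >= 0.95 require mu >= ~56 req/s.
print(min_capacity_mm1(arrival_rate=50.0, x=0.5, beta=0.95))
```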