    Elements for Response Time Statistics in ERP Transaction Systems

    We present measurements and ideas for response time statistics in ERP systems. It is shown that the response time distribution of a given transaction in a given system is generically a log-normal distribution or, in some situations, a sum of two or more log-normal distributions. We present arguments for this form of the distribution based on heuristic rules for response times, and we show data from performance measurements in actual systems to support the log-normal form. Deviations from the log-normal form can often be traced back to performance problems in the system. Consequences for the interpretation of response time data and for service level agreements are discussed.
    Comment: revtex, two-column, 8 pages, 13 figures; figures replaced by coloured versions
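The heuristic argument sketched above can be illustrated with a small simulation: if a transaction's response time is the product of many independent positive slowdown factors, its logarithm is a sum of independent terms and is therefore approximately normal, i.e. the response time itself is approximately log-normal. The base time and factor range below are illustrative assumptions, not values from the paper.

```python
import math
import random
import statistics

random.seed(42)

def simulate_response_time(n_factors=30):
    """Response time as a product of independent multiplicative slowdowns."""
    t = 50.0  # hypothetical base service time in ms (assumed)
    for _ in range(n_factors):
        t *= random.uniform(0.9, 1.2)  # independent slowdown/speedup factors
    return t

samples = [simulate_response_time() for _ in range(20000)]
logs = [math.log(s) for s in samples]

# The raw samples are right-skewed (mean above median), while their logs
# are nearly symmetric -- the signature of a log-normal distribution.
print(statistics.mean(samples) > statistics.median(samples))  # True
```

In this picture, a second log-normal component would appear whenever a subset of transactions takes a systematically different path (e.g. a cache miss), which matches the "sum of two or more log-normals" observation.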

    Characterizing Workload of Web Applications on Virtualized Servers

    With the ever-increasing demand for cloud computing services, planning and management of cloud resources has become a more and more important issue that directly affects resource utilization, SLA compliance and customer satisfaction. But before any management strategy is devised, a good understanding of applications' workload in a virtualized environment is the basis of any resource management method. Unfortunately, little work has focused on this area. Lack of raw data could be one reason; another is that people still use the traditional models or methods developed for non-virtualized environments. The study of applications' workload in a virtualized environment should account for its peculiar features compared to the non-virtualized case. In this paper, we analyze the workload demands that reflect applications' behavior and the impact of virtualization. The results are obtained from an experimental cloud testbed running web applications, specifically the RUBiS benchmark application. We profile the workload dynamics on both virtualized and non-virtualized environments and compare the findings. The experimental results are valuable for estimating the performance of applications on computer architectures, for predicting SLA compliance or violation based on the projected application workload, and for guiding decision making to support applications with the right hardware.
    Comment: 8 pages, 8 figures, The Fourth Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware, in conjunction with the 19th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-2014), Salt Lake City, Utah, USA, March 1-5, 201
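The workload-profiling step described above can be sketched as computing a few summary statistics per utilisation trace and comparing environments. The traces below are synthetic placeholders, not the paper's RUBiS measurements; the virtualised trace is simply assumed burstier for illustration.

```python
import statistics

# Hypothetical CPU-utilisation traces (% per sampling interval). In the paper
# these would come from RUBiS running on virtualised vs. bare-metal hosts.
bare_metal = [22, 25, 24, 23, 26, 25, 24, 23, 25, 24]
virtualised = [20, 35, 18, 40, 22, 38, 19, 36, 21, 37]  # assumed burstier

def profile(trace):
    """Summarise a workload trace: mean, 95th percentile, coefficient of variation."""
    mean = statistics.mean(trace)
    stdev = statistics.stdev(trace)
    p95 = sorted(trace)[int(0.95 * (len(trace) - 1))]
    return {"mean": round(mean, 2), "p95": p95, "cv": round(stdev / mean, 3)}

print(profile(bare_metal))
print(profile(virtualised))
```

A higher coefficient of variation at similar mean load is exactly the kind of virtualisation-specific feature that would lead to different provisioning decisions than a non-virtualized model predicts.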

    KISS: Stochastic Packet Inspection Classifier for UDP Traffic

    This paper proposes KISS, a novel Internet classification engine. Motivated by the expected rise of UDP traffic, which stems from the momentum of Peer-to-Peer (P2P) streaming applications, we propose a novel classification framework that leverages a statistical characterization of the payload. Statistical signatures are derived by means of a Chi-Square-like test, which extracts the protocol "format" but ignores the protocol "semantic" and "synchronization" rules. The signatures feed a decision process based either on the geometric distance among samples or on Support Vector Machines. KISS is very accurate, and its signatures are intrinsically robust to packet sampling, reordering, and flow asymmetry, so that it can be used on almost any network. KISS is tested in different scenarios, considering traditional client-server protocols, VoIP, and both traditional and new P2P Internet applications. Results are astonishing. The average True Positive percentage is 99.6%, with the worst case equal to 98.1%, while results are almost perfect when dealing with new P2P streaming applications.
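The Chi-Square-like signature idea can be sketched as follows: for a given chunk position in the first payload bytes of a flow's packets, measure how far the observed value distribution deviates from uniform. Constant header fields score high, random-looking fields score low, and the vector of per-position scores forms the "format" signature. The 4-bit chunking and the example flows below are illustrative assumptions, not KISS's exact parameters.

```python
import os
from collections import Counter

def chunk_chi_square(payloads, position):
    """Chi-square-like statistic of the 4-bit chunk at `position`, computed
    across the packets of a flow, against a uniform reference distribution."""
    values = []
    for p in payloads:
        byte = p[position // 2]
        nibble = (byte >> 4) if position % 2 == 0 else (byte & 0x0F)
        values.append(nibble)
    n = len(values)
    e = n / 16  # expected count per nibble value under uniformity
    counts = Counter(values)
    return sum((counts.get(v, 0) - e) ** 2 / e for v in range(16))

# Hypothetical flows: a protocol whose first byte is a constant 0x80 header
# field, versus fully random payload.
constant = [bytes([0x80]) + os.urandom(3) for _ in range(200)]
random_pl = [os.urandom(4) for _ in range(200)]

sig_const = chunk_chi_square(constant, 0)  # first nibble always 0x8 -> large
sig_rand = chunk_chi_square(random_pl, 0)  # roughly uniform nibble -> small
print(sig_const > sig_rand)
```

Because the statistic only looks at value distributions per position, it is insensitive to packet reordering and sampling, which is consistent with the robustness claims in the abstract.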

    Performance evaluation of an open distributed platform for realistic traffic generation

    Network researchers have dedicated a notable part of their efforts to the area of traffic modeling and to the implementation of efficient traffic generators. We feel that there is a strong demand for traffic generators capable of reproducing realistic traffic patterns according to theoretical models while at the same time achieving high performance. This work presents an open distributed platform for traffic generation that we call the distributed Internet traffic generator (D-ITG), capable of producing traffic (network, transport and application layer) at packet level and of accurately replicating appropriate stochastic processes for both the inter-departure time (IDT) and packet size (PS) random variables. We implemented two different versions of our distributed generator. In the first, a log server is in charge of recording the information transmitted by senders and receivers, and these communications are based either on TCP or UDP. In the second, senders and receivers make use of the MPI library. In this work, a complete performance comparison among the centralized version and the two distributed versions of D-ITG is presented.
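The core generation idea — drawing IDT and PS from configurable stochastic processes — can be sketched in a few lines. Exponential IDT and uniform PS are assumptions chosen for illustration here; D-ITG itself supports a range of distributions for both variables.

```python
import random

random.seed(7)

def generate_flow(n_packets, mean_idt_s=0.01, min_ps=64, max_ps=1500):
    """Return a list of (departure_time_s, size_bytes) tuples, with
    exponential inter-departure times and uniform packet sizes."""
    t = 0.0
    flow = []
    for _ in range(n_packets):
        t += random.expovariate(1.0 / mean_idt_s)  # exponential IDT
        size = random.randint(min_ps, max_ps)      # uniform PS in bytes
        flow.append((t, size))
    return flow

flow = generate_flow(10000)
duration = flow[-1][0]
total_bytes = sum(size for _, size in flow)
print(f"{len(flow)} packets over {duration:.1f}s, "
      f"~{8 * total_bytes / duration / 1e6:.2f} Mbit/s")
```

A real sender must additionally pace the transmissions to match the drawn departure times at high packet rates, which is where the performance comparison between the centralized and distributed versions becomes relevant.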

    Weibull mixture model to characterise end-to-end Internet delay at coarse time-scales

    Traces collected at monitored points around the Internet contain representative performance information about the paths their probes traverse. Basic measurement attributes, such as delay and loss, are easy to collect and provide a means to both build and validate empirical performance models. However, the task of analysing measurements and extracting performance conclusions from them remains challenging. Ideally, performance modelling aims to find a set of self-contained parameters to describe, summarise, profile and easily display network performance status at a given time. This can provide meaningful information to address applications in fault and performance management, hence providing input to network provisioning, traffic engineering and performance prediction. In this work we present the Weibull Mixture Model, a method to characterise end-to-end network delay measurements with a few simple, accurate, representative and manageable parameters using a finite combination of Weibull distributions, with all the aforementioned benefits. The model parameters are related to meaningful delay characteristics, such as average, peak and tail behaviour in a daily profile, and can be optimally found using an iterative algorithm known as Expectation Maximisation. Studies of such parameter evolution can reflect current workload status and network events impacting packet dynamics, with further applications in network management. In addition, a self-sufficient procedure to implement the Weibull Mixture Model is presented, along with a set of matching examples on real GPS-synchronised measurements taken across the Internet, donated by RIPE NCC.
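The Expectation Maximisation fit can be sketched for a two-component Weibull mixture. To keep the M-step in closed form, this simplified version holds the shape parameters fixed at an assumed value and estimates only the scales and mixing weights; the paper's procedure fits all parameters. The synthetic "delay" data mimics a bimodal off-peak/peak daily profile and is not from the RIPE NCC traces.

```python
import math
import random

random.seed(1)

def weibull_pdf(x, shape, scale):
    """Weibull density f(x; k, lambda) = (k/lambda) (x/lambda)^(k-1) e^{-(x/lambda)^k}."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

# Synthetic bimodal delays (ms): an off-peak mode near 10 and a peak mode near 40.
data = ([random.weibullvariate(10, 2.0) for _ in range(600)] +
        [random.weibullvariate(40, 2.0) for _ in range(400)])

shape = 2.0                          # assumed fixed shape for both components
scales, weights = [5.0, 60.0], [0.5, 0.5]
for _ in range(50):
    # E-step: posterior responsibility of each component for each sample.
    resp = []
    for x in data:
        p = [w * weibull_pdf(x, shape, s) for w, s in zip(weights, scales)]
        tot = sum(p)
        resp.append([pi / tot for pi in p])
    # M-step: with known shape, the weighted scale MLE has closed form
    # lambda_j = (sum_i r_ij x_i^k / sum_i r_ij)^(1/k).
    for j in range(2):
        wsum = sum(r[j] for r in resp)
        scales[j] = (sum(r[j] * x ** shape
                         for r, x in zip(resp, data)) / wsum) ** (1 / shape)
        weights[j] = wsum / len(data)

print([round(s, 1) for s in scales], [round(w, 2) for w in weights])
```

The recovered scales and weights approach the generating values, and tracking their evolution over successive measurement windows is what enables the workload and event monitoring applications mentioned above.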