
    WESSBAS: extraction of probabilistic workload specifications for load testing and performance prediction—a model-driven approach for session-based application systems

    The specification of workloads is required in order to evaluate performance characteristics of application systems using load testing and model-based performance prediction. Defining workload specifications that represent the real workload as accurately as possible is one of the biggest challenges in both areas. To overcome this challenge, this paper presents an approach that aims to automate the extraction and transformation of workload specifications for load testing and model-based performance prediction of session-based application systems. The approach (WESSBAS) comprises three main components. First, a system- and tool-agnostic domain-specific language (DSL) allows the layered modeling of workload specifications of session-based systems. Second, instances of this DSL are automatically extracted from recorded session logs of production systems. Third, these instances are transformed into executable workload specifications of load generation tools and model-based performance evaluation tools. We present transformations to the common load testing tool Apache JMeter and to the Palladio Component Model. Our approach is evaluated using the industry-standard benchmark SPECjEnterprise2010 and the World Cup 1998 access logs. Workload-specific characteristics (e.g., session lengths and arrival rates) and performance characteristics (e.g., response times and CPU utilizations) show that the extracted workloads match the measured workloads with high accuracy.
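    The gist of such an extraction step can be pictured with a toy sketch. The snippet below is not the WESSBAS tooling itself; the session data and request names are invented for illustration. It derives a first-order Markov model of user behavior and a basic session statistic from recorded sessions, the kind of probabilistic behavior model that such workload specifications build on.

```python
# Minimal sketch (not the actual WESSBAS tooling): derive a first-order
# Markov model of user behavior plus a basic session statistic from recorded
# sessions. The session data and request names are invented for illustration.
from collections import defaultdict

# One list of request names per recorded user session.
sessions = [
    ["home", "search", "view_item", "add_to_cart", "checkout"],
    ["home", "view_item", "home", "search", "view_item"],
    ["home", "search", "search", "view_item"],
]

transition_counts = defaultdict(lambda: defaultdict(int))
for session in sessions:
    states = ["$"] + session + ["$"]          # "$" marks session start/end
    for src, dst in zip(states, states[1:]):
        transition_counts[src][dst] += 1

# Normalize counts into per-state transition probabilities.
markov_model = {
    src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
    for src, dsts in transition_counts.items()
}

avg_session_length = sum(len(s) for s in sessions) / len(sessions)
print(markov_model["home"])   # {'search': 0.75, 'view_item': 0.25}
print(avg_session_length)     # ~4.67 requests per session
```

    In the paper's pipeline, a model of this kind would then be transformed into executable JMeter test plans or Palladio Component Model usage models; the sketch stops at the extracted model.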

    Finding a suitable performance testing tool

    Finding the most suitable testing software for a project is difficult: in the field of stress and load testing, many tools are effective at finding certain kinds of problems while completely missing others, and a silver bullet that solves all problems in a cost-effective and reliable way has not yet been found. This project was carried out as a systematic literature review to determine whether documented solutions exist that can test everything in a cost-effective way. The document starts with an introduction to the task, which originated from a real software testing company's request to find test software that can, cost-effectively and reliably, fulfil the company's needs. A background section describes why testing matters, the basics of testing, and what others have found in previous studies of the area. The research method is described in detail, followed by results presenting the tools found during the research, divided into sections by license type; this sectioning was chosen for the benefit of testing companies interested in further developing the identified tools for their own purposes. Findings and answers to the research questions are presented and discussed, followed by possible implications and suggestions for further research. The systematic literature review identified a total of 40 different tools during the data extraction process. One complete software system was available commercially, including extensive support and help functions for the customer. A different approach, linking open-source and relatively inexpensive pieces of software into a composite solution, was also identified; this solution included the most common and most popular individual piece of software found by the study. All identified tools are listed and briefly commented on, mainly with information originating from the authors' home pages.

    LWS: A Framework for Log-based Workload Simulation in Session-based SUT

    Microservice-based applications and cloud-native systems have been widely adopted in large IT enterprises, and their operation and management have become a focus of research. Realistic workloads are the premise and basis of prominent research topics including performance testing, dynamic resource provisioning and scheduling, and AIOps. Due to privacy restrictions, the complexity and variety of workloads, and the requirement for reasonable intervention, it is difficult to copy or generate real workloads directly. In this paper, we formulate the task of workload simulation and propose a framework for Log-based Workload Simulation (LWS) in session-based application systems. First, LWS collects session logs and transforms them into grouped and well-organized sessions. Then LWS extracts a user behavior abstraction based on a relational model and an intervenable workload intensity by three methods from different perspectives. LWS combines the user behavior abstraction and the workload intensity for simulated workload generation and designs a domain-specific language for better execution. The experimental evaluation is performed on an open-source cloud-native application and a public real-world e-commerce workload. The results show that the simulated workload generated by LWS is effective and intervenable.
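    As a rough illustration of the first two steps (session grouping and workload-intensity extraction), the sketch below uses an invented log format and a 30-minute inactivity timeout; it is a minimal stand-in under those assumptions, not the LWS implementation.

```python
# Illustrative sketch only (field names and the 30-minute timeout are
# assumptions, not the LWS implementation): group raw access-log records
# into per-user sessions and derive a simple workload-intensity series.
from collections import defaultdict

SESSION_TIMEOUT = 30 * 60  # seconds of inactivity that close a session

# Each record: (user_id, unix_timestamp, endpoint)
records = [
    ("u1", 0,    "/home"),
    ("u1", 40,   "/search"),
    ("u2", 60,   "/home"),
    ("u1", 4000, "/home"),     # > 30 min after the last u1 request: new session
]

per_user = defaultdict(list)
for user, ts, endpoint in sorted(records, key=lambda r: r[1]):
    per_user[user].append((ts, endpoint))

sessions = []
for user, events in per_user.items():
    current = [events[0]]
    for prev, cur in zip(events, events[1:]):
        if cur[0] - prev[0] > SESSION_TIMEOUT:
            sessions.append((user, current))
            current = []
        current.append(cur)
    sessions.append((user, current))

# Workload intensity: sessions started per minute, a simple stand-in for
# the intensity models the framework extracts.
intensity = defaultdict(int)
for _, events in sessions:
    intensity[events[0][0] // 60] += 1

print(len(sessions), dict(intensity))   # 3 sessions, e.g. {0: 1, 1: 1, 66: 1}
```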

    Distributed Load Testing by Modeling and Simulating User Behavior

    Modern human-machine systems such as microservices rely upon agile engineering practices which require changes to be tested and released more frequently than classically engineered systems. A critical step in the testing of such systems is the generation of realistic workloads or load testing. Generated workload emulates the expected behaviors of users and machines within a system under test in order to find potentially unknown failure states. Typical testing tools rely on static testing artifacts to generate realistic workload conditions. Such artifacts can be cumbersome and costly to maintain; however, even model-based alternatives can prevent adaptation to changes in a system or its usage. Lack of adaptation can prevent the integration of load testing into system quality assurance, leading to an incomplete evaluation of system quality. The goal of this research is to improve the state of software engineering by addressing open challenges in load testing of human-machine systems with a novel process that a) models and classifies user behavior from streaming and aggregated log data, b) adapts to changes in system and user behavior, and c) generates distributed workload by realistically simulating user behavior. This research contributes a Learning, Online, Distributed Engine for Simulation and Testing based on the Operational Norms of Entities within a system (LODESTONE): a novel process for distributed load testing by modeling and simulating user behavior. We specify LODESTONE within the context of a human-machine system to illustrate distributed adaptation and execution in load testing processes. LODESTONE uses log data to generate and update user behavior models, cluster them into similar behavior profiles, and instantiate distributed workload on software systems. We analyze user behavioral data having differing characteristics to replicate human-machine interactions in a modern microservice environment. We discuss tools, algorithms, software design, and implementation in two different computational environments: client-server and cloud-based microservices. We illustrate the advantages of LODESTONE through a qualitative comparison of key feature parameters and experimentation based on shared data and models. LODESTONE continuously adapts to changes in the system to be tested, which allows for the integration of load testing into the quality assurance process for cloud-based microservices.
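    One ingredient of such a process, clustering users into similar behavior profiles from aggregated log data, can be sketched as below. The per-endpoint frequency features and the k-means choice are assumptions made for illustration, not the LODESTONE algorithms.

```python
# Hedged sketch of clustering users into behavior profiles from log data;
# this is not the LODESTONE implementation, and the feature choice below
# is an assumption made purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

endpoints = ["/home", "/search", "/view_item", "/checkout"]

# Per-user request counts per endpoint, e.g. aggregated from streaming logs.
user_counts = {
    "u1": [10, 8, 6, 1],   # browsing-heavy user
    "u2": [12, 9, 7, 0],
    "u3": [2, 1, 5, 4],    # purchase-heavy user
    "u4": [1, 2, 6, 5],
}

# Normalize counts to per-user frequency vectors so profiles compare
# behavior mix rather than raw traffic volume.
X = np.array([np.array(c) / sum(c) for c in user_counts.values()])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
profiles = dict(zip(user_counts.keys(), labels))
print(profiles)  # two behavior profiles; each could drive one simulated user type
```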

    Automated test-based learning and verification of performance models for microservices systems

    Effective and automated verification techniques able to provide assurances of performance and scalability are highly demanded in the context of microservices systems. In this paper, we introduce a methodology that applies specification-driven load testing to learn the behavior of the target microservices system under multiple deployment configurations. Testing is driven by realistic workload conditions sampled in production. The sampling produces a formal description of the users' behavior as a Discrete Time Markov Chain. This model drives multiple load testing sessions that query the system under test and feed a Bayesian inference process which incrementally refines the initial model to obtain, from run-time evidence, a complete specification as a Continuous Time Markov Chain. The complete specification is then used to conduct automated verification with probabilistic model checking and to compute a configuration score that evaluates alternative deployment options. This paper introduces the methodology, its theoretical foundation, and the toolchain we developed to automate it. Our empirical evaluation shows its applicability, benefits, and costs on a representative microservices system benchmark. We show that the methodology detects performance issues, traces them back to system-level requirements, and, thanks to the configuration score, provides engineers with insights on deployment options. The comparison between our approach and a selected state-of-the-art baseline shows that we reduce the cost by up to 73% in terms of number of tests. The verification stage requires negligible execution time and memory consumption: verifying 360 system-level requirements took ~1 minute while consuming at most 34 KB, and computing the score involved verifying ~7k automatically generated properties in ~72 seconds using at most ~50 KB. (C) 2022 The Author(s). Published by Elsevier Inc.
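    The refinement step can be illustrated with a minimal numeric sketch: assuming exponentially distributed service times, a Gamma prior on one operation's service rate is updated from load-test observations with a conjugate Bayesian update. The numbers and priors below are invented and are not taken from the paper's toolchain.

```python
# Minimal numeric sketch of the learning step: a prior guess of a service
# rate is refined from load-test observations with a conjugate
# Gamma-exponential Bayesian update. All values are illustrative assumptions.

# Prior belief about the service rate (requests/second) of one operation,
# encoded as Gamma(alpha, beta); prior mean = alpha / beta = 10 req/s.
alpha, beta = 2.0, 0.2

# Observed response times (seconds) collected during a load-testing session.
observations = [0.08, 0.12, 0.09, 0.11, 0.10, 0.14, 0.09, 0.10]

# Conjugate update for exponentially distributed service times.
alpha_post = alpha + len(observations)
beta_post = beta + sum(observations)

posterior_rate = alpha_post / beta_post
print(f"posterior service rate ~ {posterior_rate:.1f} req/s")   # ~9.7 req/s
```

    In the methodology described above, a refined rate of this kind would play the role of an exit rate in the learned Continuous Time Markov Chain that is then model-checked.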

    Emerging research directions in computer science: contributions from the young informatics faculty in Karlsruhe

    In order to build better human-friendly human-computer interfaces, such interfaces need to be enabled with capabilities to perceive the user, his location, identity, activities and in particular his interaction with others and the machine. Only with these perception capabilities can smart systems (for example human-friendly robots or smart environments) become possible. In my research I'm thus focusing on the development of novel techniques for the visual perception of humans and their activities, in order to facilitate perceptive multimodal interfaces, humanoid robots and smart environments. My work includes research on person tracking, person identification, recognition of pointing gestures, estimation of head orientation and focus of attention, as well as audio-visual scene and activity analysis. Application areas are human-friendly humanoid robots, smart environments, content-based image and video analysis, as well as safety- and security-related applications. This article gives a brief overview of my ongoing research activities in these areas.

    Architectural run-time models for performance and privacy analysis in dynamic cloud applications


    Towards a Model-driven Performance Prediction Approach for Internet of Things Architectures

    Indisputably, security and interoperability are major concerns in Internet of Things (IoT) architectures and applications. In this paper, however, we emphasize the role and importance of performance and scalability as additional, crucial aspects in planning and building sustainable IoT solutions. IoT architectures are complicated systems-of-systems that involve different developer roles, development processes, organizational units, and multilateral governance. Their performance is often neglected during development but becomes a major concern at the end of development, resulting in supplemental effort, cost, and refactoring. One should not rely on such systems scaling linearly merely because up-to-date technologies that may promote such behavior are used. Furthermore, different security or interoperability choices also have a considerable impact on performance and may result in unforeseen trade-offs. Therefore, we propose and pursue the vision of a model-driven approach to predict and evaluate the performance of IoT architectures early in the system lifecycle in order to guarantee efficient and scalable systems reaching from sensors to business applications.
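    To make the idea of early prediction concrete, a back-of-the-envelope sketch is given below: an assumed three-stage IoT pipeline (gateway, broker, backend) analyzed with plain M/M/1 formulas. All rates are invented illustration values and this is not the approach proposed in the paper.

```python
# Back-of-the-envelope sketch of an early, model-based performance
# prediction: an open queueing chain (sensor gateway -> message broker ->
# business backend) analyzed with M/M/1 formulas. Service and arrival
# rates are invented illustration values.

arrival_rate = 800.0  # messages per second entering the pipeline

# Per-stage service capacity in messages per second.
stages = {"gateway": 2000.0, "broker": 1500.0, "backend": 1000.0}

total_response = 0.0
for name, service_rate in stages.items():
    utilization = arrival_rate / service_rate
    if utilization >= 1.0:
        raise ValueError(f"{name} saturates at this load")
    residence = 1.0 / (service_rate - arrival_rate)   # M/M/1 mean residence time
    total_response += residence
    print(f"{name}: utilization {utilization:.0%}, residence {residence * 1e3:.2f} ms")

print(f"predicted end-to-end mean response time: {total_response * 1e3:.2f} ms")
```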

    The Integration of Machine Learning into Automated Test Generation: A Systematic Mapping Study

    Context: Machine learning (ML) may enable effective automated test generation. Objective: We characterize emerging research, examining testing practices, researcher goals, ML techniques applied, evaluation, and challenges. Methods: We perform a systematic mapping on a sample of 102 publications. Results: ML generates input for system, GUI, unit, performance, and combinatorial testing or improves the performance of existing generation methods. ML is also used to generate test verdict, property-based, and expected-output oracles. Supervised learning - often based on neural networks - and reinforcement learning - often based on Q-learning - are common, and some publications also employ unsupervised or semi-supervised learning. (Semi-/Un-)Supervised approaches are evaluated using both traditional testing metrics and ML-related metrics (e.g., accuracy), while reinforcement learning is often evaluated using testing metrics tied to the reward function. Conclusion: Work to date shows great promise, but there are open challenges regarding training data, retraining, scalability, evaluation complexity, ML algorithms employed - and how they are applied - benchmarks, and replicability. Our findings can serve as a roadmap and inspiration for researchers in this field. Comment: Under submission to the Software Testing, Verification, and Reliability journal. (arXiv admin note: text overlap with arXiv:2107.00906, an earlier study that this study extends.)
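    As a toy illustration of one recurring pattern the mapping study reports (reinforcement learning, often Q-learning, rewarded via testing metrics), the sketch below runs tabular Q-learning against an invented stand-in system under test and rewards newly covered abstract states. It is not drawn from any surveyed publication.

```python
# Toy sketch of reinforcement-learning-based test generation: tabular
# Q-learning picks input actions and is rewarded whenever a new abstract
# state of a toy system under test (SUT) is covered. The SUT and reward
# are stand-ins invented for illustration.
import random

ACTIONS = ["click_a", "click_b", "submit"]

def toy_sut(state, action):
    """Stand-in for a system under test: maps (state, action) to a next state."""
    return hash((state, action)) % 8   # 8 abstract SUT states

q = {}                 # (state, action) -> estimated value
alpha, gamma, eps = 0.5, 0.9, 0.2
covered = set()

random.seed(0)
for episode in range(200):
    state = 0
    for _ in range(5):                 # short test sequences
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        nxt = toy_sut(state, action)
        reward = 1.0 if nxt not in covered else 0.0   # reward new coverage
        covered.add(nxt)
        best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
        q[(state, action)] = q.get((state, action), 0.0) + alpha * (
            reward + gamma * best_next - q.get((state, action), 0.0)
        )
        state = nxt

print(f"covered {len(covered)} of 8 abstract states")
```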