7 research outputs found

    Collaborative Software Performance Engineering for Enterprise Applications

    In the domain of enterprise applications, organizations usually implement third-party standard software components in order to save costs. Hence, application performance monitoring activities constantly produce log entries that are comparable to a certain extent, holding the potential for valuable collaboration across organizational borders. Taking advantage of this fact, we propose a collaborative knowledge base aimed at supporting decisions in performance engineering activities carried out during the early design phases of planned enterprise applications. To verify our assumption of cross-organizational comparability, machine learning algorithms were trained on monitoring logs of 18,927 standard application instances productively running at different organizations around the globe. Using random forests, we were able to predict the mean response time for selected standard business transactions with a mean relative error of 23.19 percent. Hence, the approach combines the benefits of existing measurement-based and model-based performance prediction techniques, leading to competitive advantages enabled by inter-organizational collaboration.
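    The prediction step described above can be sketched as follows. This is an illustrative reconstruction only: the features and data are synthetic stand-ins for the monitoring logs, not the authors' actual pipeline.

```python
# Illustrative sketch only: hypothetical features and synthetic data stand in for
# the monitoring logs described in the abstract; this is not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-instance features, e.g. hardware size, load, data volume.
X = rng.uniform(size=(n, 3))
# Synthetic "mean response time" target with some noise.
y = 50 + 200 * X[:, 0] + 80 * X[:, 1] ** 2 + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
# Mean relative error, the metric reported in the abstract (23.19 percent there).
mre = np.mean(np.abs(pred - y_test) / y_test)
print(f"mean relative error: {mre:.2%}")
```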

    A Perception of the Practice of Software Security and Performance Verification

    Security and performance are critical nonfunctional requirements for software systems. Thus, it is crucial to include verification activities during software development to identify defects related to such requirements and avoid their occurrence after release. Software verification, including testing and reviews, encompasses a set of activities whose purpose is to analyze the software in search of defects. Security and performance verification are activities that look for defects related to these specific quality attributes. Few empirical studies have focused on the state of the practice in security and performance verification. This paper presents the results of a case study performed in the context of Brazilian organizations, aiming to characterize security and performance verification practices. Additionally, it provides a set of conjectures indicating recommendations to improve security and performance verification activities.

    Software Microbenchmarking in the Cloud. How Bad is it Really?

    Rigorous performance engineering traditionally assumes measuring on bare-metal environments to control for as many confounding factors as possible. Unfortunately, some researchers and practitioners might not have the access, knowledge, or funds to operate dedicated performance-testing hardware, making public clouds an attractive alternative. However, shared public cloud environments are inherently unpredictable in terms of the system performance they provide. In this study, we explore the effects of cloud environments on the variability of performance test results and to what extent slowdowns can still be reliably detected even in a public cloud. We focus on software microbenchmarks as an example of performance tests and execute extensive experiments on three well-known public cloud services (AWS, GCE, and Azure), using three different cloud instance types per service. We also compare the results to a hosted bare-metal offering from IBM Bluemix. In total, we gathered more than 4.5 million unique microbenchmarking data points from benchmarks written in Java and Go. We find that the variability of results differs substantially between benchmarks and instance types (with a coefficient of variation ranging from 0.03% to more than 100%). However, executing test and control experiments on the same instances (in randomized order) allows us to detect slowdowns of 10% or less with high confidence, using state-of-the-art statistical tests (i.e., Wilcoxon rank-sum and overlapping bootstrapped confidence intervals). Finally, our results indicate that the Wilcoxon rank-sum test manages to detect smaller slowdowns in cloud environments.
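    The detection idea named in the abstract (Wilcoxon rank-sum plus overlapping bootstrapped confidence intervals) can be illustrated with a minimal sketch on synthetic measurements; the data, sample sizes, and thresholds below are assumptions for demonstration, not the study's setup.

```python
# Minimal sketch: compare "control" and "test" measurement samples with the
# Wilcoxon rank-sum (Mann-Whitney U) test and with overlapping bootstrapped
# confidence intervals. All data here are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=8.0, size=200)   # baseline execution times (ms)
test = rng.normal(loc=110.0, scale=8.0, size=200)      # candidate with a ~10% slowdown

# Wilcoxon rank-sum: is the test version significantly slower than the control?
_, p_value = mannwhitneyu(control, test, alternative="less")
print(f"rank-sum p-value: {p_value:.4f} -> slowdown detected: {p_value < 0.05}")

def bootstrap_ci(sample, reps=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the sample mean."""
    means = [np.mean(rng.choice(sample, size=len(sample), replace=True)) for _ in range(reps)]
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo_c, hi_c = bootstrap_ci(control)
lo_t, hi_t = bootstrap_ci(test)
# Non-overlapping intervals are treated as evidence of a performance change.
print(f"control CI: [{lo_c:.1f}, {hi_c:.1f}]  test CI: [{lo_t:.1f}, {hi_t:.1f}]")
print("intervals overlap:", not (hi_c < lo_t or hi_t < lo_c))
```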

    Applying test case prioritization to software microbenchmarks

    Regression testing comprises techniques which are applied during software evolution to uncover faults effectively and efficiently. While regression testing is widely studied for functional tests, performance regression testing, e.g., with software microbenchmarks, is hardly investigated. Applying test case prioritization (TCP), a regression testing technique, to software microbenchmarks may help capture large performance regressions sooner in new versions. This may be especially beneficial for microbenchmark suites, because they take considerably longer to execute than unit test suites. However, it is unclear whether traditional unit-testing TCP techniques work equally well for software microbenchmarks. In this paper, we empirically study coverage-based TCP techniques, employing total and additional greedy strategies, applied to software microbenchmarks along multiple parameterization dimensions, leading to 54 unique technique instantiations. We find that TCP techniques have a mean APFD-P (average percentage of fault-detection on performance) effectiveness between 0.54 and 0.71 and are able to capture the three largest performance changes after executing 29% to 66% of the whole microbenchmark suite. Our efficiency analysis reveals that the runtime overhead of TCP varies considerably depending on the exact parameterization. The most effective technique has an overhead of 11% of the total microbenchmark suite execution time, making TCP a viable option for performance regression testing. The results demonstrate that the total strategy is superior to the additional strategy. Finally, dynamic-coverage techniques should be favored over static-coverage techniques due to their acceptable analysis overhead; however, in settings where the time for prioritization is limited, static-coverage techniques provide an attractive alternative.
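    The total and additional greedy strategies mentioned above can be sketched as follows; the benchmark names and coverage sets are hypothetical, and the sketch omits the parameterization dimensions studied in the paper.

```python
# Hedged sketch of the two greedy, coverage-based prioritization strategies named
# above; the coverage data are hypothetical, not taken from the study.
def total_strategy(coverage):
    """Order benchmarks by the total number of covered code units (descending)."""
    return sorted(coverage, key=lambda b: len(coverage[b]), reverse=True)

def additional_strategy(coverage):
    """Greedily pick the benchmark adding the most not-yet-covered code units."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda b: len(remaining[b] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical benchmark-to-method coverage sets.
coverage = {
    "benchParse": {"parse", "lex", "validate"},
    "benchIndex": {"index", "hash"},
    "benchQuery": {"query", "index", "parse"},
}
print("total:     ", total_strategy(coverage))
print("additional:", additional_strategy(coverage))
```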

    Introducing performance awareness in an integrated specification environment

    With the increase in software complexity and modularization used to create large software systems and software product lines, it is increasingly difficult to ensure that all requirements are met by the built system. Performance requirements are an important concern for software systems, and research has developed approaches capable of predicting software performance from annotated software architecture descriptions, such as the Palladio tool suite. However, there is a gap in the tooling when moving between the specification, implementation, and verification phases, as the tools are commonly not linked, leading to inconsistencies and ambiguities in the produced artifacts. This thesis introduces performance awareness into the Integrated Specification Environment for the Specification of Technical Software Systems (IETS3), a specification environment aiming to close the tooling gap between the different lifecycle phases. Performance awareness is introduced by integrating existing approaches for software performance prediction from the Palladio tool suite and extending them to cope with variability-aware system models for software product lines. The thesis includes an experimental evaluation showing that the developed approach is able to provide performance predictions to users of the specification environment within 2000 ms for systems of up to 20 components and within 8000 ms for systems of up to 30 components.
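    As a rough illustration of what predicting performance from annotated architecture descriptions means, the sketch below sums hypothetical per-component resource demands along a call path and inflates them with a simple M/M/1-style contention factor; this is a deliberate simplification for intuition only, not the analysis performed by the Palladio tool suite or IETS3, and all names and numbers are assumptions.

```python
# Simplified illustration of model-based response-time prediction: each component
# is annotated with a CPU demand, and a usage scenario's predicted response time is
# the summed demand along its call path inflated by an M/M/1-style utilization
# factor. Component names, demands, and the arrival rate are hypothetical.
components = {
    "Frontend": {"cpu_demand_ms": 5.0},
    "Catalog":  {"cpu_demand_ms": 12.0},
    "Checkout": {"cpu_demand_ms": 20.0},
}
scenario = ["Frontend", "Catalog", "Checkout"]   # call path of one user request
arrival_rate = 20.0 / 1000.0                     # requests per millisecond

service_time = sum(components[c]["cpu_demand_ms"] for c in scenario)
utilization = arrival_rate * service_time
if utilization >= 1.0:
    raise ValueError("model predicts an unstable (overloaded) system")
# M/M/1 response time: service time inflated by queueing under contention.
predicted_response_ms = service_time / (1.0 - utilization)
print(f"utilization: {utilization:.2f}, predicted response time: {predicted_response_ms:.1f} ms")
```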

    Utilizing Performance Unit Tests To Increase Performance Awareness

    No full text