Benchmarking Function Hook Latency in Cloud-Native Environments
Researchers and engineers are increasingly adopting cloud-native technologies
for application development and performance evaluation. While this has improved
the reproducibility of benchmarks in the cloud, the complexity of cloud-native
environments makes it difficult to run benchmarks reliably. Cloud-native
applications are often instrumented or altered at runtime by dynamically
patching or hooking them, which introduces a significant performance overhead.
Our work discusses the benchmarking-related pitfalls of the dominant
cloud-native technology, Kubernetes, and how they affect performance
measurements of dynamically patched or hooked applications. We present
recommendations to mitigate these risks and demonstrate how an improper
experimental setup can negatively impact latency measurements.Comment: to be published in the 14th Symposium on Software Performance (SSP
2023), source code available at
https://github.com/dynatrace-research/function-hook-latency-benchmarkin
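The overhead described above can be illustrated with a minimal micro-benchmark. The sketch below is not the authors' setup: the workload function and the timestamp-recording hook are hypothetical stand-ins for a dynamically patched application function, and the measurement loop is a deliberately simple warmup-plus-median design.

```python
# Minimal sketch of a function-hook latency micro-benchmark.
# The workload and the hook are illustrative assumptions, not the paper's code.
import time
import statistics

def workload(n: int) -> int:
    # Stand-in for an application function that gets instrumented at runtime.
    return sum(range(n))

def with_hook(fn):
    # A trivial runtime hook: records a per-call duration around the wrapped
    # function, mimicking what dynamic patching frameworks typically inject.
    def hooked(*args, **kwargs):
        t0 = time.perf_counter_ns()
        result = fn(*args, **kwargs)
        hooked.records.append(time.perf_counter_ns() - t0)
        return result
    hooked.records = []
    return hooked

def measure(fn, runs=1000, warmup=100):
    for _ in range(warmup):              # warm up caches and allocator state
        fn(100)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter_ns()
        fn(100)
        samples.append(time.perf_counter_ns() - t0)
    return statistics.median(samples)    # median is robust to scheduling noise

baseline = measure(workload)
hooked = measure(with_hook(workload))
print(f"median baseline: {baseline} ns, hooked: {hooked} ns, "
      f"overhead: {hooked - baseline} ns")
```

Even this toy setup shows why the paper's recommendations matter: without warmup iterations and a robust statistic such as the median, a single scheduling hiccup can dominate the reported overhead.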
Continuous Integration of Architectural Performance Models with Parametric Dependencies – The CIPM Approach
Explicitly considering the software architecture supports efficient assessments of quality attributes. In particular, Architecture-based Performance Prediction (AbPP) supports performance assessment for future scenarios (e.g., alternative workload, design, deployment, etc.) without expensive measurements for all such alternatives.
However, accurate AbPP requires an up-to-date architectural Performance Model (aPM) that is parameterized over factors impacting performance like input data characteristics. Especially in agile development, keeping such a parametric aPM consistent with software artifacts is challenging due to frequent evolutionary, adaptive and usage-related changes.
The shortcoming of existing approaches is the scope of consistency maintenance since they do not address the impact of all aforementioned changes. Besides, extracting aPM by static and/or dynamic analysis after each impacting change would cause unnecessary monitoring overhead and may overwrite previous manual adjustments.
In this article, we present our Continuous Integration of architectural Performance Model (CIPM) approach, which automatically updates the parametric aPM after each evolutionary, adaptive or usage change. To reduce the monitoring overhead, CIPM calibrates just the affected performance parameters (e.g., resource demand), using adaptive monitoring. Moreover, CIPM proposes a self-validation process that validates the accuracy, manages the monitoring and recalibrates the inaccurate parts. As a result, CIPM will automatically keep the aPM up-to-date throughout the development time and operation time, which enables AbPP for a proactive identification of upcoming performance problems and evaluating alternatives at low costs.
CIPM is evaluated using three case studies, considering (1) the accuracy of the updated aPMs and associated AbPP and (2) the applicability of CIPM in terms of the scalability and the required monitoring overhead.
Continuous integration and application deployment with the Kubernetes technology
It seems nearly everyone would like to deploy their applications to Kubernetes nowadays. To efficiently leverage the power of Kubernetes, one must first fully embrace continuous integration (CI) and continuous deployment (CD) practices: a CI/CD pipeline is needed. However, there is an overwhelming number of open-source tools, each covering different parts of the whole process. The following text explains the basics of the underlying technologies needed for a pipeline deploying to Kubernetes, and subsequently summarizes some of the popular open-source tools used for CI/CD. It then designs a working pipeline from the researched tools. Finally, it compares some of the possible pipelines (including proprietary ones) and provides the reader with specific advice on how to implement their own pipeline.
Incremental Calibration of Architectural Performance Models with Parametric Dependencies
Architecture-based Performance Prediction (AbPP) allows evaluating the
performance of systems and answering what-if questions without measurements for
all alternatives. A difficulty when creating models is that Performance Model
Parameters (PMPs, such as resource demands, loop iteration numbers and branch
probabilities) depend on various influencing factors like input data, used
hardware and the applied workload. To enable a broad range of what-if
questions, Performance Models (PMs) need to have predictive power beyond what
has been measured to calibrate the models. Thus, PMPs need to be parametrized
over the influencing factors that may vary.
Existing approaches allow estimating parametrized PMPs by measuring the
complete system. Thus, they are too costly to be applied frequently, e.g.,
after each code change. Moreover, they do not preserve manual changes to the
model when recalibrating.
In this work, we present the Continuous Integration of Performance Models
(CIPM), which incrementally extracts and calibrates the performance model,
including parametric dependencies. CIPM responds to source code changes by
updating the PM and adaptively instrumenting the changed parts. To allow AbPP,
CIPM estimates the parametrized PMPs using the measurements (generated by
performance tests or executing the system in production) and statistical
analysis, e.g., regression analysis and decision trees.
Additionally, our approach responds to production changes (e.g., load or
deployment changes) and calibrates the usage and deployment parts of PMs
accordingly.
For the evaluation, we used two case studies. Evaluation results show that we
were able to calibrate the PM incrementally and accurately.
Comment: Manar Mazkatli is supported by the German Academic Exchange Service (DAAD).
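The statistical estimation step described above (fitting PMPs such as resource demands over influencing factors) can be sketched with a small example. Everything below is illustrative: the synthetic measurements, the assumed linear dependency of resource demand on an input-size characteristic, and the closed-form least-squares fit stand in for CIPM's actual regression and decision-tree machinery.

```python
# Sketch: estimating a parametrized Performance Model Parameter (PMP) by
# fitting resource demand as a linear function of an input-data characteristic.
# Measurements and the linear form are illustrative assumptions.
import random

random.seed(42)
# (input_size, measured_resource_demand_ms) pairs, e.g. collected via
# adaptive monitoring of the changed parts of the system.
measurements = [(n, 2.0 * n + 5.0 + random.gauss(0, 1.0))
                for n in range(10, 110, 10)]

def fit_linear(samples):
    # Ordinary least squares for demand ~ slope * size + intercept.
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in samples)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

slope, intercept = fit_linear(measurements)
# The fitted parametric dependency gives the model predictive power beyond
# what was measured: here, predicting demand for an unmeasured input size.
predicted = slope * 500 + intercept
print(f"demand(size) ~ {slope:.2f}*size + {intercept:.2f}; "
      f"demand(500) ~ {predicted:.1f} ms")
```

The point of the parametrization is visible in the last step: the model answers a what-if question for input size 500 even though only sizes 10-100 were ever measured.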
Parsing and Printing Java 7-15 by Extending an Existing Metamodel
Many technologies and frameworks are built upon the open source Eclipse Modelling Framework (EMF) to provide model-based software development or even model-based consistency preservation of software artifacts. In this context, not only EMF-based modeling of the source code but also parsing of the source code and printing the model again into source code files are required.
The Java Model Parser and Printer (JaMoPP) provides an EMF-based environment for modeling, parsing and printing Java source code. However, it supports only the syntax of Java 5 and 6. Moreover, JaMoPP is built on technologies that have technical problems and are no longer maintained.
In this work, we extend the metamodel of JaMoPP to support Java versions 7-15. Our extensions expand the metamodel with new features, for instance, the diamond operator, lambda expressions, or modules. Moreover, we implemented a new parser and printer. The parser implementation is based on the Eclipse Java Development Tools (JDT), which are well maintained; this reduces the maintenance effort of extending JaMoPP for new versions of Java.
Scalability Benchmarking of Cloud-Native Applications Applied to Event-Driven Microservices
Cloud-native applications constitute a recent trend for designing large-scale software systems. This thesis introduces the Theodolite benchmarking method, allowing researchers and practitioners to conduct empirical scalability evaluations of cloud-native applications, their frameworks, configurations, and deployments. The benchmarking method is applied to event-driven microservices, a specific type of cloud-native applications that employ distributed stream processing frameworks to scale with massive data volumes. Extensive experimental evaluations benchmark and compare the scalability of various stream processing frameworks under different configurations and deployments, including different public and private cloud environments. These experiments show that the presented benchmarking method provides statistically sound results in an adequate amount of time. In addition, three case studies demonstrate that the Theodolite benchmarking method can be applied to a wide range of applications beyond stream processing.
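The core idea of such a scalability benchmark can be sketched as a search: for each provisioned resource amount, find the highest load intensity at which a service-level objective (SLO) still holds. The sketch below is a simplification, not Theodolite itself; the latency model is a made-up stand-in for real measurements, and the SLO threshold and load steps are arbitrary assumptions.

```python
# Sketch of a scalability experiment: for each number of instances, find the
# highest load intensity at which a (simulated) latency SLO is still met.

def measured_p99_latency_ms(load: int, instances: int) -> float:
    # Hypothetical system model: latency grows with per-instance load.
    # In a real benchmark this would be an actual measurement.
    return 10.0 + 0.5 * (load / instances)

SLO_MS = 100.0                      # assumed p99 latency objective
LOADS = [50, 100, 200, 400, 800]    # increasing load intensities to test

def max_sustainable_load(instances: int) -> int:
    best = 0
    for load in LOADS:
        if measured_p99_latency_ms(load, instances) <= SLO_MS:
            best = load
        else:
            break  # SLO violated; higher loads are assumed to fail as well
    return best

# Capacity per resource amount; a linear relation indicates good scalability.
capacity = {inst: max_sustainable_load(inst) for inst in (1, 2, 4)}
print(capacity)
```

Plotting the resulting capacity against the number of instances is what yields a scalability curve: in this toy model the sustainable load doubles with the instance count, i.e., the simulated system scales linearly over the tested range.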
Establishing a Benchmark Dataset for Traceability Link Recovery between Software Architecture Documentation and Models
In research, evaluation plays a key role to assess the performance of an approach.
When evaluating approaches, there is a wide range of possible types of studies that can be used, each with different properties.
Benchmarks have the benefit that they establish clearly defined standards and baselines.
However, when creating new benchmarks, researchers face various problems regarding the identification of potential data, its mining, as well as the creation of baselines.
As a result, some research domains do not have any benchmarks at all.
This is the case for traceability link recovery between software architecture documentation and software architecture models.
In this paper, we create and describe an open-source benchmark dataset for this research domain.
With this benchmark, we define a baseline with a simple approach based on information retrieval techniques.
This way, we provide other researchers with a way to evaluate and compare their approaches.
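A baseline of the kind described above can be sketched in a few lines: score each candidate trace link by the textual similarity between a documentation sentence and a model element's description, and pick the best match. The sketch below is a generic bag-of-words cosine-similarity baseline under illustrative data; the element names and texts are invented, and the paper's actual baseline may differ in tokenization and weighting.

```python
# Sketch of an information-retrieval baseline for traceability link recovery:
# link a documentation sentence to the architecture model element whose
# description has the highest bag-of-words cosine similarity.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Naive tokenization; real baselines typically also stem and remove stopwords.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical architecture model elements with short textual descriptions.
model_elements = {
    "LoadBalancer": "load balancer component distributes requests",
    "UserDatabase": "user database component stores account data",
}
sentence = "The load balancer distributes incoming requests evenly"

scores = {name: cosine(vectorize(sentence), vectorize(desc))
          for name, desc in model_elements.items()}
best = max(scores, key=scores.get)
print(best)
```

Running such a baseline over every (sentence, element) pair and thresholding the scores produces candidate trace links whose precision and recall can then be computed against the benchmark's gold standard.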