A method and framework to evaluate system architecture candidates on reliability criteria
Philips Lighting is moving from a lighting-component business towards a lighting-solutions business in professional environments, offering lighting solutions for energy saving, productivity, and effect creation. In these solutions (consisting of light sources, sensors, and control devices), network-based systems are essential, and software makes it possible to add intelligence to the system. Reliability aspects at the system level are becoming more important, and architectural analysis is currently done during brainstorm sessions to evaluate possible system architecture candidates on reliability criteria. It is crucial that the reliability of the architectures is evaluated more formally, for example based on a model of each of the architecture candidates. This report describes the design, the implementation, and the experimentation with such a method, which predicts the reliability and other non-functional criteria of the architecture candidates. Lighting entities, their attributes, behavior, and relationships are the core of evaluating a system on reliability criteria. This approach relies heavily on modeling principles. The results of the project show whether such an approach can be used to determine and evaluate an architecture candidate, and give input on how architecture evaluation can be done in the future.
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs or junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.
Modelling Event-Based Interactions in Component-Based Architectures for Quantitative System Evaluation
This dissertation presents an approach that enables the modelling and quality-of-service prediction of event-based systems at the architecture level. Applying a two-step model refinement transformation, the approach integrates platform-specific performance influences of the underlying middleware while enabling the use of different existing analytical and simulation-based prediction techniques.
Towards the predictive analysis of cloud systems with e-Motions
Current methods for the predictive analysis of software systems are not directly applicable to self-adaptive systems such as cloud systems, mainly due to their complexity and dynamism. To tackle the difficulties of handling the dynamic changes in these systems and their environments, we propose using graph transformation to define an adaptive component model and analysis tools for it, which allows us to carry out such analyses on dynamic architectures. Specifically, we use the e-Motions system to define the Palladio component model and simulation-based analysis tools for it. Adaptation mechanisms are then specified as generic adaptation rules. This setting will allow us to study different mechanisms for the management of dynamic systems and their adaptation mechanisms, as well as different QoS metrics to be considered in a dynamic environment. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
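The idea of specifying adaptation mechanisms as generic rewrite rules over an architecture graph can be illustrated with a toy sketch (this is plain Python, not the e-Motions or Palladio tooling, and all names are hypothetical): the architecture is a graph of components and connectors, and a generic "scale-out" rule rewrites it whenever a component's observed utilization exceeds a threshold.

```python
from dataclasses import dataclass, field

@dataclass
class Architecture:
    # component name -> observed utilization (0.0..1.0)
    components: dict = field(default_factory=dict)
    # directed connector edges (caller, callee)
    connectors: set = field(default_factory=set)

def scale_out_rule(arch: Architecture, threshold: float = 0.8) -> bool:
    """Apply the rule once: match an overloaded component (the pattern),
    then add a replica and rewire connectors to it (the rewrite).
    Returns True if the rule was applied."""
    for name, util in list(arch.components.items()):
        if util > threshold and not name.endswith("_replica"):
            replica = name + "_replica"
            # the rewrite: both instances now share the load
            arch.components[name] = util / 2
            arch.components[replica] = util / 2
            for (src, dst) in list(arch.connectors):
                if dst == name:
                    arch.connectors.add((src, replica))
            return True
    return False

arch = Architecture(
    components={"frontend": 0.3, "billing": 0.9},
    connectors={("frontend", "billing")},
)
# apply rules until no pattern matches, as a rule-based analysis would
while scale_out_rule(arch):
    pass
print(sorted(arch.components))  # ['billing', 'billing_replica', 'frontend']
```

Because each rule only pattern-matches and rewrites the graph, the same mechanism works regardless of which concrete architecture it is applied to, which is what makes the adaptation rules "generic" in the sense of the abstract.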
Adaptation-Aware Architecture Modeling and Analysis of Energy Efficiency for Software Systems
This thesis presents an approach for the design time analysis of energy efficiency for static and self-adaptive software systems.
The quality characteristics of a software system, such as performance and operating costs, strongly depend upon its architecture. Software architecture is a high-level view on software artifacts that reflects essential quality characteristics of a system under design. Design decisions made on an architectural level have a decisive impact on the quality of a system. Revising architectural design decisions late into development requires significant effort. Architectural analyses allow software architects to reason about the impact of design decisions on quality, based on an architectural description of the system. An essential quality goal is the reduction of cost while maintaining other quality goals. Power consumption accounts for a significant part of the Total Cost of Ownership (TCO) of data centers. In 2010, data centers contributed 1.3% of the world-wide power consumption. However, reasoning on the energy efficiency of software systems is excluded from the systematic analysis of software architectures at design time. Energy efficiency can only be evaluated once the system is deployed and operational. One approach to reduce power consumption or cost is the introduction of self-adaptivity to a software system. Self-adaptive software systems execute adaptations to provision costly resources dependent on user load. The execution of reconfigurations can increase energy efficiency and reduce cost. If performed improperly, however, the additional resources required to execute a reconfiguration may exceed their positive effect.
Existing architecture-level energy analysis approaches offer limited accuracy or only consider a limited set of system features, e.g., the used communication style. Predictive approaches from the embedded systems and Cloud Computing domain operate on an abstraction that is not suited for architectural analysis. The execution of adaptations can consume additional resources. The additional consumption can reduce performance and energy efficiency. Design time quality analyses for self-adaptive software systems ignore this transient effect of adaptations.
This thesis makes the following contributions to enable the systematic consideration of energy efficiency in the architectural design of self-adaptive software systems: First, it presents a modeling language that captures power consumption characteristics on an architectural abstraction level. Second, it introduces an energy efficiency analysis approach that uses instances of our power consumption modeling language in combination with existing performance analyses for architecture models. The developed analysis supports reasoning on energy efficiency for static and self-adaptive software systems. Third, to ease the specification of power consumption characteristics, we provide a method for extracting power models for server environments. The method encompasses an automated profiling of servers based on a set of restrictions defined by the user. A model training framework extracts a set of power models specified in our modeling language from the resulting profile. The method ranks the trained power models based on their predicted accuracy. Lastly, this thesis introduces a systematic modeling and analysis approach for considering transient effects in design time quality analyses. The approach explicitly models inter-dependencies between reconfigurations, performance and power consumption. We provide a formalization of the execution semantics of the model. Additionally, we discuss how our approach can be integrated with existing quality analyses of self-adaptive software systems.
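The power-model extraction method described above can be sketched in miniature (the sample data, model forms, and ranking metric here are illustrative assumptions, not the thesis's actual framework): fit candidate models mapping CPU utilization to power draw from profiling samples, then rank the candidates by their prediction error.

```python
# hypothetical profiling samples: (cpu_utilization, measured_watts)
samples = [(0.0, 95.0), (0.25, 140.0), (0.5, 180.0), (0.75, 215.0), (1.0, 245.0)]

def fit_linear(data):
    """Least-squares fit of P(u) = p_idle + c * u."""
    n = len(data)
    su = sum(u for u, _ in data)
    sp = sum(p for _, p in data)
    suu = sum(u * u for u, _ in data)
    sup = sum(u * p for u, p in data)
    c = (n * sup - su * sp) / (n * suu - su * su)
    p_idle = (sp - c * su) / n
    return lambda u: p_idle + c * u

def fit_constant(data):
    """Baseline candidate: predict the mean power regardless of load."""
    mean = sum(p for _, p in data) / len(data)
    return lambda u: mean

def mape(model, data):
    """Mean absolute percentage error, used here as the ranking criterion."""
    return sum(abs(model(u) - p) / p for u, p in data) / len(data) * 100

models = {"linear": fit_linear(samples), "constant": fit_constant(samples)}
ranking = sorted(models, key=lambda name: mape(models[name], samples))
print(ranking[0])  # the linear model fits this server profile best
```

A real extraction framework would profile the server automatically, consider richer model forms, and validate on held-out measurements, but the fit-then-rank structure is the same.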
We validated the accuracy, applicability, and appropriateness of our approach in a variety of case studies. The first two case studies investigated the accuracy and appropriateness of our modeling and analysis approach. The first study evaluated the impact of design decisions on the energy efficiency of a media hosting application. The energy consumption predictions achieved an absolute error lower than 5.5% across different user loads. Our approach predicted the relative impact of the design decisions on energy efficiency with an error of less than 18.94%. The second case study used two variants of the Spring-based community case study system PetClinic. The case study complements the accuracy and appropriateness evaluation of our modeling and analysis approach. We were able to predict the energy consumption of both variants with an absolute error of no more than 2.38%. In contrast to the first case study, we derived all models automatically, using our power model extraction framework, as well as an extraction framework for performance models. The third case study applied our model-based prediction to evaluate the effect of different self-adaptation algorithms on energy efficiency. It involved scientific workloads executed in a virtualized environment. Our approach predicted the energy consumption with an error below 7.1%, even though we used coarse-grained measurement data of low accuracy to train the input models. The fourth case study evaluated the appropriateness and accuracy of the automated model extraction method using a set of Big Data and enterprise workloads. Our method produced power models with prediction errors below 5.9%. A secondary study evaluated the accuracy of extracted power models for different Virtual Machine (VM) migration scenarios. The results of the fifth case study showed that our approach for modeling transient effects improved the prediction accuracy for a horizontally scaling application.
Leveraging the improved accuracy, we were able to identify design deficiencies of the application that otherwise would have remained unnoticed.
Automated Cloud-to-Cloud Migration of Distributed Software Systems for Privacy Compliance
With the ever-growing number of distributed cloud applications and an increasing number of privacy regulations, interest in legally compliant cloud applications is growing. However, many operators do not know the compliance status of their applications. In 2018, the new EU data protection regulation will come into force. This regulation includes severe penalties for privacy violations. One of the most important factors for compliance with the regulation is that master data of EU citizens is processed within the EU. For this rule, we have developed, formalized, implemented, and evaluated a privacy analysis. Furthermore, with iObserve Privacy we have developed a system following the MAPE principle that automatically detects privacy violations and computes an alternative, privacy-compliant system hosting. In addition, iObserve Privacy automatically migrates the cloud application according to the alternative hosting. This allows us to guarantee a legally compliant distribution of the cloud application without analyzing or understanding the system in depth; however, we require the closed-world assumption. We use PerOpteryx to generate legally compliant alternative hostings. Based on such a hosting, we compute a sequence of adaptation steps to restore legal compliance. If errors occur, we use the operator-in-the-loop principle of iObserve. As a data basis, we use the Palladio Component Model. In this work, we describe the concepts in detail, point out implementation details, and evaluate iObserve with respect to precision and scalability.
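The core compliance rule, that master data of EU citizens must be processed within the EU, can be sketched as a simple check over a deployment model (all names, the region model, and the relocation strategy here are hypothetical stand-ins, not the iObserve Privacy implementation, which derives alternative hostings via PerOpteryx):

```python
EU_REGIONS = {"eu-west", "eu-central"}

# hypothetical deployment model:
# component -> (hosting region, handles EU master data?)
deployment = {
    "web-frontend":  ("us-east", False),
    "user-registry": ("us-east", True),   # violation: EU master data outside EU
    "billing":       ("eu-west", True),
}

def find_privacy_violations(deployment):
    """Return components that process EU master data outside an EU region."""
    return [
        component
        for component, (region, eu_master_data) in deployment.items()
        if eu_master_data and region not in EU_REGIONS
    ]

def propose_relocation(deployment, target_region="eu-central"):
    """Naive 'alternative hosting': move each violating component into the EU."""
    fixed = dict(deployment)
    for component in find_privacy_violations(deployment):
        _, eu_master_data = deployment[component]
        fixed[component] = (target_region, eu_master_data)
    return fixed

print(find_privacy_violations(deployment))                      # ['user-registry']
print(find_privacy_violations(propose_relocation(deployment)))  # []
```

In a MAPE-style loop, the detection step would run on monitored deployment data, the planning step would compute such an alternative hosting, and the execution step would migrate the affected components.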
Efficiently Conducting Quality-of-Service Analyses by Templating Architectural Knowledge
Previously, software architects were unable to effectively and efficiently apply reusable knowledge (e.g., architectural styles and patterns) in architectural analyses. This work tackles this problem with a novel method to create and apply templates for reusable knowledge. These templates capture reusable knowledge formally and can be efficiently integrated into architectural analyses.
Supplementary Material for the Evaluation of the Publication – A Layered Reference Architecture for Model-based Quality Analysis
In this technical report, we present the supplementary information for the evaluation of the Layered Reference Architecture for Model-based Quality Analysis. In chapter 2, we present the case studies, first in their monolithic and then in their modular form. We present installation instructions for the tools required to reproduce the evaluation results in chapter 3. In chapter 4, we provide detailed information about the evolution scenarios. The tooling and the results of the scenarios can be found online [7].