4 research outputs found

    Multi-Quality Auto-Tuning by Contract Negotiation

    Get PDF
    A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant should be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of research on self-adaptive systems. The basic principle is a control loop, as known from control theory: the system (and its environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. Central problems in this field, which are addressed in this thesis, are the management of interdependencies between non-functional properties of the system, the handling of multiple decision criteria, and scalability. This thesis presents a novel approach to self-adaptive software--Multi-Quality Auto-Tuning (MQuAT)--which provides design and operation principles for software systems that automatically provide the best possible utility to the user at the least possible cost. For this purpose, a component model has been developed that enables software developers to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system, and it also captures the runtime state of the system. The notion of quality contracts is used to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system. At runtime, this model is used in combination with the contracts to generate optimization problems in different formalisms: Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO) and Multi-Objective Integer Linear Programming (MOILP). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one. Each approach is empirically evaluated with respect to its scalability; the evaluation shows the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of each approach: 100 component types for ILP, 30 for PBO, 10 for ACO and 30 for 2-objective MOILP. With more than two objective functions, the MOILP approach is shown to be infeasible.
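
    The step from runtime model and contracts to a solver-ready problem can be pictured with a small sketch. The following is a hedged illustration of the ILP variant only, not the thesis' actual problem generator: the component types, variants, utility and cost figures, the resource budget, and the use of the PuLP library are all assumptions made for this example.

        # Hedged sketch: a MQuAT-style reconfiguration decision encoded as an ILP.
        # All data below (variants, utilities, costs, budget) is hypothetical, and
        # PuLP stands in for the "standard solvers" mentioned in the abstract.
        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        # Candidate implementation variants per component type: (utility, resource
        # cost), as they might be derived from quality contracts and runtime data.
        variants = {
            "Compressor": {"fast": (8, 5), "small": (5, 2)},
            "Encryptor":  {"aes":  (9, 4), "xor":  (3, 1)},
        }
        budget = 7  # resource units offered by the current environment

        prob = LpProblem("reconfiguration", LpMaximize)
        x = {(c, v): LpVariable(f"use_{c}_{v}", cat=LpBinary)
             for c, vs in variants.items() for v in vs}

        # Exactly one variant per component type must be running.
        for c, vs in variants.items():
            prob += lpSum(x[c, v] for v in vs) == 1

        # The chosen variants must fit the available resources.
        prob += lpSum(x[c, v] * variants[c][v][1] for c, v in x) <= budget

        # Objective: maximize the overall utility of the configuration.
        prob += lpSum(x[c, v] * variants[c][v][0] for c, v in x)

        prob.solve()
        chosen = {c: v for (c, v), var in x.items() if var.value() == 1}
        print(chosen)  # -> {'Compressor': 'small', 'Encryptor': 'aes'}

    If the configuration found by the solver differs from the one currently running, that difference constitutes the reconfiguration decision.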

    Autonomic Computing: State of the Art - Promises - Impact

    Get PDF
    Software has never been as important as it is today, and its impact on life, work and society is growing at an impressive rate. We are in the midst of a software-induced transformation of nearly all aspects of our way of life and work. The dependence on software has become almost total. Malfunctions and unavailability may threaten vital areas of our society, life and work at any time. The two massive challenges of software are, on the one hand, the complexity of the software and, on the other hand, the disruptive environment. The complexity of software results from its size, its continuously growing functionality, increasingly complicated technology and growing networking. The unfortunate consequence is that complexity leads to many problems in the design, development, evolution and operation of software systems, especially large ones. All software systems live in an environment. Many of today’s environments can be disruptive and cause severe problems for the systems and their users. Examples of disruptions are attacks, failures of partner systems or networks, faults in communications, or malicious activities. Traditionally, both growing complexity and disruptions from the environment have been tackled by better and better software engineering: development and operating processes are constantly improved, and more powerful engineering tools are introduced. For defending against disruptions, predictive methods such as risk analysis or fault trees are used. All these techniques rest on the ingenuity, experience and skills of the engineers. However, the growing complexity and the increasing intensity of possible disruptions from the environment make it more and more questionable whether people will be able to cope successfully with this rising challenge in the future. Serious research already suggests that this is no longer the case and that we need assistance from the software systems themselves. Here enters “autonomic computing”, a promising branch of software science which equips software systems with self-configuring, self-healing, self-optimizing and self-protecting capabilities. Autonomic computing systems are able to re-organize, optimize, defend and adapt themselves without real-time human intervention. Autonomic computing draws on many branches of science, especially computer science, artificial intelligence, control theory, machine learning and multi-agent systems. It is an active research field which currently transfers many of its results into software engineering and numerous applications. This Hauptseminar (advanced seminar) offered the opportunity to learn about the fascinating technology of autonomic computing and to do some personal research, guided by a professor and assisted by the seminar peers. Contents: Introduction; 1 What Knowledge Does a Taxi Need? – Overview of Rule Based, Model Based and Reinforcement Learning Systems for Autonomic Computing (Anja Reusch); 2 Opportunities and Risks of Virtual Assistant Systems (Felix Hanspach); 3 Evolution of a Microservice Architecture towards Autonomic Computing (Ilja Bauer); 4 Possible Influences of Autonomous Information Services on their Users (Jan Engelmohr); 5 The Benefits of Resolving the Trust Issues between Autonomic Computing Systems and their Users (Marc Kandler)

    Investigations into the Risk Minimization Technique Stealth Computing for Distributed Data-Processing Software Applications with User-Controllable, Assurable Properties

    Get PDF
    The security and reliability of applications that process sensitive data can be significantly increased, and controlled by the user, by moving them into the cloud in a protected manner, using a combination of target-metric-dependent data coding, continuous multiple service selection, service-dependent optimized data distribution, and coding-dependent algorithms. The combination of these techniques into an application-integrated Stealth protection layer is a necessary foundation for constructing secure applications with assurable security properties within an accordingly adapted software development process. Contents: 1 Problem Statement (1.1 Introduction, 1.2 Fundamental Considerations, 1.3 Problem Definition, 1.4 Classification and Delimitation); 2 Approach and Problem-Solving Methodology (2.1 Assumptions and Contributions, 2.2 Scientific Methods, 2.3 Structure of the Thesis); 3 Stealth Coding for Secured Data Use (3.1 Data Coding, 3.2 Data Distribution, 3.3 Semantic Linking of Distributed Coded Data, 3.4 Processing of Distributed Coded Data, 3.5 Summary of Contributions); 4 Stealth Concepts for Reliable Services and Applications (4.1 Overview of Platform Concepts and Services, 4.2 Network Multiplexer Interface, 4.3 File Storage Interface, 4.4 Database Interface, 4.5 Stream Storage Service Interface, 4.6 Event Processing Interface, 4.7 Service Integration, 4.8 Application Development, 4.9 Platform-Equivalent Cloud Integration of Secure Services and Applications, 4.10 Summary of Contributions); 5 Scenarios and Application Fields (5.1 Online File Storage with Search Function, 5.2 Personal Data Analysis, 5.3 Value-Added Services for the Internet of Things); 6 Validation (6.1 Experiment Infrastructure, 6.2 Experimental Validation of the Data Coding, 6.3 Experimental Validation of the Data Distribution, 6.4 Experimental Validation of the Data Processing, 6.5 Functionality and Properties of the Storage Service Connection, 6.6 Functionality and Properties of the Storage Service Integration, 6.7 Functionality and Properties of the Data Management, 6.8 Functionality and Properties of the Data Stream Processing, 6.9 Integrated Scenario: Online File Storage, 6.10 Integrated Scenario: Personal Data Analysis, 6.11 Integrated Scenario: Mobile Applications for the Internet of Things); 7 Summary (7.1 Summary of Contributions, 7.2 Critical Discussion and Assessment, 7.3 Outlook); Lists (Tables, Figures, Listings, Bibliography, Symbols and Notations); Software Contributions for Native Cloud Applications; Repositories with Experiment Data
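
    As a rough intuition for combining data coding with multi-service distribution, consider the following hedged sketch. It is not the thesis' Stealth coding scheme; it merely shows XOR-based n-of-n secret sharing, where each share is stored with a different (here hypothetical) provider so that no single provider can read the data.

        # Hedged sketch: XOR n-of-n secret sharing as a stand-in for the thesis'
        # data coding. Each share alone is indistinguishable from random noise.
        import os
        from functools import reduce

        def encode(data: bytes, n: int) -> list[bytes]:
            """Split data into n shares; any n-1 of them reveal nothing."""
            shares = [os.urandom(len(data)) for _ in range(n - 1)]
            last = bytes(reduce(lambda a, b: a ^ b, chunk)
                         for chunk in zip(data, *shares))
            return shares + [last]

        def decode(shares: list[bytes]) -> bytes:
            """XOR all shares together to recover the original data."""
            return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*shares))

        # Distribute the shares across independent storage services
        # (provider names are hypothetical).
        data = b"sensitive record"
        placement = dict(zip(["providerA", "providerB", "providerC"], encode(data, 3)))
        assert decode(list(placement.values())) == data

    Practical dispersed-storage codings trade this all-or-nothing property against availability, e.g., by using erasure codes that tolerate the outage of individual providers.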

    Feature-based configuration management of reconfigurable cloud applications

    Get PDF
    A recent trend in the software industry is to provide enterprise applications in the cloud, accessible everywhere and on any device. As the market is highly competitive, customer orientation plays an important role. Companies have therefore started to provide applications as a service that customers can configure directly in an online self-service portal. However, customer configurations are usually deployed in separate application instances, so each instance is provisioned manually and must be maintained separately. Due to the induced redundancy in software and hardware components, resources are not optimally utilized. A multi-tenant aware application architecture eliminates this redundancy, as a single application instance serves multiple customers renting the application. Combining a configuration self-service portal with a multi-tenant aware application architecture allows customers to be served just-in-time by automating the deployment process. Furthermore, self-service portals improve application scalability in terms of functionality, as customers can adapt application configurations themselves according to their changing demands. However, the configurability of current multi-tenant aware applications is rather limited: solutions implementing variability are mainly developed for a single business case and cannot be directly transferred to other application scenarios. The goal of this thesis is to provide a generic framework for handling application variability and automating the configuration and reconfiguration processes essential for self-service portals, while exploiting the advantages of multi-tenancy. A promising way to achieve this goal is to apply software product line methods. In software product line research, feature models are widely used to express the variability of software-intensive systems on an abstract level, as features are a common notion in software engineering and prominent in matching customer requirements against product functionality. This thesis introduces a framework for feature-based configuration management of reconfigurable cloud applications. The contribution is threefold. First, a development strategy for flexible multi-tenant aware applications is proposed, capable of integrating customer configurations at application runtime. Second, a generic method for defining concern-specific configuration perspectives is contributed; perspectives can be tailored to certain application scopes and facilitate the handling of numerous configuration options. Third, a novel method is proposed to model and automate structured configuration processes that adapt to varying stakeholders and reduce configuration redundancies. To this end, configuration processes are modeled as workflows and adapted by applying rewrite rules triggered by stakeholder events. The applicability of the proposed concepts is evaluated in case studies in industrial and academic contexts. In summary, the introduced framework for feature-based configuration management is a foundation for automating the configuration and reconfiguration processes of multi-tenant aware cloud applications, while enabling application scalability in terms of functionality.
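
    To make the feature-model idea concrete, here is a minimal hedged sketch of checking a tenant's configuration against mandatory features and cross-tree constraints. The feature names and rules are invented for illustration and are not taken from the thesis.

        # Hedged sketch: validating a tenant configuration against a tiny feature
        # model. Feature names and constraints are hypothetical examples.
        MANDATORY = {"Base"}                    # features every configuration must include
        REQUIRES = {"Billing": {"Reporting"}}   # cross-tree "requires" constraints
        EXCLUDES = {("TrialMode", "Billing")}   # mutually exclusive feature pairs

        def valid(config: set[str]) -> bool:
            """Return True if the selected feature set satisfies the model."""
            if not MANDATORY <= config:
                return False
            if any(f in config and not deps <= config for f, deps in REQUIRES.items()):
                return False
            return not any(a in config and b in config for a, b in EXCLUDES)

        assert valid({"Base", "Reporting", "Billing"})
        assert not valid({"Base", "Billing"})   # Billing requires Reporting

    In a self-service portal, such a check would run each time a customer toggles a feature, before any reconfiguration workflow is triggered.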