Performance Testing of Distributed Component Architectures
Performance characteristics, such as response time, throughput and scalability, are key quality attributes of distributed applications. Current practice, however, rarely applies systematic techniques to evaluate performance characteristics. We argue that evaluation of performance is particularly crucial in early development stages, when important architectural choices are made. At first glance, this contradicts the use of testing techniques, which are usually applied towards the end of a project. In this chapter, we assume that many distributed systems are built with middleware technologies, such as the Java 2 Enterprise Edition (J2EE) or the Common Object Request Broker Architecture (CORBA). These provide services and facilities whose implementations are available when architectures are defined. We also note that it is the middleware functionality, such as transaction and persistence services, remote communication primitives and threading policy primitives, that dominates distributed system performance. Drawing on these observations, this chapter presents a novel approach to performance testing of distributed applications. We propose to derive application-specific test cases from architecture designs so that the performance of a distributed application can be tested based on the middleware software at early stages of a development process. We report empirical results that support the viability of the approach.
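To illustrate the kind of early performance probe the abstract describes, the following sketch measures response time and throughput of a stubbed middleware operation. All names are hypothetical; in a real early-stage test the stub would call the actual middleware (e.g. an empty business method deployed in a J2EE container) rather than a simulated delay.

```python
import statistics
import time

def middleware_call():
    """Hypothetical stand-in for a middleware-level operation, e.g. a
    remote invocation through an ORB; here simulated by a fixed delay."""
    time.sleep(0.001)

def measure(n_requests=50):
    """Collect per-request response times and derive summary statistics."""
    samples = []
    for _ in range(n_requests):
        start = time.perf_counter()
        middleware_call()
        samples.append(time.perf_counter() - start)
    return {
        "mean_ms": statistics.mean(samples) * 1000,
        "p95_ms": sorted(samples)[int(0.95 * len(samples))] * 1000,
        "throughput_rps": n_requests / sum(samples),
    }

if __name__ == "__main__":
    print(measure())
```

Running such a probe against alternative architectural configurations (e.g. different transaction or threading policies) yields the early, middleware-dominated performance estimates the approach relies on.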
An Approach for the Empirical Validation of Software Complexity Measures
Software metrics are widely accepted tools to control and assure software quality. A large number of software metrics with a variety of content can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to needs and unsupported, and the major reason behind this is improper empirical validation. This paper tries to identify possible root causes for the improper empirical validation of software metrics. A practical model for the empirical validation of software metrics is proposed along with the root causes. The model is validated by applying it to recently proposed and well-known metrics.
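A common ingredient of empirical metric validation is correlating metric values with an external quality indicator. The sketch below computes a Spearman rank correlation between per-module complexity values and defect counts; the data is purely illustrative and the statistical step is one possible validation technique, not the model proposed in the paper.

```python
def ranks(xs):
    """Assign average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative (hypothetical) data: metric value per module vs. defects.
complexity = [3, 12, 7, 25, 9, 18]
defects    = [0,  4, 1,  9, 2,  6]
print(f"Spearman rho = {spearman(complexity, defects):.2f}")  # rho = 1.00
```

A strong monotone association like this would count as evidence that the metric tracks the quality attribute it claims to predict; the root-cause analysis in the paper addresses why such evidence is often missing or flawed.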
Optimistic Adaptation of Decentralised Role-based Software Systems
The complexity of computer networks has been rising over the last decades. Increasing interconnectivity between multiple devices, growing complexity of performed tasks and strong collaboration between nodes are drivers of this phenomenon. One example is Internet-of-Things devices, whose relevance has risen in recent years. The increasing number of devices requiring updates and supervision makes maintenance more difficult. Human interaction, in this case, is costly and time-consuming. To overcome this, self-adaptive software systems (SAS) can be used. SAS are a subset of autonomous systems which can monitor themselves and their environment to adapt to changes without human interaction. In the literature, different approaches for engineering SAS have been proposed, including techniques for executing adaptations on multiple devices based on generated plans for reacting to changes. Among these, decentralised approaches can also be found. To the best of our knowledge, no approach for engineering a SAS exists that tolerates errors during the execution of adaptations in a decentralised setting. While some approaches for role-based execution reset the application in case of a single failure during the adaptation process, others make no assumptions about errors or do not consider an erroneous environment. In a real-world environment, errors are likely to occur at run-time, and the adaptation process could be disturbed.
This work aims to perform adaptations in a decentralised way on role-based systems under a relaxed consistency constraint, i.e., errors during the adaptation phase are tolerated. This increases the availability of nodes, since no rollbacks are required in case of a failure. Moreover, a subset of applications, such as drone swarms, would benefit from a relaxed consistency model, since parts of the system that adapted successfully can already operate in the adapted configuration instead of waiting for other peers to apply the changes in a later iteration. Furthermore, eliminating the need for atomic adaptation execution makes asynchronous execution of adaptations possible. In that case, we can supervise the adaptation process over a long period and ensure that every peer takes the planned actions as soon as its internal task execution allows.
To allow for this relaxed-consistency style of adaptation execution, we develop a decentralised adaptation execution protocol that supports the notion of eventual consistency. As soon as devices reconnect after network congestion, or restore their internal state after local failures, our protocol coordinates the recovery process among multiple devices to re-establish a globally consistent state after errors occur. Because no central instance is needed, every peer that receives information about failing peers can start the recovery process. The developed approach can restore a consistent global configuration even if almost all peers fail. Moreover, the approach supports asynchronous adaptations, i.e., peers can execute planned adaptations as soon as they are ready, which increases overall availability in case of delayed adaptation of single nodes.
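The optimistic idea can be reduced to a very small sketch: peers activate adaptations immediately, failed peers recover later, and successfully adapted peers are never rolled back. All names and structure below are illustrative stand-ins, not the thesis protocol, which additionally covers message exchange, corner cases and asynchronous coordination.

```python
class Peer:
    """Hypothetical peer holding a role-based configuration."""

    def __init__(self, name, will_fail=False):
        self.name = name
        self.will_fail = will_fail
        self.config = "old"

    def apply_adaptation(self):
        if self.will_fail:
            return False      # local error: adaptation not applied
        self.config = "new"   # optimistic: activate immediately,
        return True           # without waiting for other peers

    def recover(self):
        self.config = "new"   # retried once the peer is reachable again

def adapt(peers):
    """Run one optimistic adaptation round with compensation."""
    failed = [p for p in peers if not p.apply_adaptation()]
    # Any peer that learns about failures may coordinate recovery; no
    # rollback of already-adapted peers is needed (relaxed consistency).
    for p in failed:
        p.recover()
    return all(p.config == "new" for p in peers)

peers = [Peer("a"), Peer("b", will_fail=True), Peer("c")]
print("eventually consistent:", adapt(peers))  # eventually consistent: True
```

The key property shown is that peers "a" and "c" operate in the new configuration while "b" catches up, instead of the whole system blocking or rolling back.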
The developed protocol is evaluated with the help of a proof-of-concept implementation. The approach was run in five different experiments with thousands of iterations to show the applicability and reliability of this novel approach. The time for execution of the protocol and the number of exchanged messages have been measured to compare the protocol across different error cases and system sizes, and to show the scalability of the approach. The developed solution has been compared to a blocking approach to show its feasibility relative to an atomic approach. The applicability in a real-world scenario is described in an empirical study using the example of a fire-extinguishing drone swarm. The results show that an optimistic approach to adaptation is suitable, and that specific scenarios can benefit from the improved availability since no rollbacks are required. Systems can continue their work regardless of failures of participating nodes in large-scale systems.
1. Introduction
1.1. Motivational Use-Case
1.2. Problem Definition
1.3. Objectives
1.4. Research Questions
1.5. Contributions
1.6. Outline
2. Foundation
2.1. Role Concept
2.2. Self-Adaptive Software Systems
2.3. Terminology for Role-Based Self-Adaptation
2.4. Consistency Preservation and Consistency Models
2.5. Summary
3. Related Work
3.1. Role-Based Approaches
3.2. Actor Model of Computation and Akka
3.3. Adaptation Execution in Self-Adaptive Software Systems
3.4. Change Consistency in Distributed Systems
3.5. Comparison of the Evaluated Approaches
4. The Decentralised Consistency Compensation Protocol
4.1. System and Error Model
4.2. Requirements to the Concept
4.3. The Usage of Roles in Adaptations
4.4. Protocol Overview
4.5. Protocol Description
4.6. Protocol Corner- and Error Cases
4.7. Summary
5. Prototypical Implementation
5.1. Technology Overview
5.2. Reused Artifacts
5.3. Implementation Details
5.4. Setup of the Prototypical Implementation
5.5. Summary
6. Evaluation
6.1. Evaluation Methodology
6.2. Evaluation Setup
6.3. Experiment Overview
6.4. Default Case: Successful Adaptation
6.5. Compensation on Disconnection of Peers
6.6. Recovery from Failed Adaptation
6.7. Impact of Early Activation of Adaptations
6.8. Comparison with a Blocking Approach
6.9. Empirical Study: Fire Extinguishing Drones
6.10. Summary
7. Conclusion and Future Work
7.1. Recap of the Research Questions
7.2. Discussion
7.3. Future Work
A. Protocol Buffer Definition
Acronyms
Bibliography
OntoEng: A design method for ontology engineering in information systems
This paper addresses the design problem relating to ontology engineering in the discipline of information systems. Ontology engineering is a realm that covers issues related to ontology development and use throughout its life span. Nowadays, ontology as a new innovation promises to improve the design, semantic integration, and utilization of information systems. Ontologies are the backbone of knowledge-based systems. In addition, they establish a sharable and reusable common understanding of specific domains amongst people, information systems, and software agents. Nevertheless, the ontology engineering literature does not provide adequate guidance on how to build, evaluate, and maintain ontologies. On the basis of the experience gathered during the development of the V4 Telecoms Business Model Ontology, as well as an integration of the related literature from the design science paradigm, this paper introduces OntoEng, a novel systematic design method for ontology engineering, and its application.
New ideas and emerging research: evaluating prediction system accuracy
BACKGROUND: Prediction, e.g. of project cost, is an important concern in software engineering. PROBLEM: Although many empirical validations of software engineering prediction systems have been published, no single approach dominates, and making sense of conflicting empirical results is proving challenging. METHOD: We propose a new approach to evaluating competing prediction systems based upon an unbiased statistic (Standardised Accuracy), analysis of results relative to the baseline technique of guessing, and calculation of effect sizes. RESULTS: Two empirical studies are revisited and the published results are shown to be misleading when re-analysed using our new approach. CONCLUSION: Biased statistics such as MMRE are deprecated. By contrast, our approach leads to valid results. Such steps will greatly assist in performing future meta-analyses.
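The Standardised Accuracy statistic mentioned in the abstract is usually defined as SA = (1 - MAR/MAR_P0) * 100, where MAR is the mean absolute residual of the prediction system and MAR_P0 the mean absolute residual of random guessing (predicting each case with the actual of a randomly chosen other case). The sketch below uses hypothetical effort data to illustrate the computation; it is a simplified reading of the statistic, not the authors' exact evaluation pipeline.

```python
import random

def mar(predicted, actual):
    """Mean absolute residual."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def mar_p0(actual, runs=1000, seed=0):
    """MAR of naive guessing: predict each case with the actual value of
    a randomly chosen *other* case, averaged over many runs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        guesses = []
        for i in range(len(actual)):
            j = rng.choice([k for k in range(len(actual)) if k != i])
            guesses.append(actual[j])
        total += mar(guesses, actual)
    return total / runs

def standardised_accuracy(predicted, actual):
    """SA = (1 - MAR / MAR_P0) * 100; values > 0 beat random guessing."""
    return (1 - mar(predicted, actual) / mar_p0(actual)) * 100

# Illustrative (hypothetical) effort data: actual vs. predicted person-hours.
actual    = [120, 80, 200, 150, 60]
predicted = [110, 95, 180, 160, 70]
print(f"SA = {standardised_accuracy(predicted, actual):.1f}%")
```

Unlike MMRE, SA is interpreted relative to a guessing baseline, which is what makes cross-study comparison and effect-size calculation meaningful.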
Research Findings on Empirical Evaluation of Requirements Specifications Approaches
Numerous software requirements specification (SRS) approaches have been proposed in software engineering. However, there has been little empirical evaluation of the use of these approaches in specific contexts. This paper describes the results of a mapping study, a key instrument of the evidence-based paradigm, in an effort to understand which aspects of SRS are evaluated, in which contexts, and using which research methods. On the basis of 46 identified and categorized primary studies, we found that understandability is the most commonly evaluated aspect of SRS, that experiments are the most commonly used research method, and that the academic environment is where most empirical evaluation takes place.