
    A review of software change impact analysis

    Change impact analysis is required for constantly evolving systems to support the comprehension, implementation, and evaluation of changes. Considerable research effort has been spent on this subject over the last twenty years, and many approaches have been published. However, no extensive attempt has been made to summarize and review the published approaches as a basis for further research in the area. We therefore present the results of a comprehensive investigation of software change impact analysis, based on a literature review and a taxonomy for impact analysis. The contribution of this review is threefold. First, approaches proposed for impact analysis are explained with regard to their motivation and methodology, and are classified according to the criteria of the taxonomy to enable the comparison and evaluation of approaches proposed in the literature. Second, we evaluate our taxonomy with respect to the coverage of its classification criteria in the studied literature. Last, we address and discuss as yet unsolved problems, research areas, and challenges of impact analysis discovered by our review, to illustrate possible directions for further research.
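
    The review surveys many techniques rather than prescribing one, but the core idea behind the dependency-based impact analysis approaches it covers can be illustrated with a minimal sketch: given a graph of which entities depend on which, compute everything transitively reachable from a changed entity. The module names and graph below are hypothetical.

        from collections import deque

        def impact_set(dependents, changed):
            """Return all entities transitively affected by a change.

            dependents maps each entity to the entities that depend on it,
            so edges point from a changed entity toward its potential impact.
            """
            impacted, queue = set(), deque([changed])
            while queue:
                entity = queue.popleft()
                for dep in dependents.get(entity, ()):
                    if dep not in impacted:
                        impacted.add(dep)
                        queue.append(dep)
            return impacted

        # Hypothetical module-level dependency data.
        dependents = {
            "parser": ["compiler", "linter"],
            "compiler": ["ide_plugin"],
            "linter": [],
        }
        print(impact_set(dependents, "parser"))  # {'compiler', 'linter', 'ide_plugin'}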

    Exploring Maintainability Assurance Research for Service- and Microservice-Based Systems: Directions and Differences

    To ensure sustainable software maintenance and evolution, a diverse set of activities and concepts such as metrics, change impact analysis, or antipattern detection can be used. Special maintainability assurance techniques have been proposed for service- and microservice-based systems, but it is difficult to get a comprehensive overview of this publication landscape. We therefore conducted a systematic literature review (SLR) to collect and categorize maintainability assurance approaches for service-oriented architecture (SOA) and microservices. Our search strategy led to the selection of 223 primary studies from 2007 to 2018, which we categorized with a threefold taxonomy: a) architectural (SOA, microservices, both), b) methodical (method or contribution of the study), and c) thematic (maintainability assurance subfield). We discuss the distribution among these categories and present different research directions as well as exemplary studies per thematic category. The primary finding of our SLR is that, while very few approaches have been suggested for microservices so far (24 of 223, ~11%), we identified several thematic categories where existing SOA techniques could be adapted for the maintainability assurance of microservices.
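
    The threefold categorization lends itself to a simple tally. As a minimal sketch, assuming hypothetical study records shaped after the taxonomy's three dimensions, the distribution reported in such an SLR can be computed like this:

        from collections import Counter

        # Hypothetical primary-study records following a threefold taxonomy.
        studies = [
            {"architecture": "SOA", "method": "metric", "theme": "antipattern detection"},
            {"architecture": "microservices", "method": "tool", "theme": "change impact analysis"},
            {"architecture": "both", "method": "case study", "theme": "antipattern detection"},
        ]

        for dimension in ("architecture", "method", "theme"):
            print(dimension, Counter(s[dimension] for s in studies))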

    A taxonomy of asymmetric requirements aspects

    The early aspects community has received increasing attention among researchers and practitioners and has developed a set of meaningful terminology and concepts in recent years, including the notion of requirements aspects. Aspects at the requirements level represent stakeholder concerns that crosscut the problem domain, with the potential for a broad impact on questions of scoping, prioritization, and architectural design. Although many existing requirements engineering approaches advocate and advertise integral support for early aspects analysis, one challenge is that the notion of a requirements aspect is not yet well enough established to serve the community effectively. Instead of defining the term once and for all in a typically arduous and unproductive conceptual unification stage, we present a preliminary taxonomy, based on a literature survey, that shows the different features of an asymmetric requirements aspect. Existing approaches that handle requirements aspects are compared and classified according to the proposed taxonomy. In addition, we study crosscutting security requirements to exemplify the taxonomy's use, substantiate its value, and explore its future directions.

    A Taxonomy for a Constructive Approach to Software Evolution

    In many software design and evaluation techniques, either the software evolution problem is not systematically elaborated, or only the impact of evolution is considered. Thus, most of the time software is changed by editing the components of the software system, i.e. breaking down the software system. The software engineering discipline provides many mechanisms that allow evolution without breaking down the system; however, the contexts in which these mechanisms are applicable are not taken into account, and software design and evaluation techniques do not support identifying these contexts. In this paper, we provide a taxonomy of software evolution that can be used to identify the context of the evolution problem. The identified contexts are used to retrieve, from the software engineering discipline, the mechanisms that can evolve the software without breaking it down. To build such a taxonomy, we build a model for software evolution and use this model to identify the factors that affect the selection of software evolution mechanisms. Our approach is based on solution sets; however, the contents of these sets may vary at different stages of the software life-cycle. To address this problem, we introduce perspectives, which are filters that select relevant elements from a solution set. We apply our taxonomy to a parser tool to show how it copes with problematic evolution problems.
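
    The notions of solution sets and perspectives are abstract, but the filtering idea can be sketched concretely. In the sketch below, the evolution mechanisms and life-cycle stages are invented for illustration; a perspective is simply a filter applied to the solution set.

        # A solution set of evolution mechanisms, each tagged with the
        # life-cycle stages in which it is applicable (hypothetical data).
        solution_set = [
            {"mechanism": "subclassing", "stages": {"design", "implementation"}},
            {"mechanism": "plugin loading", "stages": {"runtime"}},
            {"mechanism": "configuration file", "stages": {"deployment", "runtime"}},
        ]

        def perspective(stage):
            """A perspective selects the mechanisms relevant to one life-cycle stage."""
            return [entry["mechanism"] for entry in solution_set if stage in entry["stages"]]

        print(perspective("runtime"))  # ['plugin loading', 'configuration file']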

    Ways of Applying Artificial Intelligence in Software Engineering

    As Artificial Intelligence (AI) techniques have become more powerful and easier to use, they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs, it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI, but we lack methods to classify the ways AI is applied in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate them. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy, which categorises applications according to their point of AI application, the type of AI technology used, and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. The results show that the taxonomy allows classification of distinct AI applications and provides insights concerning the risks associated with them. We argue that this will be important for companies deciding how to apply AI in their software applications and creating strategies for its use.
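
    A taxonomy with three fixed dimensions maps naturally onto a typed record. The sketch below encodes the three AI-SEAL dimensions named in the abstract as one dataclass; the concrete category values are placeholders, since the abstract does not enumerate them.

        from dataclasses import dataclass

        # The three AI-SEAL dimensions named in the abstract; the example
        # values below are illustrative, not the taxonomy's actual categories.
        @dataclass(frozen=True)
        class AISealClassification:
            point_of_application: str  # e.g. "process" vs. "product"
            ai_technology: str         # e.g. "machine learning"
            automation_level: int      # higher means more autonomy allowed

        paper = AISealClassification("product", "machine learning", 5)
        print(paper)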

    The effectiveness of refactoring, based on a compatibility testing taxonomy and a dependency graph

    In this paper, we describe and then appraise a testing taxonomy proposed by van Deursen and Moonen (VD&M) based on the post-refactoring repeatability of tests. Four categories of refactoring are identified by VD&M, ranging from semantic-preserving to incompatible, where, for the former, no new tests are required and, for the latter, a completely new test set has to be developed. In our appraisal of the taxonomy, we strongly stress the need to consider the inter-dependence of the refactoring categories when making refactoring decisions, and we base that need on a refactoring dependency graph developed as part of the research. We demonstrate that while incompatible refactorings may be harmful and time-consuming from a testing perspective, semantic-preserving refactorings can have equally unpleasant hidden ramifications despite their advantages. In fact, refactorings that fall into neither category have the most interesting properties. We support our results with empirical refactoring data drawn from seven Java open-source systems (OSS) and, from the same analysis, form a tentative categorization of code smells.
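
    The abstract's central caution, that a refactoring's category cannot be judged in isolation from the refactorings it depends on, can be illustrated with a small graph walk. The refactorings, dependencies, and category assignments below are hypothetical.

        # Hypothetical refactoring dependency graph: each refactoring lists the
        # refactorings it depends on, plus its VD&M test-repeatability category.
        refactorings = {
            "rename_method": {"deps": [], "category": "semantic-preserving"},
            "extract_method": {"deps": ["rename_method"], "category": "semantic-preserving"},
            "move_method": {"deps": ["extract_method"], "category": "incompatible"},
        }

        def requires_new_tests(name):
            """True if a refactoring, or anything it transitively depends on,
            is not semantic-preserving and so may invalidate existing tests."""
            entry = refactorings[name]
            if entry["category"] != "semantic-preserving":
                return True
            return any(requires_new_tests(dep) for dep in entry["deps"])

        print(requires_new_tests("move_method"))     # True
        print(requires_new_tests("extract_method"))  # False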

    Evaluating cost taxonomies for information systems management

    The consideration of costs, benefits, and risks underpins many Information System (IS) evaluation decisions. Yet vendors and project champions alike tend to identify and focus much of their effort on the benefits achievable from the adoption of new technology, as it is often not in the interest of key stakeholders to spend too much time considering the wider cost and risk implications of enterprise-wide technology adoptions. Having identified this void in the literature, the authors present a critical analysis of IS-cost taxonomies and establish that such taxonomies tend to be esoteric and difficult to operationalize, as they lack specifics and detail. Therefore, to develop a deeper understanding of IS-related costs, the authors position the need to identify, control, and reduce IS-related costs within the information systems evaluation domain by collating and then synthesizing the literature into a frame of reference that supports the evaluation of information systems through a deeper understanding of IS-cost taxonomies. The paper concludes by emphasizing that the total costs associated with IS adoption can only be determined after considering the multi-faceted dimensions of information system investments.

    What to Fix? Distinguishing between design and non-design rules in automated tools

    Technical debt, the design shortcuts taken to optimize for delivery speed, is a critical part of long-term software costs. Consequently, automatically detecting technical debt is a high priority for software practitioners. Software quality tool vendors have responded to this need by positioning their tools to detect and manage technical debt. While these tools bundle a number of rules, it is hard for users to understand which rules identify design issues, as opposed to syntactic quality. This matters because previous studies have revealed that the most significant technical debt is related to design issues. Other research has focused on comparing these tools on open-source projects, but those comparisons have not examined whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach and manually classified 466 software quality rules from three industry tools: CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either not design (55%) or design (19%); the remainder (26%) resulted in disagreements among the labelers. Our results are a first step toward formalizing a definition of a design rule, in order to support automatic detection. (Long version of a short paper accepted at the International Conference on Software Architecture 2017, Gothenburg, SE.)
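
    The 55%/19%/26% split comes from comparing labels across raters. A minimal sketch of that kind of tally is shown below; the rule names and labels are invented for illustration.

        from collections import Counter

        # Hypothetical labels from three raters for each quality rule.
        labels = {
            "unused_import": ["not design", "not design", "not design"],
            "god_class": ["design", "design", "design"],
            "magic_number": ["design", "not design", "design"],
        }

        def outcome(votes):
            """Unanimous rules keep their label; anything else is a disagreement."""
            return votes[0] if len(set(votes)) == 1 else "disagreement"

        print(Counter(outcome(votes) for votes in labels.values()))
        # Counter({'not design': 1, 'design': 1, 'disagreement': 1})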