18 research outputs found

    Change Impact Analysis for SysML Requirements Models based on Semantics of Trace Relations

    Get PDF
    Change impact analysis is one of the applications of requirements traceability in the software engineering community. In this paper, we focus on requirements and requirements relations from a traceability perspective. We provide formal definitions of the requirements relations in SysML for change impact analysis. Our approach aims at keeping the model synchronized with what stakeholders want to be modeled, and possibly implemented as well, which we call the domain. The differences between the domain and the model are defined as external inconsistencies. These inconsistencies are propagated through the whole model by using the formalization of relations, and mapped to proposed model changes. We provide tool support as a plug-in for the commercial visual software modeler BluePrint.
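
    As a rough illustration of the idea (not the paper's actual FOL formalization), the sketch below assumes a few SysML-style relation types (requires, refines, conflicts) and toy propagation rules, and shows how an external inconsistency on one requirement can be propagated to candidate impacted requirements; all names and rules are hypothetical.

        from collections import defaultdict, deque

        # Toy requirements relations; the paper formalizes SysML relation types in FOL,
        # this only illustrates semantics-driven propagation of inconsistencies.
        RELATIONS = [
            ("R1", "requires", "R2"),
            ("R2", "refines", "R3"),
            ("R4", "conflicts", "R2"),
        ]

        # Assumed propagation rules per relation type (not the paper's exact semantics).
        PROPAGATES = {
            "requires": "both",      # an inconsistency flows in both directions
            "refines": "forward",    # it flows along the direction of the relation
            "conflicts": "both",
        }

        def impacted(changed, relations=RELATIONS):
            """Return the requirements to which an external inconsistency may propagate."""
            fwd, bwd = defaultdict(list), defaultdict(list)
            for src, kind, dst in relations:
                fwd[src].append((kind, dst))
                bwd[dst].append((kind, src))
            seen, queue = {changed}, deque([changed])
            while queue:
                req = queue.popleft()
                for kind, nxt in fwd[req]:
                    if PROPAGATES[kind] in ("forward", "both") and nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
                for kind, prv in bwd[req]:
                    if PROPAGATES[kind] in ("backward", "both") and prv not in seen:
                        seen.add(prv)
                        queue.append(prv)
            return seen - {changed}

        print(impacted("R2"))  # e.g. {'R1', 'R3', 'R4'}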

    The role of distances in requirements communication: a case study

    Get PDF
    Requirements communication plays a vital role in development projects in coordinating the customers, the business roles and the software engineers. Communication gaps represent a significant source of project failures and overruns. For example, misunderstood or uncommunicated requirements can lead to software that does not meet the customers’ requirements, and subsequently to low sales or additional cost to redo the implementation. We propose that RE (Requirements Engineering) distance measures are useful for locating gaps in requirements communication and for improving development practice. In this paper, we present a case study of one software development project to evaluate this proposition. Thirteen RE distances were measured, including geographical and cognitive distances between project members, and semantic distances between requirements and testing artefacts. The findings confirm that RE distances impact requirements communication and project coordination. Furthermore, the concept of distances was found to enable constructive group reflection on communication gaps and improvements to development practices. The insights reported in this paper can provide practitioners with an increased awareness of distances and their impact. Furthermore, the results provide a stepping stone for further research into RE distances and methods for improving software development processes and practices.
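
    The abstract does not say how the semantic distances were computed; as one plausible way to operationalize a semantic distance between a requirement and a testing artefact, the sketch below uses 1 minus the cosine similarity of simple bag-of-words vectors. The texts and the measure itself are illustrative assumptions, not the study's instrument.

        import math
        from collections import Counter

        def semantic_distance(text_a, text_b):
            """1 - cosine similarity over bag-of-words vectors (0.0 = identical wording)."""
            a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
            dot = sum(a[w] * b[w] for w in set(a) & set(b))
            norm = (math.sqrt(sum(v * v for v in a.values()))
                    * math.sqrt(sum(v * v for v in b.values())))
            return 1.0 if norm == 0 else 1.0 - dot / norm

        requirement = "The user shall be able to reset a forgotten password via email"
        test_case = "Verify that a password reset email is sent when the user requests it"
        print(round(semantic_distance(requirement, test_case), 2))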

    Supporting Change Propagation in the Evolution of Enterprise Architectures

    Full text link

    Traceability improvement for software miniaturization

    Get PDF
    On the one hand, software companies try to reach the maximum number of customers, which often translates into integrating more features into their programs, leading to an increase in size, memory footprint, screen complexity, and so on. On the other hand, hand-held devices are now pervasive and their customers ask for programs similar to those they use every day on their desktop computers. Companies are left with two options: either develop new software for hand-held devices or perform manual refactoring to port existing software to them, but both options are expensive and laborious. Software miniaturization can aid companies to port their software to hand-held devices. However, traceability is the backbone of software miniaturization; without up-to-date traceability links it becomes difficult to recover the desired artefacts for the miniaturized software. Unfortunately, due to continuous changes, it is a tedious and time-consuming task to keep traceability links up-to-date, and they often become outdated or vanish completely. Several traceability recovery approaches have been developed in the past. Each approach has some benefits and limitations, but these approaches do not tell which factors can affect the traceability recovery process. Our current research proposal is based on the premise that controlling potential quality factors and combining different traceability approaches can improve traceability quality for software miniaturization. In this research proposal, we introduce the traceability improvement for software miniaturization (TISM) process. TISM has three sub-processes, namely traceability factor controller (TFC), hybrid traceability (HT), and software miniaturization optimization (SMO). TFC is a semi-automatic process that provides solutions for the factors that can affect the traceability process; it uses a generic format to document trace-quality-affecting factors, and its results will help practitioners and researchers improve their tools, techniques, and approaches. In HT, different traceability recovery approaches are combined to trace functional and non-functional requirements; HT also works on improving precision and recall with the help of TFC. Finally, these links are used by SMO to identify the required artefacts and to optimize them using scalability, performance, and portability parameters. We will conduct two case studies to aid TISM. The contributions of this research proposal can be summarised as follows: (i) traceability support for software miniaturization and optimization, (ii) a hybrid approach that combines the best of the available traceability approaches to trace functional and non-functional requirements and provides return-on-investment analysis, (iii) a traceability quality factor controller that records the quality factors and provides support for avoiding or controlling them.
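
    As a rough sketch of what the HT sub-process might look like, the code below combines the candidate-link scores of two hypothetical recovery techniques with a weighted average and computes precision and recall against an assumed ground truth; the technique names, links, weights and threshold are illustrative, not part of TISM.

        def combine(scores_a, scores_b, w=0.5):
            """Hypothetical hybrid scoring: weighted average of two recovery techniques."""
            links = set(scores_a) | set(scores_b)
            return {l: w * scores_a.get(l, 0.0) + (1 - w) * scores_b.get(l, 0.0) for l in links}

        def precision_recall(recovered, correct):
            tp = len(recovered & correct)
            precision = tp / len(recovered) if recovered else 0.0
            recall = tp / len(correct) if correct else 0.0
            return precision, recall

        # Candidate trace links scored by two (hypothetical) recovery techniques.
        vsm = {("REQ1", "Login.java"): 0.82, ("REQ2", "Cart.java"): 0.35}
        lsi = {("REQ1", "Login.java"): 0.67, ("REQ3", "Pay.java"): 0.58}

        hybrid = combine(vsm, lsi)
        recovered = {link for link, score in hybrid.items() if score >= 0.4}
        truth = {("REQ1", "Login.java"), ("REQ3", "Pay.java")}
        print(precision_recall(recovered, truth))  # (1.0, 0.5) for these toy numbers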

    Predicting change propagation using domain-based coupling

    Get PDF
    Most enterprise systems operate in domains where business rules and requirements frequently change. Managing the cost and impact of these changes has been a known challenge, and the software maintenance community has been tackling it for more than two decades. The traditional approach to impact analysis is to trace dependencies in the design documents and the source code. More recently, the software maintenance history has been exploited for impact analysis. The problem is that these approaches are difficult to implement for hybrid systems that consist of heterogeneous components. In today’s computer era, it is common to find systems of systems where each system was developed in a different language. In such environments, it is a challenge to estimate the change propagation between components that are developed in different languages. There is often no direct code dependency between these components, and they are maintained in different development environments by different developers. In addition, it is the domain experts and consultants who raise most of the enhancement requests; however, using the existing change impact analysis methods, they cannot evaluate the impact and cost of the proposed changes without the support of the developers. This thesis seeks to address these problems by proposing a new approach to change impact analysis based on software domain-level information. This approach is based on the assumption that domain-level relationships are reflected in the software source code, and that one can predict software dependencies and change propagation by exploiting software domain-level information. The proposed approach is independent of the software implementation, inexpensive to implement, and usable by domain experts with no requirement to access and analyse the source code. This thesis introduces domain-based coupling as a novel measure of the semantic similarity between software user interface components. The hypothesis is that the domain-based coupling between software components is correlated with the likelihood of the existence of dependencies and change propagation between these components. This hypothesis has been evaluated with two case studies:
    • A study of one of the largest open source enterprise systems demonstrates that architectural dependencies can be identified with an accuracy of more than 70% solely based on the domain-based coupling.
    • A study of 12 years’ maintenance history of the five subsystems of a significant-sized proprietary enterprise system demonstrates that the co-change coupling derived from over 75,000 change records can be predicted solely using domain-based coupling, with average recall and precision of more than 60%, which is of comparable quality to other state-of-the-art change impact analysis methods.
    The results of these studies support our hypothesis that software dependencies and change propagation can be predicted solely from software domain-level information. Although the accuracy of such predictions is not sufficiently strong to completely replace the traditional dependency analysis methods, the presented results suggest that domain-based coupling might be used as a complementary method, or where analysis of dependencies in the code and documents is not a viable option.
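
    A minimal sketch of the general idea, not the thesis's actual definition: domain-based coupling is approximated here as the Jaccard overlap of domain terms appearing on pairs of user interface screens, and pairs are ranked by coupling as a proxy for how likely a change is to propagate between them. The screen names and terms are hypothetical.

        def domain_based_coupling(terms_a, terms_b):
            """Illustrative coupling: overlap of domain terms shown on two UI components (Jaccard)."""
            a, b = set(terms_a), set(terms_b)
            return len(a & b) / len(a | b) if a | b else 0.0

        # Hypothetical domain terms harvested from screens of a heterogeneous system.
        screens = {
            "SalesOrderEntry": ["customer", "order", "item", "price", "discount"],
            "InvoicePrinting": ["customer", "order", "invoice", "price", "tax"],
            "UserAdmin": ["user", "role", "password"],
        }

        pairs = [(a, b) for a in screens for b in screens if a < b]
        ranked = sorted(pairs,
                        key=lambda p: domain_based_coupling(screens[p[0]], screens[p[1]]),
                        reverse=True)
        for a, b in ranked:
            print(f"{a} <-> {b}: {domain_based_coupling(screens[a], screens[b]):.2f}")
        # The higher the coupling, the more likely a change to one screen propagates to the other.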

    Traceability of Requirements and Software Architecture for Change Management

    Get PDF
    Software systems are becoming more and more complex. The requirements of software systems change continuously and new requirements emerge frequently. New and/or modified requirements are integrated with the existing ones, and adaptations to the architecture and source code of the system are made. The process of integrating new/modified requirements and adapting the software system is called change management. The size and complexity of software systems make change management costly and time consuming. To reduce the cost of changes, it is important to apply change management as early as possible in the software development cycle. Requirements traceability is considered crucial in change management for establishing and maintaining consistency between software development artifacts. It is the ability to link requirements back to stakeholders’ rationales and forward to corresponding design artifacts, code, and test cases. When changes to the requirements of the software system are proposed, the impact of these changes on other requirements, design elements and source code should be traced in order to determine the parts of the software system to be changed. Determining the impact of changes on the parts of development artifacts is called change impact analysis. Change impact analysis is applicable to many development artifacts like requirements documents, detailed design, source code and test cases. Our focus is change impact analysis in requirements and software architecture. The need for change impact analysis is observed in both requirements and software architecture. When a change is introduced to a requirement, the requirements engineer needs to find out if any other requirement related to the changed requirement is impacted. After determining the impacted requirements, the software architect needs to identify the impacted architectural elements by tracing the changed requirements to the software architecture. It is hard, expensive and error prone to manually trace impacted requirements and architectural elements from the changed requirements. There are tools and approaches that automate change impact analysis, such as IBM Rational RequisitePro and DOORS. In most of these tools, traces are just simple relations and their semantics is not considered. Due to the lack of semantics of traces in these tools, all requirements and architectural elements directly or indirectly traced from the changed requirement are candidates for impact. The requirements engineer has to inspect all these candidate impacted requirements and architectural elements to identify changes, if there are any. In this thesis we address the following problems, which arise in performing change impact analysis for requirements and software architecture.
    Explosion of impacts in requirements after a change in requirements. In practice, requirements documents are often textual artifacts with implicit structure. Most of the relations among requirements are not given explicitly. There is a lack of precise definition of relations among requirements in most tools and approaches. Due to the lack of semantics of requirements relations, change impact analysis may produce a high number of false positive and false negative impacted requirements. A requirements engineer may have to analyze all requirements in the requirements document for a single change. This may result in neglecting the actual impact of a change.
    Manual, expensive and error-prone trace establishment.
Considerable research has been devoted to relating requirements and design artifacts with source code. Less attention has been paid to relating Requirements (R) with Architecture (A) by using well-defined semantics of traces. Designing an architecture based on requirements is a problem-solving process that relies on human experience and creativity, and is mainly manual. The software architect may need to manually assign traces between R&A. Manual trace assignment is time-consuming, expensive and error prone, and the assigned traces might be incomplete and invalid.
    Explosion of impacts in software architecture after a change in requirements. Due to the lack of semantics of traces between R&A, change impact analysis may produce a high number of false positive and false negative impacted architectural elements. A software architect may have to analyze all architectural elements in the architecture for a single requirements change.
    In this thesis we propose an approach that reduces the explosion of impacts in R&A. The approach employs semantic information of traces and is supported by tools. We consider that every relation between software development artifacts or between elements in these artifacts can play the role of a trace for a certain traceability purpose like change impact analysis. We choose Model Driven Engineering (MDE) as a solution platform for our approach. MDE provides a uniform treatment of software artifacts (e.g. requirements documents, software design and test documents) as models. It also enables using different formalisms to reason about development artifacts described as models. To give an explicit structure to requirements documents and treat requirements, architecture and traces in a uniform way, we use metamodels and models with formally defined semantics. The thesis provides the following contributions:
    A modeling language for the definition of requirements models with formal semantics. The language is defined according to the MDE principles by defining a metamodel. It is based on a survey of the most commonly found requirements types and relation types. With this language, the requirements engineer can explicitly specify the requirements and the relations among them. The semantics of these entities is given in First Order Logic (FOL) and allows two activities. First, new relations among requirements can be inferred from the initial set of relations. Second, requirements models can be automatically checked for consistency of the relations. The Tool for Requirements Inferencing and Consistency Checking (TRIC) is developed to support both activities. The defined semantics is used in a technique for change impact analysis in requirements models.
    A change impact analysis technique for requirements using semantics of requirements relations and requirements change types. The technique aims at solving the problem of explosion of impacts in requirements when semantics of requirements relations is missing. The technique uses formal semantics of requirements relations and requirements change types. A classification of requirements changes based on the structure of a textual requirement is given and formalized. The semantics of requirements change types is based on FOL. We support three activities for impact analysis. First, the requirements engineer proposes changes according to the change classification before implementing the actual changes. Second, the requirements engineer identifies the propagation of the changes to related requirements.
The change alternatives in the propagation are determined based on the semantics of change types and requirements relations. Third, possible contradicting changes are identified. We extend TRIC with support for these activities. The tool automatically determines the change propagation paths, checks the consistency of the changes, and suggests alternatives for implementing the change.
    A technique that provides trace establishment between R&A by using architecture verification and semantics of traces. It is hard, expensive and error prone to manually establish traces between R&A. We present an approach that provides trace establishment by using architecture verification together with semantics of requirements relations and traces. We use a trace metamodel with commonly used trace types. The semantics of traces is formalized in FOL. Software architectures are expressed in the Architecture Analysis and Design Language (AADL). AADL is provided with a formal semantics expressed in Maude. The Maude tool set allows simulation and verification of architectures. The first way to establish traces is to use architecture verification techniques. A given requirement is reformulated as a property in terms of the architecture. The architecture is executed and a state space is produced. This execution simulates the behavior of the system on the architectural level. The property derived from the requirement is checked by the Maude model checker. Traces are generated between the requirement and the architectural components used in the verification of the property. The second way to establish traces is to use the requirements relations together with the semantics of traces. Requirements relations are reflected in the connections among the traced architectural elements based on the semantics of traces. Therefore, new traces are inferred from existing traces by using requirements relations. We use semantics of requirements relations and traces both to generate/validate traces and to generate/validate requirements relations. There is tool support for our approach. The tool provides the following: (1) generation/validation of traces by using requirements relations and/or verification of architecture, (2) generation/validation of requirements relations by using traces.
    A change impact analysis technique for software architecture using architecture verification and semantics of traces between R&A. The software architect needs to identify the impacted architectural elements after a requirements change. We present a change impact analysis technique for software architecture using architecture verification and semantics of traces. The technique is semi-automatic and requires the participation of the software architect. Our technique has two parts. The first part is to identify the architectural elements that implement the system properties to which proposed requirements changes are introduced. By having the formal semantics of requirements relations and traces, we identify which parts of the software architecture are impacted by a proposed change in requirements. We have extended TRIC for determining candidate impacted architectural elements. The second part of our technique is to propose possible changes for the software architecture when it does not satisfy the new and/or changed requirements. The technique is based on architecture verification. The output of verification is a counterexample if the requirements are not satisfied.
The counterexample is used with a classification of architectural changes in order to propose changes in the software architecture. These changes produce a new version of the architecture that possibly satisfies the new or changed requirements.
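
    A toy illustration of the kind of relation inferencing and consistency checking that TRIC supports; the thesis gives the actual semantics in FOL, and the rules below (transitivity of refines, a refines/conflicts clash counting as an inconsistency) are assumptions made only for this example.

        # Hypothetical requirements relations.
        relations = {
            ("R1", "refines", "R2"),
            ("R2", "refines", "R3"),
            ("R1", "conflicts", "R3"),
        }

        def infer(rels):
            """Forward-chain new relations until a fixed point is reached."""
            rels = set(rels)
            changed = True
            while changed:
                changed = False
                new = set()
                for a, t1, b in rels:
                    for c, t2, d in rels:
                        # Assumed rule: 'refines' is transitive.
                        if t1 == t2 == "refines" and b == c and (a, "refines", d) not in rels:
                            new.add((a, "refines", d))
                if new:
                    rels |= new
                    changed = True
            return rels

        def inconsistencies(rels):
            # Assumed rule: a requirement cannot both refine and conflict with the same requirement.
            return {(a, b) for a, _, b in rels
                    if (a, "refines", b) in rels and (a, "conflicts", b) in rels}

        all_rels = infer(relations)
        print(sorted(all_rels - relations))  # inferred: [('R1', 'refines', 'R3')]
        print(inconsistencies(all_rels))     # {('R1', 'R3')}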

    A review of software change impact analysis

    Get PDF
    Change impact analysis is required for constantly evolving systems to support the comprehension, implementation, and evaluation of changes. A lot of research effort has been spent on this subject over the last twenty years, and many approaches have been published. However, no extensive attempt has been made to summarize and review the published approaches as a basis for further research in the area. Therefore, we present the results of a comprehensive investigation of software change impact analysis, which is based on a literature review and a taxonomy for impact analysis. The contribution of this review is threefold. First, approaches proposed for impact analysis are explained regarding their motivation and methodology, and they are further classified according to the criteria of the taxonomy to enable the comparison and evaluation of approaches proposed in the literature. Second, we evaluate our taxonomy regarding the coverage of its classification criteria in the studied literature. Last, we address and discuss yet unsolved problems, research areas, and challenges of impact analysis that were discovered by our review, to illustrate possible directions for further research.

    Fail-Safe Test Generation of Safety Critical Systems

    Get PDF
    This dissertation introduces a technique for testing proper failure mitigation in safety critical systems. Unlike other approaches, which integrate behavioral and failure models and then generate tests from the integrated model, we build safety mitigation tests from an existing behavioral test suite, using an explicit mitigation model from which we generate mitigation paths; these paths are then woven at selected failure points into the original test suite to create failure-mitigation tests (safety mitigation tests).
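
    A minimal sketch of the weaving step under assumed data structures: a behavioral test is a list of steps, a failure is injected at a chosen step, and a path from a mitigation model is appended to form a failure-mitigation test. The domain, step names and failure are hypothetical, not the dissertation's case study.

        def weave(behavioral_test, failure_point, failure, mitigation_path):
            """Build a failure-mitigation test: run the behavioral test up to the chosen
            step, inject the failure, then append the mitigation path."""
            return (behavioral_test[:failure_point + 1]
                    + [f"inject:{failure}"]
                    + mitigation_path)

        behavioral = ["login", "select_dose", "confirm_dose", "start_infusion", "logout"]
        mitigation = ["alarm", "stop_infusion", "enter_safe_state"]  # path from a mitigation model

        test = weave(behavioral, failure_point=3, failure="pump_occlusion", mitigation_path=mitigation)
        print(test)
        # ['login', 'select_dose', 'confirm_dose', 'start_infusion',
        #  'inject:pump_occlusion', 'alarm', 'stop_infusion', 'enter_safe_state']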

    Improvements of and Extensions to FSMWeb: Testing Mobile Apps

    Get PDF
    A mobile application is a software program that runs on a mobile device. In 2017, 178.1 billion mobile apps were downloaded, and the number is expected to grow to 258.2 billion app downloads in 2022 [19]. The number of app downloads poses a challenge for mobile application testers to find the right approach to test apps. This dissertation extends the FSMWeb approach for testing web applications [50] to test mobile applications (FSMApp). While analyzing how FSMWeb could be extended to test mobile apps, we detected a number of shortcomings, which we improved upon; we discuss these first. We present an approach to generate black-box tests to test fail-safe behavior for web applications and apply it to a large commercial web application. The approach uses a functional (behavioral) model to generate tests. It then determines at which states in the execution of the behavioral tests failures can occur and what mitigation requirements need to be tested. Mitigation requirements are used to build mitigation models for each failure type, and from those mitigation models failure-mitigation tests are generated. Next, this dissertation provides an approach for selective black-box model-based fail-safe regression testing for web applications. It classifies existing tests and test requirements as reusable, retestable, and obsolete. Removing reusable test requirements reduces test requirements by 49% to 65% in the case study. The approach also uses partial regeneration for new tests wherever possible. Third, we present the new FSMApp approach to test mobile applications and compare it with several other approaches [88, 37]. A number of case studies explore the applicability, scalability, effectiveness, and efficiency of FSMApp compared with other approaches. Future work makes suggestions on how to improve test generation and execution efficiency with FSMApp.
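
    As a hedged sketch of the regression-test classification step, the code below assumes a test is obsolete if it traverses a deleted FSM transition, retestable if it touches a modified one, and reusable otherwise; the screens, transitions and exact classification criteria are illustrative, not the dissertation's rules.

        def classify(test_paths, deleted, modified):
            """Classify FSM-based test paths against a changed model (assumed criteria)."""
            result = {}
            for name, path in test_paths.items():
                transitions = set(zip(path, path[1:]))
                if transitions & deleted:
                    result[name] = "obsolete"
                elif transitions & modified:
                    result[name] = "retestable"
                else:
                    result[name] = "reusable"
            return result

        # Hypothetical test paths through app screens.
        tests = {
            "t1": ["Home", "Login", "Cart", "Checkout"],
            "t2": ["Home", "Search", "Details"],
            "t3": ["Home", "Login", "Profile"],
        }
        deleted = {("Login", "Profile")}    # transition removed in the new app version
        modified = {("Cart", "Checkout")}   # transition whose behavior changed

        print(classify(tests, deleted, modified))
        # {'t1': 'retestable', 't2': 'reusable', 't3': 'obsolete'}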