
    VTrace-A Tool for Visualizing Traceability Links Among Software Artefacts for an Evolving System

    Traceability management plays a key role in tracing the life of a requirement through all the specifications produced during the development phase of a software project. A lack of traceability information not only hinders understanding of the system but also becomes a bottleneck in its future maintenance. Projects that maintain traceability information during development often fail to keep their artefacts up to date or to maintain traceability among the different versions of the artefacts produced during the maintenance phase. As a result, the software artefacts lose their trustworthiness and engineers mostly work from the source code for impact analysis. The goal of our research is to understand the impact of visualizing traceability links on change management tasks for an evolving system. As part of our research we have implemented a traceability visualization tool, VTrace, that manages software artefacts and also enables the visualization of traceability links. The results of our controlled experiment show that subjects who used the tool were more accurate and faster on change management tasks than subjects who did not use the tool.
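    The abstract does not describe VTrace's internal data model; the following is only a minimal sketch of how traceability links among artefacts might be stored and queried for change-impact analysis. All names (TraceGraph, REQ-12, DESIGN-3, src/payment.py) are hypothetical.

```python
# Minimal sketch (not VTrace's actual data model): artefacts and trace links
# held in a small graph, with a transitive query for change-impact analysis.
from collections import defaultdict

class TraceGraph:
    def __init__(self):
        self.links = defaultdict(set)  # artefact id -> linked artefact ids

    def add_link(self, source: str, target: str) -> None:
        """Record a bidirectional traceability link between two artefacts."""
        self.links[source].add(target)
        self.links[target].add(source)

    def impact_of(self, artefact: str) -> set:
        """Return every artefact transitively reachable from the changed one."""
        seen, stack = set(), [artefact]
        while stack:
            current = stack.pop()
            for neighbour in self.links[current]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    stack.append(neighbour)
        seen.discard(artefact)
        return seen

graph = TraceGraph()
graph.add_link("REQ-12", "DESIGN-3")
graph.add_link("DESIGN-3", "src/payment.py")
print(graph.impact_of("REQ-12"))  # {'DESIGN-3', 'src/payment.py'}
```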

    Datasets Used in Fifteen Years of Automated Requirements Traceability Research

    Datasets are crucial to advancing automated software traceability research. Acquiring such datasets comes at a high cost and requires expert knowledge to collect and validate them manually. Obtaining software development datasets has been one of the most frequently reported barriers for researchers in the software engineering domain in general. The problem is even more acute in the field of requirements traceability, which plays a crucial role in safety-critical and highly regulated systems. The main motivation behind this work is therefore to analyze the current state of the art of datasets used in the field of software traceability. This work presents a first-of-its-kind literature study that reviews and assesses the datasets used in software traceability research over the last fifteen years, examining attributes such as their characteristics, threats and diversity. First, 202 primary studies (see Appendix A) were identified for the purpose of this study, from which 73 unique datasets were derived. These 73 datasets were studied in depth and several attributes (size, type, domain, availability, artifacts) were extracted (see Appendix B). Based on the analysis of the primary studies, a threats-to-validity reference model tailored to software traceability datasets was derived (see Figure 4.4). Furthermore, to shed light on the dataset diversity trend in the software traceability community, a metric called Dataset Diversity Ratio was derived for the 38 authors (see Figure 4.5) who have published more than one publication in the field of software traceability.
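    As an illustration of the kind of attribute catalogue the study describes (size, type, domain, availability and artifact types per dataset), here is a minimal sketch; the record layout and the entries are illustrative assumptions, not data from Appendix B.

```python
# Minimal sketch of recording the dataset attributes the study extracts
# (size, type, domain, availability, artifacts); entries are illustrative,
# not values taken from the study.
from dataclasses import dataclass
from collections import Counter

@dataclass
class TraceabilityDataset:
    name: str
    size: int              # number of artifacts
    domain: str
    artifact_types: tuple   # e.g. ("requirements", "source code")
    publicly_available: bool

catalogue = [
    TraceabilityDataset("ExampleHealthcare", 450, "healthcare",
                        ("requirements", "design"), True),
    TraceabilityDataset("ExampleAvionics", 1200, "safety-critical",
                        ("requirements", "source code"), False),
]

# Summarise domain coverage and public availability across the catalogue.
print(Counter(d.domain for d in catalogue))
print(sum(d.publicly_available for d in catalogue), "of", len(catalogue), "public")
```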

    What have we learnt from the challenges of (semi-)automated requirements traceability? A discussion on blockchain applicability.

    Over the last three decades, researchers have attempted to shed light on the requirements traceability problem by introducing tracing tools, techniques, and methods with the vision of achieving ubiquitous traceability. Despite the technological advances, requirements traceability remains problematic for researchers and practitioners. This study aims to identify and investigate the main challenges in implementing (semi-)automated requirements traceability, as reported in the recent literature. A systematic literature review was carried out based on the guidelines for systematic literature reviews in software engineering proposed by Kitchenham. We retrieved 4530 studies by searching five major bibliographic databases and selected 70 primary studies. These studies were analysed and classified according to the challenges they present and/or address. Twenty-one challenges were identified and classified into five categories. Findings reveal that the most frequent challenges are technological, in particular the low accuracy of traceability recovery methods. Findings also suggest that future research efforts should be devoted to the human facet of tracing, to exploring traceability practices in organisational settings, and to developing traceability approaches that support agile and DevOps practices. Finally, it is recommended that researchers leverage blockchain technology as a suitable technical solution to ensure the trustworthiness of traceability information in interorganisational software projects.
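    The blockchain recommendation above is about making traceability information tamper-evident across organisations. Below is a minimal sketch of that underlying idea, assuming a simple hash chain over trace-link records; a real blockchain deployment would also add consensus and distributed storage, which this sketch omits.

```python
# Minimal sketch of tamper evidence for trace-link records: each record
# carries the hash of its predecessor, so any retroactive edit breaks the
# chain. This is an illustration of the idea, not the reviewed approach.
import hashlib, json

def record_hash(record: dict, previous_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
previous = "0" * 64
for link in [{"source": "REQ-7", "target": "test_login.py"},
             {"source": "REQ-9", "target": "auth_service.java"}]:
    digest = record_hash(link, previous)
    chain.append({"link": link, "prev": previous, "hash": digest})
    previous = digest

# Verification: recompute every hash and compare with the stored value.
ok = all(entry["hash"] == record_hash(entry["link"], entry["prev"])
         for entry in chain)
print("chain intact:", ok)
```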

    Exploring Knowledge Engineering Strategies in Designing and Modelling a Road Traffic Accident Management Domain

    Formulating knowledge for use in AI planning engines is currently something of an ad-hoc process, where the skills of knowledge engineers and the tools they use may significantly influence the quality of the resulting planning application. There is, however, little in the way of guidelines or standard procedures for knowledge engineers to use when formulating knowledge into planning domain languages such as PDDL. This paper investigates this process using a road traffic accident management domain as a case study. Managing road accidents requires systematic, sound planning and coordination of resources to improve outcomes for accident victims. In consultation with stakeholders, we have derived a set of requirements for the resource-coordination part of managing accidents. We evaluate two separate knowledge engineering strategies for encoding the resulting planning domain from this set of requirements: (a) the traditional method of PDDL experts and a text editor, and (b) a leading planning GUI with built-in UML modelling tools. These strategies are evaluated using process and product metrics, where the domain model (the product) was tested extensively with a range of planning engines. The results give insights into the strengths and weaknesses of the two approaches, highlight lessons learned regarding knowledge encoding, and point to important lines of research for knowledge engineering for planning.

    Recovering from a Decade: A Systematic Mapping of Information Retrieval Approaches to Software Traceability

    Engineers in large-scale software development have to manage large amounts of information, spread across many artifacts. Several researchers have proposed expressing the retrieval of trace links among artifacts, i.e., trace recovery, as an Information Retrieval (IR) problem. The objective of this study is to produce a map of work on IR-based trace recovery, with a particular focus on previous evaluations and strength of evidence. We conducted a systematic mapping of IR-based trace recovery. Of the 79 publications classified, a majority applied algebraic IR models. While a set of studies on students indicates that IR-based trace recovery tools support certain work tasks, most previous studies do not go beyond reporting precision and recall of candidate trace links from evaluations using datasets containing fewer than 500 artifacts. Our review identified a need for industrial case studies. Furthermore, we conclude that the overall quality of reporting should be improved regarding context and tool details, the measures reported, and the use of IR terminology. Finally, based on our empirical findings, we present suggestions on how to advance research on IR-based trace recovery.
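    Since most of the mapped studies apply algebraic IR models, a minimal sketch of vector-space trace recovery may help: artifacts are represented as TF-IDF vectors and candidate links are ranked by cosine similarity. This is a generic illustration using scikit-learn, not a reconstruction of any specific tool in the review; the requirement and code texts are made up.

```python
# Minimal sketch of algebraic IR-based trace recovery: represent artifacts
# as TF-IDF vectors and rank candidate links by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = {
    "REQ-1": "the user shall log in with a password",
    "REQ-2": "the system shall export reports as PDF",
}
code_artifacts = {
    "LoginController.java": "validate password and create user session",
    "ReportExporter.java": "render report and write pdf output file",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(requirements.values()) +
                                  list(code_artifacts.values()))
req_vecs = matrix[: len(requirements)]
code_vecs = matrix[len(requirements):]

similarity = cosine_similarity(req_vecs, code_vecs)
for i, req_id in enumerate(requirements):
    ranked = sorted(zip(code_artifacts, similarity[i]),
                    key=lambda pair: pair[1], reverse=True)
    print(req_id, "->", ranked)  # candidate trace links, best match first
```

    In a precision/recall evaluation of the kind the mapping criticises as insufficient on its own, the ranked candidates would then be cut off at some similarity threshold and compared against a manually validated answer set.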

    Managing technical debt through software metrics, refactoring and traceability


    Traceability Management Architectures Supporting Total Traceability in the Context of Software Engineering

    In the area of software engineering, traceability is defined as the capability to track requirements, their evolution and their transformation into the different components related to the engineering process, as well as the management of the relationships between those components. However, the current state of the art in traceability does not take into account many of the elements that compose a product, especially those created before requirements arise, nor the appropriate use of traceability to manage the underlying knowledge so that it can be handled by other organizational or engineering processes. In this work we describe the architecture of a reference model that establishes a set of definitions, processes and models which allow a proper management of traceability and further uses of it, in a wider context than that of software development.

    Semi-Automatic Derivation of Feature Localisation during Software Development: Master's Thesis

    Despite extensive research on software product lines in recent decades, ad-hoc clone-and-own development is still the dominant way of introducing variability into software systems. Therefore, the same issues for which software product lines were developed in the first place are still imminent in clone-and-own development: fixing bugs consistently throughout clones and avoiding duplicate implementation effort is extremely difficult, as similarities and differences between variants are unknown. To remedy this, we enhance clone-and-own development with techniques from product-line engineering for targeted variant synchronisation, so that domain knowledge can be integrated stepwise and without obligation. Contrary to retroactive feature-mapping recovery (e.g., mining) techniques, we infer feature-to-code mappings directly during software development, when concrete domain knowledge is present. In this thesis, we focus on the first step towards targeted synchronisation between variants: the recording of feature mappings. By letting developers specify which feature they are working on, we derive feature mappings directly during software development. We ensure syntactic validity of feature mappings and variant synchronisation by implementing disciplined annotations through abstract syntax trees. To bridge the mismatch between change classification on the implementation and abstract layers, we synthesise semantic edits on abstract syntax trees. We show that our derivation can be used to reproduce variability-related real-world code changes and compare it to the feature-mapping derivation of the projectional variation control system VTS by Stanciulescu et al.
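    The recording step described above can be illustrated with a minimal sketch: the developer declares which feature they are currently working on, and subsequent edits are attributed to it. This is a simplified, line-level illustration under assumed names (FeatureMappingRecorder), not the thesis's AST-based disciplined annotations or semantic edits.

```python
# Minimal sketch of the recording idea: the developer declares the feature
# under development, and every subsequent edit is mapped to that feature.
# The thesis works on abstract syntax trees with disciplined annotations;
# this sketch only illustrates the bookkeeping.
from collections import defaultdict

class FeatureMappingRecorder:
    def __init__(self):
        self.current_feature = None
        self.mappings = defaultdict(set)  # feature -> set of (file, line)

    def set_current_feature(self, feature: str) -> None:
        """Called when the developer switches to working on another feature."""
        self.current_feature = feature

    def record_edit(self, file: str, line: int) -> None:
        """Attribute an edited location to the feature under development."""
        if self.current_feature is not None:
            self.mappings[self.current_feature].add((file, line))

recorder = FeatureMappingRecorder()
recorder.set_current_feature("Encryption")
recorder.record_edit("Client.java", 42)
recorder.record_edit("Server.java", 7)
print(dict(recorder.mappings))
```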