
    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers representing current work in the community, organized across four process axes of traceability practice. The sessions covered Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and Benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    Iterative criteria-based approach to engineering the requirements of software development methodologies

    Software engineering endeavours are typically based on and governed by the requirements of the target software; requirements identification is therefore an integral part of software development methodologies. Similarly, engineering a software development methodology (SDM) involves identifying the requirements of the target methodology. Methodology engineering approaches pay special attention to this issue; however, they make little use of existing methodologies as sources of insight into methodology requirements. The authors propose an iterative method for eliciting and specifying the requirements of an SDM using existing methodologies as supplementary resources. The method is performed as the analysis phase of a methodology engineering process aimed at the ultimate design and implementation of a target methodology. An initial set of requirements is first identified by analysing the characteristics of the development situation at hand and/or by delineating the general features desirable in the target methodology. These initial requirements are used as evaluation criteria and refined through iterative application to a select set of relevant methodologies. The finalised criteria highlight the qualities that the target methodology is expected to possess, and are therefore used as a basis for defining the final set of requirements. In an example, the authors demonstrate how the proposed elicitation process can be used to identify the requirements of a general object-oriented SDM. Owing to its basis in knowledge gained from existing methodologies and practices, the proposed method can help methodology engineers produce a set of requirements that is not only more complete in span, but also more concrete and rigorous.
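
    The elicitation loop described above (start from initial criteria, apply them to a select set of existing methodologies, refine, and repeat until the criteria stabilise) can be sketched briefly. The following is a minimal, runnable illustration; the data shapes and helper logic are assumptions made for demonstration, not the authors' implementation.

        # Sketch of the iterative criteria-refinement loop from the abstract.
        # All data shapes and helpers here are illustrative assumptions.

        def evaluate(criterion, methodology):
            # Assumption: a methodology is a dict mapping criterion names
            # to a 0..1 satisfaction score assigned during evaluation.
            return methodology.get(criterion, 0.0)

        def refine(criterion, scores, threshold=0.5):
            # Assumption: criteria poorly covered by existing methodologies
            # are flagged as novel so they get specified more concretely.
            if max(scores) < threshold and not criterion.endswith(" [novel]"):
                return criterion + " [novel]"
            return criterion

        def elicit_requirements(initial_criteria, methodologies, max_rounds=10):
            """Refine evaluation criteria by applying them to existing SDMs."""
            criteria = list(initial_criteria)
            for _ in range(max_rounds):
                revised = [refine(c, [evaluate(c, m) for m in methodologies])
                           for c in criteria]
                if revised == criteria:  # fixed point: criteria have stabilised
                    break
                criteria = revised
            return criteria  # the stabilised criteria become the requirements

        sdm_a = {"traceability": 0.8, "tool support": 0.3}
        sdm_b = {"traceability": 0.6, "seamlessness": 0.7}
        print(elicit_requirements(["traceability", "tool support"], [sdm_a, sdm_b]))
        # ['traceability', 'tool support [novel]']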

    A research review of quality assessment for software

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored, and methods of evaluating those factors are discussed. The quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures that have been shown to indicate software quality. However, it is believed that many factors indicate quality but have not been empirically validated owing to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.
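
    Such a factor-based assessment can be made concrete as a simple weighted checklist. The sketch below only illustrates the idea, assuming placeholder scores and equal weights; it is not AdaNet's actual measurement scheme.

        # Illustrative only: scoring a reusable component over the six
        # reuse-oriented quality factors named in the abstract. Scores
        # and weights are placeholder assumptions, not AdaNet measures.

        QUALITY_FACTORS = [
            "correctness", "reliability", "verifiability",
            "understandability", "modifiability", "certifiability",
        ]

        def overall_quality(scores, weights=None):
            """Weighted mean of the per-factor scores (each in 0..1)."""
            weights = weights or {f: 1.0 for f in QUALITY_FACTORS}
            total = sum(weights[f] for f in QUALITY_FACTORS)
            return sum(scores.get(f, 0.0) * weights[f]
                       for f in QUALITY_FACTORS) / total

        component = {"correctness": 0.9, "reliability": 0.8,
                     "verifiability": 0.6, "understandability": 0.7,
                     "modifiability": 0.5, "certifiability": 0.4}
        print(round(overall_quality(component), 2))  # 0.65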

    Virtualisation of the test environment for signalling

    ERTMS is a well-known, well-performing technology applied all over the world, but it still lacks flexibility when it comes to authorisation and certification procedures. The key to its future success lies as much in cost reduction as in the simplification of placing-in-service procedures. This holds true for the implementation of a new subsystem, and even more so for new software releases related to subsystems already in service. Currently, the placing-in-service process for ETCS components and subsystems requires a large number of tests due to the complexity of the signalling systems and the different engineering rules applied. The S2R Multi-Annual Action Plan states that these onsite tests account for at least 30% of the effort and time of any particular project. The VITE research project (VIrtualisation of the Test Environment) aims at reducing these onsite tests to a minimum while ensuring that laboratory tests can serve as evidence of valid system behaviour and are accepted by all stakeholders involved in the placing-in-service process. This paper presents the first VITE results.

    Product specification documentation standard and Data Item Descriptions (DID). Volume of the information system life-cycle and documentation standards, volume 3

    This is the third of five volumes on Information System Life-Cycle and Documentation Standards, which present a well-organized, easily used standard for providing the technical information needed for developing information systems, components, and related processes. This volume states the Software Management and Assurance Program documentation standard for a product specification document and for data item descriptions. The framework can be applied to any NASA information system, including software, hardware, and operational procedures components, and to related processes.

    An approach to impact analysis in software maintenance

    Impact analysis is a software maintenance activity that consists of determining the scope of a requested change as a basis for planning and implementing it. After a change request has been specified (change understanding) and the initial part of the system to be changed has been identified (change localization), impact analysis helps to understand the consequences of the change on other parts of the system. Induced changes among software components, also called ripple effects, are detected. Most existing approaches perform impact analysis for changes occurring at the code level. In this thesis, concepts developed to perform impact analysis at the code level are applied to trace changes occurring at the design level. The method consists of an activity model addressing the different steps of impact analysis and a data model on which propagations of changes can be traced. The method is validated with a case study applied to a system from the aerospace field. The tools we developed on PCTE support consistency checks in HOOD-based designs during editing. Our data model, based on an Entity-Relationship notation, describes a way to model HOOD diagrams in PCTE and, further on, to propagate changes through the repository. The examples chosen address the design phase of a simple engine system. We show that addressing modifications at a higher level of abstraction than code eases the understanding and localization of changes. It also limits the propagation of ripple effects (i.e., unexpected behaviour of the system) by detecting secondary changes at an earlier stage.
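
    At its core, design-level ripple-effect detection is a reachability computation over dependency links among design entities. The sketch below illustrates that idea under a simplified assumption (an in-memory dependency graph standing in for the PCTE repository); it is not the tooling described in the thesis.

        # Minimal sketch: ripple effects as reachability over dependencies.
        # The dict-based graph is an assumption standing in for the HOOD
        # design data stored in PCTE; it is not the thesis tooling.
        from collections import deque

        def ripple_effects(depends_on, changed):
            """Return the components transitively impacted by a change.

            depends_on maps a component to the components that depend on
            it, i.e. each edge points in the direction a change ripples.
            """
            impacted, queue = set(changed), deque(changed)
            while queue:
                component = queue.popleft()
                for dependant in depends_on.get(component, ()):
                    if dependant not in impacted:  # visit each node once
                        impacted.add(dependant)
                        queue.append(dependant)
            return impacted - set(changed)  # secondary (induced) changes

        design = {"Engine": ["Throttle", "Monitor"],
                  "Throttle": ["Controller"]}
        print(ripple_effects(design, ["Engine"]))
        # {'Throttle', 'Monitor', 'Controller'}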