    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers representing current work in the community, organized across four process axes of traceability practice. The sessions covered Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of traceability, and traceability datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community and discussed the challenges of adopting tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which calls for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential of traceability to empower software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    Automated legal sensemaking: the centrality of relevance and intentionality

    Introduction: In a perfect world, discovery would be conducted by the senior litigator responsible for developing and fully understanding every nuance of the client's legal strategy. Today, of course, we must deal with the explosion of electronically stored information (ESI), which amounts to tens of thousands of documents even in small cases and increasingly involves multi-million-document populations in internal corporate investigations and litigation. Scalable processes and technologies are therefore required as a substitute for the authority's judgment. The approaches taken have typically either substituted large teams of surrogate human reviewers working from vastly simplified issue-coding reference materials or employed increasingly sophisticated computational resources with little focus on the quality metrics needed to ensure retrieval consistent with the legal goal. What is required is a system (people, process, and technology) that replicates and automates the senior litigator's human judgment. In this paper we draw on 15 years of sensemaking research to establish the minimum acceptable basis for conducting a document review that meets the needs of a legal proceeding. There is no substitute for a rigorous characterization of the explicit and tacit goals of the senior litigator. Once a process has been established for capturing the authority's relevance criteria, we argue that a literal translation of requirements into technical specifications does not properly account for the activities or states of affairs of interest. Having only a data warehouse of written records, it is also necessary to discover the intentions of the actors involved in textual communications. We present quantitative results for a process and technology approach that automates effective legal sensemaking.
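
    As a rough illustration of how an authority's relevance criteria might be operationalized, the sketch below trains a text classifier on a handful of seed judgments and scores the remaining corpus, in the spirit of technology-assisted review. It assumes scikit-learn is available, and the documents and labels are invented for illustration; it is not the process or technology the paper actually evaluates.

```python
# Hypothetical sketch: approximating a senior litigator's relevance judgments
# with a supervised text classifier. All documents and labels are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed set: documents the authority has already coded (1 = relevant).
seed_docs = [
    "email discussing the disputed licensing agreement",
    "routine cafeteria menu announcement",
    "memo on indemnification terms in the draft contract",
    "holiday party logistics thread",
]
seed_labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression yield graded relevance scores.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(seed_docs, seed_labels)

# Score the remaining corpus; high-scoring documents go to human review first.
corpus = ["counsel's notes on the licensing dispute", "parking garage closure notice"]
for doc, p in zip(corpus, model.predict_proba(corpus)[:, 1]):
    print(f"{p:.2f}  {doc}")
```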

    A Framework for Evaluating Traceability Benchmark Metrics

    Many software traceability techniques have been developed in the past decade, but they suffer from inaccuracy. To address this shortcoming, the software traceability research community seeks to employ benchmarking. Benchmarking will help the community agree on whether improvements to traceability techniques have addressed the challenges it faces. A plethora of evaluation methods have been applied, with no consensus on what should be part of a community benchmark. The goals of this paper are: to identify recurring problems in the evaluation of traceability techniques, to identify essential properties that evaluation methods should possess to overcome those problems, and to provide guidelines for benchmarking software traceability techniques. We illustrate the properties and guidelines with an empirical evaluation of three software traceability techniques on nine data sets. The proposed benchmarking framework can be applied broadly to domains beyond traceability research.
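
    For context on the measures such evaluations typically report, here is a minimal sketch of the set-based metrics commonly used for candidate trace links (precision, recall, and F-measure). The link sets are invented, and the paper's specific framework and properties are not reproduced here.

```python
# Hypothetical sketch: set-based evaluation of a traceability technique's
# candidate links against a human-verified answer set.
def precision_recall_f(candidate_links, true_links, beta=1.0):
    """Precision, recall, and F-beta for a set of candidate trace links."""
    candidate, truth = set(candidate_links), set(true_links)
    tp = len(candidate & truth)                      # correctly recovered links
    precision = tp / len(candidate) if candidate else 0.0
    recall = tp / len(truth) if truth else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f

# Links are (source artifact, target artifact) pairs; values are illustrative.
retrieved = [("REQ-1", "code.py"), ("REQ-1", "util.py"), ("REQ-2", "gui.py")]
answer_set = [("REQ-1", "code.py"), ("REQ-2", "gui.py"), ("REQ-3", "db.py")]
print(precision_recall_f(retrieved, answer_set, beta=2.0))  # recall-weighted F2
```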

    Traceability in the U.S. Food Supply: Economic Theory and Industry Studies

    This investigation into the traceability baseline in the United States finds that private-sector food firms have developed a substantial capacity to trace. Traceability systems are a tool that helps firms manage the flow of inputs and products to improve efficiency, product differentiation, food safety, and product quality. Firms balance the private costs and benefits of traceability to determine the efficient level of traceability. In cases of market failure, where the private-sector supply of traceability is not socially optimal, the private sector has developed a number of mechanisms to correct the problem, including contracting, third-party safety/quality audits, and industry-maintained standards. The best-targeted government policies for strengthening firms' incentives to invest in traceability are aimed at ensuring that unsafe or falsely advertised foods are quickly removed from the system, while allowing firms the flexibility to determine the manner in which they do so. Possible policy tools include timed recall standards, increased penalties for distribution of unsafe foods, and increased foodborne-illness surveillance.

    Keywords: traceability, tracking, traceback, tracing, recall, supply-side management, food safety, product differentiation, Food Consumption/Nutrition/Food Safety, Industrial Organization

    Development of a Computer Vision-Based Three-Dimensional Reconstruction Method for Volume-Change Measurement of Unsaturated Soils during Triaxial Testing

    Problems associated with unsaturated soils are ubiquitous in the U.S., where expansive and collapsible soils are among the most widely distributed and costly geologic hazards. Solving these widespread geohazards requires a fundamental understanding of the constitutive behavior of unsaturated soils. Over the past six decades, the suction-controlled triaxial test has been established as the standard approach to characterizing the constitutive behavior of unsaturated soils. However, this type of test requires costly equipment and time-consuming procedures. To overcome these limitations, a photogrammetry-based method was recently developed to measure the global and localized volume changes of unsaturated soils during triaxial tests. However, this method relies on software to detect coded targets, which often requires tedious manual correction of incorrect detections. To address this limitation, this study developed a photogrammetric computer vision-based approach to automatic target recognition and 3D reconstruction for volume-change measurement of unsaturated soils in triaxial tests. A deep learning method was used to improve the accuracy and efficiency of coded-target recognition. A photogrammetric computer vision method and a ray-tracing technique were then developed and validated to reconstruct three-dimensional models of the soil specimen.
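
    As a rough sketch of one building block of such a reconstruction, the code below triangulates a target's 3D position as the midpoint of the closest-approach segment between two camera rays. The camera poses and ray directions are invented for illustration; the study's actual pipeline (coded-target detection, calibration, deep learning) is not shown.

```python
# Hypothetical sketch: two-ray triangulation, a standard primitive in
# photogrammetric 3D reconstruction. Poses and rays below are illustrative.
import numpy as np

def triangulate_rays(c1, d1, c2, d2):
    """Closest point between the rays p = c1 + s*d1 and p = c2 + t*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b              # ~0 when the rays are near-parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1, p2 = c1 + s * d1, c2 + t * d2
    return (p1 + p2) / 2               # midpoint of the closest-approach segment

# Two cameras observing the same coded target on the specimen membrane.
cam1, ray1 = np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0])
cam2, ray2 = np.array([0.5, 0.0, 0.0]), np.array([-0.3, 0.0, 1.0])
print(triangulate_rays(cam1, ray1, cam2, ray2))  # -> [0.125, 0.0, 1.25]
```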

    Improving Traceability Recovery Techniques through the Study of Tracing Methods and Analyst Behavior

    Developing complex software systems often involves multiple stakeholder interactions, coupled with frequent requirements changes, while operating under time constraints and budget pressures. Such conditions can lead to hidden problems that manifest when software modifications cause unexpected component interactions, with potentially catastrophic or fatal consequences. A critical step in ensuring the success of software systems is verifying that all requirements can be traced to the design, source code, test cases, and any other artifacts generated during the software development process. The focus of this research is to improve the trace matrix (TM) generation process and to study how human analysts create the final trace matrix using traceability information generated by automated methods. This dissertation presents new results on the automated generation of traceability matrices and on the analysis of analyst actions during a tracing task. The key contributions of this dissertation are as follows: (1) development of a proximity-based vector space model for automated generation of TMs; (2) use of Mean Average Precision (a ranked retrieval-based measure) and the 21-point interpolated precision-recall graph (a set-based measure) for statistical evaluation of automated methods; (3) logging and visualization of analyst actions during a tracing task; (4) study of human analyst tracing behavior, considering the decisions made during the tracing task and the analysts' tracing strategies; and (5) use of potential recall, sensitivity, and effort distribution as analyst performance measures. The results show that using both a ranked retrieval-based measure and a set-based measure with statistical rigor provides a framework for evaluating automated methods. Studying the human analyst provides insight into how analysts use traceability information to create the final trace matrix and identifies areas for improvement in the traceability process. The analyst performance measures can be used to identify analysts who perform the tracing task well and who use effective tracing strategies to generate a high-quality final trace matrix.
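
    A minimal sketch of Mean Average Precision, the ranked retrieval-based measure named above, applied to per-requirement candidate rankings. The rankings and answer sets are invented for illustration, not the dissertation's data.

```python
# Hypothetical sketch: Mean Average Precision (MAP) over ranked candidate
# lists produced by an automated trace-matrix generation method.
def average_precision(ranked_targets, relevant):
    """AP for one source artifact's ranked candidate list."""
    relevant = set(relevant)
    hits, ap = 0, 0.0
    for rank, target in enumerate(ranked_targets, start=1):
        if target in relevant:
            hits += 1
            ap += hits / rank          # precision at each relevant hit
    return ap / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """MAP over all source artifacts (e.g., requirements)."""
    return sum(average_precision(r, rel) for r, rel in queries) / len(queries)

queries = [
    (["c1.py", "c2.py", "c3.py"], {"c1.py", "c3.py"}),  # REQ-1's ranking
    (["c2.py", "c1.py", "c3.py"], {"c2.py"}),           # REQ-2's ranking
]
print(mean_average_precision(queries))  # -> 0.9166...
```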

    Tracing requirement objects as an information retrieval task

    In large requirement databases, tracing different objects to each other, e.g. higher-level requirements to lower-level requirements, or requirements to their verification methods, can be a tedious job. With numerous objects in the database, selecting the corresponding object from a list can take a long time. In this thesis, standard information retrieval (IR) methods, in particular multiple variants of vector space modelling, are applied to provide a shortlist of a few objects predicted to be relevant, thereby speeding up the selection process. The aim of the thesis is to demonstrate the use of such an IR system on a real-life requirement data set, providing an end-to-end solution from processing the relevant data to showing the shortlist in a GUI view. Separating the data into training and validation subsets and setting up a suitable evaluation metric are also essential for benchmarking future developments.
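
    A minimal sketch of one such vector space variant: TF-IDF vectors compared with cosine similarity to shortlist candidate trace targets. It assumes scikit-learn, and the requirement texts are invented; the thesis's own implementation and data set may differ.

```python
# Hypothetical sketch: TF-IDF + cosine similarity shortlisting of lower-level
# objects for a higher-level requirement. All texts are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

high_level = ["The system shall encrypt all stored user credentials."]
low_level = [
    "Hash passwords with bcrypt before writing them to the database.",
    "Render the login page with the corporate style sheet.",
    "Rotate encryption keys for the credential store every 90 days.",
]

# Fit one vocabulary over both levels so the vectors are comparable.
vectorizer = TfidfVectorizer().fit(high_level + low_level)
scores = cosine_similarity(
    vectorizer.transform(high_level), vectorizer.transform(low_level)
)[0]

# Shortlist: the top-k most similar lower-level objects for the GUI view.
for idx in scores.argsort()[::-1][:2]:
    print(f"{scores[idx]:.2f}  {low_level[idx]}")
```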