230 research outputs found

    Requirement Mining for Model-Based Product Design

    PLM software applications should enable engineers to develop and manage requirements throughout a product’s lifecycle. However, PLM activities at the beginning and end of a product’s life still rely largely on a tedious document-based approach: requirements are scattered across many prescriptive documents (reports, specifications, standards, regulations, etc.), which makes populating a requirements management tool laborious. Our contribution is twofold. First, we propose a natural language processing (NLP) pipeline to extract requirements from prescriptive documents. Second, we show how machine learning techniques can be used to develop a text classifier that automatically sorts requirements into disciplines. Both contributions support companies that want to feed a requirements management tool from prescriptive documents. The NLP experiment achieves an average precision of 0.86 and an average recall of 0.95, while the SVM requirements classifier outperforms a naive Bayes classifier, reaching 76% accuracy.
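
    The classification step described here maps naturally onto a standard bag-of-words text classifier. The sketch below is a minimal, hypothetical Python version using scikit-learn; the discipline labels, example requirements, and TF-IDF features are assumptions for illustration, not the authors' actual setup.

```python
# Hypothetical sketch of the SVM-vs-naive-Bayes comparison for classifying
# requirements into disciplines. Labels, sentences, and features are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: requirement text -> engineering discipline (invented).
requirements = [
    "The pump casing shall withstand a pressure of 25 bar.",
    "The control software shall log every operator command.",
    "The wiring harness shall be insulated to 600 V.",
    "The firmware shall reboot within 2 seconds after a watchdog reset.",
]
disciplines = ["mechanical", "software", "electrical", "software"]

for name, clf in [("SVM", LinearSVC()), ("naive Bayes", MultinomialNB())]:
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(requirements, disciplines)
    print(name, model.predict(["The motor driver shall limit current to 5 A."]))
```

    Running both models through the same pipeline keeps the SVM-versus-naive-Bayes comparison to a one-line swap of the final estimator.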

    Using tools to assist identification of non-requirements in requirements specifications: A controlled experiment

    [Context and motivation] In many companies, textual fragments in specification documents are categorized into requirements and non-requirements. This categorization is important for determining liability, deriving test cases, and many other decisions. In practice, it is usually performed manually, which makes it labor-intensive and error-prone. [Question/Problem] We have developed a tool that assists users in this task by providing warnings based on classification with neural networks. However, we do not yet know whether using the tool actually helps increase classification quality compared to not using it. [Principal idea/results] We therefore performed a controlled experiment with two groups of students: one group used the tool for a given task, whereas the other did not. By comparing the performance of both groups, we can assess in which scenarios applying our tool is beneficial. [Contribution] The results show that an automated classification approach may provide benefits, provided its accuracy is high enough.
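
    The paper's tool and network architecture are not reproduced in the abstract, but the warning mechanism it evaluates can be sketched with any neural text classifier. Below is a hypothetical stand-in using scikit-learn's MLPClassifier over character n-grams; the fragments, labels, and architecture are all invented for illustration.

```python
# Hypothetical stand-in for a neural warning tool that flags specification
# fragments as requirement or non-requirement. Data and model are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

fragments = [
    "The system shall encrypt all stored passwords.",    # requirement
    "Figure 3 shows the deployment overview.",           # non-requirement
    "The UI must respond to user input within 200 ms.",  # requirement
    "This chapter summarizes the stakeholder workshop.", # non-requirement
]
labels = ["req", "non-req", "req", "non-req"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(fragments, labels)

# A warning could be raised whenever the predicted class is "non-req".
print(model.predict(["The export function shall support CSV and XML."]))
```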

    Application of machine learning techniques to the flexible assessment and improvement of requirements quality

    It is already common to compute quantitative metrics over requirements to assess their quality. The risk, however, is building assessment methods and tools that are both arbitrary and rigid in how metrics are parameterized and combined. Specifically, we show that a linear combination of metrics is insufficient to adequately compute a global measure of quality. In this work, we propose a flexible method to assess and improve the quality of requirements that can be adapted to different contexts, projects, organizations, and quality standards, with a high degree of automation. Domain experts contribute an initial set of requirements that they have classified according to quality, and we extract quality metrics from them. We then use machine learning techniques to emulate the experts' implicit quality function. We also provide a procedure to suggest improvements to bad requirements. We compare the resulting rule-based classifiers with different machine learning algorithms, obtaining effectiveness measurements of around 85%. We also show what the generated rules look like and how to interpret them. The method can be tailored to different contexts, different styles of writing requirements, and different quality demands, and the whole process of inferring and applying the quality rules adapted to each organization is highly automated. This research received funding from the CRYSTAL project (Critical System Engineering Acceleration; European Union's Seventh Framework Programme FP7/2007-2013, ARTEMIS Joint Undertaking grant agreement no. 332830) and from the AMASS project (Architecture-driven, Multi-concern and Seamless Assurance and Certification of Cyber-Physical Systems; H2020-ECSEL grant agreement no. 692474; Spain's MINECO ref. PCIN-2015-262).
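
    The rule-based classifiers mentioned here are not published with the abstract, but a decision tree trained on per-requirement metrics illustrates the general idea: the learned tree is itself a human-readable rule set. The metric names, thresholds, and toy data below are assumptions.

```python
# Hypothetical sketch of learning interpretable quality rules from
# requirement metrics. Metric names, data, and labels are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [sentence_length, ambiguous_terms, passive_voice_count]
metrics = [
    [12, 0, 0],   # short, precise       -> good
    [45, 3, 2],   # long, vague, passive -> bad
    [18, 1, 0],   # mostly precise       -> good
    [60, 4, 3],   # long and vague       -> bad
]
quality = ["good", "bad", "good", "bad"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(metrics, quality)

# The learned tree doubles as a readable rule set.
print(export_text(tree, feature_names=[
    "sentence_length", "ambiguous_terms", "passive_voice_count"]))
```

    A rule printed this way ("ambiguous_terms <= 1.5 -> good") also hints at how improvement suggestions could be derived: move a bad requirement's metrics across the nearest rule boundary.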

    Sentence Embedding Models for Similarity Detection of Software Requirements

    Semantic similarity detection mainly relies on the availability of laboriously curated ontologies, as well as of supervised and unsupervised neural embedding models. In this paper, we present two domain-specific sentence embedding models trained on a natural-language requirements dataset in order to derive sentence embeddings specific to the software requirements engineering domain. We use cosine-similarity measures in both models. The results of the experimental evaluation confirm that the proposed models improve textual semantic similarity measures over existing state-of-the-art neural sentence embedding models: we reach an accuracy of 88.35%, which improves on existing benchmarks by about 10%.
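
    The paper's two domain-specific models are not included here, so the sketch below substitutes a generic public model (all-MiniLM-L6-v2 from the sentence-transformers library) purely to show the embed-then-cosine-similarity mechanics the abstract describes.

```python
# Hypothetical sketch of similarity detection between requirements via
# sentence embeddings and cosine similarity. The generic public model used
# here is a stand-in, not one of the paper's domain-specific models.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # substitute model

reqs = [
    "The system shall authenticate users before granting access.",
    "Users must log in before they can use the application.",
    "The report generator shall export results as PDF.",
]
embeddings = model.encode(reqs, convert_to_tensor=True)

# Pairwise cosine similarities; near-duplicate requirements score close to 1.0.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```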

    Litigating Partial Autonomy

    Who is responsible when a semi-autonomous vehicle crashes? Automobile manufacturers claim that because Advanced Driver Assistance Systems (ADAS) require constant human oversight even when autonomous features are active, the driver is always fully responsible when supervised autonomy fails. This Article argues that the automakers’ position is likely wrong both descriptively and normatively. On the descriptive side, current products liability law offers a pathway toward shared legal responsibility. Automakers, after all, have engaged in numerous marketing efforts to gain public trust in automation features. When drivers’ trust turns out to be misplaced, drivers are not always able to react in time to retake control of the car. In such cases, the automaker is likely to face primary liability, perhaps with a reduction for the driver’s comparative fault. On the normative side, this Article argues that the nature of modern semi-autonomous systems requires the human and machine to engage in a collaborative driving endeavor. The human driver should not bear full liability for harm arising from this shared responsibility. As lawsuits involving partial autonomy increase, the legal system will face growing challenges in incentivizing safe product development, allocating liability in line with fair principles, and leaving room for a nascent technology to improve in ways that, over time, will add substantial safety protections. The Article develops a framework for considering how those policy goals can play a role in litigation involving autonomous features. It offers three key recommendations: (1) that courts consider collaborative driving as a system when allocating liability; (2) that the legal system recognize and encourage regular software updates for vehicles; and (3) that customers pursue fraud and warranty claims when manufacturers overstate their autonomous capabilities. Claims for economic damages can encourage manufacturers to internalize the cost of product defects before, rather than after, their customers suffer serious physical injury.

    Supporting Analysts by Dynamic Extraction and Classification of Requirements-Related Knowledge

    In many software development projects, analysts must deal with system requirements from unfamiliar domains. Familiarity with the domain is necessary to get full leverage from interaction with stakeholders and to extract relevant information from existing project documents. Accurate and timely extraction and classification of requirements knowledge support analysts in this challenging scenario. Our approach is to mine real-time interaction records and project documents for phrasal units relevant to the requirements-related topics discussed during elicitation. We propose to use both generative and discriminative methods. To extract the relevant terms, we leverage the flexibility and power of Weighted Finite State Transducers (WFSTs) in dynamically modelling natural language processing tasks. We use an extended version of Support Vector Machines (SVMs) with variable-sized feature vectors to efficiently and dynamically extract and classify requirements-related knowledge from existing documents. To evaluate the performance of our approach intuitively and quantitatively, we use edit distance and precision/recall metrics. We show in three case studies that the snippets extracted by our method are intuitively relevant and reasonably accurate. Furthermore, we find that statistical and linguistic parameters, such as smoothing methods and word contiguity and order features, can affect the performance of both the extraction and classification tasks.
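
    The WFST and SVM machinery is beyond a short sketch, but the edit-distance evaluation the authors mention is easy to make concrete. Below is a minimal Levenshtein implementation in plain Python; the extracted and reference snippets are invented examples.

```python
# Minimal Levenshtein edit distance, as one might use to score how closely
# an automatically extracted snippet matches a reference snippet. The
# example strings are invented; the paper's corpora are not reproduced.
def edit_distance(a: str, b: str) -> int:
    """Number of single-character insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

extracted = "system shall log operator commands"
reference = "the system shall log all operator commands"
print(edit_distance(extracted, reference))  # smaller is better
```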

    A Machine Learning-Based Approach for Demarcating Requirements in Textual Specifications

    A simple but important task during the analysis of a textual requirements specification is to determine which statements in the specification represent requirements. In principle, by following suitable writing and markup conventions, one can provide an immediate and unequivocal demarcation of requirements at the time a specification is being developed. However, neither the presence of such conventions nor their fully accurate enforcement is guaranteed. The result is that, in many practical situations, analysts end up resorting to after-the-fact reviews to sift requirements from other material in a specification, which is both tedious and time-consuming. We propose an automated, machine learning-based approach for demarcating requirements in free-form requirements specifications that can be applied to a wide variety of specifications in different domains and with different writing styles. We train and evaluate our approach on an independently labeled dataset comprising 30 industrial requirements specifications, on which it yields an average precision of 81.2% and an average recall of 95.7%. Compared to simple baselines that demarcate requirements based on the presence of modal verbs and identifiers, our approach leads to an average gain of 16.4% in precision and 25.5% in recall.
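
    The baseline mentioned in the abstract, demarcating requirements by the presence of modal verbs, is simple enough to sketch directly, together with the precision/recall scoring used to compare approaches. The sentences and gold labels below are invented.

```python
# Hypothetical sketch of a modal-verb demarcation baseline plus
# precision/recall scoring. Sentences and gold labels are invented.
import re

MODALS = re.compile(r"\b(shall|must|should|will)\b", re.IGNORECASE)

def is_requirement(sentence: str) -> bool:
    return bool(MODALS.search(sentence))

# (sentence, gold label: True if it is a requirement)
sentences = [
    ("The system shall archive logs daily.", True),
    ("An overview of the architecture follows.", False),
    ("Operators must confirm shutdown requests.", True),
    ("The vendor will be selected in phase two.", False),  # modal, yet not a requirement
]

tp = sum(is_requirement(s) and gold for s, gold in sentences)
fp = sum(is_requirement(s) and not gold for s, gold in sentences)
fn = sum(not is_requirement(s) and gold for s, gold in sentences)

print(f"precision={tp / (tp + fp):.2f} recall={tp / (tp + fn):.2f}")
```

    The reported gains of 16.4% in precision and 25.5% in recall are measured against baselines of roughly this kind.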