9 research outputs found

    Requirement Mining for Model-Based Product Design

    PLM software applications should enable engineers to develop and manage requirements throughout the product's lifecycle. However, PLM activities at the beginning and end of a product's life still rely on a tedious document-based approach: requirements are scattered across many prescriptive documents (reports, specifications, standards, regulations, etc.), which makes feeding a requirements management tool laborious. Our contribution is two-fold. First, we propose a natural language processing (NLP) pipeline to extract requirements from prescriptive documents. Second, we show how machine learning techniques can be used to develop a text classifier that automatically classifies requirements into disciplines. Both contributions support companies wishing to feed a requirements management tool from prescriptive documents. The NLP experiment shows an average precision of 0.86 and an average recall of 0.95, and the SVM requirements classifier outperforms naive Bayes with a 76% accuracy rate.
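As a rough illustration of the classification step, the sketch below pits a linear SVM against a multinomial naive Bayes model on a handful of made-up requirements. The toy sentences, discipline labels, and scikit-learn setup are assumptions for illustration only, not the authors' dataset or pipeline.

```python
# Minimal sketch: classify requirements into disciplines, comparing an SVM
# against naive Bayes (illustrative toy data, not the paper's corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

requirements = [
    "The pump shall deliver 30 L/min at nominal pressure.",
    "The housing shall withstand vibration loads per ISO 10816.",
    "The controller shall expose a CAN bus interface.",
    "The firmware shall log all fault codes.",
]
disciplines = ["mechanical", "mechanical", "electrical", "software"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(requirements)

svm = LinearSVC().fit(X, disciplines)   # the paper's better performer
nb = MultinomialNB().fit(X, disciplines)

# Classify an unseen requirement with both models.
new_req = vectorizer.transform(["The firmware shall log temperature faults."])
pred = svm.predict(new_req)[0]
print(pred)
```

In a real setting the models would of course be trained on hundreds of labeled requirements and evaluated with cross-validation, which is where accuracy figures like the paper's 76% come from.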

    A Property Graph Data Model for a Context-Aware Design Assistant

    The design of a product requires satisfying a large number of design rules so as to avoid design errors. [Problem] Although there are numerous technological alternatives for managing knowledge, design departments continue to store design rules in nearly unusable documents. Indeed, existing propositions based on basic information retrieval techniques applied to unstructured engineering documents do not provide good results. Conversely, the development and management of structured ontologies are too laborious. [Proposition] We propose a property graph data model that paves the way to a context-aware design assistant. The property graph data model is a graph-oriented data structure that enables us to formally define a design context as a consolidated set of five sub-contexts: social, semantic, engineering, operational IT, and traceability. [Future work] Connected to or embedded in a Computer-Aided Design (CAD) environment, our context-aware design assistant will extend traditional CAD capabilities as it could, for instance, ease: 1) the retrieval of rules according to a particular design context, 2) the recommendation of design rules while a design activity is being performed, 3) the verification of design solutions, and 4) the automation of design routines.
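A minimal sketch of the idea, assuming a toy in-memory property graph. The node ids, labels, properties, and the `APPLIES_IN` relationship are invented for illustration; the authors' model is richer and backed by a graph database.

```python
# Toy property graph: labeled nodes with properties, typed edges.
# Illustrates linking a design rule to sub-contexts so a context-aware
# assistant could retrieve rules relevant to the current design activity.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # id -> {"label": ..., "props": {...}}
        self.edges = []   # (source_id, relationship_type, target_id)

    def add_node(self, node_id, label, **props):
        self.nodes[node_id] = {"label": label, "props": props}

    def add_edge(self, src, rel_type, dst):
        self.edges.append((src, rel_type, dst))

    def neighbors(self, node_id, rel_type):
        return [d for s, r, d in self.edges if s == node_id and r == rel_type]

g = PropertyGraph()
g.add_node("rule1", "DesignRule", text="Fillet radius >= 2 mm on cast parts")
g.add_node("ctx_eng", "EngineeringContext", discipline="mechanical")
g.add_node("ctx_it", "OperationalITContext", tool="CAD")
g.add_edge("rule1", "APPLIES_IN", "ctx_eng")
g.add_edge("rule1", "APPLIES_IN", "ctx_it")

# The kind of query a context-aware assistant could run during design.
print(g.neighbors("rule1", "APPLIES_IN"))  # ['ctx_eng', 'ctx_it']
```

In practice such a model maps naturally onto a graph database such as Neo4j, where the same query would be a short Cypher pattern match rather than a list comprehension.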

    Testing a New Structured Tool for Supporting Requirements’ Formulation and Decomposition

    The definition of a comprehensive initial set of engineering requirements is crucial to an effective and successful design process. To support engineering designers in this non-trivial task, well-acknowledged requirement checklists are available in the literature, but their actual support is arguable. Indeed, engineering design tasks involve multifunctional systems, characterized by a complex map of requirements affecting different functions. Aiming at improving the support provided by common checklists, this paper proposes a structured tool capable of allocating different requirements to specific functions and of discerning between design wishes and demands. A first experiment with the tool enabled the extraction of useful information for future developments targeting the enhancement of the tool's efficacy. Indeed, although some advantages were observed in terms of the number of proposed requirements, the presence of multiple functions led users (engineering students in this work) to useless repetitions of the same requirement. In addition, the use of the proposed tool resulted in increased perceived effort, which was measured through the NASA Task Load Index method. These limitations constitute the starting point for planning future research and the mentioned enhancements, beyond representing a warning for scholars involved in systematizing the extraction and management of design requirements. Moreover, thanks to the robustness of the scientific approach used in this work, similar experiments can be repeated to obtain data with more general validity, especially from industry. Fiorineschi, Lorenzo; Becattini, Niccolò; Borgianni, Yuri; Rotini, Federico
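One way the core data structure of such a tool could look in code. The function names, requirement texts, and the demand/wish split below are invented examples to illustrate the allocation idea, not the paper's actual tool.

```python
# Sketch: allocate requirements to specific product functions and tag
# each one as a binding "demand" or a negotiable "wish" (illustrative).
from collections import defaultdict

allocation = defaultdict(list)  # function -> list of (kind, requirement)

def add_requirement(function, kind, text):
    assert kind in ("demand", "wish"), "only demands and wishes allowed"
    allocation[function].append((kind, text))

add_requirement("heat water", "demand", "Reach 95 degC within 3 min")
add_requirement("heat water", "wish", "Audible signal when ready")
add_requirement("pour water", "demand", "No dripping after pouring")

# Per-function view of the binding demands only.
demands = {f: [t for k, t in reqs if k == "demand"]
           for f, reqs in allocation.items()}
print(sum(len(v) for v in demands.values()))  # 2
```

Making the function explicit for every requirement is also what exposes the repetition problem the paper observed: the same requirement text can end up listed under several functions.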

    A requirement mining framework to support complex sub-systems suppliers

    The design of engineered socio-technical systems relies on a value chain within which suppliers must cope with larger and larger sets of requirements. Although 70% of the total life cycle cost is committed during the concept phase and most industrial projects originally fail due to poor requirements engineering [1], very few methods and tools exist to support suppliers. In this paper, we propose to methodologically integrate data science techniques into a collaborative requirement mining framework so as to enable suppliers to gain insight and discover opportunities in a massive set of requirements. The proposed workflow is a five-phase process including: (1) the extraction of requirements from documents and (2) the analysis of their quality using natural language processing techniques; (3) the segmentation of requirements into communities using text mining and graph theory; (4) the collaborative and multidisciplinary estimation of decision-making criteria; and (5) the reporting of estimations via an analytical dashboard of statistical indicators. We conclude that the methodological integration of data science techniques is an effective way to gain insight from hundreds or thousands of requirements before making informed decisions early on. The software prototype that supports our workflow is a Java web application developed on top of a graph-oriented data model implemented with the Neo4j NoSQL graph database. As future work, the semi-structured as-required baseline could be a sound input to feed a formal approach, such as model- and simulation-based systems engineering.
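Phase 3 of the workflow above (segmenting requirements into communities) can be sketched as follows, assuming a Jaccard word-overlap similarity and connected components as the community criterion. Both are illustrative stand-ins for the authors' text mining and graph-theoretic method.

```python
# Sketch: link requirements whose vocabulary overlaps, then take the
# connected components of the resulting graph as "communities".
requirements = [
    "The valve shall close within 2 s",
    "The valve shall open within 2 s",
    "The display shall show pressure",
    "The display shall show temperature",
]

def jaccard(a, b):
    """Word-set overlap between two requirement sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Union-find over requirement indices to collect connected components.
parent = list(range(len(requirements)))

def find(i):
    while parent[i] != i:
        i = parent[i]
    return i

for i in range(len(requirements)):
    for j in range(i + 1, len(requirements)):
        if jaccard(requirements[i], requirements[j]) > 0.5:
            parent[find(j)] = find(i)

communities = {}
for i in range(len(requirements)):
    communities.setdefault(find(i), []).append(i)
print(sorted(communities.values()))  # [[0, 1], [2, 3]]
```

At scale, a real implementation would use a proper community detection algorithm (e.g. modularity-based) over a weighted similarity graph, which is the kind of structure a Neo4j backend stores natively.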

    Using NLP to generate user stories from software specification in natural language

    Advisor: Andrey Ricardo Pimentel. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 27/08/2018. Includes references: p. 80-82. Abstract: The process of eliciting the user stories required for software development takes time and dedication, and can involve substantial rework if conversations with stakeholders do not provide cohesive information. The main problem is that the client often lacks clarity about what they really want, and in the state of the art there was no approach or tool that helps translate what the customer wants into user stories. With this in mind, we propose the UserStoryGen approach and tool to simplify this extensive work, resolving the issue through natural language processing (NLP) techniques and the standard user story description template to automatically generate user stories. The UserStoryGen approach consists of extracting information such as title, description, main verb, users, and systemic entities of user stories from unstructured text. UserStoryGen uses a big-picture text as input for text processing and automated generation of user stories. The user stories are generated through a RESTful API in JSON format and can be viewed either in that format, if only the API call is used, or through a graphical interface that shows the results in a table. The implementation of UserStoryGen aims to automate the laborious process of extracting user stories from text and obtained significant results, mainly with industry data. Among the three groups of case studies, the third, which used industry data, obtained the best results, with an average accuracy of 76%, precision of 88.23%, recall of 78.95%, and F1 measure of 83.33%. The second group, using texts provided by software engineering specialists, obtained an average accuracy of 73.68%, precision of 85.71%, and F1 measure of 82.76%. The first group, using texts from a white paper and a book, had the worst results, with an average accuracy of 60% and an F1 measure of 60.87%. Based on these results, we conclude that it is possible to pre-identify and extract the candidate user stories for a given text, and the implementation of the proposed approach can be further improved in future work. UserStoryGen benefits the agile development process by eliminating the time spent identifying user stories when the team has a big-picture text or a textual feature document to use as input. Keywords: Natural Language Processing, Automatic Extraction, User Stories, Stanford CoreNLP.
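The extraction idea can be caricatured as follows. UserStoryGen relies on Stanford CoreNLP parses to find users, verbs, and entities; this sketch instead uses a naive regex pattern (`The <user> can <action>`) purely to illustrate the flow from a big-picture text to user stories serialized as JSON.

```python
# Sketch: turn simple declarative sentences into user stories in the
# standard "As a <user>, I want <action>" template (illustrative only;
# the real tool uses dependency parsing, not this regex).
import json
import re

PATTERN = re.compile(r"^The (?P<user>\w+) can (?P<action>.+)$")

def to_user_story(sentence):
    m = PATTERN.match(sentence.strip().rstrip("."))
    if not m:
        return None  # sentence doesn't fit the toy pattern
    return {
        "title": m.group("action").capitalize(),
        "story": f"As a {m.group('user')}, I want to {m.group('action')}.",
    }

big_picture = "The manager can approve invoices. The clerk can upload receipts."
stories = [to_user_story(s) for s in big_picture.split(".") if s.strip()]
print(json.dumps(stories, indent=2))
```

The JSON output mirrors what the dissertation describes the RESTful API returning, either consumed directly or rendered as a table in the graphical interface.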