9 research outputs found

    Validating an approach to formalize use cases with ontologies

    Use case driven development methodologies put use cases at the center of the software development process. However, to support automated development and analysis, use cases need to be appropriately formalized. Formalization also helps guarantee consistency between requirements specifications and developed solutions. Formal methods tend to suffer from take-up issues, as industry is usually reluctant to adopt them. In this context, it is relevant not only to produce languages and approaches that support formalization, but also to validate them. In previous work we developed an approach to formalize use cases by resorting to ontologies. In this paper we present the validation of that approach. Through a three-stage study, we evaluate the acceptance of the language and supporting tool. The first stage focuses on the acceptance of the process and language, the second on the support the tool provides to the process, and the third on the tool's usability. Results show that test subjects found the approach feasible and useful, and the tool easy to use. This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project "POCI-01-0145-FEDER-006961", and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID/EEA/50014/2013.

    Semi-automatic generation of UML models from natural language requirements

    Going from requirements analysis to the design phase is considered one of the most complex and difficult activities in software development. Errors introduced during this activity can be quite expensive to fix in later phases of development. One main reason for such problems is the specification of software requirements in natural language. To overcome some of these defects, we have proposed a technique that provides semi-automated assistance for developers to generate UML models from normalized natural language requirements using Natural Language Processing techniques. The technique initially generates a use-case diagram and an analysis class model (conceptual model), followed by a collaboration model for each use case. It then generates a consolidated design class model, from which a code model can also be generated. It also provides requirements traceability at both the design and code levels, using Key-Word-In-Context and Concept Location techniques respectively to identify inconsistencies in requirements. Finally, the technique generates XML Metadata Interchange (XMI) files for visualizing the generated models in any UML modeling tool with an XMI import feature. This paper extends our existing work by demonstrating its complete usage on a Qualification Verification System as a case study.
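    The pipeline step that turns a normalized requirement sentence into use-case elements can be illustrated with a deliberately naive sketch. This is not the technique the paper implements (which relies on full NLP tooling); the regular expression, function name, and sample sentence below are assumptions for demonstration only, relying on requirements being normalized into a "The <actor> shall <action>" template.

    ```python
    import re

    # Naive heuristic: match normalized requirements of the form
    # "The <actor> shall/must/can <action>." and split out the parts
    # that would seed an actor and a use case in a generated diagram.
    REQ_PATTERN = re.compile(
        r"^The\s+(?P<actor>[\w ]+?)\s+(?:shall|must|can)\s+(?P<action>.+?)\.?$",
        re.IGNORECASE,
    )

    def extract_use_case(requirement: str):
        """Return (actor, action) or None if the sentence does not match."""
        m = REQ_PATTERN.match(requirement.strip())
        if not m:
            return None
        return m.group("actor").lower(), m.group("action")

    print(extract_use_case("The customer shall submit an order."))
    ```

    A real pipeline would replace the template match with POS tagging and dependency parsing, which is what makes the syntactic reconstruction rules mentioned in the next abstract necessary for complex sentences.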

    An automated tool for generating UML models from natural language requirements

    This paper describes a domain-independent tool, named UML Model Generator from Analysis of Requirements (UMGAR), which generates UML models such as the use-case diagram, analysis class model, collaboration diagram, and design class model from natural language requirements using efficient Natural Language Processing (NLP) tools. UMGAR implements a set of syntactic reconstruction rules to break complex requirements into simple ones. It also provides a generic XMI parser to generate XMI files for visualizing the generated models in any UML modeling tool. Compared with existing tools in this area, UMGAR provides more comprehensive support for generating models with proper relationships, and can be used for large requirements documents.

    Data decomposition for code parallelization in practice: what do the experts need?

    Parallelizing serial software systems to run in a High Performance Computing (HPC) environment presents many challenges to developers. In particular, the extant literature suggests that decomposing large-scale data applications is particularly complex and time-consuming. To take stock of the state of practice of data decomposition in HPC, we conducted a two-phase study. First, using focus-group methodology, we conducted an exploratory study at a software laboratory with an established track record in HPC. Based on the findings of this first phase, we designed a survey to assess the state of practice among experts in this field around the world. Our study shows that approximately 75% of parallelized applications use some form of data decomposition. Furthermore, data decomposition was found to be the most challenging phase in the parallelization process, consuming approximately 40% of the total time. A key finding of our study is that experts do not use any of the available tools and formal representations and are, in fact, not aware of them. We discuss why existing tools have not been adopted in industry and, based on our findings, provide a number of recommendations for future tool support.

    Detection of Naming Convention Violations in Process Models for Different Languages

    Companies increasingly use business process modeling for documenting and redesigning their operations. However, due to the size of such modeling initiatives, they often struggle with the quality assurance of their model collections. While many model properties can already be checked automatically, there is a notable lack of techniques for checking linguistic aspects such as naming conventions of process model elements. In this paper, we address this problem by introducing an automatic technique for detecting violations of naming conventions. The technique is based on text corpora and independent of linguistic resources such as WordNet; it can therefore be easily adapted to the broad set of languages for which corpora exist. We demonstrate the applicability of the technique by analyzing nine process model collections from practice, covering over 27,000 labels and three different languages. The results of the evaluation show that our technique yields stable results and can reliably deal with ambiguous cases. In this way, this paper provides an important contribution to the field of automated quality assurance of conceptual models.
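    The core check can be illustrated with a minimal sketch. The paper's technique derives word statistics from large text corpora to decide, for example, whether a label's first word is an imperative verb; here a tiny hand-made verb set stands in for that corpus, and the convention checked (verb-object labels such as "Check invoice") is a common one, assumed for demonstration.

    ```python
    # Stand-in for corpus-derived knowledge: words that a corpus analysis
    # would classify as imperative verbs. A real implementation would score
    # candidate readings against corpus frequencies instead of a fixed set.
    IMPERATIVE_VERBS = {"create", "check", "send", "approve", "reject", "archive"}

    def violates_verb_object(label: str) -> bool:
        """Return True if the label does not start with a known imperative verb,
        i.e. it violates the verb-object naming convention."""
        words = label.lower().split()
        return not words or words[0] not in IMPERATIVE_VERBS

    for label in ["Check invoice", "Invoice checking", "Send reminder"]:
        status = "violation" if violates_verb_object(label) else "ok"
        print(f"{label}: {status}")
    ```

    The corpus-based formulation matters for ambiguous cases: a word like "order" can be read as a noun or a verb, and corpus frequencies give the evidence needed to pick the more likely reading per label.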