
    Automated Test Case Generation from Domain-Specific High-Level Requirement Models

    One of the most researched aspects of the software engineering process is the verification and validation of software systems using various techniques. The need to ensure that the developed software system addresses its intended specifications has led to several approaches that link the requirements-gathering and software-testing phases of development. This thesis presents a framework that bridges the gap between requirement specification and testing of software using domain-specific modelling concepts. The proposed modelling notation, High-Level Requirement Modelling Language (HRML), addresses the drawbacks of Natural Language (NL) for high-level requirement specifications, including ambiguity and incompleteness. Real-time checks are implemented to ensure that only valid HRML specification models are used for automated test case generation. The type of HRML requirement specified in the model determines the approach employed to generate the corresponding test cases. Boundary Value Analysis and Equivalence Partitioning are applied to specifications with predefined range values to generate valid and invalid inputs for robustness test cases. Structural coverage test cases are also generated to satisfy the Modified Condition/Decision Coverage (MC/DC) criterion for HRML specifications with logic expressions. In scenarios where conditional statements are combined with logic expressions, the MC/DC approach is extended to generate the corresponding test cases. An evaluation of the proposed framework is reported, covering a case study with industry experts, its scalability, a comparative study, and an assessment of its learnability by non-experts. The results indicate a reduction in test case generation time in the case study; non-experts spent more time modelling the requirements in HRML, but their test case generation time was likewise reduced.
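
    As a rough illustration of the two generation strategies described above, the following Python sketch derives boundary/partition test inputs for a range-valued specification and finds MC/DC independence pairs for a logic expression. The range format, function names, and example decision are assumptions for illustration, not the thesis's actual HRML artefacts.

        from itertools import product

        def range_test_inputs(lo, hi):
            """Boundary Value Analysis plus Equivalence Partitioning for an
            integer range [lo, hi]: the boundaries, their neighbours, an
            interior representative, and one invalid value past each boundary."""
            valid = [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]
            invalid = [lo - 1, hi + 1]  # inputs for robustness test cases
            return valid, invalid

        def mcdc_pairs(decision, n):
            """For a boolean decision over n conditions, find for each condition
            an independence pair: two test vectors differing only in that
            condition whose outcomes differ (the core of the MC/DC criterion)."""
            pairs = {}
            for i in range(n):
                for v in product([False, True], repeat=n):
                    w = tuple(not b if j == i else b for j, b in enumerate(v))
                    if decision(*v) != decision(*w):
                        pairs[i] = (v, w)
                        break
            return pairs

        print(range_test_inputs(0, 100))                      # ([0, 1, 50, 99, 100], [-1, 101])
        print(mcdc_pairs(lambda a, b, c: (a and b) or c, 3))  # one independence pair per condition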

    Abstract syntax as interlingua: Scaling up the grammatical framework from controlled languages to robust pipelines

    Abstract syntax is an interlingual representation used in compilers. Grammatical Framework (GF) applies the abstract syntax idea to natural languages. The development of GF started in 1998, first as a tool for controlled language implementations, where it has gained an established position in both academic and commercial projects. GF provides grammar resources for over 40 languages, enabling accurate generation and translation, as well as grammar engineering tools and components for mobile and Web applications. On the research side, the focus in the last ten years has been on scaling up GF to wide-coverage language processing. The concept of abstract syntax offers a unified view on many other approaches: Universal Dependencies, WordNets, FrameNets, Construction Grammars, and Abstract Meaning Representations. This makes it possible for GF to utilize data from these other approaches and to build robust pipelines. In return, GF can contribute to data-driven approaches by methods to transfer resources from one language to others, to augment data by rule-based generation, to check the consistency of hand-annotated corpora, and to pipe analyses into high-precision semantic back ends. This article gives an overview of the use of abstract syntax as interlingua through both established and emerging NLP applications involving GF.
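
    The interlingua idea can be sketched outside GF as well: one language-neutral tree, several linearizers. GF grammars are written in GF's own grammar formalism, so the Python below (with made-up lexicons and function names) only mimics the concept.

        from dataclasses import dataclass

        @dataclass
        class Pred:
            """Abstract syntax node: a subject predicated by a verb."""
            subj: str
            verb: str

        def lin_eng(t: Pred) -> str:            # English concrete syntax
            verbs = {"walk": "walks"}
            return f"the {t.subj} {verbs[t.verb]}"

        def lin_fre(t: Pred) -> str:            # French concrete syntax
            nouns = {"cat": "chat"}
            verbs = {"walk": "marche"}
            return f"le {nouns[t.subj]} {verbs[t.verb]}"

        tree = Pred("cat", "walk")   # the shared, language-neutral representation
        print(lin_eng(tree))         # the cat walks
        print(lin_fre(tree))         # le chat marche
        # Translation = parse the source sentence to the shared tree,
        # then linearize that tree in the target language.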

    Proceedings of the 1st Standardized Knowledge Representation and Ontologies for Robotics and Automation Workshop

    Welcome to the IEEE-ORA (Ontologies for Robotics and Automation) IROS workshop. This is the 1st edition of the workshop on Standardized Knowledge Representation and Ontologies for Robotics and Automation. The IEEE-ORA 2014 workshop was held on the 18th of September, 2014 in Chicago, Illinois, USA. In the IEEE-ORA IROS workshop, 10 contributions were presented from 7 countries in North and South America, Asia and Europe. The presentations took place in the afternoon, from 1:30 PM to 5:00 PM. The first session was dedicated to “Standards for Knowledge Representation in Robotics”, with presentations from the IEEE working group on standards for robotics and automation, and also from ISO TC 184/SC2/WG7. The second session was dedicated to “Core and Application Ontologies”, with presentations on core robotics ontologies as well as industrial and robot-assisted surgery ontologies. Three posters were presented on emerging applications of ontologies in robotics. We would like to express our thanks to all participants. First of all to the authors, whose quality work is the essence of this workshop. Next, to all the members of the international program committee, who helped us with their expertise and valuable time. We would also like to deeply thank the IEEE-IROS 2014 organizers for hosting this workshop. Our deep gratitude goes to the IEEE Robotics and Automation Society, which sponsors the IEEE-ORA group activities, and also to the scientific organizations that kindly agreed to sponsor the workshop authors' work.

    Model-based Specification and Analysis of Natural Language Requirements in the Financial Domain

    Software requirements form an important part of the software development process. In many software projects conducted by companies in the financial sector, analysts specify software requirements using a combination of models and natural language (NL). Neither models nor NL requirements alone provide a complete picture of the information in the software system, and NL is highly prone to quality issues, such as vagueness, ambiguity, and incompleteness. Poorly written requirements are difficult to communicate and reduce the opportunity to process requirements automatically, particularly for the automation of tedious and error-prone tasks, such as deriving acceptance criteria (AC). AC are conditions that a system must meet to be consistent with its requirements and be accepted by its stakeholders. AC are derived by developers and testers from requirement models. To obtain precise AC, it is necessary to reconcile the information content in the NL requirements and the requirement models. In collaboration with an industrial partner from the financial domain, we first systematically developed and evaluated a controlled natural language (CNL) named Rimay to help analysts write functional requirements. We then proposed an approach that detects common syntactic and semantic errors in NL requirements. Our approach suggests Rimay patterns to fix errors and convert NL requirements into Rimay requirements. Based on our results, we propose a semiautomated approach that reconciles the content in the NL requirements with that in the requirement models. Our approach helps modelers enrich their models with information extracted from NL requirements. Finally, an existing test-specification derivation technique was applied to the enriched model to generate AC.

    The first contribution of this dissertation is a qualitative methodology that can be used to systematically define a CNL for specifying functional requirements. This methodology was used to create Rimay, a CNL grammar, to specify functional requirements. This CNL was derived after an extensive qualitative analysis of a large number of industrial requirements and by following a systematic process using lexical resources. An empirical evaluation of Rimay in a realistic setting through an industrial case study demonstrated that 88% of the requirements used in our empirical evaluation were successfully rephrased using Rimay.

    The second contribution of this dissertation is an automated approach that detects syntactic and semantic errors in unstructured NL requirements. We refer to these errors as smells. To this end, we first proposed a set of 10 common smells found in the NL requirements of financial applications. We then derived a set of 10 Rimay patterns as suggestions for fixing the smells. Finally, we developed an automatic approach that analyzes the syntax and semantics of NL requirements to detect any present smells and then suggests a Rimay pattern to fix each smell. We evaluated our approach in an industrial case study and obtained promising results for detecting smells in NL requirements (precision 88%) and for suggesting Rimay patterns (precision 89%).

    The last contribution of this dissertation was prompted by the observation that reconciling the information content in the NL requirements and the associated models is necessary to obtain precise AC. To achieve this, we define a set of 13 information extraction rules that automatically extract AC-related information from NL requirements written in Rimay. Next, we propose a systematic method that generates recommendations for model enrichment based on the information extracted by the 13 extraction rules. Using a real case study from the financial domain, we evaluated the usefulness of the AC-related model enrichments recommended by our approach. The domain experts found that 89% of the recommended enrichments were relevant to AC yet absent from the original model (a precision of 89%).
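
    To make the smell-detection-and-suggestion idea concrete, here is a deliberately naive Python sketch. The actual approach analyzes the syntax and semantics of requirements with NLP techniques; the smell names, regular expressions, and the single Rimay-like pattern below are illustrative assumptions only.

        import re

        # Hypothetical smell detectors (illustrative, not the dissertation's 10 smells).
        SMELLS = {
            "passive voice (actor unclear)": re.compile(r"\b(?:is|are|shall be)\s+\w+ed\b", re.I),
            "vague adverb": re.compile(r"\b(?:quickly|appropriately|efficiently)\b", re.I),
            "missing trigger": re.compile(r"^(?!\s*when\b).*\bshall\b", re.I),
        }

        # A hypothetical Rimay-like pattern suggested as a fix.
        SUGGESTION = "When <trigger>, the <actor> shall <action>."

        def detect_smells(requirement):
            """Return the smells found in one NL requirement, plus a suggested pattern."""
            found = [name for name, rx in SMELLS.items() if rx.search(requirement)]
            return found, (SUGGESTION if found else None)

        smells, fix = detect_smells("The report shall be generated quickly.")
        print(smells)   # all three illustrative smells fire on this requirement
        print(fix)      # When <trigger>, the <actor> shall <action>.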

    Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)


    Ontology verbalization in agglutinating Bantu languages: a study of Runyankore and its generalizability

    Natural Language Generation (NLG) systems have been developed to generate text in multiple domains, including personalized patient information. However, their application is limited in Africa because they generate text in English, yet indigenous languages are still predominantly spoken throughout the continent, especially in rural areas. The existing healthcare NLG systems cannot be reused for Bantu languages due to their complex grammatical structure, nor can the generated text be used in machine translation systems for Bantu languages because these languages are computationally under-resourced. This research aimed to verbalize ontologies in agglutinating Bantu languages. We had four research objectives: (1) noun pluralization and verb conjugation in Runyankore; (2) Runyankore verbalization patterns for the selected description logic constructors; (3) combining the pluralization, conjugation, and verbalization components to form a Runyankore grammar engine; and (4) generalizing the Runyankore and isiZulu approaches to ontology verbalization to other agglutinating Bantu languages. We used an approach that combines morphology with syntax and semantics to develop a noun pluralizer for Runyankore, and used Context-Free Grammars (CFGs) for verb conjugation. We developed verbalization algorithms for eight description logic constructors. We then combined these components into a grammar engine developed as a Protégé 5.X plugin. The investigation into generalizability used the bootstrap approach, and investigated bootstrapping for languages in the same language zone (intra-zone bootstrappability) and languages across language zones (inter-zone bootstrappability). We obtained verbalization patterns for Luganda and isiXhosa, in the same zones as Runyankore and isiZulu respectively, and for chiShona, Kikuyu, and Kinyarwanda from different zones, and used the bootstrap metric that we developed to identify the most efficient source-target bootstrap pair. By regrouping Meinhof’s noun class system we were able to eliminate non-determinism during computation, and this led to the development of a generic noun pluralizer. We also showed that CFGs can conjugate verbs in the five additional languages. Finally, we proposed the architecture for an API that could be used to generate text in agglutinating Bantu languages. Our research provides a method for surface realization for an under-resourced and grammatically complex family of languages, the Bantu languages. We leave the development of a complete NLG system based on the Runyankore grammar engine, and of the API, as areas for future work.
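
    A toy version of the noun-class pluralization component might look as follows in Python. The real pluralizer combines morphology with syntax and semantics; the two prefix mappings below are a simplified illustration, and the hard case they ignore is exactly the non-determinism noted above (the singular prefix omu- belongs to class 1, pluralized aba-, but also to class 3, pluralized emi-).

        # Simplified singular-to-plural noun class prefix mapping (illustrative).
        SG_TO_PL_PREFIX = {
            "omu": "aba",   # class 1 -> 2 (humans), e.g. omuntu -> abantu ('person(s)')
            "eki": "ebi",   # class 7 -> 8 (artefacts), e.g. ekitabo -> ebitabo ('book(s)')
        }

        def pluralize(noun):
            """Swap the singular class prefix for its plural counterpart."""
            for sg, pl in SG_TO_PL_PREFIX.items():
                if noun.startswith(sg):
                    return pl + noun[len(sg):]
            raise ValueError(f"unknown noun class for {noun!r}")

        print(pluralize("omuntu"))    # abantu
        print(pluralize("ekitabo"))   # ebitabo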

    Clinical practice knowledge acquisition and interrogation using natural language

    The scientific concepts, methodologies and tools in the Knowledge Representation (KR) subdomain of applied Artificial Intelligence (AI) have come a long way, with enormous strides in recent years. The usage of domain conceptualizations, that is, Ontologies, is now powerful enough to aim at computable reasoning over complex realities. One of the most challenging scientific and technical human endeavors is the daily Clinical Practice (CP) of Cardiovascular (CV) specialty healthcare providers. Such a complex domain can benefit greatly from clinical reasoning aids that are now at the edge of becoming available. We research a complete, solid, end-to-end ontological infrastructure for CP knowledge representation, as well as the associated processes to automatically acquire knowledge from clinical texts and reason over it.

    On Systematically Building a Controlled Natural Language for Functional Requirements

    [Context] Natural language (NL) is pervasive in software requirements specifications (SRSs). However, despite its popularity and widespread use, NL is highly prone to quality issues such as vagueness, ambiguity, and incompleteness. Controlled natural languages (CNLs) have been proposed as a way to prevent quality problems in requirements documents, while maintaining the flexibility to write and communicate requirements in an intuitive and universally understood manner. [Objective] In collaboration with an industrial partner from the financial domain, we systematically develop and evaluate a CNL, named Rimay, intended to help analysts write functional requirements. [Method] We rely on Grounded Theory for building Rimay and follow well-known guidelines for conducting and reporting industrial case study research. [Results] Our main contributions are: (1) a qualitative methodology to systematically define a CNL for functional requirements; this methodology is intended to be general for use across information-system domains; (2) a CNL grammar to represent functional requirements; this grammar is derived from our experience in the financial domain, but should be applicable, possibly with adaptations, to other information-system domains; and (3) an empirical evaluation of our CNL (Rimay) through an industrial case study. Our contributions draw on 15 representative SRSs, collectively containing 3215 NL requirement statements from the financial domain. [Conclusion] Our evaluation shows that Rimay is expressive enough to capture, on average, 88% (405 out of 460) of the NL requirement statements in four previously unseen SRSs from the financial domain.
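
    As a minimal sketch of what conformance to one CNL sentence pattern could look like, the Python below matches requirements against a single Rimay-like template. Rimay's actual grammar is far richer; this pattern and its slot names are assumptions for illustration.

        import re

        # One hypothetical CNL pattern: "When <trigger>, the <actor> shall <action>."
        PATTERN = re.compile(
            r"^When (?P<trigger>.+?), the (?P<actor>.+?) shall (?P<action>.+)\.$"
        )

        def conforms(requirement):
            """Return the filled pattern slots if the requirement is in the CNL, else None."""
            m = PATTERN.match(requirement)
            return m.groupdict() if m else None

        print(conforms("When a payment is received, the system shall update the account balance."))
        print(conforms("The account balance is updated."))   # None: outside the CNL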