
    Bridging the Gap Between Requirements and Model Analysis: Evaluation on Ten Cyber-Physical Challenge Problems

    Formal verification and simulation are powerful tools for validating requirements against complex systems. [Problem] Requirements are developed in the early stages of the software lifecycle and are typically written in ambiguous natural language. There is a gap between such requirements and the formal notations that verification tools can use, and there is little support for properly associating requirements with the software artifacts to be verified. [Principal idea] We propose to write requirements in an intuitive, structured natural language with formal semantics, and to support formalization and model/code verification as a smooth, well-integrated process. [Contribution] We have developed an end-to-end, open-source requirements analysis framework that checks Simulink models against requirements written in structured natural language. Our framework is built into the Formal Requirements Elicitation Tool (FRET); we use FRET's requirements language, named FRETish, and the formalization of FRETish requirements in temporal logics. Our framework contributes the following features: 1) automatic extraction of Simulink model information and association of FRETish requirements with target model signals and components; 2) translation of temporal logic formulas into synchronous dataflow CoCoSpec specifications as well as Simulink monitors, to be used by verification tools; we establish the correctness of our translation through extensive automated testing; 3) interpretation of counterexamples produced by verification tools back at the requirements level. These features support a tight integration and feedback loop between high-level requirements and their analysis. We demonstrate our approach on a major case study: the ten Lockheed Martin cyber-physical, aerospace-inspired challenge problems.
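    To illustrate the kind of property such a framework checks, the following minimal Python sketch (not FRET's actual translation; the signal names are invented) evaluates a bounded-response requirement, "whenever the autopilot is engaged, altitude hold shall be achieved within 3 steps", over a finite trace:

    ```python
    def bounded_response(trace, trigger, response, bound):
        """True if every state where `trigger` holds is followed by a
        state where `response` holds within `bound` steps (inclusive)."""
        for i, state in enumerate(trace):
            if state.get(trigger):
                window = trace[i:i + bound + 1]
                if not any(s.get(response) for s in window):
                    return False
        return True

    # Hypothetical recorded trace: one "engaged" event, answered one step later.
    trace = [
        {"engaged": False, "hold": False},
        {"engaged": True,  "hold": False},
        {"engaged": False, "hold": True},
        {"engaged": False, "hold": False},
    ]
    print(bounded_response(trace, "engaged", "hold", bound=3))  # True
    ```

    In the real toolchain the requirement would be written in FRETish, formalized in temporal logic, and translated to a CoCoSpec or Simulink monitor; the sketch only mirrors the semantics of such a monitor on recorded data.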

    ARIES: Acquisition of Requirements and Incremental Evolution of Specifications

    This paper describes a requirements/specification environment specifically designed for large-scale software systems. This environment is called ARIES (Acquisition of Requirements and Incremental Evolution of Specifications). ARIES provides assistance to requirements analysts in developing operational specifications of systems. This development begins with the acquisition of informal system requirements. The requirements are then formalized and gradually elaborated (transformed) into formal and complete specifications. ARIES guides the user in validating formal requirements by translating them into natural language representations and graphical diagrams. ARIES also provides ways of analyzing the specification to ensure that it is correct, e.g., by testing the specification against a running simulation of the system to be built. Another important ARIES feature, especially when developing large systems, is the sharing and reuse of requirements knowledge, which greatly reduces duplication of effort. ARIES combines all of these features in a single environment that makes the process of capturing a formal specification quicker and easier.

    Reducing structural ambiguity in natural language software requirements specifications

    The ambiguity of natural language (NL) causes miscommunication and misunderstanding. Precision of language is particularly important in software development when handling requirements agreed between the customer and the provider. A Software Requirements Specification (SRS) is a commonly used document type for specifying requirements. No strict standard exists for how every SRS should be constructed, and thus it is often written in NL. However, restricted languages can be used for specifying requirements; one example is the Easy Approach to Requirements Syntax (EARS). This thesis presents an automated tool for reducing the structural ambiguity of requirements by converting NL into EARS form. Four different text datasets were used for testing the converter; they were compared before and after conversion and against each other. Both the performance and the ambiguity reduction of the tool were assessed using various measures. Since a standard ambiguity measurement was not available, a combination of sentence-structure assessment, word occurrences against Zipf's law, readability scores, and information complexity was used. The results suggest that the tool reduces the structural ambiguity of sentences. The tool successfully converts NL into the different EARS patterns, and the converted sentences are less complicated and more readable. This hints at the possibility of creating more automated tools to reduce ambiguity in NL SRS documents. It might not be possible to make people adopt a restricted language such as EARS for writing these documents, but with the help of automated converters, sentences can be mapped to more restricted forms that are easier to make sense of.
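    One hypothetical flavor of the conversion task can be sketched in a few lines of Python; the keyword rules below are invented for illustration and are far simpler than the thesis tool:

    ```python
    import re

    # Toy classifier: map a requirement sentence to the EARS pattern
    # its leading keyword suggests (pattern names per the EARS templates).
    EARS_PATTERNS = [
        (r"^\s*while\b", "state-driven"),
        (r"^\s*when\b", "event-driven"),
        (r"^\s*if\b", "unwanted-behaviour"),
        (r"^\s*where\b", "optional-feature"),
    ]

    def classify_ears(sentence):
        for pattern, name in EARS_PATTERNS:
            if re.search(pattern, sentence, re.IGNORECASE):
                return name
        # No keyword: an unconditional "shall" sentence is ubiquitous.
        return "ubiquitous" if " shall " in sentence else "unclassified"

    print(classify_ears("When the door opens, the system shall sound a chime."))
    # event-driven
    ```

    A full converter would also have to rewrite the sentence into the chosen template, not merely classify it; that is where most of the difficulty lies.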

    A Lightweight Multilevel Markup Language for Connecting Software Requirements and Simulations

    [Context] Simulation is a powerful tool for validating specified requirements, especially for complex systems that constantly monitor and react to characteristics of their environment. The simulators for such systems are complex themselves, as they simulate multiple actors with multiple interacting functions in a number of different scenarios. To validate requirements in such simulations, the requirements must be related to the simulation runs. [Problem] In practice, engineers are reluctant to state their requirements in terms of structured languages or models that would allow requirements to be related straightforwardly to simulation runs. Instead, the requirements are expressed as unstructured natural language text that is hard to assess across a set of complex simulation runs. Therefore, the feedback loop between requirements and simulation is very long or entirely absent. [Principal idea] We aim to close the gap between requirements specifications and simulation by proposing a lightweight markup language for requirements. Our markup language provides a set of annotations on different levels that can be applied to natural language requirements. The annotations are mapped to simulation events. As a result, meaningful information from a set of simulation runs is shown directly in the requirements specification. [Contribution] Instead of forcing the engineer to write requirements in a specific way just for the purpose of relating them to a simulator, the markup language allows annotating the already specified requirements up to a level that is interesting for the engineer. We evaluate our approach by analyzing 8 original requirements of an automotive system in a set of 100 simulation runs.
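    A minimal sketch of the underlying idea, with invented requirement IDs and event names, might look like this in Python:

    ```python
    # Hypothetical annotations: each natural-language requirement is tagged
    # with the simulation events it refers to.
    annotations = {
        "REQ-1": {"events": ["brake_request"]},
        "REQ-2": {"events": ["lane_change", "warn"]},
    }

    def summarise(run_events, annotations):
        """Map each requirement ID to the annotated events observed
        in one simulation run, for display in the specification."""
        seen = set(run_events)
        return {req: sorted(seen & set(a["events"]))
                for req, a in annotations.items()}

    run = ["warn", "brake_request", "warn"]
    print(summarise(run, annotations))
    # {'REQ-1': ['brake_request'], 'REQ-2': ['warn']}
    ```

    The actual markup language works on several annotation levels and aggregates over many runs; the sketch shows only the core mapping from annotations to observed events.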

    What Am I Testing and Where? Comparing Testing Procedures based on Lightweight Requirements Annotations

    [Context] The testing of software-intensive systems is performed in different test stages, each comprising a large number of test cases. These test cases are commonly derived from requirements. Each test stage imposes specific demands and constraints with respect to its degree of detail and what can be tested, so specific test suites are defined for each stage. In this paper, the focus is on the domain of embedded systems, where typical test stages include software-in-the-loop and hardware-in-the-loop. [Objective] Monitoring and controlling which requirements are verified in which detail and in which test stage is a challenge for engineers. However, this information is necessary to assure a certain test coverage, to minimize redundant testing procedures, and to avoid inconsistencies between test stages. In addition, engineers are reluctant to state their requirements in terms of structured languages or models that would facilitate relating requirements to test executions. [Method] With our approach, we close the gap between requirements specifications and test executions. Previously, we proposed a lightweight markup language for requirements which provides a set of annotations that can be applied to natural language requirements. The annotations are mapped to events and signals in test executions. As a result, meaningful insights from a set of test executions can be related directly to artifacts in the requirements specification. In this paper, we use the markup language to compare different test stages with one another. [Results] We annotate 443 natural language requirements of a driver assistance system using our lightweight markup language. The annotations are then linked to 1300 test executions from a simulation environment and 53 test executions from test drives with human drivers. Based on the annotations, we analyze how similar the test stages are and how well test stages and test cases are aligned with the requirements. Further, we highlight the general applicability of our approach through this extensive experimental evaluation. [Conclusion] With our approach, the results of several test stages are linked to the requirements, enabling the evaluation of complex test executions. By this means, practitioners can easily evaluate how well a system performs with regard to its specification and can reason about the expressiveness of the applied test stage.
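    The stage comparison described above can be illustrated with a toy Python example; the requirement IDs and coverage sets below are invented, not data from the study:

    ```python
    # Which requirements each test stage touched, derived (hypothetically)
    # from annotation hits in the respective test executions.
    sil_covered = {"REQ-1", "REQ-2", "REQ-3"}    # e.g. simulation runs
    drive_covered = {"REQ-2", "REQ-3", "REQ-4"}  # e.g. human test drives

    def jaccard(a, b):
        """Similarity of two coverage sets: |intersection| / |union|."""
        return len(a & b) / len(a | b) if a | b else 1.0

    print(f"stage overlap: {jaccard(sil_covered, drive_covered):.2f}")  # 0.50
    print("only in simulation:", sorted(sil_covered - drive_covered))
    ```

    Set differences like the last line point at requirements that one stage exercises but the other does not, which is exactly the alignment question the paper studies.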

    ViSpec: A graphical tool for elicitation of MTL requirements

    One of the main barriers preventing widespread use of formal methods is the elicitation of formal specifications. Formal specifications facilitate the testing and verification process for safety-critical robotic systems. However, handling the intricacies of formal languages is difficult and requires a level of expertise in formal logics that many system developers do not have. In this work, we present a graphical tool designed for the development and visualization of formal specifications by people who do not have training in formal logic. The tool enables users to develop specifications using a graphical formalism, which is then automatically translated into Metric Temporal Logic (MTL). To evaluate the effectiveness of our tool, we also designed and conducted a usability study with cohorts from the academic student community and industry. Our results indicate that both groups were able to define formal requirements with high levels of accuracy. Finally, we present applications of our tool for defining specifications for robotic surgery and safe autonomous quadcopter operation. Comment: Technical report for the paper to be published in the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems held in Hamburg, Germany. Includes 10 pages and 19 figures.
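    As a rough illustration of what a bounded MTL operator such as F_[0,b] ("eventually within b time units") means over a timestamped trace, here is a toy Python evaluator; ViSpec itself produces MTL formulas, not code, and the signal name is invented:

    ```python
    def eventually_within(trace, prop, bound, t0=0.0):
        """MTL-style check of F_[0,bound] prop on a finite trace:
        trace is a time-ordered list of (time, state_dict) samples;
        prop must hold at some sample with t0 <= t <= t0 + bound."""
        return any(state.get(prop) for t, state in trace
                   if t0 <= t <= t0 + bound)

    trace = [(0.0, {"landed": False}),
             (2.5, {"landed": False}),
             (4.0, {"landed": True})]
    print(eventually_within(trace, "landed", bound=5.0))  # True
    print(eventually_within(trace, "landed", bound=3.0))  # False
    ```

    A quadcopter safety requirement such as "after an abort command, land within 5 seconds" is exactly this shape once the abort time is taken as t0.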

    Uniform: The Form Validation Language

    Digital forms are becoming increasingly prevalent, but the ease of creating them is not keeping pace: web forms are difficult to produce and validate. This design project seeks to simplify that process. The project consists of two parts: a logical programming language (Uniform) and a web application. Uniform is a language that allows its users to define logical relationships between web elements and apply simple rules to individual inputs, both to validate the form and to manipulate its components depending on user input. Uniform provides an extra layer of abstraction over complex coding. The web app implements Uniform to provide business-level programmers with an interface to build and manage forms. Users can create form templates, manage form instances, and cooperatively complete forms through the web app. Uniform's development is ongoing; it will receive continued support and is available as open source. The web application is software owned and maintained by HP Inc. and will be developed further before going to market.
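    The declarative style such a validation language enables can be hinted at with a small Python sketch; the rule names and semantics below are invented for illustration and are not Uniform's actual syntax:

    ```python
    # Hypothetical declarative rules: each maps a name to a predicate over
    # the whole form, so rules can relate fields to one another.
    rules = {
        "email": lambda f: "@" in f.get("email", ""),
        "age": lambda f: f.get("age", 0) >= 18,
        # Cross-field rule: address required only for non-digital orders.
        "address": lambda f: f.get("digital") or bool(f.get("address")),
    }

    def validate(form):
        """Return the names of the rules the form violates."""
        return [name for name, rule in rules.items() if not rule(form)]

    form = {"email": "a@b.com", "age": 17, "digital": True}
    print(validate(form))  # ['age']
    ```

    Keeping rules as data rather than scattering them through UI code is the abstraction gain a form-validation DSL is after.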