
    Lessons learnt from using DSLs for automated software testing

    Domain Specific Languages (DSLs) provide a means of unambiguously expressing concepts in a particular domain. Although they may not refer to them as such, companies build and maintain DSLs for software testing on a day-to-day basis, especially when they define test suites using the Gherkin language. However, although the practice of specifying and automating test cases using the Gherkin language and related technologies such as Cucumber has become mainstream, the curation of such languages presents a number of challenges. In this paper we discuss lessons learnt from five case studies on industry systems, two involving Gherkin-type syntax and three using more rigidly defined language grammars. Initial observations indicate that the likelihood of success of such efforts increases if one adopts an approach which separates the concerns of domain experts who curate the language, users who write scripts with the language, and engineers who wire the language into test automation technologies, thus producing executable test code. We also provide some insights into desirable qualities of testing DSLs in different contexts.
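
    As a rough illustration of the separation of concerns described above, the following Python sketch keeps the three artefacts apart: the Gherkin-style scenario text curated in the domain language, the wiring layer that maps step patterns to functions, and the automation code written by test engineers. The scenario, step patterns and the tiny "system under test" are all hypothetical; real setups would use Cucumber or a similar framework rather than this hand-rolled runner.

```python
import re

# 1. Scenario text, curated in the domain language (Gherkin-style, hypothetical).
SCENARIO = """
Given a registered user "alice"
When the user logs in with password "secret"
Then the login succeeds
"""

# 2. Wiring layer: maps step patterns to automation functions.
STEP_DEFINITIONS = []

def step(pattern):
    """Register an automation function for a Gherkin-style step."""
    def decorator(func):
        STEP_DEFINITIONS.append((re.compile(pattern), func))
        return func
    return decorator

# 3. Automation code (hypothetical system under test and shared context).
USERS = {"alice": "secret"}
context = {}

@step(r'Given a registered user "(\w+)"')
def given_user(name):
    context["user"] = name

@step(r'When the user logs in with password "(\w+)"')
def when_login(password):
    context["logged_in"] = USERS.get(context["user"]) == password

@step(r'Then the login succeeds')
def then_success():
    assert context["logged_in"], "expected login to succeed"

def run(scenario):
    """Execute each non-empty scenario line by matching it against a step definition."""
    for line in (raw.strip() for raw in scenario.splitlines()):
        if not line:
            continue
        for pattern, func in STEP_DEFINITIONS:
            match = pattern.fullmatch(line)
            if match:
                func(*match.groups())
                break
        else:
            raise ValueError(f"No step definition for: {line}")

if __name__ == "__main__":
    run(SCENARIO)
    print("Scenario passed")
```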

    Pilot3 D5.2 - Verification and validation report

    The deliverable provides the outcomes of the verification and validation activities carried out during the course of work package 5 of the Pilot3 project, according to the verification and validation plan defined in deliverable D5.1 (Pilot3 Consortium, 2020c). Firstly, it presents the main results of the verification activities performed during the development and testing of the different software versions. Then, this deliverable reports on the results of internal and external validation activities, which aimed to demonstrate the operational benefit of the Pilot3 tool, assessing the research questions and hypotheses that were defined at the beginning of the project. The Agile principle adopted in the project, together with the five-level hierarchy approach to the definition of scenarios and case studies, enabled flexibility and tractability in the selection of experiments across the different versions of prototype development. As a result of this iterative development of the tool, some of the research questions initially defined have been revisited to better reflect the validation results. The deliverable also reports the feedback received from the experts during the internal and external meetings, workshops and dedicated (on-line) site visits. During the validation campaign, both subjective qualitative information and objective quantitative data were collected and analysed to assess the Pilot3 tool. The document also summarises the results of the survey distributed to the external experts to assess the human-machine interface (HMI) mock-up developed in the project.

    A unified approach for static and runtime verification : framework and applications

    Static verification of software is becoming ever more effective and efficient. Still, static techniques either have high precision, in which case powerful judgements are hard to achieve automatically, or they use abstractions supporting increased automation, but possibly losing important aspects of the concrete system in the process. Runtime verification has complementary strengths and weaknesses. It combines full precision of the model (including the real deployment environment) with full automation, but cannot judge future and alternative runs. Another drawback of runtime verification can be the computational overhead of monitoring the running system which, although typically not very high, can still be prohibitive in certain settings. In this paper we propose a framework to combine static analysis techniques and runtime verification with the aim of getting the best of both. In particular, we discuss an instantiation of our framework for the deductive theorem prover KeY and the runtime verification tool Larva. Apart from combining static and dynamic verification, this approach also combines the data-centric analysis of KeY with the control-centric analysis of Larva. An advantage of the approach is that, through the use of a single specification usable by both analysis techniques, expensive parts of the analysis can be moved to the static phase, allowing the runtime monitor to make significant assumptions and drop parts of expensive checks at runtime. We also discuss specific applications of our approach.
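
    The division of labour sketched above can be illustrated in a few lines of Python. This is a toy, not the KeY/Larva toolchain: the property names and the set of statically proven properties are invented, and the point is only that properties discharged by the static phase are dropped from the runtime monitor, so only the residual checks incur overhead.

```python
# Toy illustration of combining static and runtime verification: one
# specification, two consumers. Property names and the "statically proven"
# set are hypothetical; real tools work on far richer specifications.

SPECIFICATION = {
    # property name -> predicate over the system state
    "balance_non_negative": lambda state: state["balance"] >= 0,
    "withdraw_requires_login": lambda state: not state["withdrawing"] or state["logged_in"],
}

# Suppose the static phase proved this property for all runs,
# so the runtime monitor may assume it and skip the check.
STATICALLY_PROVEN = {"balance_non_negative"}

RESIDUAL_CHECKS = {
    name: check for name, check in SPECIFICATION.items()
    if name not in STATICALLY_PROVEN
}

def monitor(state):
    """Runtime monitor: evaluates only the properties not discharged statically."""
    for name, check in RESIDUAL_CHECKS.items():
        if not check(state):
            raise AssertionError(f"Runtime violation of property: {name}")

# Example run of the monitored system.
state = {"balance": 100, "logged_in": True, "withdrawing": True}
monitor(state)            # passes: the single residual check holds
state["logged_in"] = False
try:
    monitor(state)        # fails: withdrawal attempted without login
except AssertionError as err:
    print(err)
```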

    Pilot3 D5.1 - Verification and validation plan

    This deliverable provides the methodological framework which will enable the execution of the verification and validation activities. The actions defined within the framework plan will support the incremental development of the prototype based on the Agile paradigm. Verification comprises all activities that will ensure thorough testing of the different prototype versions, while validation will assess the working hypotheses addressing the operational benefits of the tool. The validation campaign will be carried out primarily through interaction with internal and external experts to capture their feedback. The deliverable presents the five-level hierarchy approach to the definition of experiments (scenarios and case studies) that ensures flexibility and tractability in their selection across the different versions of prototype development. The deliverable also details the organisation and schedule of the internal and external meetings, workshops and dedicated activities, along with the specification of the questionnaires, flow-type diagrams and other instruments which aim to facilitate the validation assessments.

    Dispatcher3 D5.1 - Verification and validation plan

    In this deliverable, we present a verification and validation plan designed to carry out all necessary activities along Dispatcher3 prototype development. Given the nature of the project, the deliverable adopts a data-centric approach to machine learning that treats the training and testing models as an important production asset, together with the algorithm and infrastructure used throughout the development. The verification and validation activities are presented in the document. The proposed framework will support the incremental development of the prototype based on an iterative development paradigm. The core of the verification and validation approach is structured around three inter-related phases, namely data acquisition and preparation, predictive model development and advisory generator model development, which are combined iteratively and in close coordination with the experts from the consortium and the Advisory Board. For each individual phase, a set of verification and validation activities will be performed to maximise the benefits of Dispatcher3. Thus, the methodological framework proposed in this deliverable attempts to address the specificities of verification and validation in the domain of machine learning, as it differs from canonical approaches, which are typically based on standardised procedures, and in the domain of the final prospective model. This means that the verification and validation of the machine learning models will also be considered part of the model development, since the tailoring and enhancement of the models rely heavily on the verification and validation results. The deliverable provides an approach to the definition of preliminary case studies that ensures flexibility and tractability in their selection across the development of the different machine learning models. The deliverable finally details the organisation and schedule of the internal and external meetings, workshops and dedicated activities, along with the specification of the questionnaires, flow-type diagrams and other tools and platforms which aim to facilitate the validation assessments, with a special focus on the predictive and prospective models.
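
    A minimal sketch of the kind of verification and validation gate described above is given below; the field names, metrics and thresholds are placeholders rather than Dispatcher3 values. Each development iteration proceeds only if the data-preparation checks and the predictive-model checks both pass, reflecting the idea that verification and validation are part of the model development loop.

```python
# Illustrative verification gates for an iterative, data-centric ML workflow.
# Thresholds, field names and metrics are placeholders, not Dispatcher3 values.

REQUIRED_FIELDS = {"flight_id", "departure_delay", "fuel_uplift"}
ACCURACY_THRESHOLD = 0.80

def verify_data(records):
    """Data acquisition/preparation check: required fields present, no missing values."""
    problems = []
    for i, record in enumerate(records):
        missing = REQUIRED_FIELDS.difference(record)
        if missing:
            problems.append(f"record {i} missing fields: {sorted(missing)}")
        elif any(value is None for value in record.values()):
            problems.append(f"record {i} contains null values")
    return problems

def validate_model(predictions, targets):
    """Predictive-model check: simple accuracy against a held-out set."""
    correct = sum(p == t for p, t in zip(predictions, targets))
    return correct / len(targets)

def development_iteration(records, predictions, targets):
    """One iteration: fail fast on data problems, then gate on model quality."""
    data_problems = verify_data(records)
    if data_problems:
        return False, data_problems
    accuracy = validate_model(predictions, targets)
    if accuracy < ACCURACY_THRESHOLD:
        return False, [f"accuracy {accuracy:.2f} below threshold {ACCURACY_THRESHOLD}"]
    return True, [f"accuracy {accuracy:.2f}: iteration accepted"]

if __name__ == "__main__":
    # Prepared input data plus held-out predictions/targets (all invented).
    records = [{"flight_id": "AB123", "departure_delay": 5, "fuel_uplift": 1200}]
    ok, messages = development_iteration(records, predictions=[1, 0, 1, 1], targets=[1, 0, 0, 1])
    print(ok, messages)
```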

    Verification issues for rule-based expert systems

    Verification and validation of expert systems is very important for the future success of this technology. Software will never be used in non-trivial applications unless program developers can assure both users and managers that the software is reliable and generally free from error. Therefore, verification and validation of expert systems must be performed. The primary hindrance to effective verification and validation is the use of methodologies which do not produce testable requirements. An extension of the flight technique panels used in previous NASA programs should provide both documented requirements and very high levels of verification for expert systems.
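
    One concrete flavour of the verification problem for rule-based systems is detecting anomalies in the rule base itself. The sketch below, an invented example rather than anything from the paper, flags pairs of rules whose conditions overlap but whose conclusions contradict each other.

```python
# Illustrative rule-base anomaly check: report rule pairs that can fire on the
# same facts yet reach contradictory conclusions. The rules are hypothetical.

RULES = [
    # (name, set of condition facts, (conclusion, truth value))
    ("R1", {"pressure_high", "valve_open"}, ("shutdown", True)),
    ("R2", {"pressure_high"},               ("shutdown", False)),
    ("R3", {"temperature_low"},             ("heater_on", True)),
]

def contradictions(rules):
    """Find rule pairs where one condition set subsumes the other
    but the rules assert opposite values for the same conclusion."""
    found = []
    for i, (name1, cond1, (concl1, val1)) in enumerate(rules):
        for name2, cond2, (concl2, val2) in rules[i + 1:]:
            overlap = cond1 <= cond2 or cond2 <= cond1
            if overlap and concl1 == concl2 and val1 != val2:
                found.append((name1, name2, concl1))
    return found

for rule_a, rule_b, conclusion in contradictions(RULES):
    print(f"Rules {rule_a} and {rule_b} contradict each other on '{conclusion}'")
```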

    A verification library for multibody simulation software

    A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on the RRC robot arm and CASE backhoe validations and on a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamics simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.
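
    The cross-checking step can be pictured with a short sketch; the program names, reference values and tolerance below are placeholders for the kind of test and validation data such a library would manage. Results from different simulation programs on the same representative problem are compared against stored reference values within a tolerance.

```python
# Illustrative cross-check of multibody simulation results against reference data.
# Numbers and program names are placeholders, not actual DADS/DISCOS/CONTOPS output.

import math

# Reference solution for a representative problem (e.g. a joint coordinate over time).
REFERENCE = {0.0: 0.000, 0.1: 0.012, 0.2: 0.047, 0.3: 0.101}

# Results produced by two different simulation programs on the same problem.
RESULTS = {
    "program_A": {0.0: 0.000, 0.1: 0.012, 0.2: 0.046, 0.3: 0.102},
    "program_B": {0.0: 0.000, 0.1: 0.015, 0.2: 0.060, 0.3: 0.130},
}

TOLERANCE = 5e-3

def cross_check(results, reference, tol):
    """Report, per program, the time points whose values deviate beyond the tolerance."""
    report = {}
    for program, values in results.items():
        failures = [t for t, ref in reference.items()
                    if not math.isclose(values[t], ref, abs_tol=tol)]
        report[program] = failures
    return report

for program, failures in cross_check(RESULTS, REFERENCE, TOLERANCE).items():
    status = "PASS" if not failures else f"FAIL at t = {failures}"
    print(f"{program}: {status}")
```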

    Verification and validation of models

    Keywords: simulation models; econometrics

    Verification and Validation of Semantic Annotations

    In this paper, we propose a framework to perform verification and validation of semantically annotated data. The annotations, extracted from websites, are verified against the schema.org vocabulary and Domain Specifications to ensure the syntactic correctness and completeness of the annotations. The Domain Specifications allow checking the compliance of annotations against corresponding domain-specific constraints. The validation mechanism will detect errors and inconsistencies between the content of the analyzed schema.org annotations and the content of the web pages where the annotations were found. (Accepted for the proceedings of the A.P. Ershov Informatics Conference 2019, the 12th edition of the PSI Conference Series.)
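
    The verification step can be sketched as checking an extracted schema.org annotation against a domain specification that lists the properties a conforming annotation must provide. This is a minimal illustration: the required-property list and the sample annotation are invented, and real Domain Specifications also constrain value types and ranges.

```python
# Minimal illustration of verifying a schema.org annotation against a domain
# specification. The required-property list and the sample annotation are invented.

DOMAIN_SPECIFICATION = {
    # expected @type -> properties a conforming annotation must provide
    "Hotel": {"name", "address", "checkinTime", "checkoutTime"},
}

def verify_annotation(annotation):
    """Return a list of problems: unknown type or missing required properties."""
    annotation_type = annotation.get("@type")
    required = DOMAIN_SPECIFICATION.get(annotation_type)
    if required is None:
        return [f"type '{annotation_type}' not covered by the domain specification"]
    missing = required.difference(annotation)
    return [f"missing required property: {prop}" for prop in sorted(missing)]

annotation = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Alpine Example Hotel",
    "address": "1 Example Street",
}

for problem in verify_annotation(annotation):
    print(problem)
```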

    Simulation verification techniques study

    Results are summarized of the simulation verification techniques study, which consisted of two tasks: developing techniques for simulator hardware checkout and developing techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), a survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data for validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of the verification database impact. An annotated bibliography of all documents generated during this study is provided.
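
    The performance verification task can be illustrated with a short sketch in which measured values of critical performance parameters are compared against established standards within per-parameter tolerances; the parameters, reference values and tolerances below are placeholders, not data from the study.

```python
# Illustrative performance verification: compare measured simulator parameters
# against reference standards within per-parameter tolerances. All values are
# placeholders, not data from the study.

# parameter -> (reference standard, allowed tolerance)
PERFORMANCE_STANDARDS = {
    "frame_time_ms":      (16.7, 1.0),
    "attitude_error_deg": (0.00, 0.05),
    "altitude_error_ft":  (0.0, 10.0),
}

MEASURED = {
    "frame_time_ms": 17.2,
    "attitude_error_deg": 0.03,
    "altitude_error_ft": 14.5,
}

def verify_performance(measured, standards):
    """Return (parameter, measured, reference, tolerance) for each out-of-tolerance parameter."""
    failures = []
    for parameter, (reference, tolerance) in standards.items():
        value = measured[parameter]
        if abs(value - reference) > tolerance:
            failures.append((parameter, value, reference, tolerance))
    return failures

failures = verify_performance(MEASURED, PERFORMANCE_STANDARDS)
if not failures:
    print("All critical performance parameters within tolerance")
for parameter, value, reference, tolerance in failures:
    print(f"{parameter}: measured {value} vs reference {reference} (tolerance {tolerance})")
```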