
    Lessons learnt from using DSLs for automated software testing

    Domain Specific Languages (DSLs) provide a means of unambiguously expressing concepts in a particular domain. Although they may not refer to it as such, companies build and maintain DSLs for software testing on a day-to-day basis, especially when they define test suites using the Gherkin language. However, even though the practice of specifying and automating test cases using the Gherkin language and related technologies such as Cucumber has become mainstream, the curation of such languages presents a number of challenges. In this paper we discuss lessons learnt from five case studies on industry systems, two involving Gherkin-type syntax and three using more rigidly defined language grammars. Initial observations indicate that the likelihood of success of such efforts increases if one adopts an approach that separates the concerns of the domain experts who curate the language, the users who write scripts with the language, and the engineers who wire the language into test automation technologies, thus producing executable test code. We also provide some insights into desirable qualities of testing DSLs in different contexts.
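
    Gherkin-style testing DSLs of the kind this abstract refers to typically map plain-language steps to executable code. The following Python fragment is a minimal sketch, not taken from the paper, of the separation of concerns it describes: a curated step vocabulary, a plain-text scenario, and glue code that binds matched steps to implementations; all names and steps are hypothetical.

```python
# Minimal sketch (illustrative, not from the paper) of a Gherkin-like DSL:
# the step vocabulary, the test script, and the automation glue are kept in
# separate pieces, roughly mirroring domain experts, test authors, and engineers.
import re

# --- curated by the domain expert: the step vocabulary of the DSL ---
STEP_PATTERNS = {
    r"the cart is empty": "reset_cart",
    r"I add (\d+) items? priced at (\d+)": "add_items",
    r"the total should be (\d+)": "assert_total",
}

# --- written by the test author: a Gherkin-like scenario ---
SCENARIO = """
the cart is empty
I add 2 items priced at 10
the total should be 20
"""

# --- wired up by the automation engineer: step implementations ---
class CartSteps:
    def reset_cart(self):
        self.total = 0

    def add_items(self, count, price):
        self.total += int(count) * int(price)

    def assert_total(self, expected):
        assert self.total == int(expected), f"expected {expected}, got {self.total}"

def run(scenario, steps):
    for line in filter(None, (l.strip() for l in scenario.splitlines())):
        for pattern, handler in STEP_PATTERNS.items():
            match = re.fullmatch(pattern, line)
            if match:
                getattr(steps, handler)(*match.groups())
                break
        else:
            raise ValueError(f"no step definition matches: {line!r}")

if __name__ == "__main__":
    run(SCENARIO, CartSteps())
    print("scenario passed")
```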

    Validation in the Software Metric Development Process

    In this paper the validation of software metrics will be examined. Two approaches will be combined: representational measurement theory and a validation network scheme. The development process of a software metric will be described, together with validities for the three phases of the metric development process. Representation axioms from measurement theory are used for both the formal and the empirical validation. The differentiation of validities according to these phases unifies several validation approaches found in the software metrics literature.
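
    The representational approach mentioned here rests on the representation condition: the ordering a metric imposes on software entities must agree with the empirical ordering observed in the domain. A minimal sketch of checking one direction of that condition, using hypothetical metric values and hypothetical expert judgements, might look like this in Python:

```python
# Minimal sketch (illustrative, not from the paper) of checking one direction
# of the representation condition from measurement theory: whenever entity x is
# judged at least as complex as entity y, the metric must not rank x below y.
# All data below are hypothetical.

# Hypothetical values assigned by a candidate software metric.
metric = {"mod_a": 12.0, "mod_b": 7.5, "mod_c": 7.5, "mod_d": 3.0}

# Hypothetical empirical judgements: (x, y) means "x is at least as complex as y".
empirical_order = {
    ("mod_a", "mod_b"), ("mod_a", "mod_c"), ("mod_a", "mod_d"),
    ("mod_b", "mod_d"), ("mod_c", "mod_d"),
    ("mod_b", "mod_c"), ("mod_c", "mod_b"),   # judged equally complex
}

def representation_violations(metric, empirical_order):
    """Return the empirically ordered pairs that the metric contradicts."""
    return [(x, y) for x, y in empirical_order if metric[x] < metric[y]]

if __name__ == "__main__":
    bad = representation_violations(metric, empirical_order)
    print("representation condition holds" if not bad else f"violated for {bad}")
```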

    An Approach for the Empirical Validation of Software Complexity Measures

    Software metrics are widely accepted tools for controlling and assuring software quality. A large number of software metrics covering a variety of properties can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to practical needs or lack support, and a major reason behind this is improper empirical validation. This paper tries to identify possible root causes of the improper empirical validation of software metrics. A practical model for the empirical validation of software metrics is proposed, along with the root causes. The model is validated by applying it to recently proposed and well-known metrics.
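
    A common ingredient of empirical validation, though not necessarily the model proposed in this paper, is to test whether a metric's values are associated with an externally observable quality attribute such as defect counts. A self-contained sketch with hypothetical data:

```python
# Illustrative sketch (not the model proposed in the paper): check whether a
# complexity metric's values correlate with defect counts per module, using a
# Spearman rank correlation computed with the standard library only.
# The data below are hypothetical.

def ranks(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical metric values and defect counts per module.
complexity = [4, 9, 15, 7, 22, 3]
defects    = [1, 2,  5, 1,  7, 0]
print(f"Spearman correlation: {spearman(complexity, defects):.2f}")
```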

    Software unit testing in Ada environment

    A validation procedure for the Ada binding of the Graphical Kernel System (GKS) is being developed. PRIOR Data Sciences is also producing a version of the GKS written in Ada. These major software engineering projects will provide an opportunity to demonstrate a sound approach to software testing in an Ada environment. The GKS/Ada validation capability will be a collection of test programs and data, together with test management guidelines. These products will be used to assess the correctness, completeness, and efficiency of any GKS/Ada implementation. GKS/Ada developers will be able to obtain the validation software for their own use. It is anticipated that this validation software will eventually be taken over by an independent standards body to provide objective assessments of GKS/Ada implementations, using an approach similar to the validation testing currently applied to Ada compilers. In the meantime, if requested, this validation software will be used to assess GKS/Ada products. The second project, implementation of GKS using the Ada language, is a conventional software engineering task. It represents a large body of Ada code and poses some interesting problems associated with the automatic testing of graphics routines. Here the normal test practices will be employed, including automated regression testing, independent quality assurance, test configuration management, and the application of software quality metrics. The software testing methods emphasize quality enhancement and automated procedures. Ada makes some aspects of testing easier and introduces some concerns. These issues are addressed.
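
    One way to automate regression testing of graphics routines, as an illustration rather than a description of the GKS/Ada test suite, is to capture the output a routine produces and compare it with approved reference output within a numeric tolerance. A hypothetical Python sketch:

```python
# Illustrative sketch (not the GKS/Ada test suite): regression-test a graphics
# routine by comparing the coordinates it emits against stored reference output
# within a numeric tolerance. All names and data here are hypothetical.
import math

def draw_polyline(points, scale=1.0):
    """Hypothetical routine under test: maps points to device coordinates."""
    return [(x * scale, y * scale) for x, y in points]

REFERENCE = [(0.0, 0.0), (10.0, 5.0), (20.0, 5.0)]   # stored from an approved run

def matches_reference(actual, reference, tol=1e-6):
    if len(actual) != len(reference):
        return False
    return all(math.isclose(ax, rx, abs_tol=tol) and math.isclose(ay, ry, abs_tol=tol)
               for (ax, ay), (rx, ry) in zip(actual, reference))

if __name__ == "__main__":
    result = draw_polyline([(0, 0), (2, 1), (4, 1)], scale=5.0)
    assert matches_reference(result, REFERENCE), f"regression: {result}"
    print("graphics regression check passed")
```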

    Validating plans with continuous effects

    A critical element in the use of PDDL2.1, the modelling language developed for the International Planning Competition series, has been the common understanding of the semantics of the language. The fact that these semantics have been implemented in plan validation software was vital to the progress of the competition. However, the validation of plans using actions with continuous effects presents new challenges (which precede the challenges presented by planning with those effects). In this paper we review the need for continuous effects, their semantics, and the problems that arise in validating plans that include them. We report our progress in implementing the semantics in an extended version of the plan validation software.
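
    As an illustration of the kind of check a plan validator must perform (a sketch under stated assumptions, not the validator discussed in the paper), consider an action with a linear continuous effect on a numeric quantity and an invariant that must hold throughout its duration; for linear change and a linear invariant it suffices to check the interval endpoints:

```python
# Illustrative sketch (not the paper's validator): check that a durative action
# with a linear continuous effect respects a numeric invariant over its whole
# duration. For strictly linear change and a linear invariant, checking the two
# endpoints of the interval is sufficient. Names and values are hypothetical.

def fuel_level(start_fuel, rate, t):
    """Fuel as a linear function of time within the action's interval."""
    return start_fuel + rate * t

def invariant_holds_over(start_fuel, rate, duration, minimum=0.0):
    """Check (fuel >= minimum) at both endpoints of the durative action."""
    return all(fuel_level(start_fuel, rate, t) >= minimum for t in (0.0, duration))

if __name__ == "__main__":
    # A 'fly' action consuming 2 fuel units per time unit over 30 time units.
    print(invariant_holds_over(start_fuel=70.0, rate=-2.0, duration=30.0))  # True
    print(invariant_holds_over(start_fuel=50.0, rate=-2.0, duration=30.0))  # False
```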

    A systematic approach to the Planck LFI end-to-end test and its application to the DPC Level 1 pipeline

    The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment, which must adhere strictly to the project schedule in order to be ready for launch and flight operations. To guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software has followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the verification and validation of the software. The purpose of this work is to show an example of procedures, test development and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches have been used to test the scientific and housekeeping data processing. Scientific data processing has been tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and to reproduce nominal conditions as closely as possible. For the housekeeping telemetry processing, validation software has been developed to inject known parameter values into a set of real housekeeping packets and to compare them with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, in which the on-board and ground processing are viewed as a single pipeline, we demonstrate that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements.
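
    The housekeeping validation strategy described here, injecting known parameter values and comparing the decoded timeline against them, can be sketched as follows; the packet layout and decoder are hypothetical stand-ins, not the LFI Level 1 code:

```python
# Illustrative sketch (not the LFI-DPC software): inject known housekeeping
# parameter values into packets, run them through a decoding stage, and compare
# the resulting timeline with the injected reference values.
import struct

def encode_packet(timestamp, value):
    """Hypothetical HK packet: 4-byte timestamp + 4-byte parameter value."""
    return struct.pack(">If", timestamp, value)

def decode_timeline(packets):
    """Hypothetical decoding stage: packets -> (time, value) timeline."""
    return [struct.unpack(">If", p) for p in packets]

if __name__ == "__main__":
    injected = [(0, 20.5), (1, 20.7), (2, 20.6)]           # known reference values
    packets = [encode_packet(t, v) for t, v in injected]
    timeline = decode_timeline(packets)
    mismatches = [(ref, out) for ref, out in zip(injected, timeline)
                  if ref[0] != out[0] or abs(ref[1] - out[1]) > 1e-3]
    print("HK validation passed" if not mismatches else f"mismatches: {mismatches}")
```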

    Simulation verification techniques study: Simulation performance validation techniques document

    Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for the comparison and evaluation of simulation data are included. Vehicle subsystem modules, module integration, special test requirements, and reference data formats are also described.
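
    The comparison and evaluation of simulation data against check cases can be illustrated with a small sketch; the variables, reference values, and tolerances below are hypothetical and not taken from the document:

```python
# Illustrative sketch (not the study's software): compare simulator output for
# one check case against reference data, with a tolerance defined per variable.
# Variable names, values, and tolerances are hypothetical.
REFERENCE = {"altitude_m": 12000.0, "velocity_mps": 240.0, "pitch_deg": 3.5}
TOLERANCE = {"altitude_m": 50.0,    "velocity_mps": 2.0,   "pitch_deg": 0.2}

def evaluate_check_case(simulated, reference=REFERENCE, tolerance=TOLERANCE):
    """Return a per-variable pass/fail report for one check case."""
    report = {}
    for name, ref in reference.items():
        error = abs(simulated[name] - ref)
        report[name] = ("PASS" if error <= tolerance[name] else "FAIL", error)
    return report

if __name__ == "__main__":
    sim_output = {"altitude_m": 12031.0, "velocity_mps": 239.1, "pitch_deg": 3.8}
    for name, (verdict, err) in evaluate_check_case(sim_output).items():
        print(f"{name:>14}: {verdict} (error = {err:.2f})")
```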