
    Random Testing of Formal Software Models and Induced Coverage

    Abstract

    This paper presents a methodology for random testing of software models. Random testing tools can be used very effectively early in the modeling process, e.g., while writing a formal requirements specification for a given system. In this phase, users cannot yet know whether they are building a correct operational model, or whether the properties that the model must satisfy have been correctly identified and stated. It is therefore very useful to have tools that quickly expose errors in the operational model or in the properties, so that appropriate corrections can be made. Using Lurch, our random testing tool for finite-state models, we evaluated the effectiveness of random model testing by detecting manually seeded errors in an SCR specification of a real-world personnel access control system. The results are encouraging: over 80% of the seeded errors were detected quickly. We further defined and measured test coverage metrics with the goal of understanding why some of the mutants were not detected. The coverage measures allowed us to understand the pitfalls of random testing of formal models, thus identifying opportunities for future improvement.
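    The general technique the abstract describes can be illustrated with a small sketch: random walks over a finite-state model, a safety property checked on each transition, a manually seeded error (mutant), and transition coverage recorded along the way. The state machine, event names, and property below are invented for illustration; they are not the paper's SCR model, and this is not Lurch's actual interface.

    ```python
    import random

    # Toy finite-state model of a door in a personnel access control system
    # (hypothetical; for illustration only). States: 'locked', 'unlocked'.
    EVENTS = ['badge_ok', 'badge_bad', 'door_close']

    def correct_step(state, event):
        # Correct transition function: only a valid badge unlocks the door.
        if state == 'locked' and event == 'badge_ok':
            return 'unlocked'
        if state == 'unlocked' and event == 'door_close':
            return 'locked'
        return state

    def mutant_step(state, event):
        # Seeded error (mutant): an invalid badge also unlocks the door.
        if state == 'locked' and event in ('badge_ok', 'badge_bad'):
            return 'unlocked'
        if state == 'unlocked' and event == 'door_close':
            return 'locked'
        return state

    def safety(prev, event, new):
        # Property: the door never goes from locked to unlocked on a bad badge.
        return not (prev == 'locked' and event == 'badge_bad' and new == 'unlocked')

    def random_test(step, prop, runs=200, depth=20, seed=0):
        """Random walks over the model.

        Returns (violation_found, covered), where covered is the set of
        (state, event) pairs exercised -- a simple coverage metric that helps
        explain why some mutants escape detection.
        """
        rng = random.Random(seed)
        covered = set()
        for _ in range(runs):
            state = 'locked'
            for _ in range(depth):
                event = rng.choice(EVENTS)
                covered.add((state, event))
                new_state = step(state, event)
                if not prop(state, event, new_state):
                    return True, covered
                state = new_state
        return False, covered
    ```

    Running `random_test(mutant_step, safety)` finds a violation almost immediately, while `random_test(correct_step, safety)` completes all walks without one; comparing the `covered` sets of the two runs is the kind of coverage analysis the paper uses to explain undetected mutants.
    
    
    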