77,045 research outputs found

    Automatic Generation of Test Cases and Automatic Evaluation of Test Results Based on Formal Specifications

    Software testing is a time-consuming activity, and automatic testing is a desirable solution to this problem. In this paper, we describe a software tool supporting automatic test case generation and automatic test result evaluation based on formal specifications written in the Structured Object-oriented Formal Language (SOFL). We discuss the algorithms for generating test cases from atomic predicates and their conjunctions, which may involve various operations on data items of various data types such as set, sequence, and composite types. We describe the details of the software tool by presenting the four major functions implemented: (1) editing a SOFL specification, (2) generating test cases from the specification, (3) managing test cases in files, and (4) evaluating test results. We also present an experiment showing that our tool can significantly save time in test case generation.
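
    Generating test data from atomic predicates and their conjunctions can be pictured with a short sketch. The code below is not the SOFL tool itself: the predicate representation and function names are assumptions for illustration, it handles only integer-valued predicates (the real tool also covers set, sequence, and composite types), and it simply picks boundary-oriented values and keeps those that satisfy every conjunct.

    def candidates_for(op, bound):
        """Candidate integer values around the relational predicate 'x op bound'."""
        if op == ">":
            return [bound + 1, bound + 2]
        if op == ">=":
            return [bound, bound + 1]
        if op == "<":
            return [bound - 1, bound - 2]
        if op == "<=":
            return [bound, bound - 1]
        if op == "==":
            return [bound]
        raise ValueError(f"unsupported operator: {op}")

    def satisfies(value, op, bound):
        return {">": value > bound, ">=": value >= bound,
                "<": value < bound, "<=": value <= bound,
                "==": value == bound}[op]

    def test_cases_for_conjunction(predicates):
        """predicates: list of (op, bound) atomic predicates over one variable x.
        Returns boundary-oriented values of x satisfying every conjunct."""
        pool = sorted({v for op, b in predicates for v in candidates_for(op, b)})
        return [v for v in pool if all(satisfies(v, op, b) for op, b in predicates)]

    # Example: precondition  x > 0 and x <= 10
    print(test_cases_for_conjunction([(">", 0), ("<=", 10)]))   # [1, 2, 9, 10]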

    Architecting specifications for test case generation

    The Specification and Description Language (SDL), together with its associated tool sets, can be used for the generation of Tree and Tabular Combined Notation (TTCN) test cases. Surprisingly, little documentation exists on the optimal way to specify systems so that they can best be used for the generation of tests. This paper elaborates on the different tool-supported approaches that can be taken for test case generation and highlights their advantages and disadvantages. A rule-based SDL specification style is then presented that facilitates the automatic generation of tests.

    Metamodel-based model conformance and multiview consistency checking

    Model-driven development, using languages such as UML and BON, often makes use of multiple diagrams (e.g., class and sequence diagrams) when modeling systems. These diagrams, presenting different views of a system of interest, may be inconsistent. A metamodel provides a unifying framework in which to ensure and check consistency, while at the same time providing the means to distinguish between valid and invalid models, that is, conformance. Two formal specifications of the metamodel for an object-oriented modeling language are presented, and it is shown how to use these specifications for model conformance and multiview consistency checking. Comparisons are made in terms of completeness and the level of automation each provides for checking multiview consistency and model conformance. The lessons learned from applying formal techniques to the problems of metamodeling, model conformance, and multiview consistency checking are summarized.
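
    One concrete multiview consistency rule makes the idea tangible: every message in a sequence diagram should name an operation declared on the receiving class in the class diagram. The sketch below is an assumed, heavily simplified rule for illustration only; it does not reproduce the paper's formal metamodel.

    from dataclasses import dataclass, field

    @dataclass
    class ClassModel:
        name: str
        operations: set = field(default_factory=set)

    @dataclass
    class Message:
        receiver: str   # class name in the sequence diagram
        operation: str  # operation the message invokes

    def check_consistency(classes, messages):
        """Return violations of the message-to-operation consistency rule."""
        by_name = {c.name: c for c in classes}
        violations = []
        for m in messages:
            cls = by_name.get(m.receiver)
            if cls is None:
                violations.append(f"message sent to unknown class {m.receiver}")
            elif m.operation not in cls.operations:
                violations.append(f"{m.receiver} declares no operation {m.operation}()")
        return violations

    classes = [ClassModel("Account", {"deposit", "withdraw"})]
    messages = [Message("Account", "deposit"), Message("Account", "close")]
    print(check_consistency(classes, messages))  # ["Account declares no operation close()"]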

    Automatic Software Repair: a Bibliography

    This article presents a survey on automatic software repair. Automatic software repair consists of automatically finding a solution to software bugs without human intervention. This article considers all kinds of repairs. First, it discusses behavioral repair, where test suites, contracts, models, and crashing inputs are taken as oracles. Second, it discusses state repair, also known as runtime repair or runtime recovery, with techniques such as checkpoint and restart, reconfiguration, and invariant restoration. The uniqueness of this article is that it spans the research communities that contribute to this body of knowledge: software engineering, dependability, operating systems, programming languages, and security. It provides a novel and structured overview of the diversity of bug oracles and repair operators used in the literature.
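
    The notion of a test suite acting as the oracle in behavioral repair can be summarised as a generate-and-validate loop. The sketch below is a toy illustration of that loop, not any particular repair tool; fault localisation and patch synthesis are abstracted into a plain list of candidate patch functions, which are assumptions made here for brevity.

    def run_tests(program, tests):
        """Oracle: a candidate is accepted only if every test case passes."""
        return all(program(inp) == expected for inp, expected in tests)

    def repair(buggy_program, candidate_patches, tests):
        if run_tests(buggy_program, tests):
            return buggy_program               # nothing to fix
        for patch in candidate_patches:
            patched = patch(buggy_program)
            if run_tests(patched, tests):
                return patched                 # first fix validated by the oracle
        return None                            # no candidate survives the test suite

    # Toy example: a faulty absolute-value function and one candidate patch.
    buggy = lambda x: x if x > 0 else -x - 1
    patches = [lambda p: (lambda x: x if x >= 0 else -x)]
    tests = [(3, 3), (0, 0), (-2, 2)]
    print(repair(buggy, patches, tests) is not None)  # True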

    Automating property-based testing of evolving web services

    Web services are the most widely used service technology that drives the Service-Oriented Computing (SOC) paradigm. As a result, effective testing of web services is becoming increasingly important. In this paper, we present a framework and toolset for testing web services and for evolving test code in sync with the evolution of web services. Our approach to testing web services is based on the Erlang programming language and QuviQ QuickCheck, a property-based testing tool written in Erlang, and our support for test code evolution is added to Wrangler, the Erlang refactoring tool. The key components of our system include the automatic generation of initial test code, the inference of web service interface changes between versions, the provision of a number of domain-specific refactorings, and the automatic generation of refactoring scripts for evolving the test code. Our framework provides users with a powerful and expressive web service testing framework, while minimising users' effort in creating, maintaining and evolving the test model. The framework presented in this paper can be used by both web service providers and consumers, and can be used to test web services written in any language; the approach advocated here could also be adopted in other property-based testing frameworks and refactoring tools.
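
    The framework itself is built on Erlang and QuviQ QuickCheck; the sketch below only shows the general shape of a property-based test, written in Python with the Hypothesis library, against an invented shopping-cart operation and a hand-written client stub. It illustrates how a property over generated inputs, rather than a fixed example, drives the test.

    from hypothesis import given, strategies as st

    class CartServiceStub:
        """Stand-in for a generated web service client (invented for illustration)."""
        def __init__(self):
            self.items = []
        def add_item(self, name, quantity):
            self.items.append((name, quantity))
            return sum(q for _, q in self.items)  # service reports the running total

    @given(st.lists(st.tuples(st.text(min_size=1),
                              st.integers(min_value=1, max_value=100))))
    def test_total_quantity_matches_sum_of_additions(additions):
        # Property: after any sequence of add_item calls, the reported total
        # equals the sum of the quantities sent so far.
        service = CartServiceStub()
        expected_total = 0
        reported_total = 0
        for name, qty in additions:
            expected_total += qty
            reported_total = service.add_item(name, qty)
        assert reported_total == expected_total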

    Generating collaborative systems for digital libraries: A model-driven approach

    The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the final user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated using the cognitive dimensions framework.
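
    The generation step behind such a model-driven approach can be pictured with a very small sketch: a declarative description of a collection is turned into service code automatically. The model format and the generated API below are invented for illustration and do not reproduce CRADLE's metamodel or visual language.

    COLLECTION_MODEL = {
        "name": "TechnicalReports",
        "fields": ["title", "author", "year"],
        "services": ["search", "browse"],
    }

    def generate_service_stub(model):
        """Emit Python source for simple per-field services from the model."""
        lines = [f"class {model['name']}Service:"]
        for service in model["services"]:
            for field in model["fields"]:
                lines.append(f"    def {service}_by_{field}(self, value):")
                lines.append(f"        return [r for r in self.records if r['{field}'] == value]")
        return "\n".join(lines)

    print(generate_service_stub(COLLECTION_MODEL))  # prints the generated class skeleton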

    Automated Fixing of Programs with Contracts

    This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix supporting tool is then a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault, ranked according to an estimate of their suitability. In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those competent programmers would write; the computational resources used were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing, and demonstrate that the AutoFix approach is successfully applicable to reduce the debugging burden in real-world scenarios.
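
    AutoFix itself operates on Eiffel code with native contract support; the sketch below only illustrates, in Python and under that caveat, how pre- and postconditions attached to a routine serve both as fault detectors and as the oracle against which candidate fixes are validated. The decorator and the example routine are assumptions introduced here for illustration.

    def contract(pre, post):
        """Attach a precondition and a postcondition to a routine."""
        def wrap(fn):
            def checked(*args):
                assert pre(*args), "precondition violated"
                result = fn(*args)
                assert post(result, *args), "postcondition violated"
                return result
            return checked
        return wrap

    @contract(pre=lambda xs: len(xs) > 0,
              post=lambda result, xs: result in xs and all(result >= x for x in xs))
    def maximum(xs):
        best = xs[0]
        for x in xs[1:]:
            if x > best:
                best = x
        return best

    print(maximum([3, 7, 2]))  # 7; a faulty body would trip the postcondition instead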