4 research outputs found

    Testing Systems of Concurrent Black-boxes—an Automata-Theoretic and Decompositional Approach

    The global testing problem studied in this paper is to seek a definite answer to whether a system of concurrent black-boxes has an observable behavior in a given finite (but possibly huge) set. We introduce a novel approach that solves the problem without requiring integration testing. Instead, in our approach, the global testing problem is reduced to testing the individual black-boxes in the system one by one, in some given order. Using an automata-theoretic approach, test sequences for each individual black-box are generated from the system's description as well as from the test results of the black-boxes that precede it in the given order. In contrast to conventional compositional/modular verification and testing approaches, our approach is essentially decompositional. Our approach is also complete, sound, and can be carried out automatically. Our experimental results show that the total number of unit tests needed to solve the global testing problem is substantially small even for an extremely large behavior set.
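    The decompositional idea can be illustrated with a toy sketch (this is an illustration of the general idea, not the paper's automata-theoretic algorithm; all names here are hypothetical): instead of integration-testing the composed pipeline, each black-box is unit-tested in order, and the test inputs for a box are derived from the outputs observed from the boxes before it.

```python
# Toy sketch of decompositional testing: unit-test black-boxes one by one,
# feeding each box only the values observed from its predecessors, and
# decide the global question without ever integrating the system.

def decompositional_test(boxes, inputs, bad_outputs):
    """Return True iff some input drives the composed system into
    bad_outputs, discovered via per-box unit tests only."""
    reachable = set(inputs)            # values reaching the current box
    for box in boxes:                  # test boxes one by one, in order
        # test inputs for this box come from results of earlier boxes
        reachable = {box(v) for v in reachable}
    return bool(reachable & bad_outputs)

# Three black-boxes (implementations opaque to the tester, only callable):
b1 = lambda x: x + 1
b2 = lambda x: x * 2
b3 = lambda x: x - 3

print(decompositional_test([b1, b2, b3], {0, 1, 2}, {3}))  # True: 2 -> 3 -> 6 -> 3
```

    Note that the number of unit tests grows with the per-box reachable sets rather than with the product of all boxes' behaviors, which is the intuition behind the small test counts reported.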

    Verification of Pointer-Based Programs with Partial Information

    The proliferation of software across all aspects of people's lives means that software failure can bring catastrophic results. It is therefore highly desirable to be able to develop software that is verified to meet its expected specification. This has also been identified as a key objective in one of the UK Grand Challenges (GC6) (Jones et al., 2006; Woodcock, 2006). However, many difficult problems still remain in achieving this objective, partially due to the wide use of (recursive) shared mutable data structures, which are hard to track statically in a precise and concise way. This thesis aims at building a verification system for both memory safety and functional correctness of programs manipulating pointer-based data structures, one that can deal with two scenarios where only partial information about the program is available: the verifier may be supplied with only a partial program specification, or with a full specification but only part of the program code. For the first scenario, previous state-of-the-art works (Nguyen et al., 2007; Chin et al., 2007; Nguyen and Chin, 2008; Chin et al., 2010) generally require users to provide full specifications for each method of the program to be verified. Their approach demands considerable intellectual effort from users, who are moreover liable to make mistakes in writing such specifications. This thesis proposes a new approach to program verification that allows users to provide only a partial specification for methods. Our approach then refines the given annotation into a more complete specification by discovering missing constraints. The discovered constraints may involve both numerical and multiset properties that can later be confirmed or revised by users. Meanwhile, we further augment our approach by requiring only partial specifications to be given for the primary methods of a program.
    Specifications for loops and auxiliary methods can then be systematically discovered by our augmented mechanism, with the help of information propagated from the primary methods. This work aims at verifying beyond shape properties, with the eventual goal of analysing both memory safety and functional properties of pointer-based data structures. Initial experiments have confirmed that we can automatically refine partial specifications with non-trivial constraints, thus making it easier for users to handle specifications with richer properties. For the second scenario, many programs contain invocations of unknown components, so only part of the program code is available to the verifier. As previous works generally require the whole program code to be present, we target the verification of memory safety and functional correctness of programs manipulating pointer-based data structures where the program code is only partially available due to invocations of unknown components. Provided with a Hoare-style specification ({Pre} prog {Post}) where the program (prog) contains calls to some unknown procedure (unknown), we infer a specification (mspecu) for the unknown part from the calling contexts, such that the problem of verifying the program (prog) can be safely reduced to the problem of proving that the unknown procedure (once its code is available) meets the derived specification (mspecu). The expected specification (mspecu) is automatically calculated using an abduction-based shape analysis specifically designed for a combined abstract domain. We have implemented a system to validate the viability of our approach, with encouraging experimental results.
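    The second-scenario reduction can be sketched in miniature (a hedged illustration of the abduction idea over a finite state space, not the thesis's shape analysis; all function names are hypothetical): from {Pre} prog {Post}, where prog runs known code and then calls an unknown procedure, derive the states reaching the call site and the obligation the unknown's result must satisfy.

```python
# Toy sketch of abducing a specification for an unknown procedure call.
# Predicates are modelled as Python functions over a small finite domain.

STATES = range(-5, 6)

def infer_unknown_spec(pre, known_before, post):
    """For {pre} x := known_before(x); x := unknown(x) {post}, return
    (pre_u, post_u): the calling contexts reaching the unknown call, and
    the obligation its result must satisfy for post to hold."""
    # Abduce the unknown's precondition: states reachable at the call site.
    pre_u = {known_before(s) for s in STATES if pre(s)}
    # Abduce its postcondition: whatever it returns must satisfy post.
    post_u = post
    return pre_u, post_u

# {x >= 0}  x := x + 1; x := unknown(x)  {x > 0}
pre_u, post_u = infer_unknown_spec(
    pre=lambda x: x >= 0,
    known_before=lambda x: x + 1,
    post=lambda x: x > 0,
)
print(sorted(pre_u))   # call-site states: [1, 2, 3, 4, 5, 6]
```

    Verifying prog is now safely reduced to checking that, for every input in pre_u, the unknown procedure's result satisfies post_u, which mirrors the role of mspecu above.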

    Model Based Test Generation and Optimization

    Abstract Model Based Test Generation and Optimization Mohamed Mussa A. Mussa, Ph.D. Concordia University, 2015 Software testing is an essential activity in the software engineering process. It is used to enhance the quality of software products throughout the software development process. It inspects different aspects of software quality such as correctness, performance and usability. Furthermore, software testing consumes about 50% of the software development effort. Software products go through several testing levels. The main ones are unit-level testing, component-level testing, integration-level testing, system-level testing and acceptance-level testing. Each testing level involves a sequence of tasks such as planning, modeling, execution and evaluation. Many systematic test-generation approaches have been developed using different languages and notations. The majority of these approaches target a specific testing level. However, little effort has been directed toward systematic transition among testing levels. Given the incompatibility between these approaches, tailored compatibility tools are required between the testing levels. Furthermore, several test models are usually generated to evaluate the implementation at each testing level, and there is redundancy among these models. Efficient reuse of these test models represents a significant challenge. On the other hand, the growing attention to model-driven methodologies bonds the development and the testing activities. However, research is still required to link the testing levels. In this PhD thesis, we propose a model-based testing framework that enables reusability and collaboration across the testing levels. In this framework, we propose test generation and test optimization approaches that, at each level, consider artifacts generated in preceding testing levels.
    More precisely, we propose an approach for the generation of integration test models starting from component test models, and another approach for the optimization of the acceptance test model using the integration test models. To conduct our research in a rigorous setting, we base our framework on a standard notation that is widely adopted for software development and testing, namely the Unified Modeling Language (UML). In our first approach, component test cases are examined to locate and select the ones that include an interaction among the integrated components. The selected test cases are merged to generate integration test cases, which tackles the theoretical research issue of merging test cases. Furthermore, the generated test cases are mapped against each other to remove potential redundancies. For the second approach, acceptance test optimization, integration test models are compared to the acceptance test model in order to remove test cases that have already been exercised during integration-level testing. However, not all integration test cases are suitable for the comparison: integration test cases have to be examined to ensure that they do not include test stubs for system components. We have developed two approaches and implemented the corresponding prototypes in order to demonstrate the effectiveness of our work. The first prototype implements the integration test generation approach: it accepts component test models and generates integration test models. The second prototype implements the acceptance test optimization approach: it accepts integration test models along with the acceptance test model and generates an optimized acceptance test model.
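    The two approaches can be sketched in a few lines (a toy illustration assuming test cases are sequences of (sender, receiver, message) triples; the names and data model are hypothetical, not the thesis's UML-based artifacts): select component test cases that exercise an interaction between integrated components, deduplicate them into integration test cases, and then drop acceptance test cases already covered at integration level.

```python
# Toy sketch: (1) generate integration test cases from component test cases
# that contain a cross-component interaction; (2) optimize the acceptance
# test model by removing cases already exercised during integration testing.

def generate_integration_tests(component_tests, integrated):
    """Keep component test cases whose messages cross a component boundary
    within the integrated set; deduplicate to remove redundancy."""
    selected = [tc for tc in component_tests
                if any(src in integrated and dst in integrated and src != dst
                       for (src, dst, _msg) in tc)]
    return [list(t) for t in {tuple(tc) for tc in selected}]

def optimize_acceptance(acceptance_tests, integration_tests):
    """Remove acceptance test cases already covered by integration tests
    (cases containing test stubs would be excluded before this comparison)."""
    covered = {tuple(tc) for tc in integration_tests}
    return [tc for tc in acceptance_tests if tuple(tc) not in covered]

t1 = [("A", "B", "login")]   # crosses the A-B boundary: selected
t2 = [("A", "A", "init")]    # component-internal: not an interaction
integ = generate_integration_tests([t1, t2], {"A", "B"})
acc = optimize_acceptance([t1, [("B", "A", "ack")]], integ)
```

    Here `integ` keeps only the interacting case, and `acc` retains only the acceptance case not already exercised, mirroring the reuse-across-levels idea.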