2 research outputs found

    A Framework and Tool Supports for Testing Modularity of Software Design

    Modularity is one of the most important properties of a software design, with significant impact on changeability and evolvability. However, there is no formalized, automated approach to testing and verifying software design models against their modularity properties, in particular their ability to accommodate potential changes.

    In this paper, we propose a novel framework for testing design modularity. The software artifact under test is a software design; a test input is a potential change to that design. The test output is a modularity vector, which quantifies the extent to which the design can accommodate the test input (the potential change). Both the design and the test input are represented as formal computable models to enable automatic testing. The modularity vector integrates net option value analysis with well-known design principles.

    We have implemented the framework with tool support and tested aspect-oriented and object-oriented design patterns on their ability to accommodate sequences of possible changes. The results show that analyses that were previously informal and implementation-based can be conducted by our framework automatically and quantitatively at the design level. The framework also opens up opportunities for applying testing techniques, such as coverage criteria, to software designs.
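    The test interface the abstract describes can be pictured concretely. Below is a minimal Python sketch, not the authors' implementation: the Design and Change classes, the dependency-ripple computation, and the two-component vector are all illustrative simplifications of the paper's formal computable models and its NOV-based modularity vector.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Design:
            """Artifact under test: modules and (client, supplier) dependencies.
            A stand-in for the paper's formal computable design models."""
            modules: frozenset
            depends_on: frozenset  # pairs (client, supplier)

        @dataclass(frozen=True)
        class Change:
            """Test input: a potential change, reduced here to the modules it touches."""
            touched: frozenset

        def ripple(design, seeds):
            """Modules that may need modification: the seeds plus their transitive clients."""
            affected = set(seeds)
            grew = True
            while grew:
                grew = False
                for client, supplier in design.depends_on:
                    if supplier in affected and client not in affected:
                        affected.add(client)
                        grew = True
            return frozenset(affected)

        def modularity_vector(design, change):
            """Test output: a (hypothetical) two-component modularity vector,
            here the fraction of modules untouched by the ripple and the
            locality of the change within the affected set."""
            affected = ripple(design, change.touched)
            untouched = 1 - len(affected) / len(design.modules)
            locality = len(change.touched) / len(affected)
            return (untouched, locality)

        # A layered design: ui -> core -> db. A change to db ripples upward
        # through every layer, so both components come out low: the design
        # accommodates this particular test input poorly.
        d = Design(frozenset({"ui", "core", "db"}),
                   frozenset({("ui", "core"), ("core", "db")}))
        print(modularity_vector(d, Change(frozenset({"db"}))))  # (0.0, 0.333...)

    Because both inputs are plain data, test inputs can be generated and batched mechanically, which is what makes notions like coverage criteria over designs meaningful.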

    Debugging Relational Declarative Models with Discriminating Examples

    Models, especially those with mathematical or logical foundations, have proven valuable to engineering practice in a wide range of disciplines, including software engineering. Models, sometimes also referred to as logical specifications in this context, enable software engineers to focus on essential abstractions while eliding less important details of their software design. Like any human-created artifact, a model might have imperfections at certain stages of the design process: it might have internal inconsistencies, or it might not properly express the engineer’s design intentions. Validating that the model is a true expression of the engineer’s intent is an important and difficult problem. One of the key challenges is that there is typically no other written artifact to compare the model to: the engineer’s intention is a mental object.

    One successful approach to this challenge has been automated example-generation tools, such as the Alloy Analyzer. These tools produce examples (satisfying valuations of the model) for the engineer to accept or reject. These examples, along with the engineer’s judgment of them, serve as crucial written artifacts of the engineer’s true intentions. Examples, like test cases for programs, are more valuable if they reveal a discrepancy between the expressed model and the engineer’s design intentions.

    We propose the idea of discriminating examples for this purpose. A discriminating example is synthesized from a combination of the engineer’s expressed model and a machine-generated hypothesis of the engineer’s true intentions. A discriminating example either satisfies the model but not the hypothesis, or satisfies the hypothesis but not the model; it shows the difference between the model and the hypothesized alternative. The key to producing high-quality discriminating examples is to generate high-quality hypotheses. This dissertation explores three general forms of such hypotheses: mistakes that happen near borders; the expressed model is stronger than the engineer intends; and the expressed model is weaker than the engineer intends. We additionally propose a number of heuristics to guide the hypothesis-generation process.

    We demonstrate the usefulness of discriminating examples and our hypothesis-generation techniques through a case study of an Alloy model of Dijkstra’s Dining Philosophers problem. This model was written by Alloy experts and shipped with the Alloy Analyzer for several years. Previous researchers discovered the existence of a bug, but there has been no prior published account explaining how to fix it, nor has any prior tool been shown effective for assisting an engineer with this task.

    Generating high-quality discriminating examples and their underlying hypotheses is computationally demanding. This dissertation shows how to make it feasible.
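    The core mechanism is easy to state: a discriminating example is a valuation on which the expressed model and the hypothesized alternative disagree. The toy Python sketch below finds one by enumerating propositional valuations; it merely stands in for the dissertation's SAT-backed search over Alloy's relational logic, and every name in it is illustrative.

        from itertools import product

        def discriminating_example(model, hypothesis, variables):
            """Return a valuation satisfied by exactly one of model/hypothesis,
            or None if they agree everywhere (toy exhaustive search)."""
            for bits in product([False, True], repeat=len(variables)):
                valuation = dict(zip(variables, bits))
                if model(valuation) != hypothesis(valuation):  # XOR: it discriminates
                    return valuation
            return None

        # Hypothesis form: the expressed model is weaker than the engineer
        # intends, so the machine proposes a strictly stronger alternative.
        expressed = lambda v: v["a"] or v["b"]   # the engineer's written model
        stronger = lambda v: v["a"] and v["b"]   # machine-generated hypothesis
        print(discriminating_example(expressed, stronger, ["a", "b"]))
        # {'a': False, 'b': True} satisfies the expressed model but not the
        # hypothesis; the engineer's accept/reject verdict on it reveals
        # which of the two matches their actual intent.

    The other hypothesis forms plug into the same XOR check: only the generator of candidate alternatives changes, which is why hypothesis quality dominates example quality.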