
    Predicate Abstraction with Under-approximation Refinement

    We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. Comment: 22 pages, 3 figures, accepted for publication in the Logical Methods in Computer Science journal (special issue CAV 2005).
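
    To make the exploration strategy concrete, here is a minimal sketch. The integer-state system, the two predicates, the transition relation, and the error check are all illustrative assumptions, not the paper's implementation; what the sketch shares with the method is executing concrete transitions while matching on abstract states, so every explored behavior is feasible:

```java
import java.util.*;
import java.util.function.Predicate;

public class UnderApproxSearch {
    // Abstraction predicates over the (toy) concrete state, a single int.
    static final List<Predicate<Integer>> PREDS =
        List.of(s -> s > 0, s -> s % 2 == 0);

    // Abstract version of a concrete state: its vector of predicate values.
    static List<Boolean> abstractOf(int s) {
        List<Boolean> bits = new ArrayList<>();
        for (Predicate<Integer> p : PREDS) bits.add(p.test(s));
        return bits;
    }

    public static void main(String[] args) {
        Deque<Integer> frontier = new ArrayDeque<>(List.of(0));
        Set<List<Boolean>> visited = new HashSet<>();
        while (!frontier.isEmpty()) {
            int s = frontier.poll();
            // State matching on the abstract state, not the concrete one.
            if (!visited.add(abstractOf(s))) continue;
            if (s == 7) { System.out.println("error reached at " + s); return; }
            // Execute concrete transitions, so every explored path is feasible.
            for (int succ : List.of(s + 1, s * 2)) {
                // The paper would ask a theorem prover here whether this step
                // loses precision under the abstraction, and would generate
                // new predicates to refine it if so; we simply bound the search.
                if (succ <= 100) frontier.add(succ);
            }
        }
        System.out.println("no error in the explored under-approximation");
    }
}
```

    With only these two predicates the abstraction is too coarse: matching prunes the search before the error state at 7 is reached, even though a feasible path to it exists. That is precisely the situation the paper's precision-loss checks detect, triggering refinement with new abstraction predicates.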

    Test Input Generation for Red-Black Trees using Abstraction

    We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and evaluate them using a Java implementation of red-black trees.
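
    As a rough sketch of the lossy technique, the code below substitutes a java.util.TreeSet for the red-black tree under test and uses a deliberately coarse abstraction (the set's size); the paper's abstraction mappings and its Java PathFinder setup are richer, for example recording the tree's shape. A call sequence is emitted as a test only if the abstract version of the state it produces has not been seen before:

```java
import java.util.*;

public class AbstractMatchingGen {
    // Lossy abstraction: many concrete structures share one abstract state.
    static Object abstraction(TreeSet<Integer> s) {
        return s.size();
    }

    public static void main(String[] args) {
        record Node(TreeSet<Integer> state, List<String> calls) {}
        Deque<Node> frontier = new ArrayDeque<>();
        frontier.add(new Node(new TreeSet<>(), List.of()));
        Set<Object> seen = new HashSet<>();
        int maxLen = 3;                              // bound on sequence length
        while (!frontier.isEmpty()) {
            Node n = frontier.poll();
            // Skip sequences whose abstract state was already covered.
            if (!seen.add(abstraction(n.state()))) continue;
            System.out.println("test: " + n.calls());
            if (n.calls().size() == maxLen) continue;
            for (int v : List.of(1, 2)) {            // small input domain
                for (String op : List.of("add", "remove")) {
                    TreeSet<Integer> next = new TreeSet<>(n.state());
                    if (op.equals("add")) next.add(v); else next.remove(v);
                    List<String> calls = new ArrayList<>(n.calls());
                    calls.add(op + "(" + v + ")");
                    frontier.add(new Node(next, calls));
                }
            }
        }
    }
}
```

    Run as-is, this prints exactly three tests: the empty sequence, [add(1)], and [add(1), add(2)]. Every other sequence up to length three lands in an already-matched abstract state and is discarded, which is the redundancy the state matching is meant to eliminate.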

    Learner Modeling for Integration Skills

    Complex skill mastery requires not only acquiring the individual basic component skills, but also practicing integrating them. However, traditional approaches to knowledge modeling, such as Bayesian knowledge tracing, only trace knowledge of each decomposed basic component skill. This risks asserting mastery too early, or remediating ineffectively by failing to address skill integration. We introduce a novel integration-level approach to modeling learners' knowledge that provides fine-grained diagnosis: a Bayesian network based on a new kind of knowledge graph with progressive integration skills. We assess the value of such a model along four dimensions: performance prediction, parameter plausibility, expected instructional effectiveness, and real-world recommendation helpfulness. Our experiments with a Java programming tutor show that the proposed model significantly improves on two popular multiple-skill knowledge tracing models on all four of these aspects.
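
    For context, the per-skill baseline that the integration-level model is compared against can be written down in a few lines. The following is a minimal sketch of the standard Bayesian knowledge tracing update; the slip, guess, learn, and prior values are arbitrary illustrations, and the paper's own model replaces this single-skill tracing with a Bayesian network over progressively integrated skills:

```java
public class BktUpdate {
    // One BKT step for a single skill: Bayes update on the observed
    // response, then the learning transition.
    static double update(double pKnow, boolean correct,
                         double slip, double guess, double learn) {
        // Bayes step: posterior probability of mastery given the response.
        double posterior = correct
            ? pKnow * (1 - slip) / (pKnow * (1 - slip) + (1 - pKnow) * guess)
            : pKnow * slip / (pKnow * slip + (1 - pKnow) * (1 - guess));
        // Learning step: chance the skill was acquired during this practice.
        return posterior + (1 - posterior) * learn;
    }

    public static void main(String[] args) {
        double p = 0.2;                          // prior mastery p(L0)
        boolean[] responses = {true, false, true};
        for (boolean correct : responses) {
            p = update(p, correct, 0.1, 0.2, 0.15);
            System.out.printf("p(known) = %.3f%n", p);
        }
    }
}
```

    Each response first updates the mastery estimate by Bayes' rule and then applies the learning transition. Tracing every component skill independently this way is exactly what risks the early mastery assertions the paper targets.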
