9 research outputs found

    External Behavior of a Logic Program and Verification of Refactoring

    Refactoring is modifying a program without changing its external behavior. In this paper, we make the concept of external behavior precise for a simple answer set programming language. Then we describe a proof assistant for the task of verifying that refactoring a program in that language is performed correctly. Comment: Accepted to Theory and Practice of Logic Programming (ICLP 2023).
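
    A rough way to picture "external behavior" (not the paper's formalization, which is carried out with a proof assistant) is as the answer sets of a program projected onto its public/output predicates; two encodings then agree on a given input when those projections coincide. The Python sketch below illustrates this on precomputed answer sets with hypothetical predicate names; an actual check of this kind would obtain the answer sets from an ASP solver, and it only covers one concrete instance rather than all inputs.

```python
# Minimal sketch: an answer set is modeled as a frozenset of atoms such as "chosen(a)";
# two programs agree on an instance if their answer sets, projected onto the
# public (output) predicates, form the same collection of sets.

def predicate(atom: str) -> str:
    """Predicate symbol of an atom, e.g. 'chosen(a)' -> 'chosen'."""
    return atom.split("(", 1)[0]

def project(answer_set, public):
    """Keep only atoms whose predicate belongs to the public signature."""
    return frozenset(a for a in answer_set if predicate(a) in public)

def same_external_behavior(answer_sets_1, answer_sets_2, public):
    """Compare the two programs' projected answer sets on one instance."""
    return ({project(s, public) for s in answer_sets_1}
            == {project(s, public) for s in answer_sets_2})

# Hypothetical example: both encodings choose vertex 'a' but use different auxiliary atoms.
original = [frozenset({"chosen(a)", "aux(a)"})]
refactored = [frozenset({"chosen(a)", "step(a,1)"})]
print(same_external_behavior(original, refactored, public={"chosen"}))  # True
```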

    System Predictor: Grounding Size Estimator for Logic Programs under Answer Set Semantics

    https://www.cambridge.org/core/journals/theory-and-practice-of-logic-programming/article/system-predictor-grounding-size-estimator-for-logic-programs-under-answer-set-semantics/9EE3D47F0DCDA77E39328E53B0816CD9

    Answer set programming is a declarative logic programming paradigm geared towards solving difficult combinatorial search problems. While different logic programs can encode the same problem, their performance may vary significantly. It is not always easy to identify which version of the program performs the best. We present the system PREDICTOR (and its algorithmic backend) for estimating the grounding size of programs, a metric that can influence the performance of a system processing a program. We evaluate the impact of PREDICTOR when used as a guide for rewritings produced by the answer set programming rewriting tools PROJECTOR and LPOPT. The results demonstrate the potential of this approach.
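
    The abstract does not spell out PREDICTOR's algorithm, so the following is only an illustrative sketch of the kind of estimate such a tool computes: treat the positive body atoms of a rule as relations and apply the textbook join-size heuristic, dividing the cross product by all but the smallest distinct-value count of every shared variable. All names and statistics below are made up for illustration.

```python
# Hypothetical sketch: estimate the number of ground instances of a rule body such as
#   p(X,Y) :- q(X,Z), r(Z,Y).
# from per-predicate relation sizes and per-argument distinct-value counts.

from math import prod

def estimate_rule_instances(body, sizes, distinct):
    """
    body:     list of (predicate, [variables]) pairs for the positive body atoms
    sizes:    predicate -> estimated number of tuples (ground atoms)
    distinct: (predicate, argument_position) -> distinct values in that argument
    Returns a rough estimate of the rule's ground instances, join-size style:
    start from the cross product of the body relations and, for every shared
    variable, divide by all of its distinct-value counts except the smallest.
    """
    estimate = float(prod(sizes[pred] for pred, _ in body))

    # Record in which (predicate, position) each variable occurs.
    occurrences = {}
    for pred, args in body:
        for pos, var in enumerate(args):
            occurrences.setdefault(var, []).append((pred, pos))

    # Every variable shared by two or more atoms acts like a join attribute.
    for occs in occurrences.values():
        if len(occs) > 1:
            counts = sorted(distinct[(pred, pos)] for pred, pos in occs)
            for c in counts[1:]:  # divide by all but the smallest count
                estimate /= c
    return estimate

# Made-up statistics for the rule  p(X,Y) :- q(X,Z), r(Z,Y).
body = [("q", ["X", "Z"]), ("r", ["Z", "Y"])]
sizes = {"q": 1000, "r": 500}
distinct = {("q", 0): 100, ("q", 1): 50, ("r", 0): 200, ("r", 1): 40}
print(estimate_rule_instances(body, sizes, distinct))  # 1000 * 500 / 200 = 2500.0
```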

    Grounding Size Predictions for Answer Set Programs

    Answer set programming is a declarative programming paradigm geared towards solving difficult combinatorial search problems. Logic programs under answer set semantics can typically be written in many different ways while still encoding the same problem. These different versions of the program may result in significantly different performance. Unfortunately, it is not always easy to identify which version of the program performs best, as that requires expert knowledge of both answer set processing and the problem domain. Moreover, the best version to use may even vary depending on the problem instance. One measure that has been shown to correlate with performance is the program's grounding size, that is, the number of ground rules in the grounded program (Gebser et al. 2011). Computing a grounded program is an expensive task in itself, so grounding multiple versions of a program just to compare their sizes is unrealistic. In this research, we present a new system called PREDICTOR that estimates the grounding size of programs without actually grounding/instantiating their rules. We utilize a simplified form of the grounding algorithms implemented by the answer set programming grounder DLV, while borrowing techniques from join-order size estimation in relational databases. The PREDICTOR system can be used independently of the chosen answer set programming grounder and solver. We assess the accuracy of the predictions produced by PREDICTOR and evaluate its impact when it is used as a guide for rewritings produced by the automated answer set programming rewriting system PROJECTOR. In particular, system PREDICTOR helps to boost the performance of PROJECTOR.
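
    To illustrate the "guide for rewritings" role described above (again only a hypothetical sketch, not PREDICTOR's or PROJECTOR's actual interface), per-rule grounding-size estimates for an original rule and a candidate projected rewriting can be totaled and compared, keeping the rewriting only when it is predicted to shrink the grounding. The numbers below are invented for illustration.

```python
# Hypothetical sketch: accept a PROJECTOR-style rewriting only if its estimated
# grounding size is smaller than that of the original encoding.

def pick_encoding(estimates):
    """estimates: encoding name -> list of per-rule grounding-size estimates."""
    totals = {name: sum(per_rule) for name, per_rule in estimates.items()}
    return min(totals, key=totals.get), totals

# Original:   p(X) :- q(X,Y), r(X,Z).
# Rewritten:  aux(X) :- q(X,Y).    p(X) :- aux(X), r(X,Z).
estimates = {
    "original":  [250_000],       # one rule ranging over both Y and Z
    "rewritten": [1_000, 5_000],  # two smaller rules after projecting Y out
}
best, totals = pick_encoding(estimates)
print(best, totals)  # rewritten {'original': 250000, 'rewritten': 6000}
```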

    Strong Equivalence and Program's Structure in Arguing Essential Equivalence between Logic Programs

    Answer set programming is a prominent declarative programming paradigm used in formulating combinatorial search problems and implementing distinct knowledge representation formalisms. It is common that several related and yet substantially different answer set programs exist for a given problem. Sometimes these encodings may display significantly different performance. Uncovering precise formal links between these programs is often important and yet far from trivial. This paper claims the correctness of a number of interesting program rewritings.

    Performance Tuning in Answer Set Programming

    Performance analysis and tuning are well-established software engineering processes in the realm of imperative programming. This work is a step towards establishing the standards of performance analysis in the realm of answer set programming -- a prominent constraint programming paradigm. We present and study the roles of human tuning and automatic configuration tools in this process. The case study takes place in the realm of a real-world answer set programming application that required several hundred lines of code. Experimental results suggest that human tuning of the logic programming encoding and automatic tuning of the answer set solver are orthogonal (complementary) issues.