
    Planning as Tabled Logic Programming

    This paper describes Picat's planner, its implementation, and planning models for several domains used in the International Planning Competition (IPC) 2014. Picat's planner is implemented using tabling: during search, every state encountered is tabled, and tabled states are used to perform resource-bounded search effectively. In Picat, structured data can be used to avoid enumerating all possible permutations of objects, and term sharing is used to avoid duplicating common state data. This paper presents several modeling techniques through example models, ranging from designing state representations that facilitate data sharing and symmetry breaking, through encoding actions with operations for efficient precondition checking and state updating, to incorporating domain knowledge and heuristics. Broadly, this paper demonstrates the effectiveness of tabled logic programming for planning and argues for the importance of modeling despite recent significant progress in domain-independent PDDL planners.
    Comment: 27 pages in TPLP 201
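
    As a concrete illustration of this interface, here is a minimal, hypothetical Picat program for an invented toy routing domain (the edge/2 facts and loc/1 states are not from the paper): final/1 gives the goal test, action/4 encodes a move with its cost, and best_plan/2 performs the tabled, resource-bounded search.

        import planner.

        % Goal test: the plan is complete once we reach location g.
        final(loc(g)) => true.

        % One action: move along a graph edge, at unit cost. Outputs are
        % bound in the body, as Picat heads match one-way.
        action(loc(X), NextState, Action, Cost) =>
            edge(X,Y),
            NextState = loc(Y),
            Action = move(X,Y),
            Cost = 1.

        index(+,-)
        edge(a,b).
        edge(b,c).
        edge(a,c).
        edge(c,g).

        main =>
            best_plan(loc(a), Plan),   % every encountered state is tabled
            println(Plan).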

    Tabling as a Library with Delimited Control

    Tabling is probably the most widely studied extension of Prolog. Yet despite its importance and practicality, most Prolog systems do not implement it: existing approaches require substantial changes to the Prolog engine, an investment out of reach for most systems. To enable more widespread adoption, we present a new implementation of tabling in under 600 lines of Prolog code. Our lightweight approach relies on delimited control and provides reasonable performance.
    Comment: 15 pages. To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 201
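
    The delimited-control primitives this approach builds on, reset/3 and shift/1, are available as built-ins in SWI-Prolog. The generator/consumer pair below is only a sketch of the mechanism itself, not the paper's tabling library: shift/1 suspends a goal and hands its continuation to the enclosing reset/3, which can resume it later.

        % Generator: emits each list element by suspending at shift/1.
        yield_list([]).
        yield_list([X|Xs]) :-
            shift(yield(X)),
            yield_list(Xs).

        % Consumer: reset/3 captures the continuation in Cont; calling
        % Cont resumes the generator where it left off.
        consume(Goal) :-
            reset(Goal, Ball, Cont),
            (   Cont == 0
            ->  true                      % generator finished normally
            ;   Ball = yield(X),
                format("got ~w~n", [X]),
                consume(Cont)
            ).

        % ?- consume(yield_list([1,2,3])).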

    Practical Run-time Checking via Unobtrusive Property Caching

    The use of annotations, referred to as assertions or contracts, to describe program properties for which run-time tests are to be generated has become frequent in dynamic programming languages. However, the frameworks proposed to support such run-time testing generally incur high time and/or space overheads over standard program execution. We present an approach for reducing this overhead that is based on the use of memoization to cache intermediate results of check evaluation, avoiding repeated checking of previously verified properties. Compared to approaches that reduce checking frequency, our proposal has the advantage of being exhaustive (i.e., all tests are checked at all points) while still being much more efficient than standard run-time checking. Compared to the limited previous work on memoization, it performs the task without requiring modifications to data structure representation or checking code. While the approach is general and system-independent, for concreteness we present it in the context of the Ciao run-time checking framework, which allows us to provide an operational semantics with checks and caching. We also report on a prototype implementation and provide experimental results supporting that a relatively small cache leads to significant decreases in run-time checking overhead.
    Comment: 30 pages, 1 table, 170 figures; added appendix with plots; To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 201
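
    The core idea can be sketched in a few lines of generic Prolog (this is not Ciao's actual cache, which avoids touching checking code and bounds the cache size; invalidation on state change is also omitted): a dynamic fact records already-verified property/term pairs, and a property is re-evaluated only on a cache miss.

        :- dynamic checked/2.

        % check_prop(+Prop, +Term): run the run-time test Prop on Term,
        % unless this exact pair has already been verified.
        check_prop(Prop, Term) :-
            (   checked(Prop, Term)
            ->  true                           % cache hit: skip the test
            ;   Goal =.. [Prop, Term],
                call(Goal),                    % evaluate the property
                assertz(checked(Prop, Term))   % remember the verified pair
            ).

        % ?- check_prop(is_list, [1,2,3]).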

    Automatic Incrementalization of Prolog-Based Static Analyses

    Modern development environments integrate various static analyses into the build process. Analyses that re-process the whole project whenever the project changes are impractical in this context. We present an approach to the automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically incrementalizing a broad range of static analyses.
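
    In XSB-style incremental tabling, which this approach relies on, an analysis is a tabled program over dynamic facts; the sketch below uses an invented reach/2 call-graph analysis (the calls/2 facts are illustrative, not from the paper). After an edit, only the affected table entries are recomputed rather than the whole analysis.

        :- table reach/2 as incremental.
        :- dynamic calls/2 as incremental.

        % A tiny "analysis": reachability in the program's call graph.
        reach(X, Y) :- calls(X, Y).
        reach(X, Y) :- calls(X, Z), reach(Z, Y).

        % On a source change, update the facts with incr_assert/1 or
        % incr_retract/1; XSB repairs the memo tables incrementally.
        % ?- incr_assert(calls(main, helper)), reach(main, Callee).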

    Incremental and Modular Context-sensitive Analysis

    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse previous analysis results as much as possible. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structure. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, when compared to existing non-modular, fine-grained incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch.
    Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming. v3 corresponds to the extended version of the ICLP2018 Technical Communication. v4 is the revised version submitted to Theory and Practice of Logic Programming. v5 (this one) is the final author version to be published in TPL
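
    The module-level reuse can be pictured with a schematic Prolog worklist (a deliberately simplified, hypothetical sketch; analyze_module/2 and imports/2 are placeholders, and the paper's algorithm is considerably richer): a module is re-analyzed, and the modules importing it are re-queued only if its exported summary actually changed.

        :- dynamic summary/2.

        analyze_all([]).
        analyze_all([M|Ms]) :-
            analyze_module(M, New),              % hypothetical per-module analysis
            (   summary(M, Old), Old == New      % summary unchanged: no propagation
            ->  analyze_all(Ms)
            ;   retractall(summary(M, _)),
                assertz(summary(M, New)),
                findall(D, imports(D, M), Deps), % modules importing M
                append(Ms, Deps, Ms1),           % re-queue them
                analyze_all(Ms1)
            ).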

    Tabled CLP for Reasoning Over Stream Data

    Interest in reasoning over stream data is growing as quickly as the amount of data generated. Our intention is to change the way stream data is analyzed. This is an important problem because new sensors constantly collect information, new events arrive from electronic devices and/or from customers, and we want to reason about this information. For example, information about traffic jams and customer orders could be used to plan a delivery route. When there is a new order or a new traffic jam, we usually restart from scratch to recompute the route. However, if we have several deliveries and analyze the information from thousands of sensors, we would like to reduce the computation requirements, e.g., by reusing results from the previous computation. Nowadays, most applications that analyze stream data are specialized for specific problems (using complex algorithms and heuristics) and combine a computation language with a query language. As a result, when the problems become more complex (e.g., in reasoning requirements), modifying the application requires complex and error-prone coding. We propose a framework based on a high-level language rooted in logic and constraints that will be able to provide customized services for different problems. The framework will discard wrong solutions in early stages and will reuse previous results that are still consistent with the current data set. The use of a constraint logic programming language will make it easier to translate the problem requirements into code and will minimize the amount of re-engineering needed to comply with the requirements when they change.
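
    A small SWI-Prolog sketch of the reuse idea (the road_delay/3 facts, the min answer subsumption, and the wholesale table flush are illustrative choices, not the proposed framework): route costs are tabled and reused across queries, and tables are discarded only when new stream data arrives.

        :- table cost(_,_,min).        % keep only the cheapest cost per pair
        :- dynamic road_delay/3.       % road_delay(From, To, Minutes): stream facts

        cost(X, Y, C) :- road_delay(X, Y, C).
        cost(X, Y, C) :- road_delay(X, Z, C1), cost(Z, Y, C2), C is C1 + C2.

        % A new traffic-jam event updates the facts; here stale tables are
        % dropped wholesale, where incremental tabling could repair them.
        new_jam(From, To, Delay) :-
            retractall(road_delay(From, To, _)),
            assertz(road_delay(From, To, Delay)),
            abolish_all_tables.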