
    Transformation-Based Bottom-Up Computation of the Well-Founded Model

    We present a framework for expressing bottom-up algorithms that compute the well-founded model of non-disjunctive logic programs. Our method is based on the notion of conditional facts and the elementary program transformations studied by Brass and Dix for disjunctive programs. However, even when their framework is restricted to non-disjunctive programs, their residual program can grow to exponential size, whereas for function-free programs our program remainder is always polynomial in the size of the extensional database (EDB). We show that particular orderings of our transformations (which we call strategies) correspond to well-known computational methods such as the alternating fixpoint approach, the well-founded magic sets method, and the magic alternating fixpoint procedure. Moreover, due to the confluence of our calculi, we obtain computations of the well-founded model that are provably better than these methods. In contrast to other approaches, our transformation method treats magic-set-transformed programs correctly, i.e., it always computes a relevant part of the well-founded model of the original program. (Comment: 43 pages, 3 figures)
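
    To make the alternating fixpoint strategy mentioned above concrete, here is a minimal Python sketch (not the paper's transformation-based calculus): it computes the well-founded model of a ground, propositional normal logic program by applying the Gelfond-Lifschitz operator twice per round. The rule encoding and atom names are illustrative assumptions.

        # Each rule is (head, positive_body, negative_body), e.g. p :- q, not r.

        def least_model(rules):
            """Least model of a definite (negation-free) program, by naive iteration."""
            model, changed = set(), True
            while changed:
                changed = False
                for head, pos_body in rules:
                    if pos_body <= model and head not in model:
                        model.add(head)
                        changed = True
            return model

        def gamma(program, assumed):
            """Gelfond-Lifschitz reduct w.r.t. `assumed`, then its least model."""
            reduct = [(h, pos) for h, pos, neg in program if not (neg & assumed)]
            return least_model(reduct)

        def well_founded_model(program):
            """Alternating fixpoint: returns (true atoms, false atoms, undefined atoms)."""
            atoms = {h for h, _, _ in program} | {a for _, p, n in program for a in p | n}
            true_set = set()
            while True:
                possible = gamma(program, true_set)   # overestimate: what may hold
                new_true = gamma(program, possible)   # underestimate: what must hold
                if new_true == true_set:
                    return true_set, atoms - possible, possible - true_set
                true_set = new_true

        prog = [("p", {"q"}, {"r"}),      # p :- q, not r.
                ("q", set(), set()),      # q.
                ("r", set(), {"s"}),      # r :- not s.
                ("s", set(), {"r"})]      # s :- not r.
        print(well_founded_model(prog))

    On the four-rule example the sketch reports q as true, nothing as false, and p, r, s as undefined, matching the well-founded semantics: the mutual negation between r and s leaves them, and hence p, undefined.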

    Gasping for AIR Why we need Linked Rules and Justifications on the Semantic Web

    The Semantic Web is a distributed model for publishing, utilizing and extending structured information using Web protocols. One of the main goals of this technology is to automate the retrieval and integration of data and to enable the inference of interesting results. This automation requires logics and rule languages that make inferences, choose courses of action, and answer questions. The openness of the Web, however, leads to several issues, including the handling of inconsistencies, the integration of diverse information, and the determination of the quality and trustworthiness of the data. AIR is a Semantic Web-based rule language that provides this functionality while focusing on generating and tracking explanations for its inferences and actions, as well as conforming to Linked Data principles. AIR supports Linked Rules, which allow rules to be combined, re-used and extended in a manner similar to Linked Data. Additionally, AIR explanations are themselves Semantic Web data, so they can be used for further reasoning. In this paper we present an overview of AIR, discuss its potential as a Web rule language by providing examples of how its features can be leveraged for different inference requirements, and describe how justifications are represented and generated. This material is based upon work supported by the National Science Foundation under Award No. CNS-0831442, by the Air Force Office of Scientific Research under Award No. FA9550-09-1-0152, and by the Intelligence Advanced Research Projects Activity under Award No. FA8750-07-2-0031.
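
    As a rough, hedged illustration of the justification-tracking idea (not AIR's actual N3-based syntax or API), the following Python sketch runs a tiny forward chainer over (subject, predicate, object) triples and records, for every inferred fact, the rule and the premises that produced it. All rule and predicate names are invented for the example.

        from itertools import product

        def match(pattern, triple, bindings):
            """Extend `bindings` so that `pattern` (variables start with '?') matches `triple`."""
            b = dict(bindings)
            for p, t in zip(pattern, triple):
                if p.startswith("?"):
                    if p in b and b[p] != t:
                        return None
                    b[p] = t
                elif p != t:
                    return None
            return b

        def forward_chain(facts, rules):
            """Apply rules to a fixpoint; map each fact to its justification (rule, premises)."""
            justifications = {f: ("asserted", ()) for f in facts}
            changed = True
            while changed:
                changed = False
                for name, body, head in rules:
                    for combo in product(list(justifications), repeat=len(body)):
                        bindings = {}
                        for pat, fact in zip(body, combo):
                            bindings = match(pat, fact, bindings)
                            if bindings is None:
                                break
                        if bindings is None:
                            continue
                        derived = tuple(bindings.get(x, x) for x in head)
                        if derived not in justifications:
                            justifications[derived] = (name, combo)
                            changed = True
            return justifications

        facts = {("alice", "memberOf", "lab"), ("lab", "partOf", "university")}
        rules = [("transitive-membership",
                  [("?x", "memberOf", "?g"), ("?g", "partOf", "?h")],
                  ("?x", "memberOf", "?h"))]
        for fact, why in forward_chain(facts, rules).items():
            print(fact, "<-", why)

    The derived triple ("alice", "memberOf", "university") comes out annotated with the rule name and the two premises it used, which is the kind of machine-consumable justification that AIR publishes as Semantic Web data.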

    Four Lessons in Versatility or How Query Languages Adapt to the Web

    Exposing not only human-centered information but also machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled new kinds of applications and businesses in which data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in a different Web format: some providers choose XML, others RDF, still others JSON or OWL for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not with one Web stack (e.g., XML technology) but with several, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: within a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) We provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats. (2) Both the declarative (logic-based) and the operational semantics are unified in that they apply to querying XML and RDF in the same way. (3) We show that the resulting query language can be implemented by reusing traditional database technology, if desirable. Nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear-time and linear-space querying also for many RDF graphs. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient, data access in a “Web of Data”.
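
    To illustrate the interval-labeling evaluation mentioned above, here is a small Python sketch (an illustrative toy under stated assumptions, not Xcerpt's implementation): a DFS assigns each node of a tree-shaped document a (start, end) interval, and the structural question "is u an ancestor of v" reduces to two integer comparisons. The example tree is invented.

        def label_intervals(tree, root):
            """DFS numbering: intervals[n] = (start, end); children nest inside parents."""
            intervals, counter = {}, [0]
            def visit(node):
                start = counter[0]
                counter[0] += 1
                for child in tree.get(node, []):
                    visit(child)
                intervals[node] = (start, counter[0])
                counter[0] += 1
            visit(root)
            return intervals

        def is_ancestor(intervals, u, v):
            """u is a proper ancestor of v iff v's interval nests strictly inside u's."""
            (su, eu), (sv, ev) = intervals[u], intervals[v]
            return su < sv and ev < eu

        # toy XML-like tree: <book><chapter><section/></chapter><appendix/></book>
        tree = {"book": ["chapter", "appendix"], "chapter": ["section"]}
        labels = label_intervals(tree, "book")
        print(labels)
        print(is_ancestor(labels, "book", "section"))      # True
        print(is_ancestor(labels, "chapter", "appendix"))  # False

    Because ancestry checks are just comparisons on precomputed numbers, pattern queries over tree-shaped data need no re-traversal of the document; extending such labelings from trees to general graphs is the harder part that the unified evaluation approach addresses.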

    Beyond depth-first strategies: improving tabled logic programs through alternative scheduling

    Tabled evaluation ensures termination for programs with finite models by keeping track of which subgoals have been called. Given several variant subgoals in an evaluation, only the first one encountered will use program-clause resolution; the rest will resolve against the answers generated by the first subgoal. This use of answer resolution prevents the infinite looping that sometimes happens in SLD. Because answers produced in one path of the computation may be consumed, asynchronously, in others, tabling systems face an important scheduling choice not present in traditional top-down evaluation: when to schedule answer resolution. This paper investigates alternative scheduling strategies for tabling in a WAM implementation, the SLG-WAM. The original SLG-WAM had a simple mechanism for scheduling answer resolution that was expensive in terms of trailing and choice-point creation. We propose here a more sophisticated scheduling strategy, batched scheduling, which reduces the overheads of these operations and provides dramatic space reductions as well as speedups for many programs. We also propose a second strategy, local scheduling, which has applications to nonmonotonic reasoning and, when combined with answer subsumption, can arbitrarily improve the performance of some programs.
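
    The following Python sketch is only a loose analogy for the tabling idea (it is not the SLG-WAM): calls to a reachability subgoal are recorded in a table, and answers are resolved against that table to a fixpoint, so a cyclic graph that would send plain SLD resolution into an infinite loop still terminates. The graph and predicate shape are invented.

        def tabled_reach(edges, start):
            """Answers for reach(start, X): the table for this subgoal, grown to a fixpoint."""
            table = {start: set()}            # subgoal -> answers found so far
            changed = True
            while changed:                    # keep scheduling answer resolution
                changed = False
                frontier = {start} | table[start]
                for x in frontier:
                    for y in edges.get(x, ()):        # a program-clause resolution step
                        if y not in table[start]:
                            table[start].add(y)       # record a new answer in the table
                            changed = True
            return table[start]

        # under plain SLD, reach(X, Y) :- reach(X, Z), edge(Z, Y). loops on this cycle
        edges = {"a": ["b"], "b": ["c"], "c": ["a", "d"]}
        print(sorted(tabled_reach(edges, "a")))   # ['a', 'b', 'c', 'd']

    The batched versus local scheduling question studied in the paper concerns when, in such a computation, newly tabled answers are handed to their consumers: roughly, eagerly as they are produced, or only once a table is complete.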

    Answer Set Planning Under Action Costs

    Recently, planning based on answer set programming has been proposed as an approach towards realizing declarative planning systems. In this paper, we present the language Kc, which extends the declarative planning language K by action costs. Kc provides the notions of admissible and optimal plans: plans whose overall action cost stays within a given limit or, respectively, is minimal over all plans (i.e., cheapest plans). As we demonstrate, this novel language allows some nontrivial planning tasks to be expressed in a declarative way. Furthermore, it can be utilized for representing planning problems under other optimality criteria, such as computing "shortest" plans (with the least number of steps), and refinement combinations of cheapest and fastest plans. We study complexity aspects of the language Kc and provide a transformation to logic programs, such that planning problems are solved via answer set programming. Furthermore, we report experimental results on selected problems. Our experience is encouraging and suggests that answer set planning may be a valuable approach to expressive planning systems in which intricate planning problems can be naturally specified and solved.
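
    As a hedged, brute-force illustration of the admissible-plan and optimal-plan notions described above (the paper instead translates Kc to logic programs and uses answer set solvers), the Python sketch below enumerates plans in a toy domain, keeps those whose total action cost stays within a limit, and picks a cheapest one. The domain, action names and costs are invented.

        from itertools import product

        # action name -> (precondition fluents, add effects, delete effects, cost)
        ACTIONS = {
            "walk":  (set(),        {"at_shop"},   {"at_home"}, 1),
            "drive": ({"have_car"}, {"at_shop"},   {"at_home"}, 3),
            "buy":   ({"at_shop"},  {"have_milk"}, set(),       2),
        }

        def step(state, name):
            """Apply one action, or return None if its precondition fails."""
            pre, add, delete, _ = ACTIONS[name]
            if not pre <= state:
                return None
            return (state - delete) | add

        def plans(init, goal, horizon):
            """Yield (action sequence, total cost) for every plan of up to `horizon` steps."""
            for length in range(horizon + 1):
                for seq in product(ACTIONS, repeat=length):
                    state = set(init)
                    for name in seq:
                        state = step(state, name)
                        if state is None:
                            break
                    if state is not None and goal <= state:
                        yield seq, sum(ACTIONS[n][3] for n in seq)

        init, goal = {"at_home", "have_car"}, {"have_milk"}
        found = list(plans(init, goal, horizon=2))
        admissible = [(p, c) for p, c in found if c <= 4]   # plans within cost limit 4
        optimal = min(found, key=lambda pc: pc[1])          # a cheapest plan
        print("admissible:", admissible)
        print("optimal:", optimal)

    Here the cost limit 4 admits only the walk-then-buy plan, which is also optimal; in Kc the same distinctions are expressed declaratively and the search is delegated to an answer set solver.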

    (I) A Declarative Framework for ERP Systems(II) Reactors: A Data-Driven Programming Model for Distributed Applications

    To those who can be swayed by argument and those who know they do not have all the answers. This dissertation is a collection of six adapted research papers pertaining to two areas of research. (I) A Declarative Framework for ERP Systems:
    • POETS: Process-Oriented Event-driven Transaction Systems. The paper describes an ontological analysis of a small segment of the enterprise domain, namely the general ledger and accounts receivable. The result is an event-based approach to designing ERP systems and an abstract-level sketch of the architecture.
    • Compositional Specification of Commercial Contracts. The paper describes the design, multiple semantics, and use of a domain-specific language (DSL) for modeling commercial contracts.
    • SMAWL: A SMAll Workflow Language Based on CCS. The paper shows

    An Agent-Based Variogram Modeller: Investigating Intelligent, Distributed-Component Geographical Information Systems

    Geo-Information Science (GIScience) is the field of study that addresses substantive questions concerning the handling, analysis and visualisation of spatial data. Geo-Information Systems (GIS), comprising software, data acquisition and organisational arrangements, are the key technologies underpinning GIScience. A GIS is normally tailored to the service it is supposed to perform; however, a function is often needed that the GIS tool in use does not support. The usual solution in these circumstances is to look for another tool that can perform the service, and often for an expert to use that tool, which is expensive, time consuming and stressful for the geographical data analysts. GIS is also often used in conjunction with other technologies to form a geocomputational environment. One of the complex tools in geocomputation is geostatistics, one of whose functions is to determine the extent of spatial dependencies within geographical data and processes. Spatial datasets are often large and complex. Agent systems are currently being integrated into GIS to offer flexibility and allow better data analysis. The thesis surveys the current applications of agents within the GIS community and determines whether they are used to represent data, to represent processes, or to act as services. It then investigates the applicability of the agent-oriented paradigm as the basis of a service-based GIS, with the potential to provide greater interoperability and to reduce resource requirements (both human and tools). In particular, analysis was undertaken to determine which enhanced features agents need in order to maximise their effectiveness in GIS. This was achieved by addressing the complexity of software agent design and implementation for the GIS environment and by suggesting possible solutions to the problems encountered. The characteristics and features of software agents (including the dynamic binding of plans to agents in order to tackle the levels of complexity and range of contexts) were examined, alongside a discussion of current GIScience and of the applications of agent technology to GIS, with agents as entities, objects and processes. These concepts and their functions within GIS are then analysed and discussed. The extent of agent functionality, an analysis of the gaps, and the use of these technologies to deliver a distributed, service-providing agent-based GIS framework are then presented. Thus, a general agent-based framework for GIS, together with a novel agent-based architecture for a specific part of GIS, the variogram, was devised to examine the applicability of the agent-oriented paradigm to GIS. The current mechanisms for constructing variograms and their underlying processes and functions were examined, and these processes were then embedded into the novel agent architecture. Once the software agent implementation had been completed, the corresponding tool was tested and validated, internally for code errors and externally to determine whether it meets its functional requirements and enhances the GIS process of dealing with data. Thereafter, it is compared with other known service-based GIS agents and its advantages and disadvantages are analysed.
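
    As a small, hedged sketch of the geostatistical core the agents are wrapped around, the following Python code computes an empirical (experimental) semivariogram: point pairs are grouped into lag-distance bins, and for each bin gamma(h) is the sum of (z_i - z_j)^2 over its pairs divided by twice the pair count. The sample points, values and bin width are invented for the example.

        import math
        from collections import defaultdict

        def empirical_variogram(points, values, lag_width):
            """Return {bin centre: semivariance} for 2-D sample points and their values."""
            sums, counts = defaultdict(float), defaultdict(int)
            for i in range(len(points)):
                for j in range(i + 1, len(points)):
                    (x1, y1), (x2, y2) = points[i], points[j]
                    h = math.hypot(x1 - x2, y1 - y2)     # separation (lag) distance
                    b = int(h // lag_width)              # index of the distance bin
                    sums[b] += (values[i] - values[j]) ** 2
                    counts[b] += 1
            return {(b + 0.5) * lag_width: sums[b] / (2 * counts[b]) for b in sums}

        points = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
        values = [1.0, 1.2, 1.9, 0.9, 1.4, 2.1]
        for lag, gamma in sorted(empirical_variogram(points, values, lag_width=1.0).items()):
            print(f"lag ~{lag:.1f}: gamma = {gamma:.3f}")

    An agent-based variogram modeller in the sense of the thesis would then fit a variogram model (e.g. spherical or exponential) to these binned estimates and offer the result as a service to other GIS components; that fitting step is omitted here.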