
    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature. (48 pages; to appear in the journal "Theory and Practice of Logic Programming".)
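
    The contrast between the two problem classes can be made concrete with a small sketch. The following Python fragment (Python stands in for Oz here, whose syntax is not shown; the function names are illustrative only) solves an algorithmic problem with a known efficient algorithm, and a search problem by backtracking over choices, mirroring the separation the abstract attributes to Oz:

        # Hedged sketch: Python stand-in for the algorithmic/search split.

        def gcd(a: int, b: int) -> int:
            """Algorithmic problem: a known efficient algorithm exists,
            so no search is needed."""
            while b:
                a, b = b, a % b
            return a

        def color(graph, colors, assignment=None):
            """Search problem: no known efficient algorithm, so we explore
            choices and backtrack when a constraint is violated."""
            assignment = assignment or {}
            if len(assignment) == len(graph):
                return assignment                      # every node colored
            node = next(n for n in graph if n not in assignment)
            for c in colors:
                if all(assignment.get(nb) != c for nb in graph[node]):
                    result = color(graph, colors, {**assignment, node: c})
                    if result is not None:
                        return result
            return None                                # dead end: backtrack

        print(gcd(48, 18))                             # 6
        print(color({"a": ["b"], "b": ["a", "c"], "c": ["b"]},
                    ["red", "green"]))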

    Data Integration on the (Semantic) Web with Rules and Rich Unification

    For the last decade a multitude of new data formats for the World Wide Web have been developed, and a huge amount of heterogeneous semi-structured data is flourishing online. With the ever increasing number of documents on the Web, rules have been identified as the means of choice for reasoning about this data, transforming and integrating it. Query languages such as SPARQL and rule languages such as Xcerpt use compound queries that are matched or unified with semi-structured data. This notion of unification differs from the one known from logic programming engines in that it (i) provides constructs that allow queries to be incomplete in several ways, (ii) allows variables to have different types, (iii) results in sets of substitutions for the variables in the query instead of a single substitution, and (iv) makes subsumption between queries much harder to decide than in logic programming. This thesis abstracts from Xcerpt query term simulation, SPARQL graph pattern matching, and XPath XML document matching, and shows that all of them can be considered a form of rich unification. Given a set of mappings between substitution sets of different languages, this abstraction opens up the possibility of format-versatile querying, i.e., the combination of queries in different formats, or the transformation of one format into another within a single rule. To show the superiority of this approach, this thesis introduces an extension of Xcerpt called Xcrdf, and describes use cases for the combined querying and integration of RDF and XML data. With XML being the predominant Web format and RDF the predominant Semantic Web format, Xcrdf extends Xcerpt by a set of RDF query terms and construct terms, including query primitives for RDF containers, collections, and reifications. Moreover, Xcrdf includes an RDF path query language called RPL that is more expressive than previously proposed polynomial-time RDF path query languages, but can still be evaluated in polynomial time combined complexity. Besides introducing this framework for data integration based on rich unification, this thesis extends the theoretical knowledge about Xcerpt in several ways: we show that Xcerpt simulation unification is decidable, and give complexity bounds for subsumption in several fragments of Xcerpt query terms. The proof is based on a set of subsumption-monotone query term transformations, and is only feasible because of the injectivity requirement on subterms of Xcerpt queries. The proof gives rise to an algorithm for deciding Xcerpt query term simulation. Moreover, we give a semantics to locally and weakly stratified Xcerpt programs; this semantics is applicable not only to Xcerpt, but to any rule language with rich unification, including multi-rule SPARQL programs. Finally, we show how Xcerpt grouping stratification can be reduced to Xcerpt negation stratification, thereby also introducing the notions of local grouping stratification and weak grouping stratification.
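
    The notion of rich unification sketched in points (i)-(iii) can be illustrated with a toy matcher. The fragment below is a hedged sketch under simplifying assumptions (a flat term model invented for illustration, not Xcerpt's actual query term simulation): an incomplete query matches an injective selection of a data term's children, and the result is a set of substitutions rather than a single most general unifier:

        # Toy model of rich unification (illustrative only).
        from itertools import permutations

        def match(query, data):
            """Yield every substitution under which `query` matches `data`.
            A query (label, children, total) is incomplete when total is
            False: its children may match any injective, unordered choice
            of the data term's children."""
            label, q_children, total = query
            d_label, d_children = data
            if label != d_label or len(q_children) > len(d_children):
                return
            if total and len(q_children) != len(d_children):
                return
            for targets in permutations(d_children, len(q_children)):
                subst = {}
                if all(bind(q, d, subst) for q, d in zip(q_children, targets)):
                    yield dict(subst)

        def bind(q, d, subst):
            if q.startswith("?"):           # query variable
                if q in subst:
                    return subst[q] == d    # repeated variable binds consistently
                subst[q] = d
                return True
            return q == d                   # ground child must match exactly

        # An incomplete query yields a *set* of substitutions:
        print(list(match(("book", ["?who"], False), ("book", ["Smith", "Jones"]))))
        # [{'?who': 'Smith'}, {'?who': 'Jones'}]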

    Towards flexible goal-oriented logic programming


    Automatic Structured Text Summarization with Concept Maps

    Efficiently exploring a collection of text documents in order to answer a complex question is a challenge that many people face. As abundant information on almost any topic is electronically available nowadays, supporting tools are needed to ensure that people can profit from the information's availability rather than suffer from the information overload. Structured summaries can help in this situation: They can be used to provide a concise overview of the contents of a document collection, they can reveal interesting relationships and they can be used as a navigation structure to further explore the documents. A concept map, which is a graph representing concepts and their relationships, is a specific form of a structured summary that offers these benefits. However, despite its appealing properties, only a limited amount of research has studied how concept maps can be automatically created to summarize documents. Automating that task is challenging and requires a variety of text processing techniques including information extraction, coreference resolution and summarization. The goal of this thesis is to better understand these challenges and to develop computational models that can address them. As a first contribution, this thesis lays the necessary ground for comparable research on computational models for concept map-based summarization. We propose a precise definition of the task together with suitable evaluation protocols and carry out experimental comparisons of previously proposed methods. As a result, we point out limitations of existing methods and gaps that have to be closed to successfully create summary concept maps. Towards that end, we also release a new benchmark corpus for the task that has been created with a novel, scalable crowdsourcing strategy. Furthermore, we propose new techniques for several subtasks of creating summary concept maps. First, we introduce the usage of predicate-argument analysis for the extraction of concept and relation mentions, which greatly simplifies the development of extraction methods. Second, we demonstrate that a predicate-argument analysis tool can be ported from English to German with low effort, indicating that the extraction technique can also be applied to other languages. We further propose to group concept mentions using pairwise classifications and set partitioning, which significantly improves the quality of the created summary concept maps. We show similar improvements for a new supervised importance estimation model and an optimal subgraph selection procedure. By combining these techniques in a pipeline, we establish a new state-of-the-art for the summarization task. Additionally, we study the use of neural networks to model the summarization problem as a single end-to-end task. While such approaches are not yet competitive with pipeline-based approaches, we report several experiments that illustrate the challenges, mostly related to training data, that currently limit the performance of this technique. We conclude the thesis by presenting a prototype system that demonstrates the use of automatically generated summary concept maps in practice and by pointing out promising directions for future research on the topic of this thesis.
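
    The pipeline the thesis describes (mention extraction, grouping, importance estimation, subgraph selection) can be outlined in code. The following Python fragment is a deliberately tiny, runnable sketch in which every stage is a stand-in heuristic, not the thesis' learned models:

        import re
        from collections import Counter

        def extract_propositions(sentence):
            """Stand-in for predicate-argument analysis: one (subject,
            relation, object) triple per simple sentence."""
            m = re.match(r"(.+?)\s+(is|are|has|have|causes?)\s+(.+)", sentence)
            return [m.groups()] if m else []

        def summarize(sentences, budget):
            props = [p for s in sentences for p in extract_propositions(s)]
            # Stand-ins: mention frequency instead of the supervised
            # importance estimator, and top-k edges instead of optimal
            # subgraph selection; mention grouping is omitted entirely.
            freq = Counter(c for subj, _, obj in props for c in (subj, obj))
            return sorted(props, key=lambda p: freq[p[0]] + freq[p[2]],
                          reverse=True)[:budget]

        print(summarize(["coffee has caffeine",
                         "caffeine causes alertness",
                         "tea has caffeine"], budget=2))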

    Modern techniques for constraint solving the CASPER experience

    Dissertation presented to obtain the degree of Doctor in Computer Science (Engenharia Informática) at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. Constraint programming is a well known paradigm for addressing combinatorial problems which has enjoyed considerable success for solving many relevant industrial and academic problems. At the heart of constraint programming lies the constraint solver, a computer program which attempts to find a solution to the problem, i.e., an assignment to all the variables in the problem such that all the constraints are satisfied. This dissertation describes a set of techniques to be used in the implementation of a constraint solver. These techniques aim at making a constraint solver more extensible and efficient, two properties which are hard to integrate in general, and in particular within a constraint solver. Specifically, this dissertation addresses two major problems: generic incremental propagation and propagation of arbitrary decomposable constraints. For both problems we present a set of techniques which are novel, correct, and directly concerned with extensibility and efficiency. All the material in this dissertation emerged from our work in designing and implementing a generic constraint solver. The CASPER (Constraint Solving Platform for Engineering and Research) solver not only acts as a proof of concept for the presented techniques, but also served as the common test platform for the many theoretical models discussed. Besides the work related to the design and implementation of a constraint solver, this dissertation also presents the first successful application of the resulting platform to an open research problem, namely finding good heuristics for efficiently directing search towards a solution.
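
    The generic incremental propagation the dissertation addresses revolves around a fixpoint loop: propagators prune variable domains, and only the propagators subscribed to a changed variable are woken again. The fragment below is a minimal Python sketch of that loop (CASPER itself is a C++ platform and far richer; all names here are illustrative):

        def propagate(domains, propagators, subscriptions):
            """domains: var -> set of values; propagators: id -> filter
            returning the vars it pruned; subscriptions: var -> ids of
            the propagators watching that var."""
            queue = set(propagators)                 # initially run everything
            while queue:
                pid = queue.pop()
                changed = propagators[pid](domains)  # prune, report changes
                if any(not domains[v] for v in changed):
                    return False                     # domain wipe-out: failure
                for v in changed:                    # incremental: wake only
                    queue |= set(subscriptions.get(v, ()))  # affected propagators
            return True                              # fixpoint reached

        def less(x, y):                              # propagator for x < y
            def run(domains):
                changed = []
                hi, lo = max(domains[y]), min(domains[x])
                new_x = {v for v in domains[x] if v < hi}
                new_y = {v for v in domains[y] if v > lo}
                if new_x != domains[x]: domains[x] = new_x; changed.append(x)
                if new_y != domains[y]: domains[y] = new_y; changed.append(y)
                return changed
            return run

        domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
        print(propagate(domains, {0: less("x", "y")}, {"x": {0}, "y": {0}}),
              domains)                               # True {'x': {1, 2}, 'y': {2, 3}}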

    Programming Languages and Systems

    This open access book constitutes the proceedings of the 29th European Symposium on Programming, ESOP 2020, which was planned to take place in Dublin, Ireland, in April 2020, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The actual ETAPS 2020 meeting was postponed due to the COVID-19 pandemic. The papers deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed for every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    Geographic information extraction from texts

    A large volume of unstructured texts, containing valuable geographic information, is available online. This information, provided implicitly or explicitly, is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although substantial progress has been achieved in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, and to identify research gaps in geographic information extraction.

    A mechanization of sorted higher-order logic based on the resolution principle

    The usage of sorts in first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing the search spaces involved. This suggests that sort information can be employed in higher-order theorem proving with similar results. This thesis develops a sorted higher-order logic ΣHOL suitable for automatic theorem proving applications. ΣHOL is based on a sorted λ-calculus ΣΛ→, which is obtained by extending Church's simply typed λ-calculus by a higher-order sort concept including term declarations and functional base sorts. The term declaration mechanism studied here is powerful enough to allow convenient formalization of a large body of mathematics, since it offers natural primitives for domains and codomains of functions, and allows function restriction to be treated. Furthermore, it subsumes most other mechanisms for the declaration of sort information known from the literature, and can thus serve as a general framework for the study of sorted higher-order logics. For instance, the term declaration mechanism of ΣHOL subsumes the subsorting mechanism as a derived notion, and hence justifies our special form of subsort inference. We present sets of transformations for sorted higher-order unification and pre-unification, and prove the nondeterministic completeness of the algorithm induced by these transformations. The main technical difficulty of unification in ΣΛ→ is that the analysis of general bindings is much more involved than in the unsorted case, since in the presence of term declarations well-sortedness is not a structural property. This difficulty is overcome by a structure theorem that links the structure of a formula to the structure of its sorting derivation. We develop two notions of set-theoretic semantics for ΣHOL. General Σ-models are a direct generalization of Henkin's general models to the sorted setting. Since no known machine-oriented calculus can adequately mechanize full extensionality, we generalize general Σ-models further to Σ-model structures, which allow full extensionality to fail. The notions of Σ-model structures and general Σ-models allow us to prove model existence theorems for them. These model-theoretic variants of Andrews' "unifying principle for type theory" can be used as a powerful tool in completeness proofs of higher-order calculi. Finally, we use our pre-unification algorithms as a central inference procedure for a sorted higher-order resolution calculus in the spirit of Huet's Constrained Resolution. This calculus is proven sound and complete with respect to our semantics. It differs from Huet's calculus by allowing early unification strategies and using variable dependencies. For the completeness proof we make use of our model existence theorem, and prove a strong lifting lemma.
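
    How a term declaration subsumes subsorting, as claimed above, can be shown in one line. The following is a sketch in standard sorted-logic notation (not necessarily the thesis' exact symbols):

        % Sketch: declaring that every term of sort A also has sort B is
        % exactly the content of a subsort declaration A ⊑ B, so subsorting
        % becomes a derived notion of the term declaration mechanism.
        \[
          \underbrace{\forall x_{A}.\; x :: B}_{\text{term declaration}}
          \qquad\text{encodes}\qquad
          \underbrace{A \sqsubseteq B}_{\text{subsort declaration}}
        \]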
