
    Solving stable matching problems using answer set programming

    Since the introduction of the stable marriage problem (SMP) by Gale and Shapley (1962), several variants and extensions have been investigated. While this variety is useful to widen the application potential, each variant requires a new algorithm for finding the stable matchings. To address this issue, we propose an encoding of the SMP using answer set programming (ASP), which can straightforwardly be adapted and extended to suit the needs of specific applications. The use of ASP also means that we can take advantage of highly efficient off-the-shelf solvers. To illustrate the flexibility of our approach, we show how our ASP encoding naturally allows us to select optimal stable matchings, i.e. matchings that are optimal according to some user-specified criterion. To the best of our knowledge, our encoding offers the first exact implementation to find sex-equal, minimum regret, egalitarian or maximum cardinality stable matchings for SMP instances in which individuals may designate unacceptable partners and ties between preferences are allowed. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
    Comment: Under consideration in Theory and Practice of Logic Programming (TPLP). arXiv admin note: substantial text overlap with arXiv:1302.725
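    For reference, a minimal sketch of how the classic SMP (complete preference lists, no ties) can be encoded in clingo-style ASP; the predicates man/1, woman/1, mrank/3, wrank/3 and match/2 are our own illustrative choices, not the paper's encoding, which additionally handles unacceptable partners and ties:

        % mrank(M,W,R): man M ranks woman W at position R (lower is better); wrank/3 analogously.
        1 { match(M,W) : woman(W) } 1 :- man(M).
        :- match(M1,W), match(M2,W), M1 != M2.      % each woman has at most one partner

        % Stability: forbid blocking pairs, i.e. pairs (M,W) who both strictly
        % prefer each other to their assigned partners.
        :- match(M,W1), mrank(M,W,R1), mrank(M,W1,R2), R1 < R2,
           match(M2,W), wrank(W,M,R3), wrank(W,M2,R4), R3 < R4.

        % Optimal stable matchings can be selected with optimisation statements,
        % e.g. an egalitarian matching minimises the summed ranks of all pairs:
        #minimize { R1+R2,M,W : match(M,W), mrank(M,W,R1), wrank(W,M,R2) }.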

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
    Comment: Accepted for the machine learning journal
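    To make the constrain stage concrete, the sketch below shows an ASP-style constraint of the kind described above; the meta-predicates head_pred/2 and body_pred/2 are hypothetical and variable bindings are ignored, so this is a schematic illustration rather than Popper's actual meta-encoding:

        % Suppose the failed, too-general hypothesis was the single clause
        %   f(A,B) :- tail(A,B).
        % Any generalisation keeps the head and uses a subset of that body, so
        % programs whose clause for f has no body literal outside {tail} are pruned:
        :- head_pred(C,f), #count{ P : body_pred(C,P), P != tail } = 0.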

    Using ASP with recent extensions for causal explanations

    We examine how practical it is for a user to represent logical formalisms in Answer Set Programming (ASP). As an example, we choose a formalism aiming at capturing causal explanations from causal information. We provide an implementation, showing the naturalness and relative efficiency of this translation. We are interested in the ease of writing an ASP program, in accordance with the claimed ``declarative'' aspect of ASP. Limitations of earlier systems (poor data structures and difficulty in reusing pieces of programs) meant that, in practice, the ``declarative'' aspect was more theoretical than practical. We show how recent improvements in working ASP systems greatly facilitate the translation, even if a few further improvements would still be useful.
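    The abstract does not name the specific extensions it relies on; purely as an illustration of the kind of facility modern clingo-style systems offer for richer data structures and program reuse, one might write the following (the file name and predicates are hypothetical):

        #include "causal_base.lp".        % reuse a separately maintained module
        % Nested function terms act as structured data for explanations:
        explanation(because(Cause, on(Effect, T))) :-
            causes(Cause, Effect), holds(Cause, T), time(T).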

    Resolving the Controversy Over the Core Radius of 47 Tucanae (NGC 104)

    This paper investigates the discrepancy between recent measurements of the density profile of the globular cluster 47 Tuc that have used HST data sets. Guhathakurta et al. (1992) used pre-refurbishment WFPC1 V-band images to derive r_c = 23" +/- 2". Calzetti et al. (1993) suggested that the density profile is a superposition of two King profiles (r_c = 8" and r_c = 25") based on U-band FOC images. De Marchi et al. (1996) used deep WFPC1 U-band images to derive r_c = 12" +/- 2". Differences in the adopted cluster centers are not the cause of the discrepancy. Our independent analysis of the data used by De Marchi et al. reaches the following conclusions: (1) De Marchi et al.'s r_c ~ 12" value is spuriously low, a result of radially-varying bias in the star counts in a magnitude-limited sample -- photometric errors and a steeply rising stellar luminosity function cause more stars to scatter across the limiting magnitude into the sample than out of it, especially near the cluster center where crowding effects are most severe. (2) Changing the limiting magnitude to the main sequence turnoff, away from the steep part of the luminosity function, partially alleviates the problem and results in r_c = 18". (3) Combining such a limiting magnitude with accurate photometry derived from PSF fitting, instead of the less accurate aperture photometry employed by De Marchi et al., results in a reliable measurement of the density profile which is well fit by r_c = 22" +/- 2". Archival WFPC2 data are used to derive a star list with a higher degree of completeness, greater photometric accuracy, and wider areal coverage than the WFPC1 and FOC data sets; the WFPC2-based density profile supports the above conclusions, yielding r_c = 24" +/- 1.9".
    Comment: 22 pages, 5 figures, 1 table; accepted for publication in PASP; see http://www.ucolick.org/~raja/hgg.tar.gz for full-resolution figure
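    For readers unfamiliar with the notation, the core radius r_c quoted throughout is the scale parameter of the empirical King (1962) surface-density profile; the textbook form (not quoted from the paper) is

        \Sigma(r) = \Sigma_0 \left[ \frac{1}{\sqrt{1 + (r/r_c)^2}} - \frac{1}{\sqrt{1 + (r_t/r_c)^2}} \right]^2 ,

    where r_t is the tidal radius; a larger r_c corresponds to a flatter, more extended core.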

    On the selection of secondary indices in relational databases

    An important problem in the physical design of databases is the selection of secondary indices. In general, this problem cannot be solved optimally due to the complexity of the selection process. Heuristics such as the well-known ADD and DROP algorithms are therefore often used. In this paper it will be shown that frequently used cost functions can be classified as super- or submodular functions. For these functions, several mathematical properties have been derived which reduce the complexity of the index selection problem. These properties will be used to develop a tool for physical database design and also give a mathematical foundation for the success of the aforementioned ADD and DROP algorithms.
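    For reference, the standard "diminishing returns" definition (ours, not quoted from the paper): a cost function C over sets of candidate indices is submodular if

        C(S \cup \{i\}) - C(S) \;\ge\; C(T \cup \{i\}) - C(T) \qquad \text{for all } S \subseteq T,\ i \notin T,

    and supermodular if the inequality is reversed.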

    On the Semantics of Gringo

    Input languages of answer set solvers are based on the mathematically simple concept of a stable model. But many useful constructs available in these languages, including local variables, conditional literals, and aggregates, cannot be easily explained in terms of stable models in the sense of the original definition of this concept and its straightforward generalizations. Manuals written by designers of answer set solvers usually explain such constructs using examples and informal comments that appeal to the user's intuition, without references to any precise semantics. We propose to approach the problem of defining the semantics of gringo programs by translating them into the language of infinitary propositional formulas. This semantics allows us to study equivalent transformations of gringo programs using natural deduction in infinitary propositional logic.
    Comment: Proceedings of Answer Set Programming and Other Computing Paradigms (ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey
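    An illustrative gringo fragment (ours, not from the paper) using the constructs in question; under the proposed semantics, the conditional literal below is read, roughly, as the possibly infinite conjunction of implications edge(x,y) -> in(y) over all ground terms y:

        covered(X) :- node(X); in(Y) : edge(X,Y).    % conditional literal with local variable Y
        size(N) :- N = #count{ X : in(X) }.          % aggregate with local variable X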

    Logic Programming Applications: What Are the Abstractions and Implementations?

    This article presents an overview of applications of logic programming, classifying them based on the abstractions and implementations of logic languages that support the applications. The three key abstractions are join, recursion, and constraint. Their essential implementations are for-loops, fixed points, and backtracking, respectively. The corresponding kinds of applications are database queries, inductive analysis, and combinatorial search, respectively. We also discuss language extensions and programming paradigms, summarize example application problems by application areas, and touch on example systems that support variants of the abstractions with different implementations.
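    A compact illustration of the three abstractions in Datalog/ASP notation (the rules are ours, chosen to match the article's classification):

        % Join (database queries), implemented with for-loops:
        grandparent(X,Z) :- parent(X,Y), parent(Y,Z).

        % Recursion (inductive analysis), implemented as a fixed point:
        reach(X,Y) :- edge(X,Y).
        reach(X,Z) :- reach(X,Y), edge(Y,Z).

        % Constraint (combinatorial search), implemented by backtracking:
        { colour(X,C) : col(C) } = 1 :- node(X).
        :- edge(X,Y), colour(X,C), colour(Y,C).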