
    Proving Correctness and Completeness of Normal Programs - a Declarative Approach

    We advocate a declarative approach to proving properties of logic programs. Total correctness can be separated into correctness, completeness and clean termination; the latter includes non-floundering. Only clean termination depends on the operational semantics, in particular on the selection rule. We show how to deal with correctness and completeness in a declarative way, treating programs only from the logical point of view. Specifications used in this approach are interpretations (or theories). We point out that specifications for correctness may differ from those for completeness, as usually there are answers which are neither considered erroneous nor required to be computed. We present proof methods for correctness and completeness for definite programs and generalize them to normal programs. For normal programs we use the 3-valued completion semantics; this is a standard semantics corresponding to negation as finite failure. The proof methods employ solely the classical 2-valued logic. We use a 2-valued characterization of the 3-valued completion semantics which may be of separate interest. The presented methods are compared with an approach based on operational semantics. We also employ the ideas of this work to generalize a known method of proving termination of normal programs.

    Comment: To appear in Theory and Practice of Logic Programming (TPLP). 44 pages.
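The separation of correctness (every computed answer is specified) from completeness (every specified answer is computed) can be illustrated on a finite slice of a definite program's Herbrand base. The sketch below is illustrative only, not the paper's method: it computes the least Herbrand model as the least fixpoint of the immediate-consequence operator T_P and compares it with a specification interpretation; the program and all names are my own toy example.

```python
# Illustrative sketch: correctness/completeness of a definite program
# w.r.t. a specification interpretation, on a finite set of ground atoms.
# Toy program: even(0).  even(s(X)) :- odd(X).  odd(s(X)) :- even(X).

def tp(program, interp):
    """Immediate-consequence operator T_P: heads whose bodies hold in interp."""
    return {head for head, body in program if body <= interp}

def least_model(program):
    """Least Herbrand model as the least fixpoint of T_P (Kleene iteration)."""
    m = set()
    while True:
        nxt = tp(program, m)
        if nxt == m:
            return m
        m = nxt

# Ground instances up to s(s(0)) -- a finite slice of the Herbrand base.
program = [
    (("even", "0"), set()),
    (("even", "s(0)"), {("odd", "0")}),
    (("odd", "s(0)"), {("even", "0")}),
    (("even", "s(s(0))"), {("odd", "s(0)")}),
    (("odd", "s(s(0))"), {("even", "s(0)")}),
]

# Specification interpretation: the atoms we require/allow to be answers.
spec = {("even", "0"), ("odd", "s(0)"), ("even", "s(s(0))")}

m = least_model(program)
print("correct:", m <= spec)    # every computed atom is in the specification
print("complete:", spec <= m)   # every specified atom is computed
```

In this toy case the least model coincides with the specification, so the program is both correct and complete with respect to it; in general, as the abstract notes, the two specifications may legitimately differ.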

    Extending Similarity Measures of Interval Type-2 Fuzzy Sets to General Type-2 Fuzzy Sets

    Similarity measures provide one of the core tools that enable reasoning about fuzzy sets. While many types of similarity measures exist for type-1 and interval type-2 fuzzy sets, there are very few similarity measures that enable the comparison of general type-2 fuzzy sets. In this paper, we introduce a general method for extending existing interval type-2 similarity measures to similarity measures for general type-2 fuzzy sets. Specifically, we show how similarity measures for interval type-2 fuzzy sets can be employed in conjunction with the zSlices based general type-2 representation for fuzzy sets to provide measures of similarity which preserve all the common properties (i.e. reflexivity, symmetry, transitivity and overlapping) of the original interval type-2 similarity measure. We demonstrate examples of such extended fuzzy measures and provide comparisons between (different types of) interval and general type-2 fuzzy measures.

    Comment: International Conference on Fuzzy Systems 2013 (Fuzz-IEEE 2013)
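The extension idea can be sketched concretely: apply an interval type-2 similarity measure slice-by-slice to the zSlices of two general type-2 sets, then aggregate the per-slice results weighted by their z-levels. The measure, the aggregation, and all data below are my own minimal assumptions (a Jaccard-style measure over discretized membership intervals), not the paper's definitions.

```python
# Illustrative sketch: extending an interval type-2 (IT2) similarity
# measure to general type-2 (GT2) fuzzy sets via their zSlices.

def jaccard_it2(a, b):
    """Jaccard-style similarity of two IT2 sets, each given as a list of
    (lower, upper) membership pairs over the same discretized domain."""
    num = sum(min(al, bl) + min(au, bu) for (al, au), (bl, bu) in zip(a, b))
    den = sum(max(al, bl) + max(au, bu) for (al, au), (bl, bu) in zip(a, b))
    return num / den if den else 1.0

def similarity_gt2(slices_a, slices_b, z_levels):
    """z-level-weighted average of IT2 similarities over matching zSlices."""
    num = sum(z * jaccard_it2(sa, sb)
              for z, sa, sb in zip(z_levels, slices_a, slices_b))
    return num / sum(z_levels)

# Two toy GT2 sets, each with two zSlices over a 3-point domain.
z_levels = [0.5, 1.0]
A = [[(0.2, 0.8), (0.5, 1.0), (0.1, 0.6)],   # slice at z=0.5 (wider)
     [(0.4, 0.6), (0.7, 0.9), (0.2, 0.4)]]   # slice at z=1.0 (narrower)
B = [[(0.3, 0.7), (0.4, 0.9), (0.2, 0.5)],
     [(0.4, 0.5), (0.6, 0.8), (0.3, 0.4)]]

print(similarity_gt2(A, B, z_levels))
```

Because each zSlice is itself an interval type-2 set, any IT2 measure can be slotted in for `jaccard_it2`; properties such as reflexivity (a set compared with itself yields 1) carry over slice-by-slice.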

    Bayesian Updating, Model Class Selection and Robust Stochastic Predictions of Structural Response

    A fundamental issue when predicting structural response by using mathematical models is how to treat both modeling and excitation uncertainty. A general framework for this is presented which uses probability as a multi-valued conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The fundamental probability models that represent the structure’s uncertain behavior are specified by the choice of a stochastic system model class: a set of input-output probability models for the structure and a prior probability distribution over this set that quantifies the relative plausibility of each model. A model class can be constructed from a parameterized deterministic structural model by stochastic embedding utilizing Jaynes’ Principle of Maximum Information Entropy. Robust predictive analyses use the entire model class with the probabilistic predictions of each model being weighted by its prior probability, or if structural response data is available, by its posterior probability from Bayes’ Theorem for the model class. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates weighted by the prior or posterior probability of the model class, the latter being computed from Bayes’ Theorem. This higher-level application of Bayes’ Theorem automatically applies a quantitative Ockham's razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically. Published applications have used Laplace's method of asymptotic approximation or Markov Chain Monte Carlo algorithms.
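The higher-level application of Bayes' Theorem described above reduces, in the discrete case, to posterior model-class averaging: each class's posterior weight is proportional to its evidence times its prior, and robust predictions average the per-class predictions by those weights. The sketch below is a minimal numerical illustration under assumed values; the log-evidences and predictions are hypothetical placeholders, since in practice (as the abstract notes) they come from Laplace asymptotics or Markov Chain Monte Carlo.

```python
# Illustrative sketch: posterior model-class weighting and robust
# prediction by Bayesian model averaging over competing model classes.
import math

# Hypothetical log-evidences log p(D | M_j) for three model classes.
log_evidence = [-12.1, -10.4, -11.0]
prior = [1 / 3, 1 / 3, 1 / 3]          # equal prior plausibility

# Posterior P(M_j | D) proportional to p(D | M_j) P(M_j), via Bayes' Theorem,
# computed with max-subtraction for numerical stability.
log_post = [le + math.log(p) for le, p in zip(log_evidence, prior)]
shift = max(log_post)
w = [math.exp(lp - shift) for lp in log_post]
post = [wi / sum(w) for wi in w]

# Robust prediction: weight each class's predictive mean by its posterior.
pred_per_class = [0.92, 1.05, 0.99]    # hypothetical predictions
robust_pred = sum(p * y for p, y in zip(post, pred_per_class))
print(post, robust_pred)
```

Note that the Ockham-razor penalty is implicit in the evidence values themselves: an over-parameterized class that fits the data only slightly better typically has a lower evidence, and hence a lower posterior weight, than a simpler competitor.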

    Knowledge Compilation of Logic Programs Using Approximation Fixpoint Theory

    Recent advances in knowledge compilation introduced techniques to compile \emph{positive} logic programs into propositional logic, essentially exploiting the constructive nature of the least fixpoint computation. This approach has several advantages over existing approaches: it maintains logical equivalence, does not require (expensive) loop-breaking preprocessing or the introduction of auxiliary variables, and significantly outperforms existing algorithms. Unfortunately, this technique is limited to \emph{negation-free} programs. In this paper, we show how to extend it to general logic programs under the well-founded semantics. We develop our work in approximation fixpoint theory, an algebraic framework that unifies semantics of different logics. As such, our algebraic results are also applicable to autoepistemic logic, default logic and abstract dialectical frameworks.

    Comment: To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 2015
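For the positive-program case, the constructive least-fixpoint idea can be made concrete: iterate the rules bottom-up, expanding each defined atom into a propositional formula over the program's "open" (input) atoms until the formulas stabilize. The sketch below is my own toy rendering of that idea, not the paper's algorithm; it represents formulas as monotone DNF (a set of frozensets, each frozenset one conjunction of open atoms) so the fixpoint iteration terminates syntactically.

```python
# Illustrative sketch: compiling a positive logic program into
# propositional formulas (monotone DNF over open atoms) by iterating
# the rules bottom-up, mirroring the least-fixpoint computation.

# Each rule is head <- conjunction of body atoms; 'p', 'q' are open atoms.
rules = {
    "a": [["p"], ["b", "q"]],   # a <- p.   a <- b, q.
    "b": [["a", "q"]],          # b <- a, q.
}
open_atoms = {"p", "q"}

def compile_program(rules, open_atoms):
    dnf = {h: set() for h in rules}       # every defined atom starts at 'false'
    while True:
        new = {}
        for head, bodies in rules.items():
            disjuncts = set()
            for body in bodies:
                # Open atoms stay literal; defined atoms expand to their DNF.
                parts = [{frozenset([a])} if a in open_atoms else dnf[a]
                         for a in body]
                # Conjoin the parts: cross-product of their disjuncts.
                combos = [frozenset()]
                for part in parts:
                    combos = [c | d for c in combos for d in part]
                disjuncts |= set(combos)
            new[head] = disjuncts
        if new == dnf:                    # fixpoint reached: formulas stable
            return dnf
        dnf = new

result = compile_program(rules, open_atoms)
print(result["a"], result["b"])
```

Here `a` compiles to the DNF {p, p∧q} (logically just p) and `b` to {p∧q}, and the compiled formulas are equivalent to the program's least-model semantics for every truth assignment to the open atoms; handling negation under the well-founded semantics is exactly the extension the paper develops.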