
    String-based Multi-adjoint Lattices for Tracing Fuzzy Logic Computations

    Classically, most programming languages use, in a predefined way, the notion of “string” as a standard data structure for the comfortable management of arbitrary sequences of characters. However, in this paper we assign a different role to this concept: here we are concerned with fuzzy logic programming, a somewhat recent paradigm trying to introduce fuzzy logic into logic programming. In this setting, the mathematical concept of multi-adjoint lattice has been successfully exploited in the so-called Multi-adjoint Logic Programming approach, MALP in brief, for modeling flexible notions of truth degrees beyond the simpler case of true and false. Our main contribution is not only a formal proof that string-based lattices (as well as their Cartesian products with similar structures) satisfy the so-called multi-adjoint property, but also their correspondence with interesting debugging tasks in the FLOPER system (from “Fuzzy LOgic Programming Environment for Research”) developed in our research group.
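For readers unfamiliar with the multi-adjoint property mentioned above, the core idea of an adjoint pair on the lattice [0, 1] can be sketched as follows. This is a generic illustration using the product t-norm and its Goguen residuum, standard fuzzy-logic material rather than code from FLOPER:

```python
# An adjoint pair on [0, 1]: a conjunctor `conj` and an implication
# `impl` such that   x <= impl(z, y)   iff   conj(x, z) <= y.
# Here: product t-norm with its Goguen residuum (one standard example).

def conj(x, z):
    """Product t-norm: a fuzzy conjunction on [0, 1]."""
    return x * z

def impl(z, y):
    """Goguen residuum: the implication adjoint to the product t-norm."""
    return 1.0 if z <= y else y / z

def adjoint_property_holds(x, y, z, eps=1e-9):
    """Check x <= impl(z, y) iff conj(x, z) <= y (up to float rounding)."""
    return (x <= impl(z, y) + eps) == (conj(x, z) <= y + eps)

# Exhaustive check over a grid of truth degrees.
grid = [i / 10 for i in range(11)]
assert all(adjoint_property_holds(x, y, z)
           for x in grid for y in grid for z in grid)
```

Other adjoint pairs (Gödel, Łukasiewicz) satisfy the same property, which is what makes them interchangeable connectives in a multi-adjoint lattice.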

    On Logical connectives and quantifiers as adjoint functors

    This thesis deals with the issue of treating logical connectives, quantifiers and equality in categorical terms, by means of adjoint functors combined into the notion of hyperdoctrine, introduced by Francis William Lawvere in 1969. After proving the general Theorem of Soundness and Completeness for the intuitionistic predicate logic with equality with respect to hyperdoctrines, we formulate instances of such categorical models by using H-valued sets and Kleene realizability, in order to easily produce models and countermodels for logical formulas.
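The adjunctions at the heart of the hyperdoctrine picture can be stated concretely. The following is standard material (not specific to this thesis): for the projection π : X × Y → X, the quantifiers are the left and right adjoints of substitution (weakening) along π.

```latex
% Quantifiers as adjoints to substitution along \pi : X \times Y \to X,
% as in Lawvere's hyperdoctrines:
%   \exists_\pi \dashv \pi^{*} \dashv \forall_\pi
% spelled out as two bijections of entailments:
\frac{\exists y.\,\varphi(x,y) \vdash \psi(x)}{\varphi(x,y) \vdash \psi(x)}
\qquad\qquad
\frac{\psi(x) \vdash \forall y.\,\varphi(x,y)}{\psi(x) \vdash \varphi(x,y)}
```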

    A characterisation of ordered abstract probabilities

    In computer science, especially when dealing with quantum computing or other non-standard models of computation, basic notions in probability theory like "a predicate" vary wildly. There seems to be one constant: the only useful example of an algebra of probabilities is the real unit interval. In this paper we try to explain this phenomenon. We will show that the structure of the real unit interval naturally arises from a few reasonable assumptions. We do this by studying effect monoids, an abstraction of the algebraic structure of the real unit interval: it has an addition x + y which is only defined when x + y ≤ 1 and an involution x ↦ 1 − x, which make it an effect algebra, in combination with an associative (possibly non-commutative) multiplication. Examples include the unit intervals of ordered rings and Boolean algebras. We present a structure theory for effect monoids that are ω-complete, i.e. where every increasing sequence has a supremum. We show that any ω-complete effect monoid embeds into the direct sum of a Boolean algebra and the unit interval of a commutative unital C*-algebra. This gives us from first principles a dichotomy between sharp logic, represented by the Boolean algebra part of the effect monoid, and probabilistic logic, represented by the commutative C*-algebra. Some consequences of this characterisation are that the multiplication must always be commutative, and that the unique ω-complete effect monoid without zero divisors and with more than two elements is the real unit interval. Our results give an algebraic characterisation and motivation for why any physical or logical theory would represent probabilities by real numbers.
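The effect-monoid operations on the motivating example, the real unit interval, can be sketched in a few lines. This is a minimal illustration of the definitions in the abstract, not code from the paper:

```python
# The real unit interval [0, 1] as an effect monoid: a partial addition
# defined only when the sum stays below 1, an involution x -> 1 - x,
# and an ordinary (here commutative) multiplication.

def padd(x, y):
    """Partial addition: defined only when x + y <= 1."""
    if x + y > 1:
        raise ValueError("sum exceeds 1: undefined in the effect algebra")
    return x + y

def involution(x):
    """The orthocomplement x -> 1 - x."""
    return 1 - x

def mult(x, y):
    """Total multiplication, making [0, 1] an effect monoid."""
    return x * y

# Spot-checks of the effect-algebra laws (dyadic values, so floats are exact):
assert padd(0.25, 0.5) == 0.75
assert involution(involution(0.25)) == 0.25        # the map is an involution
assert padd(0.5, involution(0.5)) == 1.0           # x + (1 - x) = 1
assert mult(0.5, padd(0.25, 0.5)) == padd(mult(0.5, 0.25), mult(0.5, 0.5))
```

The two-element Boolean algebra {0, 1} satisfies the same laws, which is the "sharp" side of the dichotomy the paper establishes.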

    On the fuzzy concept complex

    Every relation between posets gives rise to an adjunction, known as a Galois connection, between the corresponding power sets. Formal concept analysis (FCA) studies the fixed points of these adjunctions, which can be interpreted as latent “concepts” [20], [19]. In [47] Pavlovic defines a generalisation of posets he calls proximity sets (or proxets), which are equivalent to the generalised metric spaces of Lawvere [37], and introduces a form of quantitative concept analysis (QCA) which provides a different viewpoint from other approaches to fuzzy concept analysis (for a survey see [4]). The nucleus of a fuzzy relation between proxets is defined in terms of the fixed points of a naturally arising adjunction based on the given relation, generalising the Galois connections of formal concept analysis. By giving the unit interval [0, 1] an appropriate category structure it can be shown that proxets are simply [0, 1]-enriched categories and the nucleus of a proximity relation between proxets is a generalisation of the notion of the Isbell completion of an enriched category. We prove that the sets of fixed points of an adjunction arising from a fuzzy relation can be given the structure of complete idempotent semimodules and show that they are isomorphic to tropical convex hulls of point configurations in tropical projective space, in which addition and scalar multiplication are replaced with pointwise minima and addition, respectively. We show that some of the results of Develin and Sturmfels on tropical convex sets [13] can be applied to give the nucleus of a proximity relation the structure of a cell complex, which we term the fuzzy concept complex. We provide a formula for counting cells of a given dimension in generic situations. We conclude with some thoughts on computing the fuzzy concept complex using ideas from Ardila and Develin’s work on tropical oriented matroids [1].
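The crisp case that the fuzzy concept complex generalises can be made concrete: a binary relation induces a Galois connection between power sets, and the fixed points of the composite maps are the formal concepts. A toy sketch with a made-up object/attribute context (not from the thesis):

```python
from itertools import chain, combinations

# A hypothetical toy context: which objects have which attributes.
objects = {"duck", "eagle", "fish"}
attributes = {"flies", "swims"}
relation = {("duck", "flies"), ("duck", "swims"),
            ("eagle", "flies"), ("fish", "swims")}

def up(objs):
    """One adjoint: attributes shared by every object in objs."""
    return {a for a in attributes if all((o, a) in relation for o in objs)}

def down(attrs):
    """The other adjoint: objects possessing every attribute in attrs."""
    return {o for o in objects if all((o, a) in relation for a in attrs)}

def concepts():
    """Fixed points of down∘up: the (extent, intent) pairs of FCA."""
    found = []
    for objs in chain.from_iterable(combinations(sorted(objects), r)
                                    for r in range(len(objects) + 1)):
        extent = down(up(set(objs)))
        pair = (frozenset(extent), frozenset(up(extent)))
        if pair not in found:
            found.append(pair)
    return found
```

On this context the four concepts are ({duck}, {flies, swims}), ({duck, eagle}, {flies}), ({duck, fish}, {swims}) and (all objects, ∅); the nucleus construction replaces the power sets with [0, 1]-valued proxets.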

    Using abstract interpretation to add type checking for interfaces in Java bytecode verification

    Java interface types support multiple inheritance. Because of this, the standard bytecode verifier ignores them, since it is not able to model the class hierarchy as a lattice. Thus, type checks on interfaces are performed at run time. We propose a verification methodology that removes the need for run-time checks. The methodology consists of: (1) an augmented verifier that is very similar to the standard one, but is also able to check for interface types in most cases; (2) for all other cases, a set of additional simpler verifiers, each one specialized for a single interface type. We obtain these verifiers in a systematic way by using abstract interpretation techniques. Finally, we describe an implementation of the methodology and evaluate it on a large set of benchmarks.
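The lattice problem the abstract refers to can be illustrated in a few lines. This is a toy example with a made-up hierarchy (classes A and B both implementing interfaces I and J), not the paper's verifier:

```python
# Why interfaces break the type lattice needed by dataflow analysis:
# with multiple inheritance, two types can have several incomparable
# common supertypes, so there is no unique least upper bound ("join").

# Hypothetical hierarchy: classes A and B both implement I and J.
supertypes = {
    "A": {"A", "I", "J", "Object"},
    "B": {"B", "I", "J", "Object"},
}
# Strict supertype facts among the common supertypes.
is_strict_supertype = {("I", "Object"), ("J", "Object")}

common = supertypes["A"] & supertypes["B"]   # {"I", "J", "Object"}

# Keep only the minimal elements of `common` under the subtype order.
minimal = {t for t in common
           if not any((s, t) in is_strict_supertype
                      for s in common if s != t)}

# Both I and J are minimal, and neither is a supertype of the other:
# the merge point of A and B has no unique join, so the standard
# verifier falls back to Object and defers interface checks to run time.
assert minimal == {"I", "J"}
```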

    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom


    At the Interface of Algebra and Statistics

    This thesis takes inspiration from quantum physics to investigate mathematical structure that lies at the interface of algebra and statistics. The starting point is a passage from classical probability theory to quantum probability theory. The quantum version of a probability distribution is a density operator, the quantum version of marginalizing is an operation called the partial trace, and the quantum version of a marginal probability distribution is a reduced density operator. Every joint probability distribution on a finite set can be modeled as a rank one density operator. By applying the partial trace, we obtain reduced density operators whose diagonals recover classical marginal probabilities. In general, these reduced densities will have rank higher than one, and their eigenvalues and eigenvectors will contain extra information that encodes subsystem interactions governed by statistics. We decode this information, show it is akin to conditional probability, and then investigate the extent to which the eigenvectors capture concepts inherent in the original joint distribution. The theory is then illustrated with an experiment. In particular, we show how to reconstruct a joint probability distribution on a set of data by gluing together the spectral information of reduced densities operating on small subsystems. The algorithm naturally leads to a tensor network model, which we test on the even-parity dataset. Turning to a more theoretical application, we also discuss a preliminary framework for modeling entailment and concept hierarchy in natural language, namely by representing expressions in the language as densities. Finally, initial inspiration for this thesis comes from formal concept analysis, which finds many striking parallels with linear algebra. The parallels are not coincidental, and a common blueprint is found in category theory. We close with an exposition on free (co)completions and how the free-forgetful adjunctions in which they arise strongly suggest that in certain categorical contexts, the fixed points of a morphism with its adjoint encode interesting information.
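The classical-to-quantum passage in the opening of this abstract can be sketched in plain Python. A toy illustration with a hypothetical 2×2 joint distribution, not code from the thesis:

```python
import math

# A hypothetical joint distribution p(x, y) on {0,1} x {0,1}.
p = [[0.1, 0.2],
     [0.3, 0.4]]

# Unit vector psi with entries sqrt(p(x, y)); rho = |psi><psi| is the
# rank-one density operator modeling the joint distribution.
psi = [[math.sqrt(p[x][y]) for y in range(2)] for x in range(2)]

# Partial trace over the second subsystem:
#   rho_A[i][j] = sum_y psi[i][y] * psi[j][y]
rho_A = [[sum(psi[i][y] * psi[j][y] for y in range(2)) for j in range(2)]
         for i in range(2)]

# The diagonal of the reduced density recovers the classical marginal...
marginal = [sum(p[x]) for x in range(2)]
assert all(abs(rho_A[i][i] - marginal[i]) < 1e-12 for i in range(2))
# ...while the off-diagonal entries carry the extra "subsystem
# interaction" information that the thesis decodes via the eigenvalues
# and eigenvectors of rho_A.
```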