
    Immediate consequences operator on generalized quantifiers

    The semantics of a multi-adjoint logic program is usually defined through the immediate consequences operator TP. However, defining the immediate consequences operator as the supremum of a set of values can cause problems when imprecise datasets are considered, due to the strict character of the supremum operator. Hence, building on the flexibility of generalized quantifiers to weaken the existential character of the supremum operator, this paper presents a generalization of the immediate consequences operator with interesting properties for solving the aforementioned problem.
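
    As a rough illustration of the idea (a toy sketch of our own, not the paper's multi-adjoint framework; all names and the choice of min-conjunction are hypothetical), the code below computes one step of a TP-style operator and lets the aggregator over rule contributions be swapped from the supremum to a softer, mean-like quantifier:

    # Toy sketch: one step of an immediate-consequences-style operator
    # where the supremum over rule contributions can be replaced by a
    # softer aggregator (a mean-like quantifier) for imprecise data.
    def t_p(program, interpretation, aggregate=max):
        """program: (head, body_atoms, rule_weight) triples;
        interpretation: atom -> truth degree in [0, 1];
        aggregate: max mimics the supremum, an averaging quantifier
        weakens its existential character."""
        contributions = {}
        for head, body, weight in program:
            # Goedel (min) conjunction of the body, scaled by the rule weight.
            degree = min((interpretation.get(a, 0.0) for a in body), default=1.0)
            contributions.setdefault(head, []).append(weight * degree)
        return {h: aggregate(vs) for h, vs in contributions.items()}

    def mean_quantifier(values):
        return sum(values) / len(values)   # one possible softer quantifier

    program = [("p", ["q"], 0.9), ("p", ["r"], 0.4)]
    interp = {"q": 0.8, "r": 0.95}
    print(t_p(program, interp))                             # supremum-style: {'p': 0.72}
    print(t_p(program, interp, aggregate=mean_quantifier))  # quantifier-style: {'p': 0.55}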

    T-Norms Driven Loss Functions for Machine Learning

    Neural-symbolic approaches have recently gained popularity to inject prior knowledge into a learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neural-symbolic approaches is based on First-Order Logic to represent prior knowledge, relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neural-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to supervised learning, the presented theoretical apparatus provides a clean justification for the popular cross-entropy loss, which has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. However, the proposed learning formulation extends the advantages of the cross-entropy loss to the general knowledge that can be represented by a neural-symbolic method. Therefore, the methodology allows the development of a novel class of loss functions, which are shown in the experimental results to lead to faster convergence rates than the approaches previously proposed in the literature.
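
    As a concrete instance of how a t-norm generator determines a loss (a minimal sketch under our own naming, not code from the paper): the additive generator of the product t-norm, g(x) = -log x, turns the t-norm conjunction of known-true facts into a sum of -log p terms, i.e. exactly a cross-entropy-style loss, while the Lukasiewicz generator g(x) = 1 - x yields an L1-style loss:

    import math

    def product_generator(x):
        # Additive generator of the product t-norm: g(x) = -log(x).
        return -math.log(max(x, 1e-12))

    def lukasiewicz_generator(x):
        # Additive generator of the Lukasiewicz t-norm: g(x) = 1 - x.
        return 1.0 - x

    def generated_loss(predicted_degrees, generator):
        # Loss of the conjunction of known-true facts: the generator maps
        # the t-norm conjunction into a sum of per-fact costs.
        return sum(generator(p) for p in predicted_degrees)

    preds = [0.9, 0.8, 0.99]                             # predicted truth degrees
    print(generated_loss(preds, product_generator))      # cross-entropy-style
    print(generated_loss(preds, lukasiewicz_generator))  # L1-style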

    Many Valued Generalised Quantifiers for Natural Language in the DisCoCat Model

    DisCoCat refers to the categorical compositional distributional model of natural language, which combines the statistical vector space models of words with the compositional logic-based models of grammar. It is fair to say that, despite existing work on incorporating notions of entailment, quantification, and coordination in this setting, a uniform modelling of logical operations is still an open problem. In this report, we take a step towards an answer. We show how one can generalise our previous DisCoCat model of generalised quantifiers from the category of sets and relations to the category of sets and many-valued relations. As a result, we get a fuzzy version of these quantifiers. Our aim is to extend this model to all other logical connectives and develop a fuzzy logic for DisCoCat. The main contributions are showing that the category of many-valued relations is compact closed, defining appropriate bialgebra structures over it, and demonstrating how one can compute many-valued meanings for quantified sentences within this setting. (EPSRC Career Acceleration Fellowship EP/J002607/)
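
    To make the shift from relations to many-valued relations concrete (an illustrative sketch of our own, not code from the report), a many-valued relation between finite sets can be stored as a matrix over [0, 1], with composition given by max-min matrix multiplication; an existential quantifier then becomes a max-min contraction:

    import numpy as np

    def maxmin_compose(R, S):
        # (R ; S)[i, k] = max_j min(R[i, j], S[j, k])
        return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

    dogs = np.array([[1.0, 0.7, 0.0]])       # fuzzy subset "dog" of a 3-element universe
    bark = np.array([[0.9], [0.4], [0.8]])   # fuzzy predicate "barks" on the same universe

    # Degree of "some dog barks": existential quantifier as max-min contraction.
    print(maxmin_compose(dogs, bark))        # [[0.9]]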

    Compositions of ternary relations

    In this paper, we introduce six basic types of composition of ternary relations, four of which are associative. These compositions are based on two types of composition of a ternary relation with a binary relation recently introduced by Zedam et al. We study the properties of these compositions, in particular their link with the usual composition of binary relations through the operations of projection and cylindrical extension.
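
    For intuition (a toy encoding of our own, not the paper's notation), here is one possible composition of a ternary relation with a binary relation on the third argument, together with the projection and cylindrical extension operations the paper uses to relate such compositions to ordinary binary composition:

    def compose_third(T, B):
        # One possible composition: glue B onto the third argument, so
        # (x, y, z) holds if some w has (x, y, w) in T and (w, z) in B.
        return {(x, y, z) for (x, y, w) in T for (v, z) in B if v == w}

    def project_12(T):
        # Projection of a ternary relation onto its first two arguments.
        return {(x, y) for (x, y, _) in T}

    def cyl_ext_3(B, universe):
        # Cylindrical extension: pad a binary relation with a free third argument.
        return {(x, y, z) for (x, y) in B for z in universe}

    T = {(1, 2, 3), (1, 2, 4)}
    B = {(3, 9), (4, 8)}
    print(compose_third(T, B))   # {(1, 2, 9), (1, 2, 8)}
    print(project_12(T))         # {(1, 2)}
    print(cyl_ext_3(B, {0, 1}))  # each pair of B padded with 0 and with 1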

    Fuzzy Sets and Formal Logics

    The paper discusses the relationship between fuzzy sets and formal logics, as well as the influence fuzzy set theory had on the development of particular formal logics. Our focus is on the historical side of these developments. (Partial support by the Spanish projects EdeTRI (TIN2012-39348-C02-01) and 2014 SGR 118.)

    Relational Neural Machines

    Deep learning has been shown to achieve impressive results in several tasks where a large amount of training data is available. However, deep learning focuses solely on the accuracy of the predictions, neglecting the reasoning process leading to a decision, which is a major issue in life-critical applications. Probabilistic logic reasoning makes it possible to exploit both statistical regularities and specific domain expertise to perform reasoning under uncertainty, but its limited scalability and brittle integration with the layers processing the sensory data have greatly limited its applications. For these reasons, combining deep architectures and probabilistic logic reasoning is a fundamental goal towards the development of intelligent agents operating in complex environments. This paper presents Relational Neural Machines, a novel framework for jointly training the parameters of the learners and of a First-Order Logic based reasoner. A Relational Neural Machine is able to recover both classical learning from supervised data, in the case of pure sub-symbolic learning, and Markov Logic Networks, in the case of pure symbolic reasoning, while allowing joint training and inference in hybrid learning tasks. Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems. The experiments show promising results in different relational tasks.
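
    A heavily simplified conceptual sketch of the hybrid scoring idea (ours, not the paper's formulation; all atoms, scores, and the single rule are made up): a joint assignment of output atoms is scored by a neural potential plus weighted logic potentials, and inference picks the best assignment:

    import itertools

    neural_scores = {"smokes(a)": 2.0, "smokes(b)": -1.0,
                     "cancer(a)": -0.5, "cancer(b)": -0.5}
    atoms = list(neural_scores)

    def rule_satisfaction(assign):
        # Grounded rule "smokes(x) -> cancer(x)" for x in {a, b}:
        # count how many groundings the assignment satisfies.
        return sum(1 for x in "ab"
                   if not assign[f"smokes({x})"] or assign[f"cancer({x})"])

    def score(assign, rule_weight=1.5):
        # Neural potential (the learner's beliefs) plus weighted logic potential.
        neural = sum(s for atom, s in neural_scores.items() if assign[atom])
        return neural + rule_weight * rule_satisfaction(assign)

    # Exact MAP inference by enumeration (feasible only at this toy size).
    best = max((dict(zip(atoms, bits))
                for bits in itertools.product([False, True], repeat=len(atoms))),
               key=score)
    print(best)   # smokes(a) and cancer(a) true; both atoms for b false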

    Handling imperfect information in criterion evaluation, aggregation and indexing
