
    Faster-LTN: a neuro-symbolic, end-to-end object detection architecture

    The detection of semantic relationships between objects represented in an image is one of the fundamental challenges in image interpretation. Neural-symbolic techniques, such as Logic Tensor Networks (LTNs), combine semantic knowledge representation and reasoning with the ability, typical of neural networks, to learn efficiently from examples. We propose Faster-LTN, an object detector composed of a convolutional backbone and an LTN. To the best of our knowledge, this is the first attempt to combine both frameworks in an end-to-end training setting. The architecture is trained by optimizing a grounded theory that combines labelled examples with prior knowledge in the form of logical axioms. Experimental comparisons show competitive performance with respect to the traditional Faster R-CNN architecture.
    Comment: accepted for presentation at ICANN 202
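
    As a purely illustrative sketch (not the Faster-LTN code; all names, dimensions, and the axiom weight are assumptions), the snippet below shows how an LTN-style grounded theory can turn a logical axiom into a differentiable penalty added to an ordinary supervised loss, with predicates grounded as small networks outputting truth values in [0, 1]:

```python
# Hypothetical sketch of an LTN-style loss: fuzzy truth values for an
# axiom are aggregated and added to a standard supervised loss.
import torch
import torch.nn as nn

class Predicate(nn.Module):
    """Grounds a unary predicate as a network mapping features to [0, 1]."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).squeeze(-1)

cat, pet = Predicate(64), Predicate(64)
feats = torch.randn(8, 64)                  # stand-in for RoI features from a backbone
labels = torch.randint(0, 2, (8,)).float()  # supervision for Cat(x)

# Axiom: forall x. Cat(x) -> Pet(x), with the Reichenbach implication
# 1 - a + a*c and the mean as a soft "forall" aggregator.
a, c = cat(feats), pet(feats)
axiom_sat = (1 - a + a * c).mean()

data_loss = nn.functional.binary_cross_entropy(a, labels)
loss = data_loss + 0.5 * (1 - axiom_sat)    # 0.5: hypothetical axiom weight
loss.backward()                             # axioms shape the gradients end to end
```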

    Injecting Background Knowledge into Embedding Models for Predictive Tasks on Knowledge Graphs

    Embedding models have been successfully exploited for Knowledge Graph refinement. In these models, the data graph is projected into a low-dimensional space in which graph structural information is preserved as much as possible, enabling an efficient computation of solutions. We propose a solution for injecting available background knowledge (schema axioms) to further improve the quality of the embeddings. The method has been applied to enhance existing models, producing embeddings that encode knowledge that is not merely observed but rather derived by reasoning on the available axioms. An experimental evaluation on link prediction and triple classification tasks demonstrates the improvement that the proposed method yields over the original models.
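
    A rough sketch of the injection idea, under assumed axioms and a TransE-style scorer (none of this is the paper's actual model): triples entailed by a schema axiom are materialized before the embeddings are learnt, so the geometry can encode derived rather than merely observed knowledge.

```python
# Hypothetical example: materialize triples entailed by an inverseOf
# schema axiom, then score them with a TransE-style distance.
import numpy as np

triples = [("alice", "hasParent", "bob")]
axioms = {"hasParent": "hasChild"}            # assumed inverseOf axiom

# Reasoning step: every (h, r, t) with an inverse axiom entails (t, inv, h).
derived = [(t, inv, h) for (h, r, t) in triples
           for (r0, inv) in axioms.items() if r == r0]
train = triples + derived                     # training graph now includes entailments

ents = sorted({h for h, _, _ in train} | {t for _, _, t in train})
rels = sorted({r for _, r, _ in train})
E = {e: np.random.randn(50) for e in ents}    # 50-dim embeddings (arbitrary choice)
R = {r: np.random.randn(50) for r in rels}

def score(h, r, t):
    """TransE-style score ||h + r - t||: lower means more plausible."""
    return np.linalg.norm(E[h] + R[r] - E[t])

print(score("bob", "hasChild", "alice"))      # derived triple now has a training signal
```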

    Analyzing Differentiable Fuzzy Implications

    Combining symbolic and neural approaches has gained considerable attention in the AI community, as it is often argued that the strengths and weaknesses of these approaches are complementary. One such trend in the literature is weakly supervised learning techniques that employ operators from fuzzy logics. In particular, they use prior background knowledge described in such logics to help the training of a neural network from unlabeled and noisy data. By interpreting logical symbols using neural networks (or grounding them), this background knowledge can be added to regular loss functions, hence making reasoning a part of learning. In this paper, we investigate how implications from the fuzzy logic literature behave in a differentiable setting. In such a setting, we analyze the differences between the formal properties of these fuzzy implications. It turns out that various fuzzy implications, including some of the most well known, are highly unsuitable for use in a differentiable learning setting. A further finding is a strong imbalance between gradients driven by the antecedent and the consequent of the implication. To tackle this phenomenon, we introduce a new family of fuzzy implications, called sigmoidal implications. Finally, we empirically show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and that sigmoidal implications outperform other choices of fuzzy implications.
    Comment: 10 pages, 10 figures, accepted to the 17th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). arXiv admin note: substantial text overlap with arXiv:2002.0610
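
    A minimal numerical sketch of the gradient-imbalance finding, assuming the Reichenbach implication as the base and a rescaled-sigmoid squashing (the paper's exact normalization may differ): when both antecedent and consequent are barely true, the gradient on the antecedent dwarfs the one on the consequent, and the sigmoidal variant damps gradients where the implication is already nearly satisfied.

```python
# Illustrative only: compare gradients of a fuzzy implication I(a, c)
# w.r.t. antecedent a and consequent c, and a sigmoidal variant.
import torch

a = torch.tensor(0.1, requires_grad=True)  # antecedent truth value
c = torch.tensor(0.1, requires_grad=True)  # consequent truth value

def reichenbach(a, c):
    return 1 - a + a * c                   # S-implication: dI/da = c - 1, dI/dc = a

def sigmoidal(a, c, s=9.0):
    # Squash the base implication through a sigmoid rescaled to map [0, 1]
    # onto [0, 1]; these constants are an assumption, not the paper's definition.
    lo = torch.sigmoid(torch.tensor(-s / 2))
    hi = torch.sigmoid(torch.tensor(s / 2))
    return (torch.sigmoid(s * (reichenbach(a, c) - 0.5)) - lo) / (hi - lo)

ga, gc = torch.autograd.grad(reichenbach(a, c), (a, c))
print(f"Reichenbach: dI/da={ga.item():+.3f}, dI/dc={gc.item():+.3f}")  # -0.900 vs +0.100

ga, gc = torch.autograd.grad(sigmoidal(a, c), (a, c))
print(f"Sigmoidal:   dI/da={ga.item():+.3f}, dI/dc={gc.item():+.3f}")  # damped near saturation
```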

    Beyond Material Implication: An Empirical Study of Residuum in Knowledge Enhanced Neural Networks

    Knowledge Enhanced Neural Networks (KENN) is a neuro-symbolic architecture that exploits fuzzy logic to inject prior knowledge, codified as propositional formulas, into a neural network. It works by adding a new layer at the end of a generic neural network that further refines the initial predictions according to the knowledge. In the existing KENN, following the material implication rule, a conditional statement is represented as a formula in conjunctive normal form. This work extends this interpretation of the implication by using the Residuum semantics of fuzzy logic and shows how it has been integrated into the original KENN architecture while keeping it reproducible. The Residuum integration made it possible to evaluate KENN on MNIST Addition, a task that could not be approached by the original architecture, with results comparable to other state-of-the-art neuro-symbolic methods. The extended architecture was subsequently also evaluated on visual relationship detection, showing that it can improve the performance of the original one.
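
    As a hedged sketch of the enhancement-layer idea (hypothetical shapes, weights, and logistic activation; this is not the KENN codebase), a final layer can revise the base network's preactivations so that a rule A -> B, read through the Gödel residuum, becomes more satisfied:

```python
# Minimal, assumed sketch of a KENN-style knowledge enhancement layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def enhance(z_a, z_b, w=2.0):
    """Push up B's preactivation where the residuum of A -> B is violated.

    Gödel residuum: I(a, b) = 1 if a <= b else b, so the rule is only
    imperfectly satisfied where a > b; there we add a boost proportional
    to the violation (w plays the role of a learnable clause weight).
    """
    a, b = sigmoid(z_a), sigmoid(z_b)
    violation = np.maximum(a - b, 0.0)      # zero wherever a <= b
    return z_a, z_b + w * violation         # revised preactivations

z_a = np.array([ 2.0, -1.0])                # base predictions for A(x)
z_b = np.array([-2.0,  3.0])                # base predictions for B(x)
print(enhance(z_a, z_b))                    # B boosted only for the first item
```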

    Interpretation of Natural-language Robot Instructions: Probabilistic Knowledge Representation, Learning, and Reasoning

    A robot that can simply be told in natural language what to do -- this has been one of the ultimate long-standing goals in both Artificial Intelligence and Robotics research. In near-future applications, robotic assistants and companions will have to understand and perform commands such as "set the table for dinner", "make pancakes for breakfast", or "cut the pizza into 8 pieces". Although such instructions are only vaguely formulated, complex sequences of sophisticated and accurate manipulation activities need to be carried out in order to accomplish the respective tasks. The acquisition of knowledge about how to perform these activities from huge collections of natural-language instructions from the Internet has garnered a lot of attention within the last decade. However, natural language is typically massively unspecific, incomplete, ambiguous and vague, and thus requires powerful means for interpretation. This work presents PRAC -- Probabilistic Action Cores -- an interpreter for natural-language instructions which is able to resolve vagueness and ambiguity in natural language and infer missing information pieces that are required to render an instruction executable by a robot. To this end, PRAC formulates the problem of instruction interpretation as a reasoning problem in first-order probabilistic knowledge bases. In particular, the system uses Markov logic networks as a carrier formalism for encoding uncertain knowledge. A novel framework for reasoning about unmodeled symbolic concepts is introduced, which incorporates ontological knowledge from taxonomies and exploits semantically similar relational structures in a domain of discourse. The resulting reasoning framework thus enables more compact representations of knowledge and exhibits strong generalization performance when learnt from very sparse data. Furthermore, a novel approach for completing directives is presented, which applies semantic analogical reasoning to transfer knowledge collected from thousands of natural-language instruction sheets to new situations. In addition, a cohesive processing pipeline is described that transforms vague and incomplete task formulations into sequences of formally specified robot plans. The system is connected to a plan executive that is able to execute the computed plans in a simulator. Experiments conducted in a publicly accessible, browser-based web interface showcase that PRAC is capable of closing the loop from natural-language instructions to their execution by a robot.
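
    To make the carrier formalism concrete, here is a toy rendering of Markov logic semantics with hypothetical predicates and a made-up weight (not PRAC's actual knowledge base): a possible world's probability is proportional to the exponentiated sum of the weights of the ground formulas it satisfies.

```python
# Toy Markov logic network: one weighted implication over two ground atoms.
import itertools, math

atoms = ["Mentions(inst, knife)", "Requires(inst, CuttingTool)"]

# Weighted formula: Mentions(inst, knife) -> Requires(inst, CuttingTool)
w = 1.5
def n_satisfied(world):
    mentions, requires = world
    return 1 if ((not mentions) or requires) else 0  # material implication

# Enumerate all possible worlds and normalize exp(w * n) into probabilities.
worlds = list(itertools.product([False, True], repeat=2))
scores = [math.exp(w * n_satisfied(wld)) for wld in worlds]
Z = sum(scores)
for wld, s in zip(worlds, scores):
    print(dict(zip(atoms, wld)), f"P = {s / Z:.3f}")  # violating world is least likely
```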