
    Logic Tensor Networks for Semantic Image Interpretation

    Semantic Image Interpretation (SII) is the task of extracting structured semantic descriptions from images. It is widely agreed that the combined use of visual data and background knowledge is of great importance for SII. Recently, Statistical Relational Learning (SRL) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. Logic Tensor Networks (LTNs) are an SRL framework which integrates neural networks with first-order fuzzy logic to allow (i) efficient learning from noisy data in the presence of logical constraints, and (ii) reasoning with logical formulas describing general properties of the data. In this paper, we develop and apply LTNs to two of the main tasks of SII, namely, the classification of an image's bounding boxes and the detection of the relevant part-of relations between objects. To the best of our knowledge, this is the first successful application of SRL to such SII tasks. The proposed approach is evaluated on a standard image processing benchmark. Experiments show that background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-the-art Fast Region-based Convolutional Neural Networks (Fast R-CNN). Moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the training data.
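
    The fuzzy-logic grounding behind LTNs can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the toy predicate network, the random bounding-box features, and the part-of asymmetry axiom are all assumptions made for the example. It shows the core idea of treating predicates as functions into [0, 1], connectives as fuzzy operators, and a universally quantified constraint as an aggregate whose degree of satisfaction can act as a training signal.

```python
# Minimal sketch of LTN-style fuzzy grounding (illustrative, not the
# paper's code). Predicates, features, and the axiom are made up here.
import numpy as np

rng = np.random.default_rng(0)

def predicate(weights):
    """Toy 'neural' predicate: sigmoid of a linear score -> truth degree in [0, 1]."""
    def truth(x):
        return 1.0 / (1.0 + np.exp(-(x @ weights)))
    return truth

# Hypothetical binary part-of predicate over concatenated 4-d box features.
part_of = predicate(rng.normal(size=8))

def f_not(a):
    return 1.0 - a                  # fuzzy negation

def f_implies(a, b):
    return min(1.0, 1.0 - a + b)    # Lukasiewicz implication

def f_forall(degrees):
    return float(np.mean(degrees))  # universal quantifier as mean aggregation

# A small batch of (made-up) bounding-box feature vectors.
boxes = rng.normal(size=(5, 4))

# Axiom: part-of is asymmetric, i.e. forall x, y: partOf(x, y) -> not partOf(y, x).
sat = f_forall([
    f_implies(part_of(np.concatenate([x, y])),
              f_not(part_of(np.concatenate([y, x]))))
    for x in boxes for y in boxes
])
print(f"degree of satisfaction of the asymmetry axiom: {sat:.3f}")
```

    In training, a term like 1 - sat would be added to the data-fitting loss, which is how logical background knowledge can regularize a purely data-driven bounding-box classifier.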

    Formalizing Consistency and Coherence of Representation Learning

    In the study of reasoning in neural networks, recent efforts have sought to improve the consistency and coherence of sequence models, leading to important developments in the area of neuro-symbolic AI. In symbolic AI, the concepts of consistency and coherence can be defined and verified formally, but for neural networks such definitions are lacking. Providing formal definitions is crucial to offer a common basis for the quantitative evaluation and systematic comparison of connectionist, neuro-symbolic and transfer learning approaches. In this paper, we introduce formal definitions of consistency and coherence for neural systems. To illustrate the usefulness of our definitions, we propose a new dynamic relation-decoder model built around the principles of consistency and coherence. We compare our results with several existing relation-decoders using a partial transfer learning task based on a novel data set introduced in this paper. Our experiments show that relation-decoders that maintain consistency over unobserved regions of representation space retain coherence across domains, whilst achieving better transfer learning performance.
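
    The abstract does not reproduce the paper's formal definitions, but the intuition of "consistency over unobserved regions of representation space" can be probed with a toy proxy. The sketch below is entirely hypothetical: the stand-in decoder, the random embeddings, and the interpolation-based stability score are assumptions for illustration, not the paper's metrics.

```python
# Toy proxy (not the paper's formal definitions): approximate a
# relation-decoder's consistency by checking whether its prediction stays
# stable along interpolations between observed embedding pairs, i.e. on
# unobserved points of representation space.
import numpy as np

rng = np.random.default_rng(1)

def relation_decoder(z1, z2):
    """Stand-in decoder: scores a relation between two embeddings."""
    return float(np.tanh(z1 @ z2))

def consistency_score(decoder, pairs, steps=5):
    """1 minus the mean gap between decoded relations at observed
    endpoints and at unobserved interpolated points between them."""
    gaps = []
    for (a1, a2), (b1, b2) in pairs:
        ref = 0.5 * (decoder(a1, a2) + decoder(b1, b2))
        for t in np.linspace(0.0, 1.0, steps):
            z1 = (1 - t) * a1 + t * b1
            z2 = (1 - t) * a2 + t * b2
            gaps.append(abs(decoder(z1, z2) - ref))
    return 1.0 - float(np.mean(gaps))

# One observed pair of embedding pairs, drawn at random for the example.
obs = [((rng.normal(size=8), rng.normal(size=8)),
        (rng.normal(size=8), rng.normal(size=8)))]
print(f"consistency score: {consistency_score(relation_decoder, obs):.3f}")
```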

    Efficient Predicate Invention using Shared NeMuS

    Amao is a cognitive agent framework that tackles the invention of predicates with a strategy different from recent advances in Inductive Logic Programming (ILP) approaches such as the Meta-Interpretive Learning (MIL) technique. It uses a Neural Multi-Space (NeMuS) graph structure to anti-unify atoms from the Herbrand base which pass the inductive momentum check. Inductive Clause Learning (ICL), as it is called, is extended here by using the weights of logical components, already present in NeMuS, to support inductive learning by expanding clause candidates with anti-unified atoms. An efficient invention mechanism is achieved, including the learning of recursive hypotheses, while restricting the shape of the hypothesis by adding bias definitions or idiosyncrasies of the language.
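
    The anti-unification step is the most concrete piece of this pipeline and can be sketched directly. The snippet below is a minimal, self-contained illustration of computing the least general generalization (lgg) of two ground atoms; the tuple-based representation of atoms is an assumption made for this example, not Amao's NeMuS data structure, and the component weights and inductive momentum check are not modeled.

```python
# Minimal sketch of anti-unification (least general generalization, lgg)
# of two ground atoms, the operation ICL uses to expand clause candidates.
# Tuple-based atoms are an assumption of this example, not Amao's format.
from itertools import count

def lgg_term(t1, t2, subst, fresh):
    """Identical terms are kept; each differing pair of terms is replaced
    by one shared fresh variable (repeated pairs reuse the same variable)."""
    if t1 == t2:
        return t1
    if (t1, t2) not in subst:
        subst[(t1, t2)] = f"X{next(fresh)}"
    return subst[(t1, t2)]

def anti_unify(atom1, atom2):
    """Return the lgg of two atoms sharing a predicate symbol, else None."""
    (p1, *args1), (p2, *args2) = atom1, atom2
    if p1 != p2 or len(args1) != len(args2):
        return None
    subst, fresh = {}, count()
    return (p1, *(lgg_term(a, b, subst, fresh) for a, b in zip(args1, args2)))

# parent(ann, bob) and parent(ann, carol) anti-unify to parent(ann, X0),
# the kind of generalized atom used to extend a candidate clause.
print(anti_unify(("parent", "ann", "bob"), ("parent", "ann", "carol")))
```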