Neuro-Symbolic Deductive Reasoning for Cross-Knowledge Graph Entailment
A significant recent development in neural-symbolic learning is the emergence of deep neural networks that can reason over symbolic knowledge graphs (KGs). A particular task of interest is KG entailment, which is to infer the set of all facts that are a logical consequence of the current and potential facts of a KG. Initial neural-symbolic systems that can deduce the entailment of a KG have been presented, but they are limited: current systems learn fact relations and entailment patterns specific to a particular KG and hence do not truly generalize, and must be retrained for each KG they are tasked with entailing. In this paper, we propose a neural-symbolic system that addresses this limitation. It is designed as a differentiable end-to-end deep memory network that learns over abstract, generic symbols to discover entailment patterns common to any reasoning task. A key component of the system is a simple but highly effective normalization process for continuous representation learning of KG entities within memory networks. Our results show how the model, trained over a set of KGs, can effectively entail facts from KGs excluded from the training, even when the vocabulary or the domain of the test KGs is completely different from that of the training KGs.
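To ground the task the abstract refers to, the following is a minimal, hand-rolled Python sketch of KG entailment as forward-chaining over two RDFS rules. It illustrates the task only, not the proposed memory-network system; the toy triples and the choice of rules are assumptions.

```python
# A minimal sketch of "KG entailment": repeatedly apply inference rules until
# no new facts can be derived. This is NOT the paper's neural system; only two
# RDFS rules and toy triples are shown.

def entail(triples):
    """Forward-chain two RDFS rules (rdfs9, rdfs11) to a fixed point."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in facts:
            if p == "rdfs:subClassOf":
                # rdfs11: subClassOf is transitive.
                for (s2, p2, o2) in facts:
                    if p2 == "rdfs:subClassOf" and s2 == o:
                        new.add((s, "rdfs:subClassOf", o2))
                # rdfs9: instances of a subclass are instances of its superclass.
                for (x, q, c) in facts:
                    if q == "rdf:type" and c == s:
                        new.add((x, "rdf:type", o))
        if new - facts:
            facts |= new
            changed = True
    return facts

kg = [("ex:Bus", "rdfs:subClassOf", "ex:Vehicle"),
      ("ex:bus1", "rdf:type", "ex:Bus")]
print(entail(kg) - set(kg))  # {('ex:bus1', 'rdf:type', 'ex:Vehicle')}
```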
Towards generalizable neuro-symbolic reasoners
Doctor of Philosophy, Department of Computer Science, Major Professor: Not Listed

Symbolic knowledge representation and reasoning and deep learning are fundamentally different approaches to artificial intelligence with complementary capabilities. The former are transparent and data-efficient, but they are sensitive to noise and cannot be applied to non-symbolic domains where the data is ambiguous. The latter can learn complex tasks from examples and are robust to noise, but they are black boxes, require large amounts of (not necessarily easily obtained) data, are slow to learn, and are prone to adversarial examples. Each paradigm excels at certain types of problems where the other performs poorly. In order to develop stronger AI systems, integrated neuro-symbolic systems that combine artificial neural networks and symbolic reasoning are being sought. In this context, one of the fundamental open problems is how to perform logic-based deductive reasoning over knowledge bases by means of trainable artificial neural networks.
Over the course of this dissertation, we provide a brief summary of our recent efforts to bridge the neural and symbolic divide in the context of deep deductive reasoners. More specifically, we designed a novel way of conducting neuro-symbolic reasoning by pointing to the input elements. More importantly, we showed that the proposed approach generalizes across new domains and vocabularies, demonstrating a symbol-invariant zero-shot reasoning capability. Furthermore, we demonstrated that a deep learning architecture based on memory networks and pre-embedding normalization is capable of learning how to perform deductive reasoning over previously unseen RDF KGs with high accuracy. We apply these models to the Resource Description Framework (RDF), first-order logic, and the description logic EL+, respectively. Throughout this dissertation we discuss the strengths and limitations of these models, particularly in terms of accuracy, scalability, transferability, and generalizability. Based on our experimental results, pointer networks perform remarkably well across multiple reasoning tasks, outperforming the previously reported state of the art by a significant margin. We observe that pointer networks preserve their performance even when challenged with knowledge graphs from domains and vocabularies they have never encountered before. To our knowledge, this work is the first attempt to reveal the impressive power of pointer networks for conducting deductive reasoning. Similarly, we show that memory networks can be trained to perform deductive RDFS reasoning with high precision and recall. The trained memory network's capabilities in fact transfer to previously unseen knowledge bases.
Finally, we discuss possible modifications to enhance desirable capabilities. Altogether, these research topics resulted in a methodology for symbol-invariant neuro-symbolic reasoning.
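As an illustration of the pre-embedding normalization and symbol-invariance idea described above, here is a minimal Python sketch (not the dissertation's actual code) that renames every KG term to a generic placeholder before embedding; the helper name normalize_kg is hypothetical.

```python
# A minimal sketch of the "abstract, generic symbol" idea: every term in a KG
# is renamed to a positional placeholder before embedding, so the trained
# reasoner never depends on KG-specific vocabulary. Hypothetical helper, not
# the dissertation's implementation.

def normalize_kg(triples):
    """Map every term to a generic symbol; return normalized triples and the mapping."""
    mapping = {}

    def generic(term):
        if term not in mapping:
            mapping[term] = f"sym_{len(mapping)}"
        return mapping[term]

    return [tuple(generic(t) for t in triple) for triple in triples], mapping

kg = [("ex:Bus", "rdfs:subClassOf", "ex:Vehicle"),
      ("ex:bus1", "rdf:type", "ex:Bus")]
normalized, mapping = normalize_kg(kg)
print(normalized)  # [('sym_0', 'sym_1', 'sym_2'), ('sym_3', 'sym_4', 'sym_0')]
```

Because the placeholders are assigned per input KG, a model trained on such normalized triples can, in principle, be applied to a KG with an entirely different vocabulary without retraining.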
On the Potential of Logic and Reasoning in Neurosymbolic Systems using OWL-based Knowledge Graphs
Knowledge graphs feature ever more frequently as symbolic components in neurosymbolic research and systems. But even though a central concern of neurosymbolic AI is to combine neural learning with symbolic reasoning, relatively little neurosymbolic research focuses on leveraging the logical representation and reasoning capabilities of OWL-based knowledge graphs. The objective of this position paper is to inspire more neurosymbolic researchers to embrace OWL and the Semantic Web by raising awareness of the benefits, capabilities, and applications of OWL-based knowledge graphs, particularly with respect to logical reasoning. We describe the ecosystem of open, W3C standards-based resources available that support the adoption and use of OWL-based knowledge graphs; we describe tools that exist for engineering custom OWL ontologies tailored to particular research needs; we discuss the encoding of background KG knowledge in subsymbolic embedding spaces and various applications of this approach; we discuss and illustrate the reasoning capabilities of OWL-based knowledge graphs; and we describe several promising directions for research that focus on leveraging these reasoning capabilities. We also discuss the specialised resources needed to undertake research on OWL-based knowledge graphs in neurosymbolic systems. We use the example of NeSy4VRD, an image dataset with a custom-designed companion OWL ontology. The scarcity of this kind of resource should be addressed to accelerate research in this field.
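As a concrete illustration of the kind of logical reasoning over knowledge graphs the paper discusses, here is a minimal sketch of materializing inferences over a toy RDFS/OWL-style graph, assuming the open-source rdflib and owlrl Python packages (tools not named in the abstract); the example triples are illustrative and not taken from NeSy4VRD.

```python
# A minimal sketch of deductive reasoning over an OWL/RDFS-style knowledge
# graph using rdflib + owlrl (assumed tooling, not the paper's). Toy triples only.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Bus, RDFS.subClassOf, EX.Vehicle))   # schema-level (TBox-style) axiom
g.add((EX.bus1, RDF.type, EX.Bus))             # instance-level (ABox-style) fact

# Materialize the OWL 2 RL deductive closure directly into the graph.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# The reasoner has inferred that bus1 is also a Vehicle.
print((EX.bus1, RDF.type, EX.Vehicle) in g)    # True
```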