    DeepLogic: Towards end-to-end differentiable logical reasoning

    Combining machine learning with logic-based expert systems to get the best of both worlds is becoming increasingly popular. However, the extent to which machine learning can already learn to reason over rule-based knowledge is still an open problem. In this paper, we explore how symbolic logic, expressed as logic programs at the character level, can be represented in a high-dimensional vector space learned by RNN-based iterative neural networks that perform reasoning. We create a new dataset that defines 12 classes of logic programs exemplifying increasing levels of logical-reasoning complexity, and we train the networks end-to-end to decide whether a logic program entails a given query. We analyse how learning the inference algorithm gives rise to representations of atoms, literals and rules within logic programs, and we evaluate against increasing lengths of predicate and constant symbols as well as increasing numbers of multi-hop reasoning steps.
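
    To make the setup above concrete, here is a minimal, hypothetical PyTorch sketch of a character-level iterative reasoner: an RNN encodes the program and the query into vectors, a recurrent cell is applied for a fixed number of reasoning steps, and a classifier scores entailment. All names, sizes, and the step count are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a character-level GRU encodes a logic program and
# a query; an iterative cell then refines a state for a fixed number of steps
# (loosely corresponding to multi-hop inference) before a binary entailment
# prediction. Sizes and names are assumptions, not the paper's model.

class IterativeReasoner(nn.Module):
    def __init__(self, vocab_size=64, dim=64, steps=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.step_cell = nn.GRUCell(dim, dim)   # one application per reasoning hop
        self.classifier = nn.Linear(dim, 1)
        self.steps = steps

    def forward(self, program_chars, query_chars):
        # Encode the program and query character sequences into fixed vectors.
        _, h_prog = self.encoder(self.embed(program_chars))
        _, h_query = self.encoder(self.embed(query_chars))
        state = h_query.squeeze(0)
        ctx = h_prog.squeeze(0)
        # Iterate the reasoning cell, conditioning on the program encoding.
        for _ in range(self.steps):
            state = self.step_cell(ctx, state)
        return torch.sigmoid(self.classifier(state))  # P(program entails query)

model = IterativeReasoner()
prog = torch.randint(0, 64, (2, 100))   # batch of 2 programs, 100 chars each
query = torch.randint(0, 64, (2, 20))
print(model(prog, query).shape)          # torch.Size([2, 1])
```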

    Knowledge Representation and Reasoning with Deep Neural Networks

    Knowledge representation and reasoning is one of the central challenges of artificial intelligence, with important implications for many fields, including natural language understanding and robotics. Representing knowledge with symbols, and reasoning via search and logic, has been the dominant paradigm for many decades. In this work, we use deep neural networks to learn both to represent symbols and to perform reasoning end-to-end from data. By learning powerful non-linear models, our approach generalizes to massive amounts of knowledge and works well with messy real-world data using minimal human effort. First, we show that recurrent neural networks with an attention mechanism achieve state-of-the-art reasoning on a large structured knowledge graph. Next, we develop Neural Programmer, a neural network augmented with discrete operations that can learn to induce latent programs with backpropagation. We apply Neural Programmer to induce short programs on two datasets: a synthetic dataset requiring arithmetic and logic reasoning, and a natural language question answering dataset that requires reasoning over semi-structured Wikipedia tables. We present what is, to our knowledge, the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset. Unlike previous learning approaches to program induction, the model does not require domain-specific grammars, rules, or annotations. Finally, we discuss methods to scale Neural Programmer training to large databases.
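
    The key trick that lets a network with discrete operations train by backpropagation is soft selection: instead of committing to one operation per step, the controller mixes the results of all operations by softmax weights. The sketch below illustrates this idea under stated assumptions; the toy operation set, sizes, and names are mine, not the paper's exact model.

```python
import torch
import torch.nn as nn

# Minimal sketch of soft operation selection: at each step a controller scores
# a small set of discrete operations, and the softmax-weighted mix of their
# results keeps the whole computation differentiable, so a latent program can
# be induced from the final answer alone. Illustrative assumptions throughout.

def apply_ops(column):
    # Three toy operations over a numeric table column: sum, max, count > 0.
    return torch.stack([column.sum(), column.max(), (column > 0).float().sum()])

class SoftProgrammer(nn.Module):
    def __init__(self, q_dim=32, n_ops=3, steps=2):
        super().__init__()
        self.controller = nn.GRUCell(q_dim, q_dim)
        self.op_scores = nn.Linear(q_dim, n_ops)
        self.steps = steps

    def forward(self, question_vec, column):
        h = torch.zeros_like(question_vec)   # controller state, shape (1, q_dim)
        results = apply_ops(column)          # shape (n_ops,)
        answer = torch.zeros(())
        for _ in range(self.steps):
            h = self.controller(question_vec, h)
            weights = torch.softmax(self.op_scores(h), dim=-1)  # soft op choice
            answer = (weights * results).sum()  # expected result under the choice
        return answer

question = torch.randn(1, 32)                # stand-in for an encoded question
column = torch.tensor([3.0, -1.0, 4.0])
print(SoftProgrammer()(question, column))
```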

    The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence

    Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural networking machinery with symbolic knowledge representation and reasoning paradigms like logic programming in such a way that the strengths of either paradigm are retained. Current state-of-the-art research, however, falls far short of this ultimate goal. We perceive the question of how symbolic knowledge can be encoded by means of connectionist systems as one of the main obstacles to be overcome: satisfactory answers to it will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems.
    Comment: In Proceedings of INFORMATION'2004, Tokyo, Japan, to appear. 12 pages
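
    One classic answer to the encoding question raised above, for the propositional case, is the core method of Hölldobler and Kalinke: each clause of a logic program becomes a hidden threshold unit, and iterating the resulting network computes the immediate-consequence operator T_P. The plain-Python sketch below illustrates that idea; it is not the construction from this particular paper.

```python
# Core-method illustration: one hidden "unit" per clause fires iff all of the
# clause's body atoms are currently true; an atom becomes true iff some clause
# with that head fired. Iterating this map reaches the least fixed point of
# the program, i.e. its least model. Example program is hypothetical.

program = [
    ("a", []),          # fact:  a.
    ("b", ["a"]),       # rule:  b :- a.
    ("c", ["a", "b"]),  # rule:  c :- a, b.
]
atoms = ["a", "b", "c"]

def tp_step(state):
    # One application of the immediate-consequence operator T_P.
    return {
        atom: any(head == atom and all(state[b] for b in body)
                  for head, body in program)
        for atom in atoms
    }

state = {atom: False for atom in atoms}
for _ in range(len(atoms)):   # enough iterations to reach the least fixed point
    state = tp_step(state)
print(state)                  # {'a': True, 'b': True, 'c': True}
```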