A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of computing and functional programming
Over the last decades, models based on deep neural networks have become the
dominant paradigm in machine learning. Moreover, the use of artificial neural
networks in symbolic learning has recently been seen as increasingly relevant. To study the
capabilities of neural networks in the symbolic AI domain, researchers have
explored the ability of deep neural networks to learn mathematical
constructions, such as addition and multiplication, logic inference, such as
theorem provers, and even the execution of computer programs. The latter is
known to be a highly complex task for neural networks. Accordingly, the results
have not always been successful, and often required introducing biased elements
into the learning process, in addition to restricting the scope of the
programs to be executed. In this work, we analyze the ability of neural
networks to learn how to execute programs as a whole. To do so, we propose a
different approach. Instead of using an imperative programming language, with
complex structures, we use the Lambda Calculus ({\lambda}-Calculus), a simple,
but Turing-Complete mathematical formalism, which serves as the basis for
modern functional programming languages and is at the heart of computability
theory. We introduce an approach that integrates neural learning with the
lambda-calculus formalization. Finally, since the execution of a program in
{\lambda}-Calculus is based on reductions, we show that it is enough to
learn how to perform these reductions in order to execute any program.
Keywords: Machine Learning, Lambda Calculus, Neurosymbolic AI, Neural Networks,
Transformer Model, Sequence-to-Sequence Models, Computational Models
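Since the abstract's central claim is that executing a {\lambda}-Calculus program amounts to repeatedly performing reductions, a minimal illustrative sketch of normal-order beta-reduction may help; the tuple encoding and naive (non-capture-avoiding) substitution below are our own simplification, not the paper's implementation.

```python
# Terms of the untyped lambda calculus, encoded as:
#   variable:    'x'
#   abstraction: ('lam', var, body)      i.e. (lam var. body)
#   application: ('app', func, arg)
# Substitution is naive (not capture-avoiding), which suffices for the
# closed examples below.

def substitute(term, var, value):
    """Replace free occurrences of `var` in `term` with `value`."""
    if isinstance(term, str):                       # variable
        return value if term == var else term
    if term[0] == 'lam':                            # abstraction
        _, v, body = term
        if v == var:                                # `var` is shadowed inside
            return term
        return ('lam', v, substitute(body, var, value))
    _, f, a = term                                  # application
    return ('app', substitute(f, var, value), substitute(a, var, value))

def reduce_step(term):
    """Perform one leftmost (normal-order) beta-reduction; report if one fired."""
    if isinstance(term, str):
        return term, False
    if term[0] == 'app':
        _, f, a = term
        if isinstance(f, tuple) and f[0] == 'lam':  # redex: (lam v. body) a
            return substitute(f[2], f[1], a), True
        f, fired = reduce_step(f)
        if fired:
            return ('app', f, a), True
        a, fired = reduce_step(a)
        return ('app', f, a), fired
    _, v, body = term
    body, fired = reduce_step(body)
    return ('lam', v, body), fired

def normalize(term, limit=1000):
    """Reduce until no redex remains (or the step budget runs out)."""
    for _ in range(limit):
        term, fired = reduce_step(term)
        if not fired:
            break
    return term

# (lam x. x) y  reduces in one step to  y
print(normalize(('app', ('lam', 'x', 'x'), 'y')))  # prints: y
```

Learning to execute a program then reduces, as the abstract argues, to learning the single-step transformation `reduce_step` performs.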
Theano: new features and speed improvements
Theano is a linear algebra compiler that optimizes a user's
symbolically-specified mathematical computations to produce efficient low-level
implementations. In this paper, we present new features and efficiency
improvements to Theano, and benchmarks demonstrating Theano's performance
relative to Torch7, a recently introduced machine learning library, and to
RNNLM, a C++ library targeted at recurrent neural networks.
Comment: Presented at the Deep Learning Workshop, NIPS 201
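To illustrate what "symbolically-specified mathematical computations" means here, the toy sketch below builds an expression graph and then "compiles" it into a callable. This mimics only the concept; the classes and the `function` helper are invented for illustration and are not Theano's actual API.

```python
# A toy expression-graph "compiler": the user writes symbolic math, and a
# separate step turns the graph into an executable function (in Theano, that
# step would emit optimized low-level code rather than call a Python evaluator).

class Expr:
    def __add__(self, other): return Op('+', self, other)
    def __mul__(self, other): return Op('*', self, other)

class Var(Expr):
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]

class Const(Expr):
    def __init__(self, v): self.v = v
    def eval(self, env): return self.v

class Op(Expr):
    def __init__(self, op, a, b): self.op, self.a, self.b = op, a, b
    def eval(self, env):
        x, y = self.a.eval(env), self.b.eval(env)
        return x + y if self.op == '+' else x * y

def function(inputs, output):
    """'Compile' a symbolic graph into a plain Python callable."""
    def f(*args):
        env = {v.name: a for v, a in zip(inputs, args)}
        return output.eval(env)
    return f

x = Var('x')
y = x * x + Const(2.0) * x      # symbolic graph for x^2 + 2x
f = function([x], y)
print(f(3.0))  # prints: 15.0
```

The point of the design is that, because the full computation is known as a graph before execution, a compiler can optimize and lower it as a whole.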
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns
visual concepts, words, and semantic parsing of sentences without explicit
supervision on any of them; instead, our model learns by simply looking at
images and reading paired questions and answers. Our model builds an
object-based scene representation and translates sentences into executable,
symbolic programs. To bridge the learning of two modules, we use a
neuro-symbolic reasoning module that executes these programs on the latent
scene representation. Analogous to human concept learning, the perception
module learns visual concepts based on the language description of the object
being referred to. Meanwhile, the learned visual concepts facilitate learning
new words and parsing new sentences. We use curriculum learning to guide the
searching over the large compositional space of images and language. Extensive
experiments demonstrate the accuracy and efficiency of our model on learning
visual concepts, word representations, and semantic parsing of sentences.
Further, our method allows easy generalization to new object attributes,
compositions, language concepts, scenes and questions, and even new program
domains. It also empowers applications including visual question answering and
bidirectional image-text retrieval.
Comment: ICLR 2019 (Oral). Project page: http://nscl.csail.mit.edu
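To make "executes these programs on the latent scene representation" concrete, here is a hypothetical toy sketch of a symbolic program running over an object-based scene; the scene attributes, operator names, and program encoding are invented for illustration and do not reflect NS-CL's actual implementation.

```python
# A toy object-based scene: each object is a dict of attribute -> concept.
scene = [
    {'shape': 'cube',   'color': 'red'},
    {'shape': 'sphere', 'color': 'red'},
    {'shape': 'cube',   'color': 'blue'},
]

def filter_concept(objects, attr, value):
    """Keep objects whose attribute matches a concept (in NS-CL this match
    would be a soft, learned similarity rather than string equality)."""
    return [o for o in objects if o[attr] == value]

def execute(program, objects):
    """Run a sequence of symbolic operations over the scene."""
    for step in program:
        if step[0] == 'filter':
            objects = filter_concept(objects, step[1], step[2])
        elif step[0] == 'count':
            return len(objects)
    return objects

# Program a semantic parser might emit for "How many red cubes are there?"
program = [('filter', 'color', 'red'), ('filter', 'shape', 'cube'), ('count',)]
print(execute(program, scene))  # prints: 1
```

Because the executor is symbolic and the object representation is produced by perception, gradients from the answer can supervise the perception module without explicit concept labels, which is the bridge the abstract describes.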