Over the last few decades, models based on deep neural networks have become the
dominant paradigm in machine learning. More recently, the use of artificial
neural networks for symbolic learning has attracted growing attention. To study
the capabilities of neural networks in the symbolic AI domain, researchers have
explored their ability to learn mathematical operations, such as addition and
multiplication, logical inference, such as theorem proving, and even the
execution of computer programs. The latter is known to be a particularly
complex task for neural networks. Consequently, the results have not always
been successful, and have often required introducing strong biases into the
learning process, in addition to restricting the class of programs to be
executed. In this work, we analyze the ability of neural networks to learn how
to execute programs as a whole. To do so, we propose a different approach:
instead of using an imperative programming language, with its complex
structures, we use the Lambda Calculus (λ-Calculus), a simple but
Turing-complete mathematical formalism, which serves as the basis for modern
functional programming languages and lies at the heart of computability theory.
We introduce the integration of neural learning with the λ-Calculus
formalization. Finally, since the execution of a program in λ-Calculus is based
on reductions, we show that it is enough to learn how to perform these
reductions in order to execute any program.
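To make the reduction-based execution model concrete, below is a minimal
Haskell sketch of λ-Calculus terms and a single β-reduction step (the standard
reduction rule of the λ-Calculus) with capture-avoiding substitution. The type
and function names (Term, free, subst, step) are our own illustrative choices,
not taken from the paper; the paper's contribution is a neural model that
learns to perform such steps, whereas this sketch only shows what one step
computes symbolically.

```haskell
import Data.List (union, delete)

-- λ-Calculus terms: variables, abstractions, and applications.
data Term = Var String
          | Lam String Term
          | App Term Term
          deriving (Show, Eq)

-- Free variables of a term.
free :: Term -> [String]
free (Var x)   = [x]
free (Lam x b) = delete x (free b)
free (App f a) = free f `union` free a

-- Capture-avoiding substitution: subst x s t replaces the free
-- occurrences of x in t with s.
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | x == y    = s
  | otherwise = Var y
subst x s (App f a) = App (subst x s f) (subst x s a)
subst x s (Lam y b)
  | x == y             = Lam y b                 -- x is shadowed; stop
  | y `notElem` free s = Lam y (subst x s b)
  | otherwise          = Lam y' (subst x s (subst y (Var y') b))
  where y' = fresh y (x : free s ++ free b)      -- rename to avoid capture
        fresh v avoid
          | v `elem` avoid = fresh (v ++ "'") avoid
          | otherwise      = v

-- One normal-order β-reduction step, if a redex exists.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)      -- the β-rule itself
step (App f a) = case step f of
  Just f' -> Just (App f' a)
  Nothing -> App f <$> step a
step (Lam x b) = Lam x <$> step b
step (Var _)   = Nothing

main :: IO ()
main = print (step (App (Lam "x" (Var "x")) (Var "y")))
-- prints: Just (Var "y"), i.e. (λx. x) y β-reduces to y
```

Iterating step until it returns Nothing normalizes a term; learning to execute
a program then amounts to learning this one-step transformation.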
Keywords: Machine Learning, Lambda Calculus, Neurosymbolic AI, Neural Networks,
Transformer Model, Sequence-to-Sequence Models, Computational Models