4 research outputs found

    Combining Symbolic Expressions and Black-box Function Evaluations in Neural Programs

    Neural programming involves training neural networks to learn programs, mathematics, or logic from data. Previous work has struggled to generalize well, especially on problems and programs of high complexity or over large domains. This is because it relies mostly either on black-box function evaluations, which do not capture the structure of the program, or on detailed execution traces, which are expensive to obtain, so the training data covers the domain under consideration poorly. We present a novel framework that utilizes black-box function evaluations in conjunction with symbolic expressions that define relationships between the given functions. We employ tree LSTMs to incorporate the structure of the symbolic expression trees, and we use a tree encoding for the numbers in the function evaluation data, based on their decimal representation. We present an evaluation benchmark for this task and demonstrate that our proposed model combines symbolic reasoning and function evaluation in a fruitful manner, obtaining high accuracies in our experiments. Our framework generalizes significantly better to expressions of greater depth and is able to fill partial equations with valid completions.
    Comment: Published as a conference paper at the sixth International Conference on Learning Representations (ICLR), 2018
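    The decimal tree encoding mentioned in this abstract can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation, with an assumed left-branching tree shape): a number is decomposed into its decimal digits and composed into a binary tree of nested tuples, which a tree LSTM could then fold bottom-up over digit embeddings.

    ```python
    def number_to_digit_tree(n: int):
        """Represent a non-negative integer as a left-branching binary tree
        (nested tuples) of its decimal digits, e.g. 123 -> (("1", "2"), "3").
        A tree LSTM can then compose digit embeddings along this structure
        instead of treating the number as an opaque token."""
        digits = list(str(n))
        tree = digits[0]
        for d in digits[1:]:
            tree = (tree, d)  # attach the next digit as the right child
        return tree
    ```

    A left-branching shape is one plausible choice; it keeps the tree depth proportional to the number of digits, so structurally similar numbers produce structurally similar trees.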

    Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective

    Neural-symbolic computing has now become a subject of interest for both academic and industrial research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with widespread applications in combinatorial optimization, constraint satisfaction, relational reasoning, and other scientific domains. The need for improved explainability, interpretability, and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state of the art in the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.
    Comment: Updated version, draft of accepted IJCAI 2020 Survey Paper

    Analysis of Ontology-Oriented Neuro-Symbolic Intelligence Methods for Collaborative Decision Support

    The neural network approach to AI, which has become especially widespread in the last decade, has two significant limitations: training a neural network, as a rule, requires a very large number of samples (not always available), and the resulting models are often not well interpretable, which can reduce their credibility. The use of symbolic knowledge as the basis of collaborative processes, on the one hand, and the proliferation of neural network AI, on the other, necessitate a synthesis of the neural network and symbolic paradigms for building collaborative decision support systems. The article presents the results of an analytical review of ontology-oriented neuro-symbolic artificial intelligence, with an emphasis on problems of knowledge exchange during collaborative decision support. Specifically, the review attempts to answer two questions: 1. how symbolic knowledge, represented as an ontology, can be used to improve AI agents that operate on the basis of neural networks (knowledge transfer from a person to AI agents); 2. how symbolic knowledge, represented as an ontology, can be used to interpret decisions made by AI agents and to explain these decisions (knowledge transfer from an AI agent to a person). As a result of the review, recommendations are formulated on the choice of methods for introducing symbolic knowledge into neural network models, and promising directions for ontology-oriented methods of explaining neural networks are identified.

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions, presented together with 2 invited papers, were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.