43 research outputs found
Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration, including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.
Learning about Actions and Events in Shared NeMuS
The categorization process of information from pure data or learned in unsupervised artificial neural networks is still manual, especially in the labeling phase. Such a process is fundamental to knowledge representation [6], especially for symbol-based systems such as logic, natural language processing and textual information retrieval. Unfortunately, applying categorization theory to large volumes of data does not lead to good results, mainly because there is no generic and systematic way of categorizing such data processed by artificial neural networks and joining the investigated conceptual structures. Connectionist approaches are capable of extracting information from artificial neural networks, but categorizing it as symbolic knowledge has been little explored. The obstacle lies in the difficulty of finding logical justification from the response patterns of these networks [2]. This gets worse when considering inductive learning from dynamic data, which is very important to Cognitive Sciences, where categorization is considered a mental operation of classifying objects, actions and events [1]. We shall address the discoveries of our ongoing investigation of the problem of inductive learning (IL) from dynamic data by applying a novel framework for neural-symbolic representation and reasoning called shared Neural Multi-Space (NeMuS), used in the Amao system [4]. Instead of working like traditional approaches to ILP, e.g. [5], Amao uses a shared NeMuS of a given background knowledge (BK) and uses inverse unification as the generalization mechanism over a set of logically connected expressions from the Herbrand Base (HB) of the BK that defines the positive examples.
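The generalization step the abstract refers to — inverse unification — is commonly realized as anti-unification, i.e. computing the least general generalization (lgg) of two terms. The following is a minimal illustrative sketch, not the Amao/NeMuS implementation: the term encoding (constants as lowercase strings, compound terms as tuples, fresh variables named `X0`, `X1`, …) is an assumption made for this example.

```python
from itertools import count

def lgg(t1, t2, subst, counter):
    """Least general generalization (anti-unification) of two terms.

    Terms are plain strings (constants) or tuples ('functor', arg1, ...).
    `subst` memoizes the fresh variable introduced for each mismatching
    pair of subterms, so repeated disagreements share one variable --
    the property that makes the result *least* general.
    """
    if t1 == t2:
        return t1
    # Same functor and arity: generalize argument-wise.
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(
            lgg(a, b, subst, counter) for a, b in zip(t1[1:], t2[1:]))
    # Disagreement: replace the pair with a (memoized) fresh variable.
    key = (t1, t2)
    if key not in subst:
        subst[key] = f"X{next(counter)}"
    return subst[key]

# Generalizing two positive examples from a hypothetical BK:
subst, counter = {}, count()
g = lgg(('parent', 'ann', 'bob'), ('parent', 'ann', 'eve'), subst, counter)
print(g)  # ('parent', 'ann', 'X0')
```

Note how memoizing disagreement pairs keeps the generalization least general: `lgg(p(a, a), p(b, b))` yields `p(X0, X0)` rather than the looser `p(X0, X1)`.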
Using inductive types for ensuring correctness of neuro-symbolic computations
Neural-Symbolic Learning and Reasoning: Contributions and Challenges
The goal of neural-symbolic computation is to integrate robust connectionist learning and sound symbolic reasoning. With the recent advances in connectionist learning, in particular deep neural networks, forms of representation learning have emerged. However, such representations have not become useful for reasoning. Results from neural-symbolic computation have been shown to offer powerful alternatives for knowledge representation, learning and reasoning in neural computation. This paper recalls the main contributions and discusses key challenges for neural-symbolic integration which have been identified at a recent Dagstuhl seminar.
Dimensions of Neural-symbolic Integration - A Structured Survey
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. We present a comprehensive survey of the field of neural-symbolic integration, including a new classification of systems according to their architectures and abilities.