Dimensions of Neural-symbolic Integration - A Structured Survey
Research on integrated neural-symbolic systems has made significant progress
in the recent past. In particular the understanding of ways to deal with
symbolic knowledge within connectionist systems (also called artificial neural
networks) has reached a critical mass which enables the community to strive for
applicable implementations and use cases. Recent work has covered a great
variety of logics used in artificial intelligence and provides a multitude of
techniques for dealing with them within the context of artificial neural
networks. We present a comprehensive survey of the field of neural-symbolic
integration, including a new classification of systems according to their
architectures and abilities. Comment: 28 pages
Concurrent Lexicalized Dependency Parsing: The ParseTalk Model
A grammar model for concurrent, object-oriented natural language parsing is
introduced. Complete lexical distribution of grammatical knowledge is achieved
building upon the head-oriented notions of valency and dependency, while
inheritance mechanisms are used to capture lexical generalizations. The
underlying concurrent computation model relies upon the actor paradigm. We
consider message passing protocols for establishing dependency relations and
ambiguity handling. Comment: 90kB, 7 pages, PostScript
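The abstract's core idea, lexicalized grammatical knowledge plus actor-style message passing, can be sketched roughly as follows (the class name `WordActor`, the slot names, and the `receive_attach` protocol are our illustrative assumptions, not the ParseTalk API):

```python
# Sketch of the actor-style parsing idea: each word is an independent object
# holding its own valency slots, and dependency relations are established by
# exchanging attachment messages rather than by a central parser.
class WordActor:
    def __init__(self, form, valency):
        self.form = form
        self.valency = set(valency)   # open dependency slots, e.g. {"subject"}
        self.dependents = {}          # relation -> attached WordActor

    def receive_attach(self, relation, dependent):
        # Accept the attachment message only while the slot is still open;
        # this is how ambiguity can be rejected locally, per lexical item.
        if relation in self.valency:
            self.valency.remove(relation)
            self.dependents[relation] = dependent
            return True
        return False

head = WordActor("parses", valency=["subject", "object"])
dep = WordActor("model", valency=[])
head.receive_attach("subject", dep)       # establishes the dependency
head.receive_attach("subject", dep)       # rejected: slot already filled
```

In the real ParseTalk model these messages are exchanged concurrently; the sketch above only shows the per-word bookkeeping that makes full lexical distribution of grammatical knowledge possible.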
Connectionist Inference Models
The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions are drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
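The localist/distributed distinction the survey organizes systems around can be made concrete with a minimal sketch (the vocabulary and the random distributed codes are purely illustrative; a real system would learn its representations):

```python
import numpy as np

vocab = ["dog", "cat", "runs"]

# Localist: one dedicated unit per symbol, i.e. a one-hot vector.
localist = {s: np.eye(len(vocab))[i] for i, s in enumerate(vocab)}

# Distributed: each symbol is a pattern of activity spread over shared units.
rng = np.random.default_rng(0)
distributed = {s: rng.normal(size=8) for s in vocab}

# Localist codes never overlap, so symbols stay trivially distinguishable...
assert localist["dog"] @ localist["cat"] == 0.0
# ...while distributed codes generally do overlap, which lets similar symbols
# share structure but makes variable binding a genuine design problem.
assert abs(distributed["dog"] @ distributed["cat"]) > 0.0
```

The trade-off shown here, clean separability versus shared structure, is exactly the axis along which the surveyed systems differ in how they handle variable binding.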
End-to-End Differentiable Proving
We introduce neural networks for end-to-end differentiable proving of queries
to knowledge bases by operating on dense vector representations of symbols.
These neural networks are constructed recursively by taking inspiration from
the backward chaining algorithm as used in Prolog. Specifically, we replace
symbolic unification with a differentiable computation on vector
representations of symbols using a radial basis function kernel, thereby
combining symbolic reasoning with learning subsymbolic vector representations.
By using gradient descent, the resulting neural network can be trained to infer
facts from a given incomplete knowledge base. It learns to (i) place
representations of similar symbols in close proximity in a vector space, (ii)
make use of such similarities to prove queries, (iii) induce logical rules, and
(iv) use provided and induced logical rules for multi-hop reasoning. We
demonstrate that this architecture outperforms ComplEx, a state-of-the-art
neural link prediction model, on three out of four benchmark knowledge bases
while at the same time inducing interpretable function-free first-order logic
rules. Comment: NIPS 2017 camera-ready
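The key move described above, replacing symbolic unification with a differentiable RBF-kernel comparison of symbol embeddings, can be sketched as follows (the embeddings, the bandwidth value, and the name `rbf_unify` are our illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

def rbf_unify(a, b, mu=0.5):
    # Soft unification score in (0, 1]: a radial basis function of the
    # distance between two dense symbol embeddings. It equals 1.0 when the
    # embeddings coincide, and it is differentiable in both arguments, so
    # it can be trained end to end by gradient descent.
    return float(np.exp(-np.sum((a - b) ** 2) / (2.0 * mu ** 2)))

# Toy embeddings: two near-synonymous relation symbols and one unrelated.
emb = {
    "grandfatherOf": np.array([0.90, 0.10]),
    "grandpaOf":     np.array([0.85, 0.15]),
    "locatedIn":     np.array([-0.70, 0.60]),
}

# Similar symbols receive a high unification score, so a rule stated with
# one relation can still fire on facts stated with the other...
close = rbf_unify(emb["grandfatherOf"], emb["grandpaOf"])
# ...while unrelated symbols score near zero.
far = rbf_unify(emb["grandfatherOf"], emb["locatedIn"])
assert close > far
```

Because the score never hard-fails the way symbolic unification does, proof scores stay differentiable along every proof path, which is what allows the backward-chaining structure to be trained with gradient descent.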
A connectionist representation of first-order formulae with dynamic variable binding
The relationship between symbolicism and connectionism has been one of the major
issues in recent Artificial Intelligence research. An increasing number of researchers
from each side have tried to adopt desirable characteristics of the other. These efforts
have produced a number of different strategies for interfacing connectionist and symbolic AI. One of them is connectionist symbol processing, which attempts to replicate symbol processing functionalities using connectionist components.

In this direction, this thesis develops a connectionist inference architecture which performs standard symbolic inference on a subclass of first-order predicate calculus. Our
primary interest is in understanding how formulas which are described in a limited
form of first-order predicate calculus may be implemented using a connectionist architecture. Our chosen knowledge representation scheme is a subset of first-order Horn
clause expressions which is a set of universally quantified expressions in first-order
predicate calculus. As a focus of attention we are developing techniques for compiling
first-order Horn clause expressions into a connectionist network. This offers practical
benefits but also forces limitations on the scope of the compiled system, since we are, in fact, merging an interpreter into the connectionist networks. The compilation process
has to take into account not only first-order Horn clause expressions themselves but
also the strategy which we intend to use for drawing inferences from them. Thus, this
thesis explores the extent to which this type of translation can build a connectionist inference model to accommodate desired symbolic inference.

This work first involves constructing efficient connectionist mechanisms to represent
basic symbol components, dynamic bindings, basic symbolic inference procedures, and
devising a set of algorithms which automatically translates input descriptions to neural
networks using the above connectionist mechanisms. These connectionist mechanisms
are built by taking an existing temporal synchrony mechanism and extending it further
to obtain desirable features to represent and manipulate basic symbol structures. The
existing synchrony mechanism represents dynamic bindings very efficiently using temporal synchronous activity between neuron elements, but it has fundamental limitations
in supporting standard symbolic inference. The extension addresses these limitations.

The ability of the connectionist inference model was tested using various types of first-order Horn clause expressions. The results showed that the proposed connectionist inference model was able to encode significant sets of first-order Horn clause expressions and replicated basic symbolic styles of inference in a connectionist manner. The system successfully demonstrated not only forward chaining but also backward chaining over the networks encoding the input expressions. The results, however, also showed that implementing a connectionist mechanism for full unification among groups of unifying arguments in rules, or for encoding some types of rules, is difficult to achieve in a connectionist manner and needs additional mechanisms. In addition, some difficult issues, such as encoding rules having recursive definitions, remained untouched.
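The temporal-synchrony idea underlying the thesis, a variable and a constant count as bound when their units fire in the same phase of a cycle, can be sketched abstractly (the function names and the dict-of-phases encoding are our simplification; the actual mechanism uses timed neuron activity, not a lookup table):

```python
# Sketch of dynamic variable binding via temporal synchrony: each phase of
# an oscillation cycle is a "rhythm" shared by all units firing in it, so
# two symbols bound together are exactly those assigned the same phase.

def bind(phases, var, const, phase):
    # Both units are made to fire in the same phase, representing var := const.
    phases[var] = phase
    phases[const] = phase

def bound_together(phases, a, b):
    # Two symbols are bound iff both fire, and in the same phase.
    return a in phases and b in phases and phases[a] == phases[b]

phases = {}
bind(phases, "X", "john", phase=1)   # X bound to john in phase 1
bind(phases, "Y", "mary", phase=2)   # Y bound to mary in phase 2

assert bound_together(phases, "X", "john")
assert not bound_together(phases, "X", "mary")
```

The number of distinct phases per cycle bounds how many simultaneous bindings such a network can hold, which is one reason the abstract reports that full unification among groups of arguments needs mechanisms beyond plain synchrony.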
Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of
neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.